Lecture Notes in Networks and Systems 22

Michael E. Auer
Danilo G. Zutin Editors

Online Engineering
& Internet of
Things
Proceedings of the 14th International
Conference on Remote Engineering and
Virtual Instrumentation REV 2017, held
15–17 March 2017, Columbia University,
New York, USA
Lecture Notes in Networks and Systems

Volume 22

Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl

The series “Lecture Notes in Networks and Systems” publishes the latest develop-
ments in Networks and Systems—quickly, informally and with high quality. Original
research reported in proceedings and post-proceedings represents the core of LNNS.
Volumes published in LNNS embrace all aspects and subfields of, as well as new
challenges in, Networks and Systems.
The series contains proceedings and edited volumes in systems and networks,
spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor
Networks, Control Systems, Energy Systems, Automotive Systems, Biological
Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems,
Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems,
Robotics, Social Systems, Economic Systems and other. Of particular value to both
the contributors and the readership are the short publication timeframe and the
world-wide distribution and exposure which enable both a wide and rapid
dissemination of research output.
The series covers the theory, applications, and perspectives on the state of the art
and future developments relevant to systems and networks, decision making,
control, complex processes and related areas, as embedded in the fields of
interdisciplinary and applied sciences, engineering, computer science, physics,
economics, social, and life sciences, as well as the paradigms and methodologies
behind them.

Advisory Board
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School
of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP,
São Paulo, Brazil
e-mail: gomide@dca.fee.unicamp.br
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University,
Istanbul, Turkey
e-mail: okyay.kaynak@boun.edu.tr
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois
at Chicago, Chicago, USA and Institute of Automation, Chinese Academy of Sciences,
Beijing, China
e-mail: derong@uic.edu
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta,
Alberta, Canada and Systems Research Institute, Polish Academy of Sciences, Warsaw,
Poland
e-mail: wpedrycz@ualberta.ca
Marios M. Polycarpou, KIOS Research Center for Intelligent Systems and Networks,
Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus
e-mail: mpolycar@ucy.ac.cy
Imre J. Rudas, Óbuda University, Budapest Hungary
e-mail: rudas@uni-obuda.hu
Jun Wang, Department of Computer Science, City University of Hong Kong
Kowloon, Hong Kong
e-mail: jwang.cs@cityu.edu.hk

More information about this series at http://www.springer.com/series/15179

Michael E. Auer · Danilo G. Zutin

Editors

Online Engineering & Internet of Things
Proceedings of the 14th International
Conference on Remote Engineering
and Virtual Instrumentation REV 2017, held
15–17 March 2017, Columbia University,
New York, USA

Editors

Michael E. Auer
Carinthia University of Applied Sciences
Villach, Austria

Danilo G. Zutin
Carinthia University of Applied Sciences
Villach, Austria

Lecture Notes in Networks and Systems
ISSN 2367-3370    ISSN 2367-3389 (electronic)
ISBN 978-3-319-64351-9    ISBN 978-3-319-64352-6 (eBook)
DOI 10.1007/978-3-319-64352-6
Library of Congress Control Number: 2017953532

© Springer International Publishing AG 2018


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The REV conference is the annual conference of the International Association of Online Engineering (IAOE) and the Global Online Laboratory Consortium (GOLC).
REV2017 was the 14th in a series of annual events concerning the area of
Remote Engineering and Virtual Instrumentation. The general objective of this
conference is to contribute and discuss fundamentals, applications, and experiences
in the field of Remote Engineering and Virtual Instrumentation and related new
technologies like Internet of Things, Industry 4.0, Cyber-security, M2M, and Smart
Objects. Another objective of the conference is to discuss guidelines and new
concepts for education at different levels for above-mentioned topics including
emerging technologies in learning, MOOCs & MOOLs, Open Resources, and
STEM pre-university education.
REV2017 has been organized in cooperation with Columbia University, New
York, and the International E-Learning Association (IELA) from March 15 to 17,
2017, in New York.
REV2017 offered again an exciting technical program as well as networking
opportunities. Outstanding scientists accepted the invitation for keynote speeches:
• George Giakos
IEEE Fellow, Professor and Chair, Director of Graduate Program, Manhattan
College, ECE Department, Riverdale NY, USA
• Greg Dixson
Director of Industrial Electronics, Phoenix Contact USA
• Helmut Krcmar
Chair of Information Systems, Academic Director of SAP University
Competence Center Munich, Technical University of Munich, Germany
• Robert J. Rencher
Sr. Systems Engineer and Co-Leader Boeing enterprise Internet of Things/
Digital Business strategy team, The Boeing Company, Chicago IL, USA


• Tarek M. Sobh
Senior Vice President for Graduate Studies and Research and Dean of the
School of Engineering, University of Bridgeport, USA
It was in 2004 when we started this conference series in Villach, Austria,
together with some visionary colleagues and friends from around the world. When
we started our REV endeavor, the Internet was just 10 years old! Since then, the
situation regarding Online Engineering and Virtual Instrumentation has radically
changed. Both are now typical working areas for most engineers and are inseparably
connected with
• Internet of Things
• Cyber-physical Systems
• Collaborative Networks and Grids
• Cyber-cloud Technologies
• Service Architectures
to name only a few.
With our conference in 2004 (thirteen years ago), we tried to focus on the upcoming
use of the Internet for engineering tasks and the problems around it – with great
success, as we can see today.
The following main themes have been discussed in detail:
• Online Engineering
• Cyber-physical Systems
• Internet of Things
• Industry 4.0
• Cyber-security
• M2M Concepts
• Virtual and Remote Laboratories
• Remote Process Visualization and Virtual Instrumentation
• Remote Control and Measurement Technologies
• Networking, Grid and Cloud Technologies
• Mixed-reality Environments
• Telerobotics and Telepresence, Coboter
• Collaborative Work in Virtual Environments
• Smart City, Smart Energy, Smart Buildings, Smart Homes
• Innovative Organizational and Educational Concepts
• Standards and Standardization Proposals
• Applications and Experiences
The following submission types were accepted:
• Full Paper, Short Paper
• Work in Progress, Poster
• Special Sessions
• Round Table Discussions, Workshops, Tutorials


All contributions were subject to a double-blind review. The review process was
very competitive. We had to review nearly 300 submissions. A team of about 129
reviewers did this terrific job. My special thanks go to all of them.
Due to time and conference schedule restrictions, we could accept only the best 116
submissions for presentation. The conference again had more than 140 participants
from 31 countries on all continents.
REV2018 will be held in Düsseldorf, Germany, and REV2019 in Bangalore,
India.

Michael E. Auer
REV General Chair

Online Engineering & Internet
of Things – Proceedings of the 14th International
Conference on Remote Engineering
and Virtual Instrumentation (REV 2017)

Committees

General Chair

Michael E. Auer

International Advisory Board

Kimberly DeLong MIT, Cambridge MA, USA
David Guralnick President of the International E-Learning
Association (IELA) and Kaleidoscope
Learning, USA
Ingvar Gustavsson Blekinge Institute of Technology, Sweden
Bert Hesselink Stanford University, USA
Zorica Nedic University of South Australia
Neil Albert Salonen President, University of Bridgeport, USA
Cornel Samoila University of Brasov, Romania

Conference Co-chairs

Doru Ursutiu IAOE President, Romania
Abul Azad GOLC President, USA
Tarek Sobh University of Bridgeport, USA

Program Committee Chairs

Danilo Garbi Zutin, Austria
Elif Kongar, USA

ASEE Liaison

Navarun Gupta, USA

IEEE Liaison

Russ Meier, USA

Workshop and Tutorial Chair

Andreas Pester, Austria

Special Session Chair

Igor Titov, Russia

Demonstration and Poster Chair

Teresa Restivo, Portugal

Publication Chair and Web Master

Sebastian Schreiter, France

International Program Committee

Akram Abu-Aisheh Hartford University, USA
Laiali Almazaydeh Al-Hussein Bin Talal University, Jordan
Yacob Astatke Morgan State University, USA
Gustavo Alves ISEP Porto, Portugal
Chris Bach University of Bridgeport, USA

Nael Bakarad Grand Valley State University, USA
David Boehringer University of Stuttgart, Germany
Michael Callaghan University of Ulster, Northern Ireland
Manuel Castro MIT Madrid, Spain
Arthur Edwards University of Colima, Mexico
Torsten Fransson KTH Stockholm, Sweden
Javier Garcia-Zubia University of Deusto, Spain
Denis Gillet EPFL Lausanne, Switzerland
Olaf Graven Buskerud University College, Norway
Ian Grout University of Limerick, Ireland
Christian Guetl Graz University of Technology, Austria
Alexander Kist University of Southern Queensland, Australia
Petros Lameras Serious Games Lab, School of Computing,
Electronics and Mathematics, Coventry
University
Reinhard Langmann CUAS Dusseldorf, Germany
Ananda Maiti University of Southern Queensland, Australia
Zorica Nedic University of South Australia, Australia
Ingmar Riedel-Kruse Stanford University, USA
Hamadou Saliah-Hassane TÉLUQ, Montréal, Canada
Franz Schauer Tomas Bata University, Czech Republic
Juarez Silva University of Santa Catarina, Brazil
James Uhomoibhi University of Ulster, UK
Vladimir Uskov Bradley University, USA
Matthias Christoph Utesch Technical University of Munich, Germany
Igor Verner Technion Haifa, Israel
Katarina Zakova Slovak University of Technology, Slovakia

Contents

Internet of Things
Cloud-Based Industrial Control Services . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Reinhard Langmann and Michael Stiller
Wireless Development Boards to Connect the World . . . . . . . . . . . . . . . . 19
Pedro Plaza, Elio Sancristobal, German Carro, Manuel Castro,
and Elena Ruiz
CHS-GA: An Approach for Cluster Head Selection
Using Genetic Algorithm for WBANs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Roopali Punj and Rakesh Kumar
Proposal IoT Architecture for Macro and Microscale Applied
in Assistive Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Carlos Solon S. Guimarães, Jr., Renato Ventura B. Henriques,
Carlos Eduardo Pereira, and Wagner da Silva Silveira
Using Industrial Internet of Things to Support Energy Efficiency
and Management: Case of PID Controller . . . . . . . . . . . . . . . . . . . . . . . . 44
Tom Wanyama
MODULARITY Applied to SMART HOME . . . . . . . . . . . . . . . . . . . . . . 56
Doru Ursuţiu, Andrei Neagu, Cornel Samoilă, and Vlad Jinga
Development of M.Eng. Programs with a Focus on Industry 4.0
and Smart Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Michael D. Justason, Dan Centea, and Lotfi Belkhir
Remote Acoustic Monitoring System for Noise Sensing . . . . . . . . . . . . . . 77
Unai Hernandez-Jayo, Rosa Ma Alsina-Pagès, Ignacio Angulo,
and Francesc Alías

Testing Security of Embedded Software Through Virtual
Processor Instrumentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Andreas Lauber and Eric Sax

Virtual and Remote Laboratories


LABCONM: A Remote Lab for Metal Forming Area . . . . . . . . . . . . . . . 97
Lucas B. Michels, Luan C. Casagrande, Vilson Gruber, Lirio Schaeffer,
and Roderval Marcelino
A Virtual Proctor with Biometric Authentication for Facilitating
Distance Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Zhou Zhang, El-Sayed Aziz, Sven Esche, and Constantin Chassapis
From a Hands-on Chemistry Lab to a Remote Chemistry Lab:
Challenges and Constrains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
San Cristobal Elio, J.P. Herranz, German Carro, Alfonso Contreras,
Eugenio Muñoz Camacho, Felix Garcia-Loro, and Manuel Castro Gil
Advanced Intrusion Prevention for Geographically Dispersed
Higher Education Cloud Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
C. DeCusatis, P. Liengtiraphan, and A. Sager
Remote Laboratory for Learning Basics of Pneumatic Control . . . . . . . 144
Brajan Bajči, Jovan Šulc, Vule Reljić, Dragan Šešlija, Slobodan Dudić,
and Ivana Milenković
The Augmented Functionality of the Physical Models of Objects
of Study for Remote Laboratories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Mykhailo Poliakov, Karsten Henke, and Heinz-Dietrich Wuttke
More Than “Did You Read the Script?” . . . . . . . . . . . . . . . . . . . . . . . . . 160
Daniel Kruse, Robert Kuska, Sulamith Frerich, Dominik May,
Tobias R. Ortelt, and A. Erman Tekkaya
Collecting Experience Data from Remotely Hosted
Learning Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Félix J. García Clemente, Luis de la Torre, Sebastián Dormido,
Christophe Salzmann, and Denis Gillet
“Remote Wave Laboratory” with Embedded
Simulation – Real Environment for Waves Mastering . . . . . . . . . . . . . . . 182
Franz Schauer, Michal Gerza, Michal Krbecek, and Miroslava Ozvoldova
Remote Laboratories: For Real Time Access to Experiment Setups
with Online Session Booking, Utilizing a Database and Online
Interface with Live Streaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
B. Kalyan Ram, S. Arun Kumar, S. Prathap, B. Mahesh,
and B. Mallikarjuna Sarma

Web Experimentation on Virtual and Remote Laboratories . . . . . . . . . . 205
Daniel Galan, Ruben Heradio, Luis de la Torre, Sebastián Dormido,
and Francisco Esquembre
How to Leverage Reflection in Case of Inquiry Learning?
The Study of Awareness Tools in the Context of Virtual
and Remote Laboratory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Rémi Venant, Philippe Vidal, and Julien Broisin
Role of Wi-Fi Data Loggers in Remote Labs Ecosystem . . . . . . . . . . . . . 235
Venkata Vivek Gowripeddi, B. Kalyan Ram, J. Pavan,
C.R. Yamuna Devi, and B. Sivakumar
Flipping the Remote Lab with Low Cost Rapid
Prototyping Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
J. Chacón, J. Saenz, L. de la Torre, and J. Sánchez
Remote Experimentation with Massively Scalable
Online Laboratories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Lars Thorben Neustock, George K. Herring, and Lambertus Hesselink
Object Detection Resource Usage Within a Remote Real-Time
Video Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Mark Smith, Ananda Maiti, Andrew D. Maxwell, and Alexander A. Kist
Integrating a Wireless Power Transfer System into Online
Laboratory: Example with NCSLab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Zhongcheng Lei, Wenshan Hu, Hong Zhou, and Weilong Zhang
Spreading the VISIR Remote Lab Along Argentina.
The Experience in Patagonia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Unai Hernandez-Jayo, Javier Garcia-zubia, Alejandro Francisco Colombo,
Susana Marchisio, Sonia Beatriz Concari, Federico Lerro,
María Isabel Pozzo, Elsa Dobboletta, and Gustavo R. Alves
Educational Scenarios Using Remote Laboratory VISIR
for Electrical/Electronic Experimentation . . . . . . . . . . . . . . . . . . . . . . . . . 298
Felix Garcia-Loro, Ruben Fernandez, Mario Gomez, Hector Paz,
Fernando Soria, María Isabel Pozzo, Elsa Dobboletta, André Fidalgo,
Gustavo Alves, Elio Sancristobal, Gabriel Diaz, and Manuel Castro

Use and Application of Remote and Virtual Labs in Education


Robot Online Learning Through Digital Twin Experiments:
A Weightlifting Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Igor Verner, Dan Cuperman, Amy Fang, Michael Reitman, Tal Romm,
and Gali Balikin

Interactive Platform for Embedded Software Development Study . . . . . 315
Galyna Tabunshchyk, Dirk Van Merode, Peter Arras, Karsten Henke,
and Vyacheslav Okhmak
Integrated Complex for IoT Technologies Study . . . . . . . . . . . . . . . . . . . 322
Anzhelika Parkhomenko, Artem Tulenkov, Aleksandr Sokolyanskii,
Yaroslav Zalyubovskiy, and Andriy Parkhomenko
Incorporating a Commercial Biology Cloud Lab into
Online Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Ingmar H. Riedel-Kruse
Learning to Program in K12 Using a Remote Controlled Robot:
RoboBlock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Javier García-Zubía, Ignacio Angulo, Gabriel Martínez-Pieper,
Pablo Orduña, Luis Rodríguez-Gil, and Unai Hernandez-Jayo
Spatial Learning of Novice Engineering Students Through Practice
of Interaction with Robot-Manipulators . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Igor Verner and Sergei Gamer
Concurrent Remote Group Experiments in the Cyber Laboratory . . . . . 367
Nobuhiko Koike
The VISIR+ Project – Preliminary Results of the Training Actions . . . . 375
M.C. Viegas, G. Alves, A. Marques, N. Lima, C. Felgueiras, R. Costa,
A. Fidalgo, I. Pozzo, E. Dobboletta, J. Garcia-Zubia, U. Hernandez,
M. Castro, F. Loro, Danilo Garbi Zutin, and C. Kreiter
Laboratory Model of Coupled Electrical Drives for Supervision
and Control via Internet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
Milan Matijević, Željko V. Despotović, Miloš Milanović, Nikola Jović,
and Slobodan Vukosavić
Online Course on Cyberphysical Systems with Remote Access
to Robotic Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Janusz Zalewski and Fernando Gonzalez
Models and Smart Adaptive Interfaces for the Improvement
of the Remote Laboratories User Experience in Education . . . . . . . . . . . 416
Luis Felipe Zapata Rivera and Maria M. Larrondo Petrie
Empowerment of University Education Through Internet
Laboratories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
Abdallah Al-Zoubi
Expert Competence in Remote Diagnostics - Industrial Interests,
Educational Goals, Flipped Classroom & Laboratory Settings . . . . . . . . 438
Lena Claesson, Jenny Lundberg, Johan Zackrisson, Sven Johansson,
and Lars Håkansson

Parallel Use of Remote Labs and Pocket Labs in Engineering
Education. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
Thomas Klinger, Danilo Garbi Zutin, and Christian Madritsch
The Effectiveness of Online-Laboratories for Understanding Physics . . . . 459
David Boehringer and Jan Vanvinkenroye

Remote Control and Measurement Technologies


On the Fully Automation of the Vibrating String Experiment . . . . . . . . 469
Javier Tajuelo, Jacobo Sáenz, Jaime Arturo de la Torre, Luis de la Torre,
Ignacio Zúñiga, and José Sánchez
Identifying Partial Subroutines for Instrument Control Based
on Regular Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
Ananda Maiti, Alexander A. Kist, and Andrew D. Maxwell
Internet of Things Applied to Precision Agriculture . . . . . . . . . . . . . . . . 499
Roderval Marcelino, Luan C. Casagrande, Renan Cunha, Yuri Crotti,
and Vilson Gruber
Computer Vision Application for Environmentally Conscious
Smart Painting Truck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Ahmed ElSayed, Gazi Murat Duman, Ozden Tozanli, and Elif Kongar
Remote Monitoring and Detection of Rail Track Obstructions . . . . . . . . 517
Mohammed Misbah Uddin, Abul K.M. Azad, and Veysel Demir
Improving Communication Between Unmanned Aerial Vehicles
and Ground Control Station Using Antenna Tracking Systems . . . . . . . 532
Sebastian Pop, Marius Cristian Luculescu, Luciana Cristea,
Constantin Sorin Zamfira, and Attila Laszlo Boer
Remote RF Testing Using Software Defined Radio . . . . . . . . . . . . . . . . . 540
Stephen Miller and Brent Horine
Remote Control of Large Manufacturing Plants Using Core
Elements of Industry 4.0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
Hasan Smajic and Niels Wessel

Games Engineering
Dinner Talk: A Language Learning Game Designed
for the Interactive Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Jacqueline Schuldt, Stefan Sachse, and Lilianne Buckens
The Experimento Game: Enhancing a Players’ Learning Experience
by Embedding Moral Dilemmas in Serious Gaming Modules . . . . . . . . . 561
Jacqueline Schuldt, Stefan Sachse, Verena Hetsch, and Kevin John Moss

The Finite State Trading Game: Developing a Serious Game
to Teach the Application of Finite State Machines
in a Stock Trading Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
Matthias Utesch, Andreas Hauer, Robert Heininger, and Helmut Krcmar
A Serious Game for Learning Portuguese Sign
Language - “iLearnPSL” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
Marcus Torres, Vítor Carvalho, and Filomena Soares
The Implementation of MDA Framework in a Game-Based
Learning in Security Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
Jurike V. Moniaga, Maria Seraphina Astriani, Sharon Hambali,
Yangky Wijaya, and Yohanes Chandra
Industrial Virtual Environments and Learning Process . . . . . . . . . . . . . . 609
Jean Grieu, Florence Lecroq, Hadhoum Boukachour, and Thierry Galinho
How Game Design Can Enhance Engineering Higher Education:
Focused IT Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
Olga Dziabenko, Valentyna Yakubiv, and Lyubov Zinyuk
Physioland - A Serious Game for Rehabilitation of Patients
with Neurological Diseases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
Tiago Martins, Vítor Carvalho, and Filomena Soares

Human Computer Interfaces, Usability, Reusability, Accessibility


The Development of ICT Tools for E-inclusion Qualities. . . . . . . . . . . . . 645
Dena Hussain
Insights Gained from Tracking Users’ Movements Through
a Cyberlearning System’s Mediation Interface . . . . . . . . . . . . . . . . . . . . . 652
Daniel Stuart Brogan, Debarati Basu, and Vinod K. Lohani
Practical Use of Virtual Assistants and Voice User Interfaces
in Engineering Laboratories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
Michael James Callaghan, Victor Bogdan Putinelu, Jeremy Ball,
Jorge Caballero Salillas, Thibault Vannier, Augusto Gomez Eguíluz,
and Niall McShane
Approaching Emerging Technologies: Exploring Significant
Human-Computer Interaction in the Budget-Limited Classroom . . . . . . 672
James Wolfer
Touching Is Believing - Adding Real Objects to Virtual Reality . . . . . . . 681
Paulo Menezes, Nuno Gouveia, and Bruno Patrão
The Importance of Eye-Tracking Analysis in Immersive
Learning - A Low Cost Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
Paulo Menezes, José Francisco, and Bruno Patrão

Simulation
Augmented Reality-Based Interactive Simulation Application
in Double-Slit Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
Tao Wang, Han Zhang, Xiaoru Xue, and Su Cai
Developing Metacognitive Skills for Training
on Information Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
Jesus Cano, Roberto Hernandez, Rafael Pastor, Salvador Ros,
Llanos Tobarra, and Antonio Robles-Gomez
Optimization of the Power Flow in a Smart Home . . . . . . . . . . . . . . . . . 721
Linfeng Zhang and Xingguo Xiong
A Virtualized Computer Network for Salahaddin University
New Campus of HTTP Services Using OPNET Simulator . . . . . . . . . . . 731
Tarik A. Rashid and Ammar O. Barznji

Online Engineering
GIFT - An Integrated Development and Training System
for Finite State Machine Based Approaches . . . . . . . . . . . . . . . . . . . . . . . 743
Karsten Henke, Tobias Fäth, René Hutschenreuter,
and Heinz-Dietrich Wuttke
A Web-Based Tool for Biomedical Signal Management . . . . . . . . . . . . . . 758
S.D. Cano-Ortiz, R. Langmann, Y. Martinez-Cañete, L. Lombardia-Legra,
F. Herrero-Betancourt, and H. Jacques
Optimization of Practical Work for Programming Courses
in the Context of Distance Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
Amadou Dahirou Gueye, Pape Mamadou Djidiack Faye,
and Claude Lishou
Enabling the Automatic Generation of User Interfaces
for Remote Laboratories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
Wissam Halimi, Christophe Salzmann, Hagop Jamkojian,
and Denis Gillet
A Practical Approach to Teaching Industry 4.0 Technologies . . . . . . . . . 794
Tom Wanyama, Ishwar Singh, and Dan Centea
Design of WEB Laboratory for Programming and Use
of an FPGA Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
Nikola Jović and Milan Matijević
Remote Triggered Software Defined Radio Using GNU Radio . . . . . . . . 822
Jasveer Singh T. Jethra, Pavneet Singh, and Kunal Bidkar

Open Educational Resources


MOOC in a School Environment: ODL Project . . . . . . . . . . . . . . . . . . . . 833
Olga Dziabenko and Eleftheria Tsourlidaki
Survey and Analysis of the Application of Massive Open Online
Courses (MOOCs) in the Engineering Education in China . . . . . . . . . . . 840
Yu Long, Man Zhang, and Weifeng Qiao
Conversion of a Software Engineering Technology Program
to an Online Format: A Work in Progress and Lessons Learned . . . . . . 851
Jeff Fortuna, Michael D. Justason, and Ishwar Singh
Increasing the Value of Remote Laboratory Federations
Through an Open Sharing Platform: LabsLand . . . . . . . . . . . . . . . . . . . 859
Pablo Orduña, Luis Rodriguez-Gil, Javier Garcia-Zubia, Ignacio Angulo,
Unai Hernandez, and Esteban Azcuenaga
Standardization Layers for Remote Laboratories as Services
and Open Educational Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874
Wissam Halimi, Christophe Salzmann, Denis Gillet,
and Hamadou Saliah-Hassane

Present and Future Trends Including Social and Educational Aspects


Innovative Didactic Laboratories and School Dropouts:
A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887
Carole Salis, Marie Florence Wilson, Fabrizio Murgia,
and Stefano Leone Monni
Intellectual Flexible Platform for Smart Beacons . . . . . . . . . . . . . . . . . . . 895
Galyna Tabunshchyk and Dirk Van Merode
An Approach for Implementation of Artificial Intelligence
in Automatic Network Management and Analysis . . . . . . . . . . . . . . . . . . 901
Avishek Datta, Aashi Rastogi, Oindrila Ray Barman, Reynold D’Mello,
and Omar Abuzaghleh
Investigation of Music and Colours Influences on the Levels
of Emotion and Concentration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
Doru Ursuţiu, Cornel Samoilă, Stela Drăgulin, and Fulvia Anca Constantin
Framework for the Development of a Cyber-Physical Systems
Learning Centre . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
Dan Centea, Ishwar Singh, and Mo Elbestawi

Applications and Experiences


The Use of eLearning in Medical Education and Healthcare
Practice – A Review Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
Blanka Klimova
Efficiency and Prospects of Webinars as a Method of Interactive
Communication in the Humanities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
Natalya Nikolaevna Petrova, Lyudmila Pavlovna Sidorenko,
Svetlana Germanovna Absalyamova, and Rustem Lukmanovich Sakhapov
Port Logistics: Improvement of Import Process Using RFID . . . . . . . . . 949
Ignacio Angulo, Unai Hernandez-Jayo, and Javier García-Zubia
Integration of an LMS, an IR and a Remote Lab . . . . . . . . . . . . . . . . . . 957
Ana Maria Beltran Pavani, William de Souza Barbosa, Felipe Calliari,
Daniel B. de C Pereira, Vanessa A. Palomo Lima,
and Giselen Pestana Cardoso
Artificial Intelligence and Collaborative Robot to Improve
Airport Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
Frédéric Donadio, Jérémy Frejaville, Stanislas Larnier,
and Stéphane Vetault
Methodological Proposal for Use of Virtual Reality VR
and Augmented Reality AR in the Formation of Professional Skills
in Industrial Maintenance and Industrial Safety . . . . . . . . . . . . . . . . . . . 987
Jose Divitt Velosa, Luis Cobo, Fernando Castillo, and Camilo Castillo
Sketching 3D Immersed Experiences Rapidly by Hand
Through 2D Cross Sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
Frode Eika Sandnes
Analyzing Modular Robotic Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1014
Reem Alattas
An Educational Physics Laboratory in Mobile Versus Room
Scale Virtual Reality - A Comparative Study . . . . . . . . . . . . . . . . . . . . . . 1029
Johanna Pirker, Isabel Lesjak, Mathias Parger, and Christian Gütl
Human Interaction Lab: All-Encompassing Computing Applied
to Emotions in Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044
Hector Fernand Gomez Alvarado, Judith Nunez-R, Luis Alberto Soria,
Roberto Jacome-G, Elena Malo-M, and Claudia Cartuche
Distance Learning System Application for Maritime Specialists
Preparing and Corresponding Challenges Analyzing . . . . . . . . . . . . . . . . 1050
Vladlen Shapo
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059

Internet of Things

Cloud-Based Industrial Control Services
The Next Generation PLC

Reinhard Langmann¹(✉) and Michael Stiller²

¹ Hochschule Duesseldorf University of Applied Sciences, Duesseldorf, Germany
langmann@ccad.eu
² Fraunhofer Institute for Embedded Systems and Communication Technologies ESK, Munich, Germany
michael.stiller@esk.fraunhofer.de

Abstract. The paper presents the concept and implementation for Cloud-based
Industrial Control Services (CICS) as a next generation PLC. As a distributed
service-oriented control system in the cloud, a CICS controller can replace the
traditional PLC for applications with uncritical timing in terms of Industry 4.0.
The CICS services are programmed to industry standards, pursuant to standard
IEC 61131-3, and executed in a CICS runtime in the cloud. This paper gives an
overview of the concept and implementation, discusses the results of application
examples, and evaluates the operability of a CICS controller.

Keywords: Control service · Cloud-based controller · Web-based control system · Automation system

1 Introduction

Industrial controls and, in particular, PLC controllers currently form an important techno‐
logical basis for the automation of industrial processes. Even in the age of Industry 4.0
(I40) and Industrial Internet, it can be assumed that these controllers will continue to be
required to a considerable extent for the production of tomorrow. However, the controllers
must fulfil a range of additional requirements resulting from the new production conditions.
When applying Industry 4.0 principles [1], the result is high-quality, networked production
systems based on Cyber Physical Systems (CPS), also referred to as Cyber Physical
Production Systems (CPPS). A series of I40 requirements are placed on the future controllers used
ments or can only do so on a rudimentary basis or with extremely high expense.
Basic requirements of future, I40-capable PLC controllers involve efficient networking
in an, at least partially, global network and the ability to provide control functions as
control services in this network. Here, the IP network (IP – Internet Protocol) serves as
a global network, in the form of an intranet or the Internet, with all associated standardised
Information and Communication (IC) technologies. Only in this way can the required
integration into a future I40 production landscape be achieved.

The paper describes the concept and a prototype implementation for a new type of
PLC controller in which the controller functions (programs) are implemented as control
services in a cloud. The programming of this new PLC occurs, as is usual in industry,
pursuant to the standard IEC 61131-3. The R&D results described in the paper have been
obtained since 2014 as part of the R&D project "Potential, structure and interfaces of
cloud-based industrial control services (CICS)".

2 State-of-the-Art

As a result of their historical development, PLC controllers have been developed as
proprietary device systems that are operated locally under real-time conditions.
If a networking of these controllers is necessary from a user viewpoint, proprietary TCP/
IP protocols or those standardised in the automation sector (Modbus TCP, Profinet etc.)
are used for this. The standard technologies widespread on the Internet and the Web have
so far hardly played any role for PLC controllers.
For a number of years, however, a transformation has been under way, with PLC
manufacturers increasingly integrating IC technologies from the web world in their
systems, such as web server and HTML pages for diagnosis and configuration, in order
to adapt the controllers incrementally to the new requirements.
Four different approaches to making PLC controllers I40-compatible can essentially
be identified from state-of-the-art technologies. Generally, all work assumes that the
creation of the control programs must take place in accordance with the standard
IEC 61131-3, i.e. controllers from the cloud are currently only accepted in the industry
if the engineering also follows the industry standard.

2.1 Introduction of Basic Web Technologies

Most of the newer PLC controllers already contain a web server as well as special HTML
pages built into the device, enabling a browser-based configuration and diagnosis of the
controller. Process data or program variables form the control program and can also be
read, and sometimes also written, with restrictions. Access via a web browser occurs
through the HTTP protocol, which is query-based and therefore relatively slow. Exam‐
ples of this can be found in [2]. The solutions are proprietary and adapted to the relevant
controller.

2.2 Global Networking of Process Data

For integration of the PLC controllers in supervisor, management and coordination
systems (e.g. SCADA or MES systems), which are partly based on web technologies,
additional modules are integrated in the PLC controllers, which enable a bidirectional
and event-based process data transmission between the controller and supervisor &
management system. This includes solutions such as, for example, the use of Java applets
on websites for access to Siemens controllers [3], the web connector with MQTT broker

in Bosch-Rexroth controllers [4], or browser-based access to controllers that already
contain an OPC UA server [5].
These solutions also involve proprietary and closed control-integrated modules.
Although the modules utilise web technologies, they cannot be transferred to other
controllers. The I40 requirements can only partly be fulfilled, sometimes with high
adaptation expense for integration into a CPPS.

2.3 Introduction of Service Principles


Based on the I40 requirement for the service capability of an I40 controller, some
projects [6, 7] are involved with the integration of service functions in PLC controllers.
Thus, the Devices Profile for Web Services (DPWS) enables, as a standardised protocol,
service-based access to PLC controllers [8], among others, also for reading/writing process data.
The internal functional system of a PLC should be equipped correspondingly for this
purpose with the support of the controller manufacturer.
An option for implementing DPWS independently of the manufacturer of
a PLC is shown in [9]. A service server is implemented here as a functional module
based on the standard IEC 61131-3 programming language. This can then be used for
the control programming.
However, the DPWS solutions have one basic disadvantage: instead of reducing or
removing the information encapsulation (an I40 requirement), additional functionalities
(service functions) are encapsulated in the controller. Moreover, DPWS uses the rather
heavyweight and complex Microsoft web service protocols. The attainable transmission
times of process data via a global network therefore tend to be in the upper range.

2.4 Virtualisation of PLCs

Current R&D work deals with the virtualisation of complete PLC controllers and their
outsourcing into a cloud. A scalable control platform for cyber-physical systems in
industrial productions is researched and realised in [10]. Such a control platform is
intended to provide scalable computing power that is automatically made available
depending on the complexity of the algorithms. The strict requirements of production
technology, such as real-time capability, availability and security should be met.
In [11], a cloud-based controller is presented, which also uses a virtual control system
in an IaaS cloud. The work of [12] also uses virtualised PLC controls in the cloud and
connects these to OPC UA-based automation devices using web technologies. Problems
with the virtualisation of PLC result especially from the fact that already available
manufacturer-specific PLCs are virtualised. These controllers, however, are closed
systems, which were originally not developed considering the aspect of web technolo‐
gies. Adjustments, modifications or extensions of these controllers by third parties are
hardly possible. Functionality cannot be resolved as services. The flexibility of virtual‐
isation is very limited.
In summary, it can be stated that there are various solutions and efforts to equip
PLC controllers with additional functions in order to be able to use the controllers in an
Industry 4.0-type IP network. To this end, the known work already uses web technologies
in part, in a manufacturer-specific and/or limited way, and increasingly also uses the service
principle as well as cloud structures as a new paradigm for the realisation of control functions.
However, several deficits remain, which accordingly require additional research.

3 Concept

3.1 Control Classification

To assess the I40 capabilities of a PLC controller, a classification is introduced which
divides an industrial controller according to its two abilities, Service Ability (SA) and
Control Locality (CL). The two properties are combined into a class notation
C (Controller) = &lt;SA&gt;&lt;CL&gt;. With the proposed methodology, I40 control classes can
be defined and structural configurations for Cloud-based Industrial Control Services
(CICS) can be indicated (Table 1).

Table 1. I40 capability of a controller

Class 0 – Service ability: no services. Control locality: all control programs are encapsulated locally in the physical (hardware) system.
Class 1 – Service ability: services only for non-critical and overarching functionalities. Control locality: some control programs that include non-critical and overarching functionalities are not located on the local hardware but are instead distributed to other systems (for example, in the network).
Class 2 – Service ability: services for most functions available. Control locality: most control programs are distributed in the network; control programs which are critical in terms of time and safety remain in the local hardware.
Class 3 – Service ability: all control programs as services. Control locality: all control programs are distributed in the network; third instances can access all the control algorithms in real time and change them.
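
Purely as an illustration of the &lt;SA&gt;&lt;CL&gt; notation, a minimal sketch is given below; the helper function is hypothetical and not part of the CICS project code.

```javascript
// Hypothetical helper (not part of the CICS project): encode the I40 capability
// of a controller as C = <SA><CL>, with SA and CL taking the values 0..3 of Table 1.
function controllerClass(serviceAbility, controlLocality) {
  const valid = [0, 1, 2, 3];
  if (!valid.includes(serviceAbility) || !valid.includes(controlLocality)) {
    throw new RangeError('SA and CL must be integers between 0 and 3');
  }
  return `C${serviceAbility}${controlLocality}`;
}

// Example: class C11, i.e. a traditional distributed control system (see Sect. 3.1).
console.log(controllerClass(1, 1)); // "C11"
```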

Looking at a PLC as a CPS component, the traditional IEC 61131 control program
(CP) can be divided into three parts:
• basic functional program part (CP basic - CPb),
• a program part which performs superior, administrative and/or user interface func‐
tions (CP supervisory - CPs),
• critical part of the program regarding real-time and security (CP critical - CPc).
In order to evaluate the I40 capabilities of a CICS controller, this 3-part structuring
of the control program is used, among other things. Figure 1 shows the structure of such
a PLC.

Fig. 1. Structure of a PLC as a CPS component

If the control system, as shown in Fig. 1, is used as the basis and modified as a result
of the increasing displacement of the control programs into a cloud as services, it leads
to the evolution of a PLC as CPS component “industrial control”, as shown in Fig. 2.

Fig. 2. Evolution of an industrial controller (PLC) as a CPS component

Three types of CPS components (Fig. 2) are produced according to the aforemen‐
tioned disassembly of the PLC program into three parts:
(a) The controller only implements the program components CPb and CPc. The tradi‐
tional runtime environment of a PLC is still required.
(b) For safety reasons, only the CPc program parts are implemented in the CPS compo‐
nent. The classic and manufacturer-specific PLC runtime machine is no longer
required. The implementation of the CPc could also be carried out with specific
embedded program parts (e.g. in C).
(c) The CPS component no longer contains a control part, but only sensors and actua‐
tors. All control programs are distributed in the network.
Service ability considers the ability of a control to utilise control functionalities (control
program parts) as services in the sense of cloud computing. According to Table 1, the
program parts CPb, CPs and CPc can be distributed unequally. In class C11, for example,
the uncritical and overlapping functionalities (CPs) are not located on the local control
platform, but distributed on other systems in the network (corresponds to a traditional,
distributed control system). However, part of the CPs could also be used as a service from
a cloud.

3.2 CICS Basis Model


A CICS base model must take into account both the control engineering aspects and
the web technology features.
From a control engineering point of view, a CICS controller based on a traditional PLC
consists of the following components (Fig. 3):
• CICS program (CICS-P): IEC 61131-3 control program in the PLCopen XML notation.
It includes only the program and the variables, but no controller configuration.
• CICS runtime (CICS-RT): Execution environment for the CICS program. It can be
cycle controlled or event-based.
• CICS router (CICS-R): Device configuration for a CICS controller, i.e. it determines
which CPS components (which automation devices) are connected to the controller.

Fig. 3. General structure of a CICS controller

Because CICS-P, CICS-RT and CICS-R are separated in a CICS controller and the
individual components can in principle be distributed arbitrarily in an IP network
utilising cloud technologies, both a change in the control program code (control algorithm)
and a change in the device configuration, for example the replacement of modules and
Plug & Work, are possible in real time. CICS-R and CICS-P can be exchanged on the
fly during a program cycle.
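
A minimal sketch of how this separation could be reflected at the object level is given below; the class and method names are illustrative assumptions and do not reproduce the actual CICS interfaces.

```javascript
// Hypothetical sketch: a CICS controller holding the program (CICS-P) and the
// router (CICS-R) as exchangeable references; the runtime (CICS-RT) only executes
// whatever program and routing configuration it is currently given.
class CicsController {
  constructor(program, router) {
    this.program = program; // CICS-P: IEC 61131-3 program in PLCopen XML notation
    this.router = router;   // CICS-R: device configuration / routing rules
  }

  // Exchange program or router between two program cycles ("on the fly").
  setProgram(newProgram) { this.program = newProgram; }
  setRouter(newRouter) { this.router = newRouter; }

  // One program cycle of the runtime (CICS-RT).
  cycle() {
    const inputs = this.router.readInputs();      // process image of the inputs
    const outputs = this.program.execute(inputs); // run the control logic
    this.router.writeOutputs(outputs);            // process image of the outputs
  }
}
```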
For the identification of a viable CICS basic model, it is also necessary to show
possible solution variants for CICS, starting with the basic principles on the web, and
then to reflect these in the available web technologies.
In principle, two types of network computers are available in the Internet (Web) as
a world-wide computer network:
• Server computers that can provide and run IT entities (objects, services, programs).
• Client computers that can only run IT entities.
The client-server principle applies as the working principle on this server/client
computer network, i.e. a client must first submit a request to run an IT entity on the server.
This means that application-level IT entities in the server cannot act on their own
(self-acting). The client is usually a web browser on the client computer.

If one considers any application-specific functional system implemented with web
technology tools, the following models for executing (RUN) this system result, as
shown in Fig. 4:
(a) The functional system is only stored on the server and is only executed there. The
execution of the system in the server is started by the client (server mode).
(b) The functional system is stored on the server and is loaded into the client via a
request. The system is only executed in the client (client mode).
(c) The functional system is stored on the server and components of the functional
system are also executed on the server. Other components are loaded into the client
via a request and these components are also executed there. The execution of the
system components in the server is started by the client (mixed mode).

Fig. 4. Models for executing a web technology based functional system (block with black
font = only saved, block with white font = is being executed)

In all three cases, the functional system can be distributed over several servers (cloud)
or even several clients. If the control-technology-based CICS structure shown in Fig. 3
is mapped onto the web-based functional systems seen in Fig. 4, two server-based and
two client-based CICS base models are obtained:
1. Server Mode (SM): The CICS runtime is statically linked to a fixed CPS component
in a configuration process. After the CICS control has been started via the client, the
CICS controller automatically connects to the associated CPS component via the IP
network and executes the control program.
2. Server-based Mixed Mode (SMM): Before starting the CICS runtime, a CICS router
is loaded from the server to the client. After the CICS runtime is started, this router
dynamically connects the CPS component with the CICS runtime on the server. The
process data from the automation device are now routed to the server via the client.
3. Client Mode (CM): The CICS runtime and the CICS router are executed as instances on
the client (Web browser). The client is an inherent part of the CICS control system
and is necessarily required for executing the control program. The server is no longer
required at runtime.
4. Client-based Mixed Mode (CMM): The control program runs in the CICS runtime
on the client, but the communication to the CPS component runs over a dynamically
reconfigurable CICS router in the server.
Figures 5 and 6 illustrate the four basic models of a CICS control.

Fig. 5. Component structure and communication paths for server-based CICS solutions (1) –
Server Mode (SM); (2) – Server-based Mixed Mode (SMM)

Fig. 6. Component structure and communication paths for client-based CICS solutions (1) –
Client-based Mixed Mode (CMM); (2) – Client Mode (CM)

3.3 Control Services


The controller features of a CICS controller should no longer be available as classic
control functions, but rather as control services according to the service paradigm. A
CICS (literally: Cloud-based Industrial Control Service) can thereby use all the features
of cloud computing and, thus, create new business models, such as the rental of control
services.
In terms of information technology, CICS has to be produced, parametrised, distrib‐
uted, stored and recalled as objects by means of methods. Since the components of a
CICS control are no longer available as hardware, but only as software objects in the IT
or Web world and generally also are stored there in databases, it makes sense to use data
models for the modelling of the CICS service architecture. Figure 7 shows the CICS
structure, seen in Fig. 3, as an Entity Relationship Diagram (ERD).

Fig. 7. CICS services of a CICS controller according to Fig. 3, presented as an ERD

According to Fig. 7, a CICS controller is realised with two services: a Runtime service
and a Router service. Both CICS services are built according to the principle of
web-oriented automation services [13].

CICS-RT: Runtime Service


As the most important component, the CICS-RT has a defined sequence behaviour for
executing a control program, corresponding to the state machine of a traditional PLC.
A CICS-RT can be operated in cyclic mode and event-based mode. In cycle mode,
the I/O image is updated, equivalent to a traditional PLC. In event-based operation, the
control program is executed only when the value of an input variable changes or an
internal event occurs (for example, the execution of a timer).
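
The two operating modes could be outlined roughly as in the following sketch; the object `runtime` with its methods and events is a placeholder assumption, not the actual CICS-RT implementation.

```javascript
// Hypothetical sketch of the two CICS-RT operating modes; "runtime" is a placeholder
// object with assumed methods and events.

// Cyclic mode: refresh the I/O image and execute the program at a fixed cycle time,
// equivalent to the scan cycle of a traditional PLC.
function runCyclic(runtime, cycleTimeMs) {
  setInterval(() => {
    const image = runtime.readProcessImage(); // read the input image
    runtime.executeProgram(image);            // execute the IEC 61131-3 program once
    runtime.writeProcessImage(image);         // write back the output image
  }, cycleTimeMs);
}

// Event-based mode: execute the program only when an input variable changes
// or an internal event (e.g. a timer) occurs.
function runEventBased(runtime) {
  runtime.on('inputChanged', () => runtime.executeProgram(runtime.readProcessImage()));
  runtime.on('timerEvent', () => runtime.executeProgram(runtime.readProcessImage()));
}
```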

CICS-R: Router Service


The I/O configuration service of a CICS control system (CICS-R) is separated from the
CICS runtime for the following reasons:
• Securing a dynamic reconfiguration, i.e. in the case of an identical control program,
the I/O configuration can be changed within a program cycle.
• Identical machines/systems can be operated with the same control program despite
different I/O modules.
• A distributed separate configuration service forms the basis for a future automatic
device configuration based on the IIoT (Industrial Internet of Things).
CICS routing works according to the following two principles:
• A CICS program (PLCopen XML program) works with absolute I/O addresses.
• The CICS router connects the absolute I/O addresses to the real I/O addresses of the
devices (CPS components).
Figure 8 illustrates the functionality of a CICS router.

Fig. 8. CICS router

The digital and analogue inputs and outputs of a device, connected via the channel
interface, are routed to absolute I/O program addresses and transferred to the CICS-RT
via a CICS block channel. The routing rules (interconnection matrix) are defined via a
CICS-R XML file. The I/O process data is transmitted via the bidirectional CICS block
channel as a string between the CICS router and the CICS runtime.
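
To make the routing idea concrete, a strongly simplified sketch of an interconnection matrix and its application is shown below; the addresses, device names and field names are assumptions for illustration and do not reproduce the real CICS-R file format.

```javascript
// Hypothetical sketch: routing rules that connect absolute program I/O addresses
// (as used by the PLCopen XML program) to the real I/O channels of the devices.
const routingRules = [
  { programAddress: '%IX0.0', device: 'station1', channel: 'DI3' }, // digital input
  { programAddress: '%QX0.1', device: 'station1', channel: 'DO0' }, // digital output
];

// Translate a raw device value into the absolute address expected by the CICS runtime.
function routeInput(device, channel, value) {
  const rule = routingRules.find((r) => r.device === device && r.channel === channel);
  return rule ? { address: rule.programAddress, value } : null; // null: channel unused
}

// Example: a change on channel DI3 of station1 appears to the program as %IX0.0.
console.log(routeInput('station1', 'DI3', 1)); // { address: '%IX0.0', value: 1 }
```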

4 Implementation

Within the context of the CICS project, there were two prototype implementations for
a CICS controller:
• CICS controller in the Client Mode (CM – Fig. 6-2)
• CICS controller in the Server-based Mixed Mode (SMM – Fig. 5-2)

4.1 CICS Control in the Client Mode


In the case of the CM solution, the CICS controller is implemented entirely as an instance
on the client (web browser). Process data communication with the automation devices
takes place only between the client and the devices. The CICS controller can therefore
also be operated in a separate, local (private) network.
From the perspective of technical implementation, the CICS-CM controller is a very
compact solution. In this case, CICS-RT and CICS-R can be implemented as a common
service and run in the client (web browser). However, the operation of the CICS runtime
depends on the type and performance of the client computer, whose limits have to be
taken into account. Figure 9 illustrates a CICS controller in the client mode.

Fig. 9. CICS control in the Client Mode (CM)

The CICS controller is implemented as a JavaScript object, which is loaded onto the
client from a cloud (CICS Cloud). Communication between the CICS controller instance
and the automation devices takes place via a universal gateway as a web connector. The
gateway is implemented in an embedded system (pure.box of Wiesemann & Theis) as
a Device Gateway for Modbus TCP and for a proprietary TCP protocol. WebSocket is
used as a web protocol [14].
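
In the browser, the connection of a CICS controller instance to such a gateway could look roughly like the following sketch; the endpoint URL and the message format are invented for illustration and do not document the actual gateway protocol.

```javascript
// Hypothetical sketch: a CICS-CM instance exchanging process data with a device
// gateway over WebSocket. Endpoint and message format are assumptions.
const cicsRuntime = { updateInput: (channel, value) => console.log(channel, value) }; // placeholder

const socket = new WebSocket('ws://gateway.local:8080/process-data'); // assumed endpoint

socket.addEventListener('open', () => {
  // Subscribe to the I/O channels of the connected automation device (assumed message).
  socket.send(JSON.stringify({ cmd: 'subscribe', channels: ['DI0', 'DI1', 'DO0'] }));
});

socket.addEventListener('message', (event) => {
  // Incoming process data update the input image of the CICS runtime instance.
  const update = JSON.parse(event.data);
  cicsRuntime.updateInput(update.channel, update.value);
});
```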
The control programs for the CM prototype were created with the industry-standard
programming system PC WORX (Phoenix Contact) in the language IL and exported as
PLCopen XML programs for execution in the CICS runtime. The execution is performed
via a JavaScript-based IL interpreter.
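
The principle of such an interpreter can be sketched as follows; this is a strongly simplified, hypothetical fragment that handles only a few Boolean IL operators, whereas the real interpreter covers the IEC 61131-3 instruction set.

```javascript
// Hypothetical sketch of a minimal IL (Instruction List) interpreter in JavaScript.
// Only LD, AND, OR and ST on Boolean variables are supported for illustration.
function runIlProgram(instructions, vars) {
  let acc = false; // current result (accumulator) of the IL program
  for (const { op, operand } of instructions) {
    switch (op) {
      case 'LD':  acc = Boolean(vars[operand]); break;        // load operand
      case 'AND': acc = acc && Boolean(vars[operand]); break; // logical AND
      case 'OR':  acc = acc || Boolean(vars[operand]); break; // logical OR
      case 'ST':  vars[operand] = acc; break;                 // store the result
      default: throw new Error('Unsupported IL operator: ' + op);
    }
  }
  return vars;
}

// Example corresponding to the IL fragment: LD IN1 / AND IN2 / ST OUT
runIlProgram(
  [{ op: 'LD', operand: 'IN1' }, { op: 'AND', operand: 'IN2' }, { op: 'ST', operand: 'OUT' }],
  { IN1: true, IN2: false, OUT: false }
);
```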

4.2 CICS Control in the Server-Based Mixed Mode

In the case of the SMM solution, the CICS runtime is executed in the server (cloud) as
an instance and the CICS router in the client (browser) as an instance (Fig. 10).

Fig. 10. CICS control in the Server-based Mixed Mode (SMM)

Here, as well, a direct process data communication takes place only between the
client and devices. Between CICS-R and CICS-RT, there is a special bidirectional CICS
block channel for the transmission of I/O images. The process data are transmitted over
this channel as strings via WebSocket. The CICS runtime is operated via an HMI proxy
on the client. In terms of technical implementation, the CICS-SMM controller is a more
elaborate, distributed solution. However, the process data connection to the devices can
still be made locally, and the CICS-RT can exploit the full performance of the server.
Dynamic reconfiguration is easily possible.
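
The block-channel transfer between the CICS-R instance in the client and the CICS-RT instance in the cloud could be sketched as follows; the string layout and the endpoint are illustrative assumptions.

```javascript
// Hypothetical sketch: the CICS-R instance in the client sends the input image to the
// CICS-RT in the cloud as a string and writes the returned output image to the devices.
const cicsRouter = { writeOutput: (addr, value) => console.log(addr, value) }; // placeholder

const blockChannel = new WebSocket('wss://cics-cloud.example/block-channel'); // assumed URL

function sendInputImage(inputImage) {
  // Serialise the image, e.g. "%IX0.0=1;%IX0.1=0" (assumed string layout).
  const block = Object.entries(inputImage).map(([addr, v]) => `${addr}=${v}`).join(';');
  blockChannel.send(block);
}

blockChannel.addEventListener('message', (event) => {
  // The returned output image is written back to the devices via the router.
  event.data.split(';').forEach((pair) => {
    const [addr, value] = pair.split('=');
    cicsRouter.writeOutput(addr, Number(value));
  });
});
```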

5 Application

The CICS components (CICS-RT and CICS-R) are constructed as services according
to the WOAS principle [15] and are instantiated as JavaScript objects using uniform and
consistent methods. They can therefore be used in an available IoT platform or as stand-
alone web pages. The freely available IIoT platform WOAS was used for the following
application examples (http://woas.ccad.eu). The applications therefore did not have to
be programmed, but instead were configured in the IIoT platform in a browser-based
EDIT mode with little effort.
Application examples for a CICS-CM and for a CICS-SMM are described hereafter.

5.1 Application of a CICS-CM Controller

The CICS-CM was successfully tested at a processing and testing station and presented
at the SPS/IPC/Drives automation fair in Nuremberg (Germany) in 2015 (Fig. 11).

Fig. 11. Application example of a CICS controller in client mode (CM); (a) processing and test
station; (b) I/O modules, IoT gateway and switch; (c) HMI in the web browser for operating/
visualisation of the CICS-CM


The PLC program works like a cyclical state machine that waits for the presence of
a piece in position 1 of the table and then performs some actions in other positions; it
returns to position 1 and repeats the process. The dedicated PLC program has around
250 lines and uses about 60 variables. Some of the advantages of the CM solution are:
• The server is no longer required for the runtime. The server/cloud only serves the
purpose of storing the CICS services, including the control programs. Faults or fail‐
ures of the server have no effect on the CICS control system at runtime.
• The process data communication between the devices and the CICS controller can
be limited to the local network when the client is also located on this network.
• Depending on the performance class of the client, several CICS control instances can
run on a client and thus control different CPS components (devices).
• The quality and reliability of communication between the CICS controller and
connected devices can be monitored by the client.
• Generally, any client (PC, tablet, smartphone) can function as a CICS controller.

5.2 Application Examples of a CICS-SMM Controller


The CICS-SMM controller has been extensively tested in various application examples.
In one application example, a CICS-SMM controller controls a working cell consisting
of two processing and test stations (Fig. 11a) and a loading robot. Figure 12 shows the
technological structure for this example.

Fig. 12. Technological structure of a CICS-SMM application example

Two CICS controller instances control the two processing and test stations. Another
CICS instance is responsible for coordinating the robot with the two stations. Same as
in the test example for the CICS-CM, the connection of the respective CICS router
instances to the devices of the two stations takes place via a universal gateway as a web
connector. The Modbus TCP interface of the robot is connected to the Internet via


WebSocket using a device gateway, realised by means of Node-RED [16]. The


application was successfully presented at SPS/IPC/Drives 2016.

6 Evaluation

6.1 Realtime Characteristics

A CICS control system uses IP networks for data transmission, regardless of the solution
variant. From the perspective of an automation engineer, these networks are a priori
neither reliable nor deterministic and are outside the control of the respective automation
solution. Extensive time measurements for different communication structures were
therefore performed for both CICS prototypes.
A practice-oriented method was chosen for the time measurements, which allows
direct statements about the reaction time of a CICS controller [17]. With respect to the
real-time capability, the following general statements can thus be made:
• With a CICS CM solution, response times of about 80…120 ms with a 95% probability
can be achieved over a standard Internet connection.
• With a CICS SMM solution, the reaction times are about 100 ms, likewise with a
probability of 95%.
If the CICS controller is operated only in the intranet, response times of under 40 ms
can certainly be achieved. Altogether, it can be stated that technical processes with
process times of >150 ms (simple assembly processes, temperature and mixing
processes, climate and energy processes, etc.) can already be performed successfully
from the cloud by means of a CICS.
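
As an illustration of how such a "95% probability" response-time figure can be obtained, the sketch below (Python, websockets) collects round-trip times for a simple echo exchange and reports the 95th percentile. It is not the measurement method of [17]; the endpoint URL and the number of samples are arbitrary assumptions.

import asyncio
import statistics
import time
import websockets

async def measure(url: str, samples: int = 500) -> list:
    # Collect round-trip times (in ms) for a simple request/response over WebSocket.
    rtts = []
    async with websockets.connect(url) as ws:
        for _ in range(samples):
            t0 = time.perf_counter()
            await ws.send("ping")
            await ws.recv()
            rtts.append((time.perf_counter() - t0) * 1000.0)
    return rtts

def percentile(values, p):
    # Nearest-rank percentile over the sorted samples.
    ordered = sorted(values)
    return ordered[int(round(p / 100.0 * (len(ordered) - 1)))]

if __name__ == "__main__":
    rtts = asyncio.run(measure("ws://example-gateway:8000/echo"))  # placeholder endpoint
    print(f"median = {statistics.median(rtts):.1f} ms, 95th percentile = {percentile(rtts, 95):.1f} ms")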

6.2 Operability

The operability of a CICS control system is understood to mean the characteristics which
are important for a conventional PLC: reliability, data security and machine safety. In
this regard, new challenges arise that need to be studied and solved for the practical use
of CICS control systems.
To maintain reliability, the two realised prototypes were investigated in more detail
by way of example. Here, monitoring the network quality, integrating a ping-pong
channel and providing a local mode of operation for the process data transfer to the
device played a role.
Although reliability problems can be recognised with the methods described, they
cannot be eliminated. However, the CICS controller can at least be brought into a safe
state in problem situations.
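
A possible reading of the ping-pong mechanism is sketched below: a watchdog that pings the device gateway and, when the round trip exceeds a timeout, writes an all-off output image so that the controlled devices end up in a safe state. The safe output image, the thresholds and the gateway address are assumptions, not values taken from the prototypes.

import asyncio
import websockets

SAFE_OUTPUTS = "Q0=0;Q1=0"   # assumed all-off output image used as the safe state

async def watchdog(ws, timeout_s: float = 0.5) -> None:
    # Periodically ping the gateway; on timeout force the safe state and stop.
    while True:
        try:
            pong_waiter = await ws.ping()
            await asyncio.wait_for(pong_waiter, timeout=timeout_s)
        except asyncio.TimeoutError:
            await ws.send(SAFE_OUTPUTS)   # connection quality degraded: drive devices to safe state
            return
        await asyncio.sleep(1.0)

async def main() -> None:
    # Placeholder gateway address; in the prototypes the gateway is a pure.box embedded system.
    async with websockets.connect("ws://192.168.0.10:8000/io") as ws:
        await watchdog(ws)

if __name__ == "__main__":
    asyncio.run(main())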

7 Summary

Using the CICS concept, a new type of industrial control was developed and tested that
allows the control function to be completely detached from the associated equipment
and moved to

globally distributed, cloud-based software control services. A CICS control is operated


by a classic IEC 61131-3 control program, thus ensuring the interoperability and
industrial compatibility of the control system. The application of the service paradigm
to industrial control functions significantly increases flexibility, meets Industry 4.0
requirements such as changeability, reconfiguration and autonomy, and enables new
business models for leasing automation functions.
For testing and evaluation, prototypical implementations were deployed for a purely
client-based CICS controller and for a mixed client/server-based CICS controller. Both
CICS controller types were successfully tested in the context of application scenarios
from production automation. An evaluation of the CICS applications showed that simple
technical processes with process times of greater than 150 ms can already be controlled
reliably over the standard Internet.
With CICS, previous hardware-oriented and centralised procedures for the control
of automated devices, machines and systems (e.g. PLC controllers) can be distributed
and used transparently under non-critical real-time conditions (e.g. environmental processes,
logistics processes, energy processes, simple assembly processes) through IP-network-
distributed software functions.

Acknowledgments. The IGF project CICS (18354N) of the Forschungsvereinigung


Elektrotechnik beim ZVEI e.V. – FE, Lyoner Str. 9, D-60528 Frankfurt am Main is funded via
the AiF within the framework of the programme for funding industrial community research and
development (IGF) of the Federal Ministry of Economics and Technology based on the resolution
of the German parliament.

References

1. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for implementing the strategic
initiative INDUSTRIE 4.0. In: Research Union: Business - Science, April 2013
2. Evans, Z.: Web Server Technology in Automation. Siemens Industry (2011)
3. Klindt, C.J., Baker, R.B.: Interface to a programmable logic controller. US 6853867 B1,
8 February 2005
4. Bosch-Rexroth: WebConnector: Connect simply automation and web environment (in
German). Product News, p. 50 (2015)
5. Beckhoff automation GmbH: From sensor to IT Enterprise - Big Data & analytics in the
cloud (2015). ftp://beckhoff.com/Software/embPC-Control/Solution/Demo-IoT/Flyer-
IoT-Sensor_to_Cloud.pdf
6. EU project: SOCRADES 2006–2009. http://www.socrades.net
7. Colombo, A.W., et al.: Industrial Cloud-Based Cyber-Physical Systems. Springer,
Switzerland (2014)
8. Microsoft: Introducing DPWS (2015). https://msdn.microsoft.com/en-us/library/dd170125.aspx
9. Mathes, M., et al.: SOAP4PLC: web services for programmable logic controllers. In: 17th
Euromicro International Conference on Parallel, Distributed and Network-Based Processing,
Proceedings, pp. 210–219 (2009)
10. ISW of the University Stuttgart: Industrial cloud-based control platform for the production
with Cyber-Physical Systems (piCASSO - in German). BMBF project, Stuttgart (2013).
http://www.projekt-picasso.de


11. Grischan, E.: UACloud-Based Automation (in German). atp edn., 3/2015, pp. 28–32
12. Schmitt, J.: UACloud-Enabled Automation Systems using OPC UA. atp edn., 7–8/2014,
pp. 34–40
13. Competence Center Automation Düsseldorf (CCAD): WOAD – Interface description (in
German). – R&D document, Düsseldorf, 09 April 2014
14. Braß, M.: Development and test of an Industry 4.0 gateway for the Internet access to Modbus
TCP and proprietary TCP protocols. (in German). – Documentation of project work, CCAD,
25 January 2016
15. Langmann, R., Meyer, L.: Architecture of a web-oriented automation system. In: 18th IEEE
International Conference on Emerging Technologies & Factory Automation, ETFA 2013,
10–13 September 2013, Cagliari, Italy, Proceedings
16. Node-RED (2016). http://nodered.org
17. Langmann, R.: An interface for CPS-based automation devices (in German). In: Proceedings
of AALE 2014, pp. 133–142, DIV-Verlag, Munich (2014)

Wireless Development Boards to Connect
the World

Pedro Plaza1(&), Elio Sancristobal2, German Carro2, Manuel Castro2,


and Elena Ruiz2
1
Plaza Robótica, Torrejón de Ardoz, Spain
pplaza@plazarobotica.es
2
UNED, Madrid, Spain
{elio,mcastro}@ieec.uned.es, germancf@ieee.org,
elena@issi.uned.es

Abstract. Nowadays, wireless applications are widely used in the scientific,
education and hobbyist communities. The aim of this paper is to provide a
review of some of the most popular boards which offer an easy way to
develop a wide range of applications related to STEM (Science, Technology,
Engineering and Mathematics) in an educational manner. Moreover, the scope is
focused on those development boards which allow wireless communication in
order to build Things which can be integrated into an Internet of Things
environment. The Arduino WiFi Shield, Arduino Yún Shield, Arduino MKR1000,
NodeMCU ESP8266 and Onion Omega have been analyzed, compared and
discussed. The analysis has been carried out attending to the built-in hardware,
the programmer interface, the connection possibilities and the developer
community behind the corresponding board.

Keywords: IoT · Wireless · Robotics · Education · STEM

1 Introduction

The aim of this paper is to present some of the current development boards which can
be used to deploy IoT (Internet of Things) applications within an STEM (Science,
Technology, Engineering and Mathematics) educational environment.
Nowadays, there is a wide range of development boards which can be classified in
several ways. According to [1], the development platforms can be categorized into four
groups:
• based on microcontrollers,
• based on microprocessors,
• based on FPGAs (Field Programmable Gate Arrays), and
• hybrid development platforms.
The IoT (Internet of Things) movement has also had an impact on traditional
development boards. There are many IoT-based applications being developed in
different fields [2]. Some examples are [3] in emergency medical services, [4] in
cloud computing and [5] in remote educational laboratories.


The Arduino WiFi Shield provides the Arduino board with a wireless internet
connection [6]. It cannot work in stand-alone mode; hence, this board requires a
microcontroller to interact with other elements.
The Yún Shield easily brings the Yún features to Arduino and Genuino boards. It is
a good choice for IoT projects using a wireless connection to access the internet [6].
The Genuino MKR1000 is a powerful board that combines the functionality of the Zero
and the WiFi Shield. It is the ideal solution for makers wanting to design IoT projects
with minimal previous experience in networking [6].
The NodeMCU ESP8266 is an open-source firmware and development kit that helps
with IoT product prototyping within a few Lua script lines [7].
The Onion Omega is a hardware development platform with built-in WiFi and a
full Linux operating system [8].
There is a vast variety of development platforms which can be used as the
core of IoT applications. Some of them are cost-effective platforms such as the ones
mentioned in this paper. Furthermore, these platforms can easily be included in robotic
educational activities, where proactive learning is empowered through experiments in
the real world.
This paper is divided into four sections. Section 2 presents the analyzed IoT
development boards. Section 3 compares all of them. The last section summarizes the
conclusions reached after the performed investigation.

2 Wireless Development Boards


2.1 Arduino and Genuino IoT Boards
Arduino is an open-source hardware and software project. Additionally, Arduino is
supported by a user community that designs and manufactures devices and interactive
objects, and it is spread worldwide and growing day by day [6]. Three development
boards provided by Arduino for IoT purposes are analyzed in this section: the Arduino
WiFi Shield, the Arduino Yún Shield and the Arduino MKR1000. All of them are
programmed using the Arduino Software IDE (Integrated Development Environment).
The Arduino language is based on C/C++. It links against AVR Libc and allows the use
of any of its functions. The Arduino IDE is available for the following operating systems:
Windows, Mac OS and Linux. Additionally, a portable IDE can be used for Windows
and Linux. Every element of the platforms mentioned in this section – hardware,
software and documentation – is freely available and open source.
Arduino boards are intended for the United States, while Genuino is the sister
brand for products sold outside the United States.
The Arduino WiFi Shield provides a wireless connection to Arduino boards. The
connection is established by following a few simple instructions in order to connect
Things through the internet. This board presents the following characteristics:
• It is based on the HDG204 Wireless LAN (Local Area Network) 802.11b/g System
in-Package.


• There is an onboard micro-SD card slot, which can be used to store files for serving
over the network.
• The board mechanical data are: Length: 63.2 mm and width: 53.5 mm.
• The Arduino WiFi Shield board cost is 69.00 € [6].
In the same way as the Arduino WiFi Shield, the Yún Shield extends the Arduino
board with the power of a Linux-based system which enables advanced network
connections and applications. This board presents the following characteristics:
• The Yún Web Panel and the “YunFirstConfig” sketch can be used to connect through
WiFi or a wired network (Ethernet) in a simple way.
• The Shield preferences and sketch uploading can be performed directly from the
attached Arduino/Genuino board.
• The board mechanical data are: Length: 68.6 mm and width: 53.3 mm.
• The Genuino Yún Shield board cost is 39.90 € [6].
Genuino MKR1000 has been designed to offer a practical and cost effective
solution for projects which require Wi-Fi connectivity. This board presents the fol-
lowing characteristics:
• It is based on the Atmel ATSAMW25 SoC (System on Chip).
• This processor is part of the SmartConnect family of Atmel Wireless devices.
SmartConnect family is specifically designed for IoT projects.
• The ATSAMW25 also includes a single 1×1 stream PCB (Printed Circuit Board)
antenna.
• The board includes a Li-Po charging circuit which allows the use of a Li-Po battery
as external power. Additionally, a 5 V external power supply is allowed. Internally,
the MKR1000 switches automatically between both supply sources.
• The board mechanical data are: Length: 65.0 mm and width: 25.0 mm.
• The Genuino MKR1000 board cost is 31.99 € [6].

2.2 NodeMCU ESP8266


NodeMCU is an open source IoT platform. The term NodeMCU refers to the Firm-
ware. This board presents the following characteristics:
• The Lua scripting language is used for programming the board. It is based on the
eLua project.
• The Development Kit is based on the ESP8266 and integrates GPIO (General Purpose
Input Output), PWM, I2C, 1-Wire and ADC all on the same board.
• The board mechanical data are: Length: 38.0 mm and width: 25.0 mm.
• The NodeMCU ESP8266 board cost is 7.95 € [9].

2.3 Onion Omega


This board includes built-in WiFi, is Arduino-compatible and runs Linux inside.
This board presents the following characteristics:
• It allows hardware prototyping using familiar tools such as Git, pip and npm.


• High level programming languages such as Python, Javascript, PHP can be used.
• The Onion Omega is fully integrated with the Onion Cloud with the aim of creating
Internet of Things applications.
• It is Open Source. The processor is the Qualcomm Atheros AR9331 SoC.
• The board mechanical data are: Length: 42.7 mm and width: 26.4 mm.
• The Onion Omega board cost is 19.99 $ [10]. Using [11] for currency conversion
from United States dollars to Euros the board cost is 17.94 €.

3 Discussion

In the previous sections, five IoT development platforms have been analyzed with
the aim of characterizing the built-in hardware, the programmer interface, the connection
possibilities and the developer community behind the corresponding board.
There are two types of Arduino/Genuino IoT boards: shield boards and fully
integrated boards. The shield boards require an additional microcontroller in order to
interact with other elements such as sensors or actuators, which are widely used in robotic
education. On the other hand, the NodeMCU and Onion Omega can be used in stand-alone
mode. Table 1 summarizes the microcontroller and processor for each board.

Table 1. IoT development board processing device.


IoT development board Microcontroller Processor
Arduino WiFi Shield Atmel AT32UC3 None
Genuino Yún Shield None Atheros AR9331
Genuino MKR1000 SAMD21 Cortex-M0+ None
NodeMCU ESP8266 None Tensilica Xtensa LX106
Onion Omega None Atheros AR9331 (big-endian MIPS)

With the aim of powering the boards, it is important to know the voltage level of
each one in order to adapt levels between the board and other connected devices. Table 2
lists the IoT development boards and the voltages of the power supply and the input
and output port interfaces.
Another important characteristic for development is the available memory. The
presented boards include different kinds of memory resources; Table 3 compiles which
types of memory – volatile and non-volatile – are available and how much memory can be
used.
In common applications, IoT boards need to interface with other devices using a
wired connection. These communications are usually performed using a serial interface.
Furthermore, digital and analog ports are used in order to read sensor values or
interact with some kind of actuator. Table 4 summarizes the serial and port interfaces
available for each IoT board.
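
As a small host-side illustration of such a wired serial connection, the sketch below uses the pyserial package (not a tool discussed in this paper) to read text lines that a board prints on its UART. The port name and baud rate are placeholders and depend on the board actually connected.

import serial  # third-party package: pip install pyserial

# Placeholder port and baud rate; adjust to the connected development board.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    for _ in range(10):
        line = port.readline().decode(errors="replace").strip()
        if line:
            print("board says:", line)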


Table 2. IoT development board power supply and port interface voltage level.

IoT development board   Power supply                                       Input/Output voltage
Arduino WiFi Shield     5 V externally                                     No input/output port interface
Genuino Yún Shield      3.3 V                                              No input/output port interface
Genuino MKR1000         5 V or Li-Po single cell, 3.7 V, 700 mAh minimum   3.3 V
NodeMCU ESP8266         5 V from USB or 3.3 V from VIN                     3.3 V
Onion Omega             5 V from USB or 3.3 V from VIN                     3.3 V

Table 3. IoT development board: volatile and non-volatile resources.

IoT development board   Volatile memory                                        Non-volatile memory
Arduino WiFi Shield     Internal SRAM: 64 KB                                   Internal flash: 512 KB; on-board micro SD slot
Genuino Yún Shield      RAM: 64 MB DDR2                                        Flash: 16 MB; on-board micro SD slot
Genuino MKR1000         Internal SRAM: 32 KB                                   Internal flash: 256 KB
NodeMCU ESP8266         Internal RAM: 64 KB for instructions, 96 KB for data   QSPI flash: 512 KB to 4 MB
Onion Omega             RAM: 64 MB DDR2                                        Flash: 16 MB

Table 4. IoT development board; serial and port interfaces.

IoT development board   Serial interfaces               Discrete ports
Arduino WiFi Shield     SPI, USB, ICSP, UART and FTDI   None
Genuino Yún Shield      SPI, USB, ICSP and UART         None
Genuino MKR1000         SPI, I2C and UART               Digital I/O pins: 8; PWM outputs: 12; analog input pins: 7 (ADC 8/10/12 bit); analog output pin: 1 (DAC 10 bit)
NodeMCU ESP8266         SPI, I2C, I2S and UART          Digital I/O pins: 10 (can be used for PWM, I2C, 1-Wire); analog input pin: 1 (ADC 10 bit)
Onion Omega             SPI, I2S and UART               Digital I/O pins: 18


For hardware and software development, the software required for programming the
board and the language used are very important elements. Table 5 states the programming
tool and the programming language for each IoT development board.

Table 5. IoT development board; programming tool and language.

IoT development board   Programming tool                Programming language
Arduino WiFi Shield     Arduino IDE                     Arduino language (based on C/C++)
Genuino Yún Shield      Arduino IDE, web interface      Arduino language (based on C/C++), Python
Genuino MKR1000         Arduino IDE                     Arduino language (based on C/C++)
NodeMCU ESP8266         NodeMCU firmware, Arduino IDE   Simple Lua-based programming language, Arduino language (based on C/C++)
Onion Omega             Serial terminal, SSH terminal   Python, JavaScript, PHP

Another important specification for any application is the size restriction. None of the
analyzed boards is very large: the largest one has a length of 68.6 mm and a width of
53.3 mm (Genuino Yún Shield) and the smallest one measures 38.0 mm in length and
25.0 mm in width (NodeMCU ESP8266). Table 6 compares the dimensions of the IoT
development boards.

Table 6. IoT development board dimensions.


IoT development board Length Width
Arduino WiFi Shield 63.2 mm 53.5 mm
Genuino Yún Shield 68.6 mm 53.3 mm
Genuino MKR1000 65.0 mm 25.0 mm
NodeMCU ESP8266 38.0 mm 25.0 mm
Onion Omega 42.7 mm 26.4 mm

Moreover, the board cost is also compared. Depending on the project funding, cost is an
important aspect to keep in mind. As can be seen, none of the boards is especially
expensive: the most expensive board costs 69.00 € (Arduino WiFi Shield) and the
cheapest one costs 7.95 € (NodeMCU ESP8266). All of them are affordable for the
majority of projects. Table 7 specifies the cost of each IoT development board.
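
The comparison data can also be used programmatically. The short Python sketch below combines the figures from Tables 6 and 7 (and the stand-alone classification from the discussion above) to shortlist boards under example cost and size constraints; the thresholds in the example call are arbitrary.

# Data taken from Tables 6 and 7 of this paper (length/width in mm, cost in EUR).
# "standalone" is False for the shield boards, which need a host Arduino/Genuino board.
BOARDS = {
    "Arduino WiFi Shield": {"length": 63.2, "width": 53.5, "cost": 69.00, "standalone": False},
    "Genuino Yún Shield":  {"length": 68.6, "width": 53.3, "cost": 39.90, "standalone": False},
    "Genuino MKR1000":     {"length": 65.0, "width": 25.0, "cost": 31.99, "standalone": True},
    "NodeMCU ESP8266":     {"length": 38.0, "width": 25.0, "cost": 7.95,  "standalone": True},
    "Onion Omega":         {"length": 42.7, "width": 26.4, "cost": 17.94, "standalone": True},
}

def shortlist(max_cost, max_length, need_standalone=True):
    # Return the boards that satisfy the given (example) budget and size constraints.
    return [name for name, spec in BOARDS.items()
            if spec["cost"] <= max_cost
            and spec["length"] <= max_length
            and (spec["standalone"] or not need_standalone)]

if __name__ == "__main__":
    # Example: stand-alone boards under 35 EUR that are at most 50 mm long.
    print(shortlist(max_cost=35.0, max_length=50.0))   # -> ['NodeMCU ESP8266', 'Onion Omega']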
Furthermore, another important aspect is getting support during development, for
which communities are very useful. Traditionally, manufacturers provided a telephone
number or an e-mail address for this purpose. Nowadays, most manufacturers include a
forum with the aim of providing support to their customers. These forums are built by
the company's

Table 7. IoT development board cost.


IoT development board Cost
Arduino WiFi Shield 69.00 €
Genuino Yún Shield 39.90 €
Genuino MKR1000 31.99 €
NodeMCU ESP8266 7.95 €
Onion Omega 17.94 €

experts and the customers and all of them form a community for the corresponding
product.
Arduino WiFi Shield, Arduino Yún Shield and Arduino MKR1000 are supported
by the Arduino community [12]. NodeMCU ESP8266 is supported by two commu-
nities: NodeMCU community [13] and ESP8266 community [14]. Onion Omega is
supported by the Onion community [15].
Finally, when a development is started it is important to get references about what
other people have made and the way they are using the development boards.
Arduino projects are widespread among the scientific community, professionals and
hobbyists. Reference [16] presents a body area network for acquiring data related to body
position and some simple movements, based on a WiFi Shield stacked on an Arduino
ChipKIT MAX32. Reference [17] describes a low-cost Wi-Fi sensor network based on
the ESP8266. Monitoring heart pulse sensor data in the cloud with the ESP8266 is
detailed in [18]. Using one of the described IoT development boards, mobile robots can
be used as a remote laboratory in order to teach computer science in a similar way to that
described in [19, 20], which can be used as a guide for using IoT as a TEL (Technology
Enhanced Learning) tool.

4 Conclusions

This work presents the results of the analysis of IoT development boards that
can easily be introduced in classrooms within a STEM context. Educational activities
performed using the mentioned platforms are also considered. Hence, recommendations
are included with the aim of easing the inclusion of one of them in a classroom.
Any of these IoT development boards can be used in classrooms or remotely in order
to provide an easy way of including robotics within a STEM context.
Additionally, they allow making homemade applications framed in a DIY (Do It
Yourself) context.
The presented analysis is part of the state of the art of a doctoral thesis in which a novel
approach to a collaborative robotic educational tool is being developed: an open hardware
platform that can be used in classrooms with the aim of developing educational programs
related to robotics in a collaborative environment which promotes innovation and
motivation for students during the learning process [1]. The platform being developed
provides wireless connections such as Bluetooth and WiFi as enhancements [2].
The wireless connection is provided by a WiFi development board which is integrated
as part of the collaborative robotic educational tool. The doctoral thesis is being carried

out in the Engineering Industrial School of UNED (Spanish University for Distance
Education) and the Electrical and Computer Engineering Department (DIEEC).

Acknowledgment. The authors acknowledge the support provided by the Engineering Indus-
trial School of UNED, the Doctorate School of UNED, and the “Techno-Museum: Discovering
the ICTs for Humanity” (IEEE Foundation Grant #2011-118LMF).
And the partial support of the eMadrid project (Investigación y Desarrollo de Tecnologías
Educativas en la Comunidad de Madrid) - S2013/ICE-2715, IoT4SMEs project (Internet of
Things for European Small and Medium Enterprises), Erasmus+ Strategic Partnership nº
2016-1-IT01-KA202-005561), and PILAR project (Platform Integration of Laboratories based on
the Architecture of visiR), Erasmus+ Strategic Partnership nº 2016-1-ES01-KA203-025327.
And to the Education Innovation Project (PIE) of UNED, GID2016-17-1, “Prácticas remotas
de electrónica en la UNED, Europa y Latinoamérica con Visir - PR-VISIR”, from the Academic
and Quality Vicerectorate and the IUED (Instituto Universitario de Educación a Distancia) of the
UNED.

References
1. Plaza, P., Sancristobal, E., Fernandez, G., Castro, M., Pérez, C.: Collaborative robotic
educational tool based on programmable logic and Arduino. In: 2016 Technologies Applied
to Electronics Teaching (TAEE), Seville, pp. 1–8 (2016)
2. Merino, P.P., Ruiz, E.S., Fernandez, G.C., Gil, M.C.: A wireless robotic educational
platform approach. In: 2016 13th International Conference on Remote Engineering and
Virtual Instrumentation (REV), Madrid, pp. 145–152 (2016)
3. Xu, B., Xu, L.D., Cai, H., Xie, C., Hu, J., Bu, F.: Ubiquitous data accessing method in
IoT-Based information system for emergency medical services. IEEE Trans. Ind. Inform.
10(2), 1578–1586 (2014)
4. Nastic, S., Sehic, S., Vogler, M., Truong, H.-L., Dustdar, S.: PatRICIA – a novel
programming model for IoT applications on cloud platforms. In: Service-Oriented
Computing and Applications (SOCA)
5. Fernandez, G.C., Ruiz, E.S., Gil, M.C., Perez, F.M.: From RGB led laboratory to servomotor
control with websockets and IoT as educational tool. In: 2015 12th International Conference
on Remote Engineering and Virtual Instrumentation (REV), pp. 32–36, 25–27 February
2015
6. Arduino. https://www.arduino.cc/. Accessed 21 Nov 2016
7. NodeMcu ESP8266. http://www.nodemcu.com/. Accessed 21 Nov 2016
8. Onion Omega. https://wiki.onion.io/Get-Started. Accessed 21 Nov 2016
9. NodeMCU v2 - Lua based ESP8266. http://www.exp-tech.de/nodemcu-v2-lua-based-
esp8266. Accessed 21 Nov 2016
10. Onion Omega. https://onion.io/store/. Accessed 21 Nov 2016
11. Currency conversion. http://www.x-rates.com/calculator/?from=USD&to=EUR. Accessed
21 Nov 2016
12. Arduino forum. https://forum.arduino.cc/. Accessed 21 Nov 2016
13. NodeMCU forum. https://www.hackster.io/nodemcu. Accessed 21 Nov 2016
14. ESP8266 forum. http://www.esp8266.com/viewforum.php?f=17. Accessed 21 Nov 2016
15. Onion community. https://community.onion.io/. Accessed 21 Nov 2016


16. Orha, I., Oniga, S.: Study regarding the optimal sensors placement on the body for human
activity recognition. In: 2014 IEEE 20th International Symposium for Design and
Technology in Electronic Packaging (SIITME), Bucharest, pp. 203–206 (2014)
17. Thaker, T.: ESP8266 based implementation of wireless sensor network with Linux based
web-server. In: 2016 Symposium on Colossal Data Analysis and Networking (CDAN),
Indore, pp. 1–5 (2016)
18. Škraba, A., Koložvari, A., Kofjač, D., Stojanović, R., Stanovov, V., Semenkin, E.:
Streaming pulse data to the cloud with bluetooth LE or NODEMCU ESP8266. In: 2016 5th
Mediterranean Conference on Embedded Computing (MECO), Bar, pp. 428–431 (2016)
19. Lopes, M., Gomes, I., Trindade, R., Silva, A., Lima, A.C.: Web environment for
programming and control of mobile robot in a remote laboratory. IEEE Trans. Learn.
Technol. PP(99), 1–1
20. Charlton, P., Avramides, K.: Knowledge construction in computer science and engineering
when learning through making. IEEE Trans. Learn. Technol. PP(99), 1–1

CHS-GA: An Approach for Cluster Head
Selection Using Genetic Algorithm for WBANs

Roopali Punj(B) and Rakesh Kumar

Department of Computer Science and Engineering, NITTTR, Chandigarh, India


roopali.cse@nitrtrchd.in

Abstract. Wireless Body Area Networks (WBANs), an advancing tech-


nology in the field of pervasive healthcare monitor patients ubiquitously
and provide real-time feedback. Data communication consumes more
energy than data processing in WBANs. As it is nearly impractical to
replace or recharge the dead sensor nodes, it has become a major concern
to overcome issues related to data communication in WBANs that affect
network lifetime and energy consumption. In this paper, we propose an
efficient algorithm for cluster head selection using genetic heuristics for
enhancing network lifetime and harnessing energy consumption of the
sensor nodes. It uses genetic heuristics and divides the network into clus-
ters. A cluster head is chosen for inter and intra-cluster communication.
Clustering is a feasible solution as it reduces the number of direct trans-
missions from source to sink. It enhances network lifetime and reduces energy
consumption, as there is an inverse relationship between the two, i.e., the less
the energy consumption, the longer the network lifetime. The proposed
algorithm is also analyzed mathematically in terms of time complexity,
overhead and fault tolerance which reveals that our algorithm outper-
forms the existing techniques such as AnyBody and HIT in terms of
energy efficiency and network lifetime.

Keywords: Cluster head · Energy optimization · Genetic algorithm ·


Load balancing · WBANs

1 Introduction
The recent technological advancements have witnessed vast expansion of WSN
applications in many fields such as Power system applications, Disaster emer-
gency response, Healthcare applications, Air pollution monitoring, Structural
monitoring, Urban temperature monitoring, Precipitation monitoring, Water
pipeline monitoring, Ubiquitous geo-sensing, Commercial asset tracking, Urban
Internet and many more [1]. Sensor nodes are capable of sensing, processing
and transmitting physical, biological and environmental factors such as sound,
temperature and motion. WBANs, an extension of WSNs, consist of low-power,
intelligent, minute, lightweight sensor nodes that monitor the body functions,
physiological parameters, physical activities and environmental conditions of
patients. WBANs are useful in many health-related application


spheres such as battlefield, disaster healthcare, biomedical applications, monitor-


ing military troop movements and large scale behavioral studies [2]. Sensor nodes
can either be implanted on human body (in-body) or are wearable (on-body)
forming a network known as WBAN [3]. WBANs are capable of monitoring
physiological parameters such as ECG, EEG, EMG, body temperature, blood
pressure, heart rate, blood oxygen level etc. Sensor nodes can process the sensed
data as well as transmit it to the remote server, but have limited battery lifetime,
memory and storage. As every operation consumes sensor node energy, with
communication consuming the most, the nodes deplete their energy quickly. Harsh
or remote environmental conditions make it nearly impractical to revive or swap
the batteries of sensor nodes over and again. Therefore, efficient energy consumption
of sensor nodes is a major concern in WBANs to augment network lifetime.
Clustering is a feasible solution for efficient communication and energy con-
sumption as it reduces the number of direct transmissions from source to sink.
Clustering is a process of categorizing similar data into disjoint classes, called
clusters, i.e., the data within a cluster are highly similar, while the objects in different
clusters are more dissimilar. It is an example of unsupervised classification [4].
Clustering in WBANs can be represented mathematically by considering a set of
input sensor nodes N = {n1, n2, n3, ..., nt}. The aim of clustering is to partition the
input set of sensor nodes into disjoint subsets C = {c1, c2, c3, ..., cm}, such that
Cj ≠ ∅ and Cj ∩ Ck = ∅ for j ≠ k. Clustering helps in reducing collisions amongst
cluster members, balancing load, finding a feasible number of cluster heads, etc. [5].
Thus, the problem of clustering can be considered an NP-Hard optimization
problem [4]. Researchers have made certain progress, but there is still space for
optimization. The details are discussed in the related work.
In this paper, we have proposed an efficient cluster head selection algorithm
based on genetic heuristics. The objective of the proposed algorithm is to find
an optimal set of cluster heads in WBANs for optimizing energy consumption,
load balancing and enhancing network lifetime.
This paper is organized as follows. Section 2 describes the related work.
Section 3 explicates the proposed algorithm. Section 4 analyzes complexity of
the proposed algorithm and compares it with existing techniques. Finally, Sect. 5
concludes the paper.

2 Related Work

Many clustering protocols have been developed for WSNs in the recent past
[6–8]. Low Energy Adaptive Clustering Hierarchy (LEACH) [6], is a single hop
clustering algorithm. It reduces energy consumption but the number of dead
nodes increase with the increasing number of nodes which ultimately affects the
network lifetime. In [7], a cluster based routing protocol, Energy Aware Clus-
tering Algorithm (EADC), has been developed to solve the problem of imbal-
anced energy usage at CH by constructing equal sized clusters enhancing network
lifetime. But an overhead occurs in sending large number of control messages.


In [8], the authors have proposed An Unequal Multi-Hop Balanced Immune Clus-
tering Protocol (UMBIC). It partitions the network into clusters and constructs
optimum cluster heads and routing tree amongst them. But it works only for
static sensor nodes.
The algorithms proposed for WSNs are not suitable for WBANs as the lat-
ter have some specific properties [3,9]. Therefore, some algorithms have been
proposed for WBANs [10–12]. In [11], routing protocol Anybody for body area
networks has been proposed. It is a self-organized multi-hop routing protocol
and is better than the previously proposed algorithm LEACH [6] in terms of
constant number of clusters with the increasing number of nodes but it does not
take into account the residual energy. In [10], authors had proposed a Hybrid
Indirect Transmission (HIT) algorithm for data gathering that makes use of two
or more clusters and multiple multi-hop indirect transmissions. The authors had
focused on energy consumption and network delay. But the residual energy of the
sensor nodes is not taken into consideration which increases the number of dead
nodes per round. In [12], cluster-based epidemic control through smartphone-based
body area networks has been proposed. It is efficient in densely populated areas and
close social interaction zones, and it is more efficient than traditional epidemic control
methods in terms of dynamic data collection and numerical assumptions about social
interaction. However, missing data poses a serious threat to any data collection
application.
The existing algorithms do not consider the residual energy of the sensor nodes,
which ultimately affects the network lifetime and energy consumption, so there is a
need for an efficient cluster head selection algorithm that enhances the network
lifetime and minimizes energy consumption.

3 Cluster Head Selection Using Genetic Algorithm


(CHS-GA)

In this section, we explain the prerequisites of our problem and describe the
proposed algorithm. In this paper, we propose an efficient cluster head selec-
tion algorithm in WBANs for optimizing energy consumption, load balancing
and enhancing network lifetime. The process of cluster head selection is repre-
sented using the properties of Genetic Algorithm (GA). GA follows the process
of natural evolution and evaluates the fitness of an individual. The fitness value
depends upon parameters specific to the application. The fitness function encodes
multi-objective optimization criteria for (i) load balancing at the CH, (ii) reducing
energy consumption and (iii) enhancing network lifetime.
As it has been proved that cluster head selection is an NP-Hard problem [13],
it requires random and optimization techniques to select CH. Thus, we choose
GA for cluster head selection because of its properties such as evolutionary,
convergence and global optimum solution. GA is based on a search procedure
that uses random choice to guide the search through a parameter space. GA mainly
requires the value of the objective function associated with the particular problem at
hand. GA is basically used for optimizing parameters to approach some global


optimal point. The basic GA is explained in Algorithm 1 [14]. In this paper,


the goal is to select the cluster head amongst the cluster members while guar-
anteeing the optimization of its objective function based on sense radius and
residual energy. The genetic operators are application specific and can be mod-
ified accordingly. The various steps in GA specific to cluster head selection in
WBANs are explained in the next section.

Algorithm 1. Genetic Algorithm


1: Select a set of initial population P from population pool N
2: while (Stopping criteria not true) do
3: Evaluate each individual i in P on the basis of the Fitness Function f
4: end while
5: for (Next Generation G) do
6: Select the Parent Chromosomes p from P
7: Apply Genetic Operators on p
8: Evaluate Fitness of each individual j in G
9: end for
10: The best candidates of G form the new generation P′
11: return P′

The block diagram of the proposed algorithm CHS-GA is explained in Fig. 1.


The various modules of the proposed algorithm are explained below. Also, a
pseudo code for the proposed algorithm CHS-GA is given by Algorithm 2.

a. Initialization: In the initialization block, CHS-GA randomly selects ten nodes in
the range 2 to √n from the initial population to explore the genetic diversity in
the search space. Each cluster head serves an equal number of cluster members to
achieve fairness and load balancing. Each sensor node has associated with it
a sense radius, Rs, and a residual energy, Re. Each newly selected node must be
within the sensing range of the previous node so that the cluster is not widely
dispersed, thus saving transmission time. The residual energy of a sensor node
must satisfy a particular threshold, ETh, so that it can be a candidate for
cluster head. The nodes that satisfy the fitness function are selected as cluster
members. Thus, as an output, a cluster is obtained with 10 cluster members.
b. Fitness Function: The fitness function block is used to determine the quality of the
individuals obtained as an output from the initialization block. Each node is
evaluated on the basis of the fitness function f = Σ_{i=1}^{n} Rs_i · Re_i,
subject to Rs_i ≤ Rs_{i+1} and Re_i ≥ ETh, where Re is the residual energy, Rs is
the sense radius and ETh is the threshold for residual energy. The best node
is elected as the cluster head. The selected cluster head advertises a message
to the cluster members and maintains a routing table for communication
purposes. As an output, a cluster head is obtained which communicates with
the cluster members using the routing table (a minimal code sketch of this
selection step is given after this list).
c. Genetic Operators: Genetic Operators are used to select the next generation
on the basis of the previous generation.

Fig. 1. Block diagram of CHS-GA

Selection Criteria, a genetic operator, is used to improve the quality of the initial
population by selecting the best chromosomes, here referred to as nodes, for the
new generation. The proposed
algorithm uses the Roulette Wheel Scheme (RWS) to select multi-hop nodes of the
selected CH. The nodes are then sorted in non-decreasing order on the basis of their
Euclidean distances from the cluster head. Finally, the top ten nodes are selected using
RWS for the new generation. The mutation genetic operator improves the quality
of new generations as it mixes two chromosomes from one parent by altering the
respective genes. It is used to preserve genetic diversity. The proposed algorithm
applies mutation by selecting the nodes corresponding to the previous CH as the new
generation, which maintains diversity while the population is similar in local minima.
The proposed algorithm uses a boundary mutation operator to check the sensing
radius of the nodes.
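
The sketch below (Python) is one illustrative reading of the selection loop described above: a cluster of up to ten energy-eligible nodes is formed, the node with the largest Rs·Re term is elected as cluster head, and the next cluster is seeded by roulette-wheel selection biased towards nodes close to the current head. The node data, the energy threshold and the distance-based weights are invented for the example, and the sensing-radius ordering constraint and routing-table bookkeeping of CHS-GA are omitted; this is not the authors' reference implementation.

import math
import random

# Hypothetical node records: id -> (x, y, sense radius Rs, residual energy Re).
NODES = {i: (random.uniform(0, 10), random.uniform(0, 10),
             random.uniform(1, 3), random.uniform(0.2, 1.0)) for i in range(40)}
E_TH = 0.3          # assumed residual-energy threshold
CLUSTER_SIZE = 10   # equal-sized clusters, as in CHS-GA

def build_cluster(candidates):
    # Initialization step: pick up to CLUSTER_SIZE nodes whose Re exceeds the threshold.
    eligible = [n for n in candidates if NODES[n][3] >= E_TH]
    return random.sample(eligible, min(CLUSTER_SIZE, len(eligible)))

def elect_head(cluster):
    # Fitness term per node is Rs*Re; the best node becomes the cluster head.
    return max(cluster, key=lambda n: NODES[n][2] * NODES[n][3])

def roulette_seed(head, remaining, k=CLUSTER_SIZE):
    # Roulette-wheel selection (without replacement) biased towards nodes near the head.
    hx, hy = NODES[head][0], NODES[head][1]
    pool, chosen = list(remaining), []
    while pool and len(chosen) < k:
        weights = [1.0 / (1e-6 + math.dist((hx, hy), NODES[n][:2])) for n in pool]
        pick = random.choices(pool, weights=weights, k=1)[0]
        chosen.append(pick)
        pool.remove(pick)
    return chosen

if __name__ == "__main__":
    uncovered, prev_head = set(NODES), None
    while uncovered:
        pool = roulette_seed(prev_head, uncovered) if prev_head is not None else list(uncovered)
        cluster = build_cluster(pool) or build_cluster(uncovered)
        if not cluster:
            break                      # remaining nodes fail the energy threshold (outliers)
        prev_head = elect_head(cluster)
        print(f"cluster head {prev_head}, members {sorted(cluster)}")
        uncovered -= set(cluster)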

4 Complexity Analysis
This section presents the complexity analysis of the proposed algorithm CHS-GA.
In this paper, we analyze the computational complexity of the proposed algorithm
with respect to time complexity, overhead and fault tolerance.

Lemma 1. The total time to find cluster heads required to cover the whole net-
work is inversely proportional to n and the overhead is O(n).


Algorithm 2. CHS-GA
1: Repeat ∀n
2: Total Population, Nt = {n1, n2, n3, ..., nt}
3: while i = 1 to 10 do
4: ni ← Random Selection (Ni )
5: if (Rsi ≤ Rsi+1 && Rei ≥ ET h ) then
6: Nc ← Nc ∪ ni
7: i=i+1
8: end if
9: Each node maintains a Routing Table (RT) consisting of Node Id, Residual
Energy and Next Hop
10: end while
11: NCH = max(Nc )
12: NCH floods a message to rest of the nodes in cluster
13: a = Multi-Hop nodes for NCH
14: for (a = 1; a ≤ k; a + +) do
15: calc D(ai , ai+1 )
16: end for
17: Nk = nodes are selected using RWS with min(D) for max(k) = 10
18: NextCluster = Nk
19: for (Nk = 1; Nk <= 10; Nk + +) do
20: Goto step 2
21: end for
22: until all the sensor nodes are included in any of the cluster

Proof. It depends upon the number of total sensor nodes (N ) and cluster mem-
bers (n) in the network. These values are pre-defined or user specific, therefore,
the optimal number of cluster heads required to cover the whole network is known
beforehand. The constraints for including any node in a cluster are checked globally,
which reduces the overhead of checking conditions at each level. The nodes that do
not satisfy the constraints, i.e., outliers, are declared as dead nodes. The worst case
occurs when n is small: it leads to fewer cluster members per cluster and hence more
cluster heads in the network. If n is large, the physical size of the cluster increases,
which defeats the purpose of clustering. The best case is when n has an optimal value
that balances the load at the cluster head as well as managing the physical size of the
cluster. Also, load balancing helps in reducing energy consumption at the CH.
After CH selection, it advertises control messages to all cluster members
for initiating communication process. CH and all cluster members maintain a
routing table with 3 entries, i.e, node id, residual energy and next hop values.
Since the control messages are transmitted only once, we neglect this overhead.
Worst case occurs when the selected cluster head dies and for re-election of the
cluster head control messages are transmitted again.


Lemma 2. The proposed algorithm CHS-GA is fault-tolerant.

Proof. Cluster members send data to cluster head as soon as the cluster head is
elected. Routing table is updated dynamically. CH is a powerful node and there
is less probability of its failure and being declared as a dead node. CHS-GA is
fault tolerant because when the cluster head fails, all the nodes will be prevented
from sending their data to the dead cluster head. The cluster head keeps track of the
node which is second to it in terms of residual energy with the help of the routing
table. Before it completely depletes its energy, it sends a control message containing
its residual energy to the second-best node and informs that node to act as the cluster
head. The newly formed cluster head advertises a control message to all the cluster
members for further data communication, thus enhancing network lifetime.

As AnyBody [11] has a constant number of cluster heads and serves an unequal
number of cluster members, the cluster head depletes its energy soon. This leads to
an increase in energy consumption and a decrease in network lifetime. In the proposed
algorithm, in contrast, the cluster head serves an equal number of cluster members,
which balances the load at the cluster head, thus enhancing network lifetime and
reducing energy consumption. As HIT [10] uses chaining of cluster heads for
transmitting data from cluster head to sink, network delay increases, which ultimately
affects the total transmission time. Also, it does not take into account the residual
energy of nodes, which increases the number of dead nodes, affecting network
lifetime. HIT has no mechanism for when a cluster head dies. The proposed algorithm,
on the other hand, considers the residual energy of sensor nodes; in the worst case, if
a cluster head dies, it notifies the cluster members for future communication. Thus,
the proposed algorithm CHS-GA outperforms the existing techniques in terms of
energy consumption, network lifetime and fault tolerance.

5 Conclusion and Future Scope

An efficient cluster head selection algorithm has been proposed to improve load
balancing and network lifetime at the cluster head in WBANs. It uses genetic heuristics
for cluster head selection and creates equal-sized clusters to balance the load at the
cluster head. The proposed algorithm checks for nodes which are in the vicinity of
the randomly selected nodes so that the nodes lie within each other's sensing radius.
The proposed algorithm guarantees the selection of the node with the highest residual
energy as cluster head and finds an optimal set of cluster heads for complete network
coverage in a time inversely proportional to the total number of sensor nodes. This
balances the load at the CH, which leads to reduced energy consumption and enhanced
network lifetime. Following this line of research, in the future this algorithm can be
tested on moving sensor nodes, taking into consideration the fluctuating distance
between the sensor nodes and the base station. A security mechanism can also be
included for the cluster data to enable secure data transmission between source and
sink.


References
1. Rashid, B., Rehmani, M.H.: Applications of wireless sensor networks for urban
areas: a survey. J. Netw. Comput. Appl. 60, 192–219 (2016)
2. Misra, S., Chatterjee, S.: Social choice considerations in cloud-assisted WBAN
architecture for post-disaster healthcare: data aggregation and channelization. Inf.
Sci. 284, 95–117 (2014)
3. Movassaghi, S., Abolhasan, M., Lipman, J.: A review of routing protocols in wire-
less body area networks. J. Netw. 8(3), 559–575 (2013)
4. Hruschka, E.R., Campello, R.J., Freitas, A.A., de Carvalho, A.C.P.L.F.: A survey
of evolutionary algorithms for clustering. IEEE Trans. Syst. Man Cybern. Part C
(Appl. Rev.) 39(2), 133–155 (2009)
5. Gajjar, S., Sarkar, M., Dasgupta, K.: FAMACRO: fuzzy and ant colony optimiza-
tion based MAC/routing cross-layer protocol for wireless sensor networks. Procedia
Comput. Sci. 46, 1014–1021 (2015)
6. Heinzelman, W.B., Chandrakasan, A.P., Balakrishnan, H.: An application-specific
protocol architecture for wireless microsensor networks. IEEE Trans. Wireless
Commun. 1(4), 660–670 (2002)
7. Yu, J., Qi, Y., Wang, G., Gu, X.: A cluster-based routing protocol for wireless sen-
sor networks with nonuniform node distribution. AEU Int. J. Electron. Commun.
66(1), 54–61 (2012)
8. Sabor, N., Abo Zahhad, M., Sasaki, S., Ahmed, S.M.: An unequal multi-hop bal-
anced immune clustering protocol for wireless sensor networks. Appl. Soft Comput.
43, 372–389 (2016)
9. Movassaghi, S., Abolhasan, M., Lipman, J., Smith, D., Jamalipour, A.: Wireless
body area networks: a survey. IEEE Commun. Surv. Tutorials 16(3), 1658–1686
(2014)
10. Culpepper, B.J., Dung, L., Moh, M.: Design and analysis of hybrid indirect trans-
missions (HIT) for data gathering in wireless micro sensor networks. ACM SIG-
MOBILE Mob. Comput. Commun. Rev. 8(1), 61–83 (2004)
11. Watteyne, T., AugéBlum, I., Dohler, M., Barthel, D.: Anybody: a self-organization
protocol for body area networks. In: Proceedings of the ICST 2nd International
Conference on Body Area Networks, pp. 1–6, Florence, Italy (2007)
12. Zhang, Z., Wang, H., Wang, C., Fang, H.: Cluster-based epidemic control through
smartphone-based body area networks. IEEE Trans. Parallel Distrib. Syst. 26(3),
681–690 (2015)
13. Chatterjee, M., Das, S.K., Turgut, D.: WCA: a weighted clustering algorithm for
mobile ad hoc networks. Cluster Comput. 5(2), 193–204 (2002)
14. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learn-
ing, 8th edn. Pearson Education, London (1989)

Proposal IoT Architecture for Macro
and Microscale Applied in Assistive Technology

Carlos Solon S. Guimarães Jr.1(B) , Renato Ventura B. Henriques1 ,


Carlos Eduardo Pereira1 , and Wagner da Silva Silveira2
1
Federal University of Rio Grande do Sul, Avenue Osvaldo Aranha,
103, Porto Alegre, RS, Brazil
{carlossolon,rventura,cpereira}@ece.ufrgs.br
2
Affecty Systems, Street Universidade das Missões, 464, Sant Ângelo, RS, Brazil
wagner@affecty.com

Abstract. Technology is present in different sectors of society; a world
mediated by information and communication technologies can offer
people with special needs the possibility of overcoming the limitations
imposed by their physiological condition. Today the Internet of Things
(IoT) is the emerging technology that can provide people with special
needs with the support to achieve a better quality of life. It is in this
context that an IoT architecture with indoor and outdoor scenarios
connected to Assistive Technologies (AT) is proposed.

Keywords: Application server · Assistive technology · Internet of


Things · Smart city · Smart home · Smart stick · Smart wheelchair

1 Introduction
Objects around us have been connected for decades. Devices like TV remote
controls and garage door openers have been part of our domestic landscape for
generations. Industrial applications of these technologies-for example, through
remote monitoring and control of production-are also nothing new. In fact,
even the phrase “Internet of Things” or the abbreviation IoT is not a recent
invention [1].
However, recent developments in both networks and devices are enabling
much greater range of connected devices and IoT functionalities. Today, the
phrase “Internet of Things” refers to the world of smart connected objects and
devices. All of this is made possible by the miniaturization of electronic devices,
accompanied by a huge increase in the availability of internet connectivity. The
potential applications of this new IoT are virtually unlimited, and they have the
ability to greatly improve the quality of life of people. Devices allow a user to
change his or her thermostat remotely, dim or increase the intensity of lights,
control door locks, activate alarm systems, etc. While these applications certainly
add a level of fun and convenience for all users, the applications take on a whole
new level of importance when used by persons with disabilities and older adults.


Some research efforts propose integrating potential applications of IoT and AT [1].
We highlight Domingo [2], which provides an overview of IoT for people with
disabilities in which relevant application scenarios and the key benefits are described
together with the research challenges that remain open for further research. Gubbi
et al. [3] present architectures for services and different case studies, where healthcare
is highlighted as a future direction of IoT; they emphasize that, although IoT
technologies can be very useful for the care and support of people with disabilities,
it is important to remember that joining different areas represents both technological
and social challenges and involves interdisciplinary work among science, engineering,
sociology and social structures.
This article explores the proposal of an IoT architecture with indoor Smart Home
(microscale) and outdoor Smart City (macroscale) application scenarios, integrated
with the AT Smart Stick and Smart Wheelchair. The paper is structured as follows.
Section 2 describes the proposed IoT architecture with the application scenarios.
Section 3 defines AT. Section 4 presents the prototype of the Front-End and Back-End
proposal. Finally, Sect. 5 concludes the article.

2 Proposal IoT Architecture

The implementation of IoT (intelligent network, smart home, smart city, personalized
wearables) can be conceptualized as an ecosystem or set of scenarios, from a technical
point of view (focusing on norms, protocols or skills) and from a social perspective
(analysis of social relationships or use cases); case studies should be considered as
user-oriented IoT implementations. Notably, however, the smart home is a microscale
ecosystem, while a smart city is a macroscale one. In both cases, questions about
architecture models for implementing the Internet of Things infrastructure should be
analyzed for each application scenario [4].
The proposed IoT architecture is a Service-Oriented Architecture (SOA) for the
construction of software solutions whose main development units are services:
self-described, platform-agnostic elements that perform functions ranging from simple
requests to complex processes [5]. The tiered model of the service-oriented architecture
provides services consumed by people or other organizations to execute their activities,
enabling the composition of new services and processes. Figure 1 shows the conceptual
model of the proposed architecture.
The interactions between these components are search, publish and interaction
operations. The service provider represents the layer that hosts the service, allowing
clients to access it. The service provider offers the service and is responsible for
publishing the description of the service it provides. The service requestor is the
application that searches for and invokes an interaction with the service, that is,
requests the execution of a service. Consumers search for services on the registration
server and retrieve information related to the communication interface of the services
during the development phase or during client


Fig. 1. Proposal of IoT architecture applied in assistive technologies. Source: Author.

Middleware tools and frameworks are being investigated; initially we will use the Robot Operating System (ROS) as the main framework for the application server. ROS provides the services one would expect from an operating system, including hardware abstraction, low-level device control, implementation of commonly used functionality, message-passing between processes, and package management [4]. Secondary frameworks and middleware will also be used to develop the scenarios and case studies [8,9]. Multi-agent layers will be added to the architecture in future work.
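To make the message-passing role of ROS concrete, the sketch below shows a minimal rospy node publishing a simulated sensor reading on a topic; the topic name, message type and publishing rate are illustrative assumptions rather than details fixed by the proposed architecture.

#!/usr/bin/env python
# Minimal ROS node: publish a (simulated) room-temperature reading once per second.
# Topic name, message type and rate are illustrative assumptions.
import random

import rospy
from std_msgs.msg import Float32


def publish_temperature():
    rospy.init_node('smart_home_sensor', anonymous=True)
    pub = rospy.Publisher('room_temperature', Float32, queue_size=10)
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        reading = 20.0 + random.uniform(-0.5, 0.5)  # stand-in for a real sensor
        pub.publish(Float32(reading))
        rate.sleep()


if __name__ == '__main__':
    try:
        publish_temperature()
    except rospy.ROSInterruptException:
        pass

A consumer, for example a multi-agent layer added later, would subscribe to the same topic with rospy.Subscriber and react to each message.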

2.1 Smart Home: Microscale

Smart homes refer to the integration of technology and information over the home network for a better quality of life: the home is equipped with an automatic environmental control system and with various devices such as sensor, actuator and safety devices [4]. As a starting point we are using the openHAB framework for some smart home services [7]. OpenHAB is software that integrates different residential automation systems and technologies into a single solution and can act as a central system. This makes it interoperable, since openHAB can communicate with devices that use protocols such as Z-Wave, KNX, xPL, EnOcean, MQTT, etc. Being free software written in Java, it runs on top of any device that can run a JVM. OpenHAB has a web server integrated into its user interface. Figure 2 shows the openHAB architecture.


Fig. 2. OpenHAB architecture: indoor environment. Source: openHAB [8].
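As an illustration of how an external application could interact with openHAB, the sketch below reads an item's state and sends a command through openHAB's REST API; the base URL and the item name LivingRoom_Light are assumptions made for the example.

# Minimal sketch: talk to a local openHAB instance over its REST API.
# The base URL and the item name are assumptions for illustration only.
import requests

OPENHAB = "http://localhost:8080/rest/items"
ITEM = "LivingRoom_Light"

# Read the current state of the item (openHAB returns it as plain text).
state = requests.get(f"{OPENHAB}/{ITEM}/state").text
print(f"{ITEM} is currently {state}")

# Send a command to the item; openHAB expects a plain-text body.
requests.post(f"{OPENHAB}/{ITEM}", data="ON",
              headers={"Content-Type": "text/plain"})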

The integration of sensors in the smart home environment with openHAB is essential to provide a control interface. openHAB is being tested by integrating different hardware technologies and protocols. Its subsystems can be deployed and configured independently. There are different web-based user interfaces (Classic UI, GreenT and CometVisu), and there are also native clients for iOS (openHAB) and Android (HABDroid).

2.2 Smart City: Macroscale

A smart city can be defined as the use of information and communication technologies to sense, analyze and integrate the key information of the central systems that run a city. At the same time, the smart city can respond intelligently to different types of needs, including daily subsistence, environmental protection, public safety, city services, accessibility, and industrial and commercial activities. In short, the "smart city" is the "smart planet" approach applied to a specific region, achieving the informational and integrated management of cities. It can also be described as an effective integration of intelligent planning ideas, intelligent building modes, intelligent management methods, and intelligent development approaches [4].
To test the Smart City scenario, the Google Maps API and OpenGTS [8] are used. The GPS module sends the global positioning information using a communication protocol such as NMEA 0183. In this case, the protocol is based on the American Standard Code for Information Interchange (ASCII) and is output serially to the controller, which transfers the data over a GSM connection to the mobile operator's GGSN (Gateway GPRS Support Node), providing the data to a remote server over TCP (Transmission Control Protocol), as shown in Fig. 3.
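A minimal sketch of the device side of this pipeline, parsing an ASCII NMEA sentence for latitude and longitude and forwarding them to a remote server over TCP, is given below; the sample sentence, server address and payload format are assumptions.

# Sketch: extract latitude/longitude from an NMEA 0183 GGA sentence and
# forward them to a remote tracking server over TCP.
# The sample sentence, the host name and the payload format are assumptions.
import socket


def nmea_to_decimal(value: str, hemisphere: str) -> float:
    """Convert NMEA ddmm.mmmm / dddmm.mmmm format to decimal degrees."""
    dot = value.index('.')
    degrees = float(value[:dot - 2])
    minutes = float(value[dot - 2:])
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ('S', 'W') else decimal


sentence = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
fields = sentence.split(',')
lat = nmea_to_decimal(fields[2], fields[3])
lon = nmea_to_decimal(fields[4], fields[5])

payload = f"{lat:.6f},{lon:.6f}\n".encode("ascii")
with socket.create_connection(("tracking.example.org", 5000), timeout=10) as sock:
    sock.sendall(payload)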
A digital city relies on remote sensing, the global positioning system (GPS), geographic information systems (GIS) and other spatial information technologies as its main means, building the digital city's geographic information structure and constructing an urban geographic information platform for public services.


Fig. 3. Outdoor environment. Source: Author.

3 Visually Impaired and Handicapped


In this section the case studies are partially presented, following a development-process approach based on the scenarios described in the previous sections.

3.1 Visual Impairment: Smart Stick


An appropriate product design requires interaction with the practice of ergonomics; for this reason, ergonomics must be present in all stages of project development. The design of the handle (grip) of the embedded system of the Smart Stick project has been developed in conjunction with the Department of Design and Graphic Expression (DGE) of the Federal University of Rio Grande do Sul (UFRGS). Figure 4 presents some models under development for integration with the embedded system [9].

Fig. 4. Prototypes of handles for the embedded system. Source: Design and Graphic Expression (DGE) - UFRGS.

The case study has been developed based on Silva [9]; it is an electronic system to support mobility that replaces sight with sound and vibration [10]. Many configurations can be defined for the design of an electronic walking stick; Fig. 5 presents the conceptual model together with the deployment diagram of the Smart Stick.


Fig. 5. Conceptual model with deployment diagram smart stick. Source: Author.

The case study partially describes the design of a Smart Stick for telemetry and telecontrol of an embedded system applied to the macro and micro navigation of the visually impaired. The conceptual model shows that the embedded micro-navigation system (surrounding environment) is integrated into the stick, while the macro navigation (telemetry and remote control) is carried by the visually impaired user (wearable board computer or mobile phone). This separation reduces the weight of the electronic cane and also distributes the processing.
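As a sketch of how the stick's micro-navigation readings could be handed off to the wearable or mobile phone for telemetry, the example below publishes an obstacle-distance reading over MQTT (one of the protocols already mentioned for the microscale scenario) with the paho-mqtt client; the broker address, topic and payload layout are assumptions, since the paper does not fix a transport.

# Sketch: the embedded micro-navigation unit publishes an obstacle reading;
# the wearable/phone (macro navigation) subscribes to it for telemetry.
# Broker host, topic and payload layout are assumptions.
import json
import time

import paho.mqtt.publish as publish

BROKER = "192.168.0.10"          # hypothetical broker on the wearable or LAN
TOPIC = "smartstick/telemetry"

reading = {
    "timestamp": time.time(),
    "obstacle_cm": 85,           # stand-in for an ultrasonic sensor value
    "vibration_on": True,        # feedback state reported for telemetry
}
publish.single(TOPIC, json.dumps(reading), hostname=BROKER, qos=1)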

3.2 Handicapped: Smart Wheelchair

For this case study, a motorized wheelchair assembled with modules, sensors and controllers will be used [10]. In this way, the chair can connect with the scenarios and interact with the environments. The information is made available to an application server, through which the system uses the services appropriate for the needs of the users and the working conditions [11]. Figure 6 presents the deployment diagram of the Smart Wheelchair.

Fig. 6. Deployment diagram for smart wheelchair. Source: Author.


Wheelchair users can benefit from intelligent control that automatically steers around obstacles and avoids irregular terrain, providing greater safety and comfort for macro and micro navigation. The goal of the instrumentation is to develop a smart, low-cost wheelchair that, guided by a sensor network, can avoid obstacles and prevent mistaken actions regardless of user input.

4 Front-End and Back-End System

In addition to the application server, frameworks, middleware and embedded systems, the Front End and Back End of the project scenarios need to be developed. Initially, some Front-End prototypes are being built with an "Identification" field; the information entered in the front end of the application is used to query the database in the application server. Each user of the system will carry an identifier. Through this identifier, the services [5] will be selected for the assistive technology category of the registered user. The system must provide services for the assistive technologies (Smart Stick and Smart Wheelchair) and the application scenarios (Smart Home and Smart City), and make personal information available to the user. Figure 7 partially presents screen prototypes for the Smart Home and Smart City scenarios [8,13].

Fig. 7. Partial screen prototypes for the smart home and smart city scenarios. Source:
Author and Wagner Silveira.
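A minimal sketch of the identifier-based service selection described above is shown below; the identifier format, the in-memory stand-in for the application-server database and the service names are all assumptions.

# Sketch: select the services offered to a user from the identifier entered in
# the front end. A dictionary stands in for the application-server database;
# identifiers and service names are illustrative assumptions.
USERS = {
    "AT-0001": {"name": "Ana",   "device": "smart_stick"},
    "AT-0002": {"name": "Bruno", "device": "smart_wheelchair"},
}

SERVICES_BY_DEVICE = {
    "smart_stick":      ["obstacle_alerts", "route_guidance", "telemetry"],
    "smart_wheelchair": ["obstacle_avoidance", "terrain_warnings", "telemetry"],
}


def services_for(identifier: str) -> list:
    """Return the services enabled for the assistive technology of a user."""
    user = USERS.get(identifier)
    if user is None:
        raise KeyError(f"unknown identifier: {identifier}")
    return SERVICES_BY_DEVICE[user["device"]]


print(services_for("AT-0001"))   # ['obstacle_alerts', 'route_guidance', 'telemetry']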

The system back end gives the administrator access to all content and functionality. A central panel is provided with shortcuts to the most common options and is divided into menus that allow contents and functionalities to be managed. To access the system with the administrator profile, an identifier referring to the administration must be entered in the authentication screen, after which the system provides the features related to this profile. The system must allow access to all configurations of the smart stick and smart wheelchair; in the reporting area, statistics on the quantity of devices available per city can be generated, along with detailed reports [6].
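A sketch of the reporting step mentioned above, counting registered devices per city, is given below; the in-memory records are illustrative stand-ins for database rows.

# Sketch: generate the "devices available by city" statistic for the back-end
# report. The device records are illustrative stand-ins for database rows.
from collections import Counter

devices = [
    {"id": 1, "type": "smart_stick",      "city": "Porto Alegre"},
    {"id": 2, "type": "smart_wheelchair", "city": "Porto Alegre"},
    {"id": 3, "type": "smart_stick",      "city": "Manaus"},
]

per_city = Counter(d["city"] for d in devices)
for city, count in per_city.most_common():
    print(f"{city}: {count} device(s)")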


5 Conclusions
The project is at a heuristic stage and has been gaining ground in the development of an IoT architecture with application scenarios connected to AT to assist in the orientation and mobility of people with disabilities. The project is in the research, modeling and definition phase of the hardware and software devices, with the first tests of a partial system planned for the second half of 2017. Future work includes the improvement of the project as a whole, the development of embedded hardware, new services for assistive technologies, and multi-agent systems for adaptive scenarios.

Acknowledgement. The authors would like to thank to CAPES, this work has been
funded by the research project PROCAD Assistive Technologies.

References
1. G3ict: Internet of Things: New Promises for Persons with Disabilities (2015).
http://g3ict.org/resource center/publications and reports/p/productCategory
books/subCat 2/id 335. Accessed 7 Sept 2016
2. Domingo, M.C.: An overview of the Internet of Things for people with disabilities.
J. Netw. Comput. Appl. 35, 584–596 (2015)
3. Gubbi, J., Buyya, R., Marusic, S., Palaniswami, M.: Internet of Things (IoT): a
vision, architectural elements, and future directions. J. Fut. Gener. Comput. Syst.
29, 1646–1658 (2013)
4. Robot Operating System: Open-source collection of software frameworks for robot
development (2016). http://www.ros.org/. Accessed 11 Nov 2016
5. Vermesan, O., Friess, P.: Internet of Things. Converging Technologies for Smart
Environments and Integrated Ecosystems, p. 17363. European Commission,
Belgium (2013)
6. Erl, T.: Service-oriented architecture. In: Concepts, Technology, and Design, pp.
83–280. Indianapolis, Indiana (2005)
7. Botta, A., Donato, W., Donato, W., et al.: Integration of cloud computing and
Internet of Things: a survey. J. Fut. Gener. Comput. Syst. 56, 684–700 (2016)
8. openHAB: A vendor and technology agnostic open source automation software for
your home (2016). http://www.openhab.org/features/architecture.html. Accessed
16 Sept 2016
9. Guimarães, C.S.S., Pereira, C.E., Henriques, R.V.B.: Telemetry and remote control of an embedded system applied in macro and micro blind navigation. Paper presented at the 11th international conference on remote engineering and virtual instrumentation, Porto, Portugal, pp. 424–433, February 2014
10. Silva, R.F.L.: Integrated product design to urban design: electronic long cane. Dissertation, Federal University of Santa Catarina, Brazil (2009)
11. Lee, E.A., Seshia, S.A.: Introduction to Embedded Systems. A Cyber Physical
Systems Approach, pp. 93–370. University of California, Berkeley (2011)
12. Marques, P.J.: Proposal of a wheelchair position determination system in an intel-
ligent environment. Dissertation, Federal University of Rio Grande do Sul, Brazil
(2014)
13. Open GPS Tracking System: Open-Source GPS Tracking System - OpenGTS
(2016). http://www.opengts.org/. Accessed 11 Aug 2016

Using Industrial Internet of Things to Support Energy
Efficiency and Management: Case of PID Controller

Tom Wanyama (✉)

Faculty of Engineering, W Booth School of Engineering Practice and Technology,
McMaster University, Hamilton, ON, Canada
wanyama@mcmaster.ca

Abstract. It is generally agreed in the literature that manufacturers are starting to monitor energy consumption in some capacity, whether at site level or down to specific processes and production lines. This monitoring is a prerequisite for energy saving, since it enables companies to make operational changes to reduce energy consumption and costs. The main challenge to energy monitoring is the need to integrate manufacturing devices and energy monitoring and control devices that
support different communication protocols and are usually distributed over a wide
area. This paper describes how the new networking paradigm of Industrial
Internet of Things is used to show the effects of PID tuning on energy efficiency.
Moreover, the paper describes how process and energy system data is transferred
from devices using Open Platform Communication (OPC) technology over
Ethernet to business applications such as Microsoft Excel. Finally, the paper
describes how Microsoft Excel can be used to integrate process energy data with
utilities’ electricity pricing information in real-time to help plant managers to
make decisions on when and how to run manufacturing processes so as to optimize
energy use.

Keywords: Energy efficiency · PID · Industrial Internet of Things

1 Introduction

The Proportional-Integral-Derivative (PID) controller is the most widely used automatic controller for industrial processes today. This controller requires its parameters to be adjusted according to the nature of the process; this adaptation of the controller to the process is called controller tuning. The focus of tuning is usually minimizing the error between the process variable (PV) and the setpoint (SP). However, it is generally agreed in the literature that most PID controllers are not properly tuned, which affects the performance as well as the energy consumption of the controlled system. In traditional manufacturing systems, PID controllers and their associated data are usually separated from the energy monitoring and control systems, making it difficult to relate the controller performance parameters to the process energy consumption. But modern industrial network technologies, through the paradigm of the Industrial Internet of Things (IIoT), make all of the information throughout manufacturing facilities accessible to those who need it, whenever they need it, wherever they are [1]. This makes it



possible to integrate process control and energy monitoring information into single
business automation applications such as Microsoft Excel, enabling real-time associa‐
tion of PID performance and process energy consumption.
In this paper we present an IIoT that integrates industrial process control, energy
monitoring data, and utility electricity pricing information using industrial networking
technologies. The industrial process component of the IIoT is based on PID control of
a pilot scale heat exchanger using a Micrologix PLC that has Ethernet IP communication
capability. The energy consumption of the industrial process is monitored by the energy
component of the IIoT using an IEC61850 SEL751A relay. The data from the PLC and
the relay is sent to DataHub OPC client from where it is accessed by the business appli‐
cation; in this case Microsoft Excel. In addition, this paper describes how control and
energy data is processed in a single Microsoft Excel file, in real-time, showing the effect
of PID settings on the energy consumption of the controlled system. Providing such
information to machine operators and plant managers ensures that they know the impact
of the way they operation their PID controllers on energy consumption.
The rest of this paper is arranged as follows: Sect. 2 covers the background of data
access for the PID controlled pilot heat exchanger. In Sect. 3 we present a case study
and Sect. 4 deals with the testing and results of the case study. Section 5 covers the
conclusion.

2 Background

2.1 Industrial Networks

Industrial networks are the backbone of Industrial Internet of Things (IIoT). In fact, IIoT
is the integration of sensors, industrial controllers and computers, cloud computing
systems, big data technologies, and advanced data analytics systems using network
(industrial networks) and web technologies. The development of IIoT is based on the
philosophy that smart machines are better than humans at accurately and consistently
capturing and communicating data, and at analyzing that data to generate actionable
information. This information should enable companies to pick up on inefficiencies and
problems sooner, saving time and money and supporting business intelligence efforts.
In manufacturing specifically, IIoT holds great potential for quality control, sustainable
and green practices, supply chain traceability and overall supply chain efficiency.
Although IIoT has the potential to improve the overall industrial supply chain efficiency,
the focus of this paper is the use of IIoT to show the effects of PID tuning and process
control on energy efficiency of industrial systems.

2.2 Open Platform Communication

Industrial networks are used in many industrial domains including but not limited to
manufacturing, electricity generation, transmission and distribution, food processing,
transportation, water distribution and waste management, oil and gas production [5].
Each industrial domain has its own slightly different networking requirements, leading
to differences in the associated network protocols. This creates a problem of integrating


data from different domains, since devices with different network protocols cannot communicate with each other. Open Platform Communication (OPC) is the solution to this problem. Figure 1 shows that OPC defines the standard for the interface between industrial data servers and clients. The clients can be Human Machine Interface applications, MES or ERP systems. In addition, the figure shows that servers have specific network protocol drivers that enable them to communicate with industry-specific controllers [3].

Fig. 1. OPC standard
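The case study later in this paper accesses data through classic OPC servers and clients (KEPServer and DataHub, Sect. 3.3); purely as an illustration of the same server/client pattern, the sketch below reads one tag with the open-source python-opcua client. The endpoint URL and the node identifier are assumptions.

# Illustrative OPC client: connect to a server and read one tag value.
# Uses the open-source python-opcua package (OPC UA rather than classic OPC DA);
# the endpoint URL and the node id are assumptions.
from opcua import Client

client = Client("opc.tcp://192.168.0.20:49320")   # hypothetical server endpoint
client.connect()
try:
    node = client.get_node("ns=2;s=Channel1.Device1.ChamberTemperature")
    print("Chamber temperature:", node.get_value())
finally:
    client.disconnect()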

2.3 Energy Efficiency

As the manufacturing sector changes its business paradigm from “maximum gain for
minimum capital” to “maximum value from minimum resources”, energy efficiency is
becoming one of the most important forms of "alternative energy". Therefore, there is a need to focus on energy saving and optimization from the design to the management of manufacturing processes [4]. Energy saving and optimization through operational and management techniques is like a marathon rather than a sprint, with savings measured in hour-to-hour and day-to-day increments. What enables energy optimization is the continuous seeking of answers to questions such as:
• When and why did a machine exceed typical energy draw?
• Why did equipment changeover cause startup surges?
• Why did component change extend the production cycle into a peak-draw period?
Therefore, energy as well as its quality has to be continuously monitored down to
the manufacturing and process lines.


The ability to integrate energy and manufacturing process data facilitated by IIoT
brings about a search for answers to a new set of questions that can increase energy
efficiency and optimization. Such questions include:
• How does energy consumption change with process control strategy?
• What happened to energy consumption when controller parameters (e.g. PID Kp, Ki and Kd) were changed?
• What is the difference in energy consumption between the dynamic state and the steady state?
• How do system dynamics affect energy quality and consumption?
Moreover, IIoT enables the posting and automatic updating of the energy pricing
information, including time-of-use rates. This constantly reminds plant operators and managers of the importance of production timing to the overall cost of energy. In general,
IIoT supports the integration of energy efficiency performance criteria into production
management systems such as Manufacturing Execution Systems (MES) and Enterprise
Resource Planning (ERP) applications as an enabler of energy efficient manufacturing
processes [2].

2.4 PID Controller


PID is one of the most widely used control technologies in industry. However, the tech‐
nology has a major implementation challenge of not having associated industrial stand‐
ards. This has resulted in a wide variety of PID controller architectures. In the paper
titled “Reducing Energy Costs by Optimizing Controller Tuning”, O’Dwyer [8] reports
that up to forty-six different structures for the PID controller have been identified in
the literature. Therefore, controller manufacturers vary in their choice of architecture. Yet,
controller tuning methods that work well on one architecture may work poorly on
another, affecting not only the stability of the controlled systems but also their energy
efficiency.
Figure 2 shows a block diagram representation of the ideal PID controller with unity
feedback [7]. The figure shows that the control system has two main components,
namely: PID controller and the plant. The plant is made up of the process as well as the
components of the measurement element of the control system. Ts (t) is the desired
process output (setpoint), and Ta (t) is the actual process output (the process variable).
The difference between Ts (t) and Ta (t) is the process error e(t). f (t), given by Eq. 1, is
the manipulated variable generated by the PID controller. Some PID manufacturers refer
to this variable as the control variable. Note that the role of the PID controller is to
generate f (t) that minimizes e(t) [6].
f(t) = K_p \left[ e(t) + \frac{1}{T_i} \int_0^t e(\tau)\, d\tau + T_d \frac{de(t)}{dt} \right] \quad (1)


Fig. 2. Ideal PID controller

where:
Kp is the proportional gain, Kp = Kc, the controller gain (unitless);
Ti is the reset time (seconds);
Td is the rate time (seconds).
Two hundred and seventy-six tuning rules have been identified for the ideal PID controller structure described by Eq. 1 [8], and each of these rules has a different energy cost performance. The dependence of the energy efficiency of controlled systems on the PID architecture and on the tuning rules increases the importance of monitoring the energy performance of PID controllers in real time.
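A minimal discrete-time sketch of the ideal PID law of Eq. 1 is given below, with the integral accumulated by rectangular summation and the derivative taken as a first difference; the gains, sample time and simulated error sequence are illustrative only.

# Discrete-time sketch of the ideal PID law of Eq. 1:
#   f(t) = Kp * ( e(t) + (1/Ti) * integral(e) + Td * de/dt )
# Gains, sample time and the error sequence below are illustrative only.
def make_pid(kp, ti, td, dt):
    state = {"integral": 0.0, "prev_error": 0.0}

    def step(error):
        state["integral"] += error * dt                  # rectangular integration
        derivative = (error - state["prev_error"]) / dt  # first-difference derivative
        state["prev_error"] = error
        return kp * (error + state["integral"] / ti + td * derivative)

    return step


pid = make_pid(kp=2.0, ti=30.0, td=5.0, dt=1.0)
for e in [4.0, 3.2, 2.1, 1.0, 0.3]:                      # simulated error samples
    print(f"error={e:4.1f}  output={pid(e):6.2f}")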

3 Case Study

This section presents a model IIoT system that uses a PID to control a pilot scale heat
exchanger. The PID controller is deployed on a Micrologix PLC and the energy supply
of the system is monitored using a SEL751A relay. The system data is collected into an
Excel file over Ethernet, using OPC technology.

3.1 System Control

Our pilot scale heat exchanger unit, shown in Fig. 3, is heated using an ON-OFF controlled blow dryer (see Fig. 4). A PID-controlled fan is used to cool the unit to the desired temperature that supports the transfer of heat to air flowing through a copper pipe inside the exchanger chamber. Air entering the chamber is at room temperature, while outgoing air is at a preset temperature. The focus of this paper is the control of the temperature inside the heat exchanger chamber.


Fig. 3. Heat exchanger rig

Fig. 4. Process diagram of heat exchanger

Figure 4 shows the process diagram of our pilot heat exchanger system. The setpoint
(SP) Ts (t) is the desired temperature of the exchanger chamber, and the manipulated
variable (MV) f (t) is the 0–5 V analog input to the KT-5194 DC Motor PID speed
controller. In this system, however, the controller is used in open-loop PWM control mode, where the 0–5 V input signal determines the value of the 0–24 V (10 A maximum) PWM


output. The 0–24 V power supply to the fan DC motor is the control variable (CV) c(t)
of our PID loop.
The temperature inside the heat transfer chamber is measured using an RTD probe whose resistance varies from 100 Ω at 0 °C to 220 Ω at 300 °C. The RTD signal is fed into a signal conditioner that produces a proportional 0–10 V analog signal. The signal conditioner output is the input to a Micrologix 1400 PLC, whose ADC converts the analog signal to a 0–4095 digital variable. This variable is scaled to produce the actual temperature of the chamber, which is the process variable (PV) Ta(t) of the PID loop [6].
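A sketch of the scaling step implied above is shown below; it assumes the signal conditioner maps 0–300 °C linearly onto 0–10 V and the PLC ADC maps 0–10 V linearly onto 0–4095 counts, which is a simplification of the real calibration.

# Sketch: convert the Micrologix ADC reading to the chamber temperature.
# Assumes 0-300 degC maps linearly to 0-10 V (signal conditioner) and 0-10 V
# maps linearly to 0-4095 counts (PLC ADC); a simplification of the real
# calibration described in the text.
ADC_FULL_SCALE = 4095
TEMP_FULL_SCALE_C = 300.0


def counts_to_celsius(raw_counts):
    """Scale a 0-4095 ADC value to the process variable Ta(t) in degrees Celsius."""
    return (raw_counts / ADC_FULL_SCALE) * TEMP_FULL_SCALE_C


print(counts_to_celsius(437))   # ~32 degC, the setpoint used later in the tests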
The output of our PID controller is given by Eq. 2:

Output = K_c \left[ e(t) + \frac{1}{T_i} \int_0^t e(\tau)\, d\tau + T_d \frac{d T_a(t)}{dt} \right] + bias \quad (2)

We set the feedforward bias to zero. For an indirect process such as cooling, the process variable (PV) Ta(t) is equal to the setpoint (SP) Ts(t) plus the process error e(t). Since the setpoint is a constant, the derivative of the process variable is equal to the derivative of the process error (see Eq. 3):

\frac{d T_a(t)}{dt} = \frac{d \left( T_s(t) + e(t) \right)}{dt} = \frac{de(t)}{dt}, \quad \text{if } T_s(t) = \text{constant} \quad (3)

With bias = 0, substituting Eq. 3 into Eq. 2 results in our PID output being given by an equation similar to Eq. 1.

3.2 System Network


Figure 5 shows the network architecture of the IIoT system we used to study the effects
of PID tuning on energy efficiency, and to showcase the ability to display process data,
energy monitoring data, and utility electricity pricing information in the same Microsoft
Excel document in real time.
Our model IIoT system has a Micrologix 1400 PLC that communicates using the Ethernet IP protocol and a SEL 751A relay that uses the IEC61850 communication standard. At the application layer, Ethernet IP supports two data structures, namely the Common Industrial Protocol (CIP) and DF1; the Micrologix 1400 uses the DF1 data structure. The SEL relay, on the other hand, supports the DNP3, Modbus, GOOSE and MMS data structures at the application layer. At the physical layer, both Ethernet IP and IEC61850 support Ethernet, which is why we were able to connect the SEL relay and the Micrologix PLC on the same LAN, as shown in Fig. 5. Moreover, Fig. 5 shows that the business PC accesses the process and energy sub-systems of our IIoT system, as well as the web server of the electricity company, over the Internet. This is enabled by a combination of web services and VPN technologies used to support our IIoT system [6].


Fig. 5. IIoT PID system architecture

3.3 System Data Access

The process and energy data of our model IIoT system is accessed using the KEPServer OPC server. The server has multiple drivers, including Ethernet IP and IEC61850 MMS. These drivers are configured as channels to deliver the associated data to OPC clients. This is necessary because OPC servers usually do not possess advanced data access features such as HMI, alarms and event handling, data logging and historian functions, and process data tunneling and bridging. It is OPC clients that are normally utilized to provide these
features. In our model IIoT system, we use OPC DataHub [6] to access data from
KEPServer and provide a Human Machine Interface (HMI) for the system, as well as
Dynamic Data Exchange (DDE) to a Microsoft Excel file.
Microsoft Excel has powerful features that allow the user to directly query databases
and websites. Our model IIoT system uses the Web query feature of Excel to retrieve
refreshable information that is stored on the electricity utility company’s web site (see
Fig. 5). The pricing data is extracted from the information using Macros programmed
in Visual Basic. The data is then analyzed and integrated with the process and energy data using the tools in Excel.
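In the paper the price retrieval is done with Excel web queries and Visual Basic macros; as a sketch of the same idea in script form, the snippet below fetches a time-of-use page and pulls out the published prices. The URL and the assumption that prices appear as "xx.x cents/kWh" are hypothetical; a real utility page would need its own parsing rules.

# Sketch of the price-retrieval step (done in the paper with Excel web queries
# and VBA macros): fetch a utility's time-of-use page and extract the prices.
# The URL and the "cents/kWh" pattern are hypothetical.
import re

import requests

URL = "https://utility.example.com/time-of-use-rates"   # hypothetical page

html = requests.get(URL, timeout=10).text
prices_cents_per_kwh = [float(p) for p in re.findall(r"(\d+\.\d+)\s*cents/kWh", html)]
print("Published TOU prices (cents/kWh):", prices_cents_per_kwh)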

4 Testing, Results and Discussion of Results

4.1 Testing

During testing, the pilot scale heat exchanger was set to maintain an operating temperature of 32 °C in the heat exchanger chamber. This temperature was to be achieved by heating the chamber with a blow dryer while cooling it with a PID-controlled fan. Test


settings such as PID parameters (Kp, Ki and Kd), the temperature setpoint, and the PID mode were set through the HMI shown in Fig. 6. In addition, the HMI provided the means for monitoring the following performance measures of the system: system supply voltage, fan control voltage, power consumption, and actual heat exchanger temperature.

Fig. 6. HMI of the heat exchange system with PID in manual mode

The heat exchanger was tested with PID mode set to manual for a period of 5 min.
Its power consumption was sampled every 30 s and sent to a Microsoft Excel sheet in
real time, through Cogent DataHub OPC client. Thereafter, the heat exchanger was
tested with the PID mode set to automatic, and its power consumption was logged and
stored in an Excel sheet. In both test cases, the trapezoidal rule was used to calculate the
energy consumption of the heat exchanger.
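As a concrete illustration of that last step, the snippet below integrates 30-second power samples with the trapezoidal rule and converts the result to kWh; the sample values are made up and are not the measured data reported in Sect. 4.2.

# Energy from power samples taken every 30 s, using the trapezoidal rule.
# The sample values are illustrative, not the measured data from the tests.
import numpy as np

power_w = np.array([34.0, 30.5, 22.0, 18.5, 17.0, 16.8,
                    16.5, 16.4, 16.4, 16.3, 16.3])   # one sample every 30 s
energy_ws = np.trapz(power_w, dx=30.0)               # integrate -> watt-seconds (joules)
energy_kwh = energy_ws / 3.6e6                       # 1 kWh = 3.6e6 J
print(f"Energy over {30 * (len(power_w) - 1)} s: {energy_kwh:.2e} kWh")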

4.2 Results

Figure 6 shows the HMI of the heat exchanger with the PID set to manual, while Fig. 7
shows the HMI with PID in automatic mode.
Figure 8 shows the power consumption of the 24 V DC, 1.7 A fan motor when
controlled by PID in manual and in automatic modes.


Fig. 7. HMI of the heat exchange system with PID in automatic mode

Fig. 8. Power consumption of the PID controlled pilot heat exchanger


Figure 9 shows the real-time display of the energy data of the heat exchanger and of the energy cost in a Microsoft Excel sheet. This is made possible by the use of IIoT technologies.

Fig. 9. Real time display of heat exchanger energy data in excel sheet

4.3 Discussion of Results

The trend charts in Figs. 6 and 7 show that the PID automatic mode provides smoother system control than the manual mode. Moreover, Fig. 8 shows that the power consumption of the motor under manual mode has a higher variance than when the PID is in automatic mode. This is expected, since the manual mode is essentially ON-OFF control. Note that ON-OFF switching (control) of machinery such as Heating, Ventilation and Air Conditioning equipment causes power quality issues.
The areas under the graphs in Fig. 8 represent the energy consumed by the motor in 5 min. Our calculations show that the area under the manual mode graph is equivalent to 1.5 × 10−3 kWh, while the area under the automatic mode graph is equivalent to 0.833 × 10−3 kWh. This means that the automatic PID loop is over 40% more efficient than the manual loop, leading to over a 40% reduction in energy cost.


5 Conclusion

In this paper we present an IIoT that integrates industrial process control, energy moni‐
toring data and utility electricity pricing information using industrial networking tech‐
nologies. The industrial process component of the IIoT is based on PID control of a pilot
scale heat exchanger using an Ethernet IP enabled Micrologix PLC. The energy
consumption of the industrial process is monitored by the energy component of the IIoT
that has an IEC61850 SEL751A relay. Furthermore, this paper describes how the IIoT
enables the processing of control and energy data in a single Microsoft Excel file, in
real-time, showing the effect of PID settings on the energy cost of the controlled system.
Providing such information to machine operators and plant managers ensures that they
know the impact of the way they operate their PID controllers on energy consumption.

References

1. Bunse, B., Kagermann, H., Wahlster, W.: Industry 4.0: Smart Manufacturing for the Future,
Germany Trade and Invest, Berlin, German, July 2014. http://www.gtai.de/GTAI/Content/EN/
Invest/_SharedDocs/Downloads/GTAI/Brochures/Industries/industrie4.0-smart-
manufacturing-for-the-future-en.pdf. Available as of 12 April 2015
2. Bunse, K., Vodicka, M.: Managing energy efficiency in manufacturing processes –
implementing energy performance in production information technology systems. In: Berleur,
J., Hercheui, M.D., Hilty, L.M. (eds.) What Kind of Information Society? Governance,
Virtuality, Surveillance, Sustainability, Resilience. IFIP Advances in Information and
Communication Technology, vol. 328, pp. 260–268. Springer, Heidelberg (2010). doi:
10.1007/978-3-642-15479-9_25
3. Burke, T.J.: OPC Unified Architecture Interoperability for Industry 4.0 and the Internet of
Things, OPC Foundation. https://opcfoundation.org/wp-content/uploads/2016/05/OPC-UA-
Interoperability-For-Industrie4-and-IoT-EN-v5.pdf. Available as of November 2016
4. European Communities, ICT and Energy Efficiency: The Case for Manufacturing, Office for
Official Publications of the European Communities, Luxembourg (2009). ISBN
978-92-79-11306-2
5. Galloway, B., Hancke, G.P.: Introduction to industrial control networks. IEEE Commun. Surv.
Tutorials 15(2), 860–880 (2013). Second Quarter
6. Kafuko, M., Wanyama, T.: Integrated hands-on and remote PID tuning laboratory. In:
Proceeding of the Canadian Engineering Education Association Conference, Hamilton,
Ontario, Canada, June 2015
7. Likins, M.: PID Tuning Improves Process Efficiency, Yokogawa Corp. of America. http://
www.yokogawa.com/technical-library/resources/media-publications/pid-tuning-improves-
process-efficiency/. Available as of November 2016
8. O’Dwyer, A.: Reducing energy costs by optimizing controller tuning. In: Proceedings of the
2nd International Conference on Renewable Energy in Maritime Island Climates, Dublin,
Ireland, pp. 253–258, April 2006

MODULARITY Applied to SMART HOME
From Research to Education

Doru Ursuţiu1(✉), Andrei Neagu2, Cornel Samoilă3, and Vlad Jinga4

1 Transylvania University of Braşov - AOSR Academy, Braşov, Romania
udoru@unitbv.ro
2 Transylvania University of Braşov, Braşov, Romania
andrei.c.neagu@gmail.com
3 Transylvania University of Braşov - ASTR Academy, Braşov, Romania
csam@unitbv.ro
4 Transylvania University of Braşov - Benchmark Electronic, Braşov, Romania
jingavlad@yahoo.com

Abstract. Reducing energy demand in the residential sector is an important problem worldwide. This study focuses on residents' awareness of energy conservation, the potential for reducing energy use, and the implementation of a solution in the field of the intelligent house. This paper presents a newly designed integrated wireless modular monitoring system that supports real-time data acquisition from multiple wireless sensing units.

Keywords: Energy savings · Building monitoring system · Wireless sensor network · Xbee · Smart home · Cypress · IQRF

1 Introduction

Energy usage and its resulting impacts on our environment have become one of the major concerns humanity is facing today. Depletion of fossil fuels, the impacts on the environment from mining those fuels, and the spectre of global warming exacerbated by burning them are critical reasons for us to become more responsible for the energy we consume.
As one report of The Intergovernmental Panel on Climate Change shows, the indus‐
trial activities that our modern civilization depends upon have raised atmospheric carbon
dioxide levels from 280 parts per million to 379 parts per million in the last 150 years.
The panel also concluded there’s a better than 90% probability that human-produced
greenhouse gases such as carbon dioxide, methane and nitrous oxide have caused much
of the observed increase in Earth’s temperatures over the past 50 years. According to
this report the rate of increase in global warming due to these gases is very likely to be
unprecedented within the past 10,000 years or more [1].
Generating energy requires precious natural resources, for instance coal, oil or gas, while reducing energy consumption has many benefits – we can save money and help



protect and preserve our environment. Therefore, using less energy helps us to preserve
these resources and make them last longer in the future.
In the light of the facts above, we believe energy must in future be a concern of all citizens, and new tools must be created in support of our environment, starting from every household.
It must be noted that energy service demand may also reflect changes in the level of
comfort and lifestyle requirements of households. Specific energy consumption is
defined as the energy required to maintain a particular level of energy service in house‐
holds. It is a modelled alternative to energy intensity, and takes account of changes in
demand for individual energy services (such as level of household comfort or hot water
use), and helps to remove the impact of higher and lower external temperatures on energy
use.
In this paper we focus on energy consumption in Belgium and the Netherlands. According to the VEA Flemish Energy Agency, the average energy consumption per person per day is 50 kWh, of which 71% represents heating [2].
Most companies involved in reducing energy consumption and environmental pollution try to minimize energy consumption by raising the efficiency of their systems and improving the buildings (heating device manufacturers such as Daikin or Viessmann, campaigns like the US Solar Decathlon), or by increasing users' control over their systems (Google's Nest, Smappee, the smart metering systems of Daikin and Viessmann). The disadvantage of these systems is their high cost and the significant changes required in the construction of the building (Fig. 1).

Energy source    Brussels region    Flemish region    Walloon region    Belgium
Natural gas          16480              20712             18733          19644
Fuel oil             16604              27902             26176          26489
Wood                     -              20202             22099          21350
Coal                     -              19110             22582          21803
Propane                  -              21234             15946          18608

Fig. 1. Average total energy consumption (kWh/year, dwelling) per principal energy source per
dwelling per region and for Belgium (survey results) [2]

Figure 2 describes the heating losses from the boiler to the end user. As we can see, there are three main factors involved in the heating process: the first two are the boiler and the building, components that most of the companies mentioned above are dealing with; the third, and arguably the most unpredictable one, is people's behavior. In this research we focus on providing a technological solution, an energy monitoring system which can engage people in saving energy in a more responsible way, with the aim of lowering energy costs.

Fig. 2. Energy losses in a building

To address the above issues of monitoring the inhabitants' behaviour and informing them about their own way of using energy, this paper describes a monitoring system designed to collect data and connect the users with the status of the building. During our study we compared different ways of building a wireless modular system (focusing on the wireless technologies available on the market) and, with the information collected, we identified and validated how the flow of collected input data can contribute to the inhabitants' perception of energy use.
Regarding the hardware, our focus is to provide a reliable and also cheap solution for the first validation stage, as well as to assess the modular configuration and the limits of the system. During our research we evaluated several scenarios and validated our assumptions through case studies conducted in two buildings.

2 Energy Monitoring Architecture

Our aim during the research is to find the best solution for creating and testing a moni‐
toring system dedicated to student house owners. The focus of the system is on the
heating structure and heating losses, which is the most expensive cost for our market.
Because our modular system should not affect the building structure, heating pump
structure and should be easy to install, the following are required:
– a wireless communication structure
– collecting information of air flow in each room.


In order to identify the heating losses it is necessary to observe the temperature of the heater, the temperature of the room and the open/closed status of the window. This information should be enough to predict the thermodynamic flow (Fig. 3).

Fig. 3. Room scenario

In order to validate the requirements mentioned above and also the principles of creating a modular system for this purpose, we defined several hardware requirements for the alpha product: a low-power wireless communication protocol, two temperature sensors and one contact sensor.

3 The Architecture of the System & Operation Pattern

Figure 4 presents the complete structure of the alpha system we are proposing. The flow of information is marked with the blue arrow and is composed of:
1. The "Gateway" described above - the link between the building and the cloud (formed by the Xbee module and the main logic board);
2. The "Sensor" – 2 temperature sensors, 1 contact sensor, the link board and the Xbee module;
3. The "Cloud" – database;
4. The user interface – website.
For a full description of the elements mentioned above and the role of each of them, please refer to the full paper "Evaluating the reliability and scalability of a wireless energy monitoring system in buildings" [3].
In order to validate the system's capabilities and purpose, we created an end-to-end data flow concept. The working pattern from the sensors to the user interface is described in Fig. 4. This process has 4 steps:
1. Starting with the sensors: every 30 s we collect samples from the two temperature sensors (TMP36) and the contact sensor. Through the Xbee Explorer the data is streamed to the Xbee module. The Xbee module packs the data into frames and renders the information ready for transmission (a minimal sketch of this sampling-and-upload flow is given after this list).
2. The frame is sent wirelessly to the Gateway. Acting as coordinator, the Xbee module receives the frame and forwards it to the Arduino Ethernet Board through the Arduino Xbee Shield.
3. The Arduino board unpacks the frame and pushes the raw data through the Ethernet port to the Carriots database to be stored.
4. After each week the data is correlated and processed manually. The result is then placed on a website, using charts and an easy-to-understand description.

Fig. 4. System architecture & description of the operation pattern
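A compact sketch of steps 1 and 3 above, as they might look on a Python-capable gateway, is shown below: converting TMP36 voltages to degrees Celsius, packing a sample every 30 s and pushing it to a cloud endpoint. The read_voltage stub, the endpoint URL and the payload layout are assumptions; the real system uses Arduino/XBee hardware and the Carriots service.

# Sketch of steps 1 and 3: sample the two TMP36 sensors and the contact sensor
# every 30 s and push the readings to a cloud endpoint.
# read_voltage() is a stub, and the endpoint URL and payload layout are
# assumptions; the real system uses Arduino/XBee hardware and Carriots.
import time

import requests

CLOUD_ENDPOINT = "https://cloud.example.com/streams"   # hypothetical endpoint


def read_voltage(channel):
    """Stub for an ADC read; returns volts (replace with real hardware access)."""
    return 0.75   # ~25 degC for a TMP36


def tmp36_to_celsius(volts):
    """TMP36 transfer function: 0.5 V offset, 10 mV per degree Celsius."""
    return (volts - 0.5) * 100.0


while True:
    sample = {
        "heater_temp_c": tmp36_to_celsius(read_voltage(0)),
        "room_temp_c": tmp36_to_celsius(read_voltage(1)),
        "window_open": False,       # stub for the contact sensor
        "timestamp": time.time(),
    }
    requests.post(CLOUD_ENDPOINT, json=sample, timeout=10)
    time.sleep(30)                  # one sample every 30 s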
Figure 5 presents the first part of the system, "The Sensor": the module placed in the rooms, acting as the transmission node.

Fig. 5. “The sensor” setup – the room transmission node


4 Data & User Interface

In order to validate the system and the impact it may have on the inhabitants, we installed it in two different buildings. We collected data for several weeks without informing the inhabitants about the system's presence and purpose; after that, we informed them and let them challenge each other in finding ways of saving energy.
Data was collected and stored from each building for one month. To present the energy consumption of each room to the house owner in an easy-to-understand way, we developed a website including charts and a few recommendations on how to save energy.
Based on the input from our three sensors, we managed not just to inform the owner about the consumption but also to evaluate the habits of the residents. Figure 6 illustrates three days of consumption based on data collected from one of the experimental rooms (Room 4). In this case we can clearly see that the average room temperature is above the normal comfort level of 21 °C, while the way the window is used is not energy efficient: in most situations, the tenant turns the heater to the maximum and opens the window.

Fig. 6. Data collected from one of the devices during three days (red- heater temperature, green
– room temperature, blue – window open/close).

In order to have a clear view of these habits, based on the average of the inputs, we created a profile for each user. For that we averaged the collected temperatures and the open/closed movements of the window over the timeline.
Figure 7 describes the user profile. Furthermore, by providing users with tips on how to be more environmentally friendly, we can clearly see the improvements in the users' energy usage, represented by the green line. Even if we did not get the same involvement from each tenant, it is obvious that by concentrating more on those tips we can get better results in future developments.


Fig. 7. User’s habits profile

To aid understanding of Fig. 7, the red line represents the usage habits during the first three days of testing and the green one those of the last days, after the tips were provided. By subtracting the two, we can see in Fig. 6 the improvement in user habits. This result was obtained in one room, Room 4.

5 Evaluation Methodology

To evaluate system coverage and redundancy, we applied several key methods to simulate and test it in a real-case scenario. First, we used the WHIPP tool to simulate the coverage of the system, focusing on the reception sensitivity. By using this tool we were able to obtain a first understanding of the distance that can be allowed between the devices. The next step was an RSSI measurement conducted in a real environment. The test was made using the X-CTU software, allowing us to understand which links are reliable and where an extra device is needed. The last step was a mesh routing redundancy test, in which we evaluated the system's capability to establish a new link in case of power loss. These are the key methods we applied to test the system and to determine its maximum range.

5.1 WiCa Heuristic Indoor Propagation Prediction Tool

The WiCa Heuristic Indoor Propagation Prediction (WHIPP) tool is an environment for planning wireless networks. The tool is a heuristic indoor network planner for exposure calculation and optimization in homogeneous and heterogeneous wireless networks, with which networks are automatically and jointly optimized for both coverage and electromagnetic exposure. It is capable of predicting and optimizing the coverage and exposure of an indoor wireless network (WiFi, UMTS, XBee). It is based on an advanced and experimentally validated propagation model [4]. Figure 8 presents the layout of one of the tested buildings in the WiCa Heuristic Indoor Propagation Prediction Tool.


Fig. 8. WiCa prediction tool

In our case we used the WHIPP tool to define the exposure limitations of the system in the tested buildings. We started by creating a plan of the building. For that purpose we used preset materials such as dry-wall and wooden doors, and the XBee JN516x sensors with a transmitting power of 3 dBm. The elevation of the sensors was set to 1.5 m for all the simulations. Taking into consideration that the position of the devices is strictly related to the heating modules in the building, we focused on coverage prediction simulation.

5.2 RSSI Measurement

Received signal strength indicator (RSSI) is the signal strength level of a wireless device, measured in dBm, of the last received packet [5]. The main idea behind the RSS system is that the detected signal strength value decreases with the distance travelled. In free space, the RSS degrades with the square of the distance from the sender [6]. Using the Friis transmission equation, the received power Pr can be expressed in terms of the transmitted power Pt as:

P_r = P_t \, G_t \, G_r \left( \frac{\lambda}{4 \pi d} \right)^2

where Gt and Gr are the gains of the transmitter and the receiver respectively, λ is the wavelength, and d (m) is the distance between the sender and the receiver. It can be seen that the larger the wavelength of the propagating wave is, the less susceptible it is to path loss. The received signal strength is converted to RSSI, which can be defined from the ratio of the received power Pr to the reference power PRef:


RSSI = 10 \log \frac{P_r}{P_{Ref}}
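A small numerical illustration of the two relations above, computing the free-space received power from the Friis equation and expressing it against a 1 mW reference, is given below; the transmit power, antenna gains and distance are illustrative values, not measurements from the test buildings.

# Numerical illustration of the Friis relation and the RSSI definition above.
# Transmit power, antenna gains, distance and reference level are illustrative.
import math


def friis_received_power(pt_w, gt, gr, wavelength_m, distance_m):
    """Received power (W) for transmit power pt_w (W) and linear gains gt, gr."""
    return pt_w * gt * gr * (wavelength_m / (4 * math.pi * distance_m)) ** 2


f_hz = 2.4e9                 # XBee modules operate in the 2.4 GHz band
wavelength = 3e8 / f_hz      # ~0.125 m
pt = 2e-3                    # 3 dBm transmit power is roughly 2 mW
pr = friis_received_power(pt, gt=1.0, gr=1.0,
                          wavelength_m=wavelength, distance_m=15.0)

p_ref = 1e-3                 # 1 mW reference, so the result is in dBm
rssi_dbm = 10 * math.log10(pr / p_ref)
print(f"Free-space received power at 15 m: {rssi_dbm:.1f} dBm")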

Using the X-CTU software, one of the XBee modules is configured as a Coordinator and the other as a Router. After pairing with the Coordinator, the Router starts transmitting. Once the Coordinator has received the data packets successfully, it sends back an acknowledgment (ACK). To obtain the RSSI value, the software averages the RSSI results of 100 packets of 32 bytes each.
The distance between the Routers and the Coordinator was variable. We applied different case scenarios depending on the building where we tested the system. As presented in the figures for Building A and Building B, we conducted static tests by placing the coordinator setup in the main hallway of each floor. The average distance between floors is 2.7 m in the case of Building A and 2.3 m for Building B. For each floor setup we tested the link to each device in the system, one by one, in order to determine the relationship between RSSI values and distances.

5.3 Mesh Routing System Redundancy

A ZigBee mesh network configuration is done automatically and flawlessly by the XBee
devices. The Coordinator starts a ZigBee network, and other devices then join the
network by sending association requests. As we described in the second chapter, ZigBee
networks are considered self-forming networks due to their ability to self-route.
After forming the mesh network, to relay the message from one device to another,
the most optimized path is selected. However, if one of the routers becomes damaged
or otherwise unable to communicate due to power loss, the network can select an alter‐
native route.
One of the most important characteristics of ZigBee mesh networking has been its
self-healing capacity, the ability to create alternative paths when one node fails or a
connection is lost through mesh routing [7]. In order to test this attribute of mesh routing,
we observed the elapsed time between the elimination of one path and the search and
formation of another. To perform the test, we captured messages received on Coordi‐
nator from the network. Then we powered off one by one each Router until the link was
disconnected. This experiment determines where a repetitive (Router) device is needed
in order to guarantee the redundancy of the system.

5.4 Test Results Between the Tool and Real Conditions

Taking into consideration the results of the mesh redundancy test, we observed that without the device in room 2 the network cannot communicate with the device in room 1. In order to compare the results of the WHIPP tool with those obtained in the real situation, we simulated scenario A without the device in room 2 (Fig. 9b).


Fig. 9. (a) WHIPP simulation. (b)WHIPP simulation without device 2

Figure 9 presents the two scenarios: (a) with the device in room 2 and (b) without it. There is a change in color between the two simulations: the first one shows a green-yellow color, while the second shows a darker green, marked with the red oval. According to the tool, this color represents a sensitivity range between −44 and −50 dBm, and in the real-situation test it corresponds to a loss of connection.

6 Experiment Conclusions

As shown in this paper, to assess the performance of the proposed system different scenarios were undertaken to measure the performance of the monitoring network, with a focus on the mesh routing redundancy and the RSSI level of the Xbee modules, and the results were compared with a software simulation. The results showed that the performance of the network depends strongly on the distance between devices and on the indoor environment. Reliable communication is possible over a distance of two floors (15 m); beyond that, communication can be lost or impossible to establish.
As a general conclusion of the design and testing phases of the proposed alpha monitoring system, the study shows the performance of the system as a tool to monitor and optimize energy consumption. After one month of collecting data with the proposed system installed in student houses, we managed to define the habits of the building users. This helped to obtain a profile for each room, and the approach of informing the students led us to validate the system's purpose of saving energy. By using the monitoring system we managed to obtain savings between 8% and 21% and to make the first steps towards creating a standard regarding the energy habits of the room user [3].
This result brings even more value to the goal of saving energy, as the cost of using the system is a small one-time investment for an indefinite period of time. It comes with a relatively simple structure and usage, a user-friendly interface and a low cost, while the long-term energy savings can bring a significant reduction in energy cost for the user and protect the environment in terms of conserving the natural resources needed to produce energy.


7 Future Development

In future development we plan to investigate the possibilities of extending the sensors used and the modularity aspects of the entire system. For the beta version we expect to use batteries for the end nodes and to implement a friendlier interface for the end user.
Another point of interest is to reduce the device dimensions as much as possible. In this way the new generation should have all the sensors in one single box. As described in Fig. 10, we plan to integrate the entire Sensor into a layered module.

Fig. 10. Concept beta version-layered module

We are looking to explore the possibilities of using different technologies, like the PSoC Analog Coprocessor provided by Cypress, as a layer for processing multiple sensor inputs. The PSoC Analog Coprocessor integrates programmable analog blocks, including a new Universal Analog Block (UAB), which can be configured with GUI-based software components. By using this technology we aim to improve the ways we can design custom analog front ends for sensor interfaces.

Fig. 11. CY8CKIT-048 PSoC from Cypress and CK-USB-04A from IQRF


Moreover, due to our strong collaboration with Microrisc and to their complete structure for data collection, cloud storage and online visualization, we intend to integrate the IQRF transmission technology into our system. So far the IQRF technology has helped us in our need to scale the range of the system and, on top of that, from our first tests we were able to reduce the energy consumption of each node to 20%. As presented in Fig. 11, we started our tests by using the CY8CKIT-048 PSoC from Cypress and the CK-USB-04A from IQRF.
Regarding the user approach, we want to improve the web platform in order to give better access to each student. We also want to create a friendly way to send real-time messages to students with possible actions. In order to protect the concept, we will look for a solution to move all the processing algorithms away from the hardware. Extra features, like CO2 and humidity sensors, will also be implemented in the same device. This information will add even more value to our system by providing the status of the air quality in an indoor environment.


Development of M.Eng. Programs with a Focus
on Industry 4.0 and Smart Systems

Michael D. Justason, Dan Centea(✉), and Lotfi Belkhir

McMaster University, Hamilton, ON, Canada
{justaso,centeadn,belkhir}@mcmaster.ca

Abstract. Master of Engineering Programs are often designed to provide skills that can be readily used in industry. Although many M.Eng. Programs include courses that can be selected from an existing pool of traditional engineering topics to fulfill various specializations, this paper describes the development of new M.Eng. Programs designed to include courses that address the new trends in industry. This paper presents the design and implementation of new M.Eng. Programs that focus on modern approaches in manufacturing, namely Industry 4.0 and Smart Systems. The integration of these new M.Eng. Programs with related undergraduate programs is also described, as is the potential to provide certain students with an accelerated pathway to professional licensure. Several common elements of Industry 4.0 trends are contained within these new programs. These elements include cyber-physical systems, the internet of things, and the development of smart systems. This paper presents the development of three M.Eng. Programs: Automotive, Automation, and Advanced Manufacturing. These programs focus on real-world problems of industries in which progress is fast and in which specialists need to provide constantly evolving, creative, and innovative solutions. Because they are designed for both full-time students and part-time students from industry, the courses developed for these programs are offered in the evening. Students can choose between a course-and-project option that includes six courses and a project and a course-only option that includes eight courses. The graduates of these programs are expected to have a strong technical grounding with broad management and industry perspectives combined with strong nontechnical areas of expertise.

Keywords: M.Eng. · Industry 4.0 · Smart systems · McMaster

1 Introduction and Background

The W Booth School of Engineering Practice and Technology (SEPT) at McMaster


University in Hamilton, Ontario, Canada is a School contained within the Faculty of
Engineering. SEPT offers seven undergraduate programs which award Bachelor of
Technology (B.Tech.) degrees. The School also offers five specialized Masters programs
awarding M.Eng. degrees. The defining characteristic of the School is its focus on real-
world problems. SEPT exists as a complement and a contrast to the traditional Depart‐
ments within the Faculty of Engineering which focus on theory and discovery and award



Bachelor of Engineering (B.Eng.) degrees, Master of Applied Science (M.A.Sc.) degrees, and Doctorate (Ph.D.) degrees.
The undergraduate programs in SEPT differ from traditional Bachelor of Engineering programs in several ways. Some advantages of the programs are: they are strongly influenced by industry and their curricula are flexible, they focus more on experiential learning and student-centered learning, they employ more sessional/industry instructors, and they integrate the fundamentals of business into the curriculum. One disadvantage of the programs is that they are not ‘accredited’ by the governing body that oversees engineering curricula in Canada. Graduates of these programs have a more difficult pathway to professional licensure than graduates of traditional Bachelor of Engineering programs.
Of the seven undergraduate programs in SEPT, four of them are “degree completion
programs” (DCP). These are designed for graduates of post-secondary institutions called
“colleges of applied arts and technology” or colleges for short. McMaster University’s
degree completion programs offer a Bachelor of Technology degree upon the completion
of 24 courses above and beyond the completion of a three-year college diploma in a
related field. The four DCP programs are: Civil Engineering Infrastructure Technology, Energy Engineering Technologies, Manufacturing Engineering Technology, and Software Engineering Technology. Each program contains 17 technical courses and 7 business/management courses. These programs also contain a mandatory 8-month co-op work term, although this requirement is waived for the majority of students since the programs' unique evening and weekend schedules attract a large number of working students. There are currently 400 students in the DCP programs; the majority are enrolled as part-time students.
The three remaining undergraduate programs in SEPT are Automotive and Vehicle
Technology, Process Automation Technology, and Biotechnology. These programs are
direct-entry from High School and are 4.5-year degrees which include 12-months of co-op
work placement. These programs are offered during regular daytime hours and are full-time
programs. There are currently 800 students enrolled in these programs.
At the graduate level, the School of Engineering Practice and Technology offers five
unique Masters programs–four granting the degree Master of Engineering, and one
granting the degree Master of Technology. These programs are: Master of Engineering
Entrepreneurship and Innovation, Master of Technology Entrepreneurship and Innova‐
tion (open to students with non-engineering/non-science undergraduate degrees),
Master of Engineering Design, Master of Engineering and Public Policy, and Master of
Engineering in Manufacturing Engineering. There are currently 150 students enrolled
in these programs.
The expansion of the Master of Engineering in Manufacturing Engineering into three
‘Industry 4.0 and Smart Systems’ focus-areas forms the subject of this paper. The three
new M.Eng. focus-areas are: Automation, Automotive, and Advanced Manufacturing.


2 Motivation

The School of Engineering Practice and Technology is well positioned to prepare grad‐
uates for employment in the manufacturing sector. The School’s location in Hamilton,
Ontario is central to Canada’s manufacturing industry.
Producing university graduates with skills that are immediately applicable is a chal‐
lenge in many industries with rapidly-changing technology, and this is especially true
in the manufacturing industry [1]. It is particularly evident in the area of automotive
engineering [2]. SEPT has already implemented undergraduate programs that address
this challenge; namely Automotive and Vehicle Technology, Process Automation Tech‐
nology, and Manufacturing Engineering Technology. Industry 4.0-based Master’s
Programs will provide students with a continuing pathway to graduate-level programs.
It is also intended to facilitate the pathway to professional licensure for graduates of the
aforementioned undergraduate programs, but is also open to graduates of traditional
undergraduate engineering programs as well as international students. The pathways
created by the new M.Eng. programs are shown in Fig. 1.

[Figure 1 depicts the pathways: Bachelor of Engineering graduates from the traditional Departments with accredited programs (Mechanical, Industrial, Chemical) and Bachelor of Technology graduates (Automotive and Vehicle Technology, Process Automation Technology, Manufacturing Engineering Technology) can enter the Master of Engineering in Manufacturing (Industry 4.0 and Smart Systems focus areas: Automation, Automotive, Advanced Manufacturing), and from there proceed via experience, references, and the Professional Practice Examination to professional licensure.]

Fig. 1. Pathways

It should be noted that graduates of the Bachelor of Technology undergraduate program have an existing pathway to professional licensure (shown by the light-grey patterned arrow in Fig. 1) but this involves a series of technical challenge exams administered by the Provincial license-granting body. The number of exams can range from
as few as four, to as many as ten depending on the year of graduation. More recent
graduates are assigned fewer exams thanks to the evolution of the curriculum towards
content that is more favorable to the licensing body. Completion of an M.Eng. degree
after the B.Tech. degree can significantly reduce or in some special cases even eliminate
the need to complete any challenge exams (represented by the dark-grey patterned
arrow). Additionally, the time spent in the M.Eng. program may also count towards the


amount of work experience required for licensure. Typically, the M.Eng. program will count as one year of the required four years of work experience.
The new M.Eng. Programs within the School of Engineering Practice and Tech‐
nology have content and delivery methods consistent with the Vision of the new school.
The School’s Vision can be characterized by the following elements: industry-driven,
hands-on, case-studies, in course projects, advanced methods and technologies, inno‐
vative teaching methods, sustainability, community-focused, professional development,
communications, management, design, problem-solving, and integration of professional
and technical skills.
With this Vision in mind the motivation for introducing these three new focus areas
to the Master of Engineering in Manufacturing Engineering (MEME) can be organized
into four main areas:
1. Opportunities for Students
2. Opportunities for Faculty
3. Opportunities for Partners
4. Opportunities for the Faculty of Engineering.
The creation of the new M.Eng. programs also provides the School with an oppor‐
tunity to educate students in areas that are complementary to the technical aspects of
Industry 4.0. Successful Industry 4.0 implementation involves aspects of a business
‘outside’ the functions related directly to the manufacturing process. Business consid‐
erations such as human resource management, accounting and finance, strategy, culture,
and leadership all play a role in the successful implementation of Industry 4.0. This
supports the idea of a T-Shaped graduate, with a broad knowledge of business that is
outside their specific technical area [3]. This concept is particularly important in the area
of human resource management and supply-chain management [4, 5].

2.1 Opportunities for Students


These new focus areas within the M.Eng. Programs in the W Booth School of Engi‐
neering Practice and Technology create the following opportunities for students:
• The chance to obtain a graduate degree in a high-demand, industry driven topic.
• Pathways to the new M.Eng. in Manufacturing Engineering (MEME) focus areas can
be streamlined by adding undergraduate elective courses that offer advanced credit
for M.Eng. programs.
• Undergraduate and graduate students will connect inside the courses that are offered for advanced credit. This offers the potential for mentoring, and possibly even collaboration on projects.
• Graduate students will have the opportunity to become Teaching Assistants for the
Undergraduate courses.
• Undergraduate students may have increased contact with industry partners engaged
in projects with Masters students. Possible co-op placement opportunities for under‐
graduate students may result.


• A clear pathway into a Master’s program will offer Bachelor of Technology graduates
an improved pathway to professional licensure.
• Completing an M.Eng. degree offers Bachelor of Technology graduates the oppor‐
tunity to participate in the Ritual of the Calling of the Engineer (the ‘Iron-Ring’).
• The new focus areas will remain accessible to graduates of more traditional Bachelor
of Engineering programs, both from McMaster and elsewhere.

2.2 Opportunities for Faculty


It should be noted that the undergraduate programs in SEPT employ a large number of
contract faculty and teaching-track faculty. These faculty members have heavy teaching
loads and are often responsible for teaching most of the core and entry-level courses.
These new focus areas within the M.Eng. programs in the W Booth School of Engi‐
neering Practice and Technology create the following opportunities for the undergrad‐
uate faculty members:
• Opportunities to teach and mentor graduate students
• Opportunity to teach graduate level courses in their own areas of expertise
• Chance for collaborative applied research, innovation in teaching and learning, and
pedagogical research.
Faculty already teaching in the existing five M.Eng. programs may see the following
opportunities:
• Synergy of people with common interests
• More effective use of human resources and less reliance on sessional lecturers
• Possibility to share resources: building (labs, room bookings, meeting rooms) and
technical support staff.
Additionally, there are ‘research gaps’ between current manufacturing systems and
the potential that exists with the implementation of industry 4.0 ideas. This supports the
idea for educational programs designed specifically around an Industry 4.0 framework
and creates the potential for research opportunities for Faculty involved in these new
focus areas [5].

2.3 Opportunities for Partners

These new focus areas within the M.Eng. Programs in the W Booth School of Engi‐
neering Practice and Technology create the following opportunities for the School’s
partners:
• Feeder colleges to the DCP programs can offer students a direct pathway from college
through to an M.Eng. degree, and ultimately to professional licensure
• Community Partners (Companies, Organizations, and Government):
– Richer engagement with groups of undergraduate and graduate students
– Working with a broader spectrum of potential co-op or full-time employees
prescreened through project engagement.


2.4 Opportunities for Faculty of Engineering


• Growth in enrollment at both the undergraduate and graduate levels
• Ability to deliver specialized M.Eng. programs not currently offered in the traditional
engineering Departments
• The expansion of an already effective School focused on depth and breadth of
learning, pedagogical research, and engineering practice
• Expanding programs committed to serving community and industry needs
• Flexibility to offer undergraduate and graduate curriculum that responds quickly to
changes in community and industry needs (unlike accredited programs)
• Enhanced recognition and reputation of the Faculty
• Further the Faculty’s mission to implement the concept of sustainability into the
curriculum of all programs - embedding sustainability into the new M.Eng. focus-
areas is important in light of the great opportunities that exist in the area of Industry
4.0 [6].
A further opportunity for the Faculty, and in particular for the School of Engineering Practice and Technology, is to create a special teaching/learning/research facility called a ‘Learning Factory’. This small-scale ‘functioning’ manufacturing facility offers smaller companies a chance to train employees in the skills and technology needed to implement Industry 4.0 concepts. It also positions the university as a direct supporter of local/regional companies through technology, as well as providing a supply of appropriately trained professionals [7].

3 Methodology

The activities described in this section were carried out by a SEPT committee called the M.Eng. Task Force. This group of six SEPT faculty members, plus one representative from the School of Graduate Studies, met monthly from approximately mid-2015 to mid-2016. This section outlines the committee's activities.

3.1 Market Research

The first step in the development of the new M.Eng. focus areas was to engage students
and alumni in market research. The results of a survey that included responses from 354
B.Tech. students, 342 B.Eng. students, and 146 alumni are shown below. A study of
competing programs at nearby Universities was also completed.
• B.Tech. Students–50% indicated a desire to pursue an M.Eng. degree
• B.Eng. Students–more than 80% indicated a strong interest in an M.Eng. degree
• Alumni–approximately 66% of McMaster Engineering Alumni living within a 1-hour
commute of McMaster indicated a strong interest in an M.Eng. degree
• M.Eng. programs at other Ontario Universities are popular; even ‘over-subscribed’
• Based on historical enrollment numbers in the existing M.Eng. programs, there is
typically a large demand from international students (>60% of existing M.Eng.
students are international students).


Based on the results of the market research, it was evident that the demand for M.Eng.
programs was strong among all target groups. This market research encouraged the
M.Eng. Task Force to continue its activities.

3.2 Implementation
To facilitate a January 2017 implementation of the new Master of Engineering in Manu‐
facturing Engineering (MEME) focus areas, the new focus areas needed to be structured
within the framework of the existing program. It was not possible to seek approval for a completely new program structure, as this could take up to two years. The existing
framework for the MEME program was as follows:
• Students can take up to two graduate level courses from the Mechanical, Materials,
and Chemical Engineering Departments.
• Each student must complete a project at a manufacturing company plus six graduate-level courses (or eight graduate-level courses without a project).
• Courses from Departments other than the three ‘approved’ Departments (Mechan‐
ical, Materials, and Chemical) must be approved on a case by case basis.
• All other courses must be taken within SEPT.
The details of the implementation suggested that it was possible to offer the new
MEME focus-areas for students starting the program in January 2017 provided they
elected to complete the 6-course plus project option. The 8-course option would need
to be implemented in the Fall of 2017 due to the requirement to create and seek approval
for additional (new) courses within SEPT.

Table 1. Actual course-offerings ('core' C and 'elective' E) in the Automation / Automotive / Advanced Manufacturing focus areas

Industry 4.0: C / C / C
Components, networks, interoperability: C / E / C
Sensors and actuators: C / C / C
Data mining & machine learning: E / E / E
Cyber security: E / E / E
Systems analysis and optimization: E / C
Hybrid & electric vehicles design: E / C
Additive manufacturing: C / E
Robotics: E / E / E
Analysis & troubleshooting of Mfg operations: C / C / E
Real time control, advanced topics: C / E / E


Other actions undertaken by the M.Eng. Task Force included:


• Preparation of an expanded list of pre-approved graduate level courses from Depart‐
ments other than the three approved Departments (Mechanical, Materials, Chemical).
• Approval to offer a selected number of SEPT undergraduate courses as possible
‘advance-credit’ courses.
• Design and approval for new SEPT Industry 4.0-themed courses for inclusion in the
Fall 2017 program start (see Table 1).

3.3 Final Program Design


• Students will be required to take eight courses
• Students may opt for six courses plus a project (project subject to approval)
• Full-time and part-time studies will be possible; courses delivered in the evenings
when possible
• Some online course-offerings will be considered.

3.4 Future Developments

The launch of a fourth Industry 4.0 focus area is targeted for September 2018. This focus
area would be in ‘Digital Solutions’.
A second M.Eng. theme-area tentatively referred to as ‘Smart Cities’ is also targeted
for September 2018. Specializations may include: Civil Infrastructure, Biotechnology,
and Power and Energy.

4 Summary and Conclusions

A set of M.Eng. Programs developed at McMaster University in the School of Engineering Practice and Technology, with a focus on modern real-world problems from industry and society, is expected to produce graduates well positioned for careers on the leading edge of manufacturing engineering and technology.
The new M.Eng. Programs are innovative, interdisciplinary, and industry-focused, with a strong emphasis on management, leadership, and community engagement. They have strong industry interaction and include projects meaningful to society. The new Programs also complement the associated undergraduate programs in manufacturing, software, process automation, and automotive and vehicle technology, yet remain accessible to graduates of more traditional engineering disciplines.
Although implementations of Industry 4.0 key elements can vary significantly within
different specializations, there are several common elements. These include cyber-
physical systems, internet of things, and development of smart systems. This paper
presented the development of three M.Eng. Programs that include these elements: Auto‐
motive, Automation, and Advanced Manufacturing. These programs focus on real-
world problems of industries in which progress is very fast and in which specialists need
to provide constantly evolving, creative, and innovative solutions. The M.Eng. programs


offer full-time and part-time options, as well as course-and-project or course-only


options.
The graduates of these programs are expected to have a strong technical grounding
with broad management and industry perspectives combined with strong nontechnical
areas of expertise.

References

1. Schuh, G., Gartzen, T., Rodenhauser, T.M.A.: Promoting work-based learning through
industry 4.0. In: The 5th Conference on Learning Factories 2015, Bochum (2015)
2. Riel, A., Tichkiewitch, S., Stolfa, S., Kreiner, C., Messnarz, R., Rodic, M.: Industry-academia
cooperation to empower automotive engineering designers. In: 26th CIRP Design Conference,
Stockholm (2016)
3. Schumacher, A., Erol, S., Sihn, W.: A maturity model for assessing industry 4.0 readiness and
maturity of manufacturing enterprises. In: Changeable, Agile, Reconfigurable and Virtual
Production, Stockholm (2016)
4. Hecklau, F., Galeitzke, M., Flachs, S., Kohl, H.: Holistic approach for human resource management in Industry 4.0. In: 6th CLF - 6th CIRP Conference on Learning Factories (2016)
5. Huxtable, J., Schaefer, D.: On servitization of the manufacturing industry in the UK. In: Changeable, Agile, Reconfigurable and Virtual Production, Bath (2016)
6. Stock, T., Seliger, G.: Opportunities of sustainable manufacturing in industry 4.0. In: 13th
Global Conference on Sustainable Manufacturing- Decoupling Growth from Resource Use,
Ho Chi Minh City (2016)
7. Faller, C., Feldmuller, D.: Industry 4.0 learning factory for regional SMEs. In: The 5th
Conference on Learning Factories 2015, Bochum (2015)

Remote Acoustic Monitoring System
for Noise Sensing

Unai Hernandez-Jayo1,2(✉), Rosa Ma Alsina-Pagès3, Ignacio Angulo1,2, and Francesc Alías3

1 DeustoTech - Fundación Deusto, Avda. Universidades, 24, 48007 Bilbao, Spain
{unai.hernandez, ignacio.angulo}@deusto.es
2 Facultad Ingeniería, Universidad de Deusto, Avda. Universidades, 24, 48007 Bilbao, Spain
3 GTM - Grup de Recerca en Tecnologies Mèdia, La Salle - Universitat Ramon Llull, Quatre Camins, 30, 08022 Barcelona, Spain
{ralsina, falias}@salleurl.edu

Abstract. The concept of smart cities comprises a wide range of control


and actuators systems aimed to improve the habitability and perception
that citizens have of cities. A smart city covers many of these systems,
ranging from applications that facilitate the governance of cities and
encourage citizens’ participation to services focused on improving their
quality of life. Among them, we can highlight those using Information
and Communication Technologies (ICT) to improve the environment of
the city. Besides deploying air quality monitoring systems, smart cities
are beginning to include other ICT-based systems, such as the work in
progress proposed in this paper, which aims to remotely monitor noise levels at different points of the city using the public bus system as a mobile sensor network.

1 Introduction
According to the European Commission, a large majority of European citizens live in urban environments [1], which in 2010 hosted approximately 50.5% of the world's population [2]. This trend, far from diminishing, is increasing year by year. In 1990, the United Nations Population Division reported ten “mega-cities” with 10 million inhabitants or more. In 2014,
the number of mega-cities was 28 (representing about 12% of the world’s urban
dwellers). By 2030, the world is projected to have 41 mega-cities with 10 million
inhabitants or more [3].
From the perspective of the emerging economies, mega-cities will become
the largest markets for consuming new technology products, due to the need of
the authorities to apply the so-called ICT (Information and Communications
Technologies) to deal with problems related to the economy, buildings, mobility,
energy, citizens, planning and governance of the cities. It is in this scenario that the concept of smart cities has been developing for the last ten years in order to provide solutions to the new challenges posed by these urban areas.



The idea of smart cities comprises a wide range of control and actuators
systems aimed to improve the habitability and perception that citizens have of
cities. A smart city covers many of these systems, ranging from applications
that facilitate the governance of cities and encourage citizens’ participation to
services specifically focused on improving their quality of life. Among these sys-
tems, we can highlight those using ICT to improve the environment of the city,
but not only from an air quality perspective, but also to control the noise levels
of the city. According to the World Health Organization, environmental noise has emerged as a leading environmental nuisance and triggers one of the most common public complaints in many Member States of the European Union. The European Union addresses the problem of environmental noise with international laws and directives (such as the European Noise Directive [4]) on
the assessment and management of environmental noise [5].
In this context, the work in progress presented in this paper shows the ICT-
based approach that the University of Deusto and the Ramon Llull University are
developing jointly in the frame of the Aristos Campus Mundus initiative, with
the goal of obtaining real time information about the equivalent noise levels
(Leq ) of a city. For that purpose, the developed tool will be able to monitor
remotely noise levels at different points of the city using the public bus system
as a mobile sensors network.
The paper is structured as follows. In Sect. 2, the related work regarding
mobile acoustic monitoring is detailed, and in Sect. 3, its challenges are pointed
out. In Sect. 4, the hardware approach proposed to address the problem of mobile
acoustic monitoring is explained, and in Sect. 5 the first acoustic signal processing
algorithms developed to face this challenge are described. In Sect. 6, we detail
the expected outcomes of the collaborative project, and, finally, in Sect. 7 we
focus on the conclusions and future work.

2 Related Work in Mobile Acoustic Monitoring

Traditional noise measurements in cities have been mainly carried out by pro-
fessionals that record and analyze the data in a certain location typically using
certified sound level meters. This approach allows reliable analyses but is costly,
hardly scalable and difficult to follow rapid changes of urban environments.
To address these drawbacks, in the last decade, several approaches of systems
focused on the monitoring of environmental noise have been proposed (see [6]
and references therein). Their main goal has been oriented to the development
of small equipments assuring the reliability of the acoustic measurements. More-
over, these systems have been designed to allow scalability by reducing the
cost of the hardware (i.e., low-cost acoustic sensors) and improving the net-
work data communication in order to tailor a noise map by means of mobile
acoustic monitoring.
One of the first experiences in mobile acoustic monitoring is detailed in [7],
where a mobile sensing unit (MSU) associated with a Global Positioning System
(GPS) is used to perform acoustic monitoring in Seoul (South Korea). With a


reduced set of sensors, the Seoul ubiquitous sensing project conducted a wide
range of tests across several city locations. The MSU were even installed in cars
and buses moving around the city by following repetitive circuits. These nodes
measured temperature and humidity, and also noise level. However, no signal
processing is included for the latter (or at least detailed) neither in the sensors
nor in the network hub.
In [8], the system is based on an array of sensors carried by a vehicle driving
along the streets of the city to acquire measurements from different locations.
The goal of that piece of research is to estimate the locations and the power of the stationary noise sources in the areas of interest. For this purpose, the
data gathered by the array is post-processed before plotting the several sources
in the noise map, but no details are given about the vehicle noise treatment.
Dekoninck et al. [9] focus on the study of low density roads, including both
mobile and fixed noise monitoring platforms. The proposal is based on perform-
ing the mobile measurements by bicycle, which provides a new view on the local
variability of noise and air pollution based on computing the differences of mea-
surements along road segments [10]. This proposal is easily applicable to any
other cities to monitor both noise and air pollution at the cost of having enough
bicycle riders.

3 Challenges of Mobile Bus Acoustic Signal Processing


The first challenge associated to audio signal processing for ubiquitous noise
measurements is the correction that has to be applied to the noise generated by
the mobile vehicle transporting the acoustic sensor, in this work, a public bus.
Obviously, the bus contributes to traffic noise in the city, but this noise source is
very close to the point of measure. Therefore, it has to be detected and its con-
tribution to the city noise map has to be considered appropriately. Nevertheless,
this problem presents a counterpart: the process can take advantage of having
the audio reference of the noise source (mainly produced by the bus engine).
In [11], a signal processing system dealing with the identification and the
estimation of the contribution of different noise sources to an overall noise level
is presented. The proposal is based on a Fisher’s Linear Discriminant classifier
and estimates the contribution based on a distance measure. Later, in [12] a
similar system based on probabilistic latent component analysis is described.
This approach is based on a sound event dictionary where each element consists
of a succession of spectral templates, controlled by class-wise Hidden Markov
Models.
In [13], a review of the difficulties for appropriately measuring the perfor-
mance of polyphonic sound event detection is stated before gathering several
metrics specifically designed for this purpose. For the problem of bus noise mixed
with other traffic noise sources, the complexity of sound event recognition is sig-
nificant. On the one hand, the sounds are continuously overlapped, and on the
other hand, the type of signal to be distinguished is very similar. To this aim,
a supervised model trained to identify overlapping sound events based on unsu-
pervised source separation is presented in [14]. In [15], the authors detect sound


events from real data using coupled matrix factorization of spectral representa-
tions and class annotations. Finally, in [16], the authors exploit deep learning
methods to detect acoustic events by means of using the spectro-temporal local-
ity. For more references and details about these approaches, the reader is referred
to [13].
To conclude, the most challenging issue for the problem at hand is appropri-
ately integrating the bus and surrounding traffic noise levels to the noise map
due to their similarity so as to compute the Leq value correctly. This increases
the complexity of the separation system significantly, which can be addressed
thanks to having the reference signal, that is, considering as input the noise gen-
erated by the bus for the sound source identification and subsequent integration
in the noise map computation.

4 Hardware Approach
The hardware system designed to deploy the ubiquitous acoustic sensor net-
work to remotely monitor the noise pollution of a city is based on three main
subsystems:

Fig. 1. Scope of the remote acoustic monitoring system


– Hardware platform: formed by an embedded system capable of sampling sig-


nals across the human audible range. This system will be provided with a
microphone which is in charge of collecting acoustic signals, a GPS for geo-
referencing the collected samples and two communication modems, one WiFi
and the other GPRS. After collecting the information, the processed samples
will be uploaded to a central server.
– Signal processing software: it will be deployed on the embedded system
firmware. The set of algorithms should be able to (a) filter conveniently the
noise generated by the public bus where the embedded system will be installed
in order to include its contribution to the urban noise appropriately; and in
the future (b) characterise the noise sources captured by the microphone in
order to classify the type of traffic depending on the street and the period of
the day.
– Server and web application: responsible for saving all the information processed by the multiple embedded systems that could be deployed on board public buses, and for showing these data in a web-based Geographic Information System (GIS) that represents a dynamic noise map of the city generated along the routes followed by the network of public buses.

The current hardware prototype developed for the remote acoustic monitor-
ing system is shown in Fig. 1, which represents the scope of the whole system,
showing the following set of subsystems:
– FRDM-KL25Z embedded system
– CMA-4544PF-W omnidirectional capsule microphone with auto gain control
based on the MAX9814 amplifier
– ESP8266 WiFi communications module
– Adafruit FONA 808, which is an all-in-one mobile communication interface plus a GPS module (a structural sketch of how these components cooperate follows)
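To make the division of work between these components more concrete, the following minimal sketch outlines the acquisition loop such a node could run. It is illustrative only: the driver hooks mic_read_window(), gps_get_fix() and upload_record() are hypothetical placeholders (stubbed here so the sketch compiles), the 16 kHz sampling rate is an assumption, and the 30 ms window simply follows the framing used by the signal processing described in the next section.

```c
/* Illustrative acquisition loop for one mobile sensing node.
 * All hardware hooks are hypothetical stubs, not a real driver API. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define SAMPLE_RATE_HZ 16000                        /* assumed sampling rate   */
#define WINDOW_SAMPLES (SAMPLE_RATE_HZ * 30 / 1000) /* 30 ms analysis window   */

typedef struct { double lat, lon; } gps_fix_t;

/* --- hypothetical platform hooks, stubbed for illustration ---------------- */
static int mic_read_window(int16_t *buf, int n) {
    for (int i = 0; i < n; i++) buf[i] = 0;         /* real code: ADC/I2S read */
    return 1;
}
static int gps_get_fix(gps_fix_t *fix) { fix->lat = 0.0; fix->lon = 0.0; return 1; }
static void upload_record(double lat, double lon, double level_db) {
    printf("upload: %.5f,%.5f %.1f dB\n", lat, lon, level_db); /* WiFi or GPRS */
}

/* Level of one window relative to full scale; a placeholder for the
 * calibrated, bus-noise-corrected Leq computation described in the paper. */
static double window_level_db(const int16_t *buf, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double s = buf[i] / 32768.0;
        sum += s * s;
    }
    return 10.0 * log10(sum / n + 1e-12);
}

int main(void) {
    int16_t window[WINDOW_SAMPLES];
    gps_fix_t fix;
    for (int k = 0; k < 3; k++) {                   /* real firmware: endless loop */
        if (!mic_read_window(window, WINDOW_SAMPLES)) continue;
        double level = window_level_db(window, WINDOW_SAMPLES);
        if (!gps_get_fix(&fix)) continue;           /* geo-reference each sample   */
        upload_record(fix.lat, fix.lon, level);     /* send to the central server  */
    }
    return 0;
}
```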

5 Mobile Bus Audio Processing Future Approach


The first approach for dealing with the mobile audio processing system will be
implemented based on the state of the art described in Sect. 3. The structure
of the proposed system is shown in Fig. 2. The audio signal corresponding to
traffic noise is captured by the microphone, and windowed every 30 ms before
being parametrized through the feature extraction block. Next, each parame-
trized audio frame enters the classification stage based on some machine learning
approach, which has been previously trained for the problem at hand.
The feature extraction procedure is detailed in Fig. 3. After choosing the
frame size and overlap, the time signal is converted to frequency by means of a
Fast Fourier Transform. Next, a Mel filter bank is applied to the signal, and a
logarithm of the absolute value of the Mel-based spectrum is obtained to emulate
the ear listening response. Finally, the inverse cosine transform is applied to the
signal and a finite number of significant coefficients are chosen to describe the
audio signal denoted as Mel Frequency Cepstral Coefficients (MFCC) [17].


Fig. 2. Sound event classification diagram (audio signal → windowing → feature extraction → machine learning → recognized sound event)

Fig. 3. Feature extraction procedure - Mel Frequency Cepstral Coefficients diagram (windowing → FFT → Mel filter bank → log|·| → IDCT → coefficient selection)
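To make the pipeline of Fig. 3 concrete, the following sketch computes the MFCCs of one frame, assuming the frame's power spectrum and a Mel filter-bank matrix have already been computed (the FFT and the filter-bank design are omitted for brevity); the bin, band and coefficient counts are typical values chosen for illustration, not parameters reported by the authors.

```c
/* MFCC computation for a single frame, given its power spectrum and a
 * precomputed Mel filter bank (illustrative sketch; sizes are assumptions). */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NUM_BINS   257   /* e.g. 512-point FFT -> 257 power-spectrum bins */
#define NUM_BANDS  26    /* number of Mel filters                         */
#define NUM_COEFF  13    /* coefficients kept to describe the frame       */

static void mfcc_from_power_spectrum(const double power[NUM_BINS],
                                     const double melbank[NUM_BANDS][NUM_BINS],
                                     double mfcc[NUM_COEFF])
{
    double logmel[NUM_BANDS];

    /* Mel filter bank followed by the logarithm of each band energy */
    for (int m = 0; m < NUM_BANDS; m++) {
        double e = 0.0;
        for (int k = 0; k < NUM_BINS; k++)
            e += melbank[m][k] * power[k];
        logmel[m] = log(e + 1e-12);                 /* avoid log(0) */
    }

    /* Inverse cosine transform (DCT-II) of the log-Mel energies,
     * keeping only the first NUM_COEFF coefficients */
    for (int n = 0; n < NUM_COEFF; n++) {
        double c = 0.0;
        for (int m = 0; m < NUM_BANDS; m++)
            c += logmel[m] * cos(M_PI * n * (m + 0.5) / NUM_BANDS);
        mfcc[n] = c;
    }
}

int main(void)
{
    static double power[NUM_BINS];
    static double melbank[NUM_BANDS][NUM_BINS];
    double mfcc[NUM_COEFF];

    /* dummy data (flat spectrum, trivial filter bank) just to run the code */
    for (int k = 0; k < NUM_BINS; k++) power[k] = 1.0;
    for (int m = 0; m < NUM_BANDS; m++) melbank[m][m] = 1.0;

    mfcc_from_power_spectrum(power, melbank, mfcc);
    printf("c0 = %.3f\n", mfcc[0]);
    return 0;
}
```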

After obtaining the MFCCs of the input audio signal, they are fed to a machine learning algorithm for classification (see Fig. 2). Although a myriad of algorithms can be found in the literature for this purpose, initially we consider the Fisher Linear Discriminant algorithm [18], following the work described in [11]. This is because this method will give us the coefficient of participation of each type of noise in the overall noise picture, in our case, the bus engine noise and the road traffic noise.
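For reference, in its standard two-class textbook form (this is the generic criterion, not a detail taken from [11]), the Fisher discriminant projects each MFCC vector x onto the direction

w \propto \left(\Sigma_{\mathrm{bus}} + \Sigma_{\mathrm{traffic}}\right)^{-1}\left(\mu_{\mathrm{bus}} - \mu_{\mathrm{traffic}}\right),

where \mu and \Sigma denote the mean vectors and covariance (scatter) matrices of the MFCC features of the two classes of interest, here bus engine noise versus surrounding road traffic noise; a frame is then scored by the projection w^{\top}x.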
We plan to test this approach using real data measurements in a bus driving
its route, recording audio samples from the traffic noise surrounding the vehi-
cle. Moreover, in order to have a general picture of the vehicle own noise, we
also plan to collect audio samples of the several specific driving sounds (brake,
engine, throttle, etc.) that occur on the bus route. Depending on the obtained
results, other feature extraction methods and machine learning algorithms could
be considered to solve the challenge in the future.

6 Expected Outcomes
From the mobile audio processing algorithms, we expect to identify the con-
tribution of the bus noise to the total traffic noise. This value will be used to
balance the noise contribution of the vehicle itself, in order to compensate for its short distance to the measuring equipment. This way, the final value of Leq corresponding to the traffic noise can be evaluated with the suitable contribution
of the bus noise in order to obtain reliable real-time noise maps from the routes
of the selected bus lines.
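Using the conventional definition of the equivalent continuous sound level over an interval T,

L_{eq,T} = 10 \log_{10}\!\left(\frac{1}{T}\int_{0}^{T}\frac{p^{2}(t)}{p_{0}^{2}}\,dt\right)\ \mathrm{dB}, \qquad p_{0} = 20\ \mu\mathrm{Pa},

where p(t) is the measured sound pressure, the compensation described above amounts to down-weighting the vehicle's own contribution to p^{2}(t) before this average is formed, so that the mapped value reflects the surrounding traffic rather than the sensor's carrier.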
To develop a robust mobile audio processing approach, it is necessary to
properly configure and program the hardware and the firmware of the elec-
tronic embedded system deployed on the buses. This will require a good balance
between the accuracy of the signal acquisition, the number of collected samples


and the time needed to process them. In a first approach, our design will perform the processing on board, sending the results of the signal processing block to the central
server. Then, these data will be integrated and displayed as a noise map of the
city through the web-based GIS. If the performance of the embedded system
is not good enough to obtain an accurate noise map, then we will consider the
possibility of conducting the signal processing on the web server, using a more
powerful processor at the cost of increasing the communications cost.

7 Conclusions and Future Work

The goal of the research under development is to obtain a remote acoustic monitoring system that makes it possible to know the noise impact of the city in real time. For this
purpose, the acoustic sensor (a microphone) is connected to a mobile embedded
device with signal processing and data transmission capabilities. This embedded
system is designed to be installed on a public bus, so urban noise is captured
along the route travelled by the bus. Therefore, the larger the number of sensors deployed in bus lines, the more detailed the information obtained to generate the noise map of the city. Finally, the collected information is sent to a
central server that runs a web-based GIS application designed to display the
collected noise levels in real-time.
After validating the viability of the ICT-based approach, the research team is
planning a set of field operational tests to evaluate the accuracy and reliability
of the proposed system. These tests are designed in order to: (i) characterize the natural noises of the bus, inside and outside: engine, brakes, throttle, etc.;
(ii) improve the processing of the acquired noise by applying the necessary error
corrections according to the values obtained in the first tests and weighting the
bus noise appropriately.
In these terms, the proposed system is intended to enable a significant step beyond current noise maps of cities, which are updated with a certain periodicity (usually every five years) from fixed measurement points in the city, following the European Noise Directive.

Acknowledgement. The authors would like to thank project ACM2016/06 entitled


“Towards the development of low cost ubiquitous sensors networks for real time acoustic
monitoring in urban mobility” from the II Convocatoria del Programa de Ayudas a
Proyectos de Investigación Aristos Campus Mundus 2016. Francesc Alı́as and Rosa
Ma Alsina-Pagès also would like to thank the Secretaria d’Universitats i Recerca del
Departament d’Economia i Coneixement (Generalitat de Catalunya) under grant ref.
2014 - SGR - 0590.

References
1. European Union Transport Themes, Clean Transport and Urban Mobility. http://ec.europa.eu/transport/themes/urban/urban_mobility/index_en.htm. Accessed Nov 2016


2. World Demographics Profile 2012. Index Mundi. http://www.indexmundi.com/world/demographics_profile.html. Accessed Nov 2016
3. World Urbanization Prospects: 2014 Revision. United Nations DESA's Population Division. https://esa.un.org/unpd/wup/Publications/Files/WUP2014-Highlights.pdf. Accessed Nov 2016
4. EU Directive: Directive 2002/49/EC of the European Parliament and the Council
of 25 June 2002 relating to the assessment and management of environmental noise.
Official J. Eur. Commun., L 189/12 (2002). European Union
5. Night Noise Guidelines for Europe. World Health Organization 2009. http://www.euro.who.int/__data/assets/pdf_file/0017/43316/E92845.pdf. Accessed Nov 2016
6. Basten, T., Wessels, P.: An overview of sensor networks for environmental noise
monitoring. In: Proceedings of the 21st International Congress on Sound and Vibra-
tion, Beijing, China (2014)
7. Hong, P.D., Lee, Y.W.: A grid portal for monitoring of the urban environment using
the MSU. In: Proceedings of the International Conference on Advanced Commu-
nication Technology, Phoenix Park, Korea (2009)
8. Zhao, S., Nguyen, T.N.T., Jones, D.L.: Large region acoustic source mapping
using movable arrays. In: Proceedings of the International Conference on Acoustic,
Speech and Signal Processing, Brisbane, Australia, pp. 2589–2593 (2015)
9. Dekoninck, L., Botteldooren, D., Int Panis, L.: Sound sensor network based assessment of traffic, noise and air pollution. In: Proceedings of EURONOISE, Maastricht, The Netherlands, pp. 2321–2326 (2015)
10. Dekoninck, L., Botteldooren, D., Panis, L.I., Hankey, S., Jain, G., Marshall, J.:
Applicability of a noise-based model to estimate in-traffic exposure to black carbon
and particle number concentrations in different cultures. Environ. Int. 74, 89–98
(2015)
11. Creixell, E., Haddad, K., Song, W., Chauhan, S., Valero, X.: A method for recog-
nition of coexisting environmental sound sources based on the Fisher’s linear dis-
criminant classifier. In: Proceedings of INTERNOISE, Innsbruck, Austria (2013)
12. Benetos, E., Lafay, G., Lagrange, M., Plumbley, M.: Detection of overlapping
acoustic events using a temporally-constrained probabilistic model. In: Proceed-
ings of the International Conference on Acoustic, Speech and Signal Processing,
Shanghai, China, pp. 6450–6454 (2016)
13. Mesaros, A., Heittola, T., Virtanen, T.: Metrics for polyphonic sound event detection. Appl. Sci. 6, 162 (2016)
14. Heittola, T., Mesaros, A., Virtanen, T., Gabbouj, M.: Supervised model training for
overlapping sound events based on unsupervised source separation. In: Proceedings
of the 38th International Conference on Acoustics, Speech, and Signal Processing,
ICASSP 2013, Vancouver, Canada, pp. 8677–8681 (2013)
15. Mesaros, A., Dikmen, O., Heittola, T., Virtanen, T.: Sound event detection in real
life recordings using coupled matrix factorization of spectral representations and
class activity annotations. In: Proceedings of the IEEE International Conference
on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, vol.
817, pp. 151–155 (2015 )
16. Espi, M., Fujimoto, M., Kinoshita, K., Nakatani, T.: Exploiting spectro-temporal
locality in deep learning based acoustic event detection. EURASIP J. Audio Speech
Music Process. (2015). doi:10.1186/s13636-015-0069-2
17. Mermelstein, P.: Distance measures for speech recognition, psychological and instrumental. In: Chen, C.H. (ed.) Pattern Recognition and Artificial Intelligence. Academic Press, New York (1976)
18. Fisher, R.A.: The use of multiple measurements in taxonomic problems. Ann.
Eugenics 7(2), 179–188 (1936). doi:10.1111/j.1469-1809.1936.tb02137.x

Testing Security of Embedded Software Through Virtual
Processor Instrumentation

Andreas Lauber(✉) and Eric Sax

Karlsruhe Institute of Technology, Engesserstr. 5, 76131 Karlsruhe, Germany
{Lauber,Sax}@kit.edu

Abstract. More and more functionality that demands remote access to the vehicle is integrated into modern cars. Fleet management, infotainment, updates over the air, and the upcoming functionality for autonomous driving need gateways that enable car-2-x communication. Misuse is a threat. Consequently, security mechanisms play an increasingly important role. But how can we show and prove the effectiveness of these security functions?
In this paper we therefore show an approach to test security aspects based on virtual instrumentation. The approach is to use a framework that executes the application under development on a virtual model of the target microcontroller. An interception library generates scenarios systematically, while the effects on registers and memory are monitored. We intercept the running software at vulnerable functions and variables to detect potential malfunctions. This will detect security vulnerabilities caused by internal failures even if no malicious behavior occurs at the interfaces.

Keywords: Virtual processor · Security · Testing

1 Motivation

Within the last decade mobility has undergone major changes. One is the advent of data
exchange between cars and infrastructure. Instead of being a standalone mechanical device, the vehicle has been transformed into a mobile platform with extensive electronic sensors and computing power. Nowadays, a large amount of data is available within a car in the form of sensor data, representing the state of the vehicle as well as its understanding of the surroundings. By exchanging such information with others, new concepts for
efficient driving, optimizing traffic flow (see Kramer in [1]), and new comfort functions
become possible.
On the other hand, many new threats arise. The increasing connectivity allows large amounts of data to be transmitted in real time, states to be transmitted for diagnosis, and software updates over the air to become possible. In other words, the EE topology becomes accessible via an air interface. The vehicles may therefore offer new attack surfaces, as some examples already show today.
It has already been shown how modern cars can be attacked and controlled without physical access to the vehicle [2, 3]. These attacks allow the manipulation of a car's brakes and driver assistance systems or remote eavesdropping on conversations



held within the car. They are just a few examples of possible attacks. With an even more
connected car an even broader attack vector might be created.
To secure vehicles against possible attacks, security mechanisms need to be implemented, which has been a research focus within the last couple of years. Unfortunately, not all attack surfaces can be closed by integrating security mechanisms. Attack vectors can also arise through sloppy implementations and inexperienced programming. To overcome these issues, functional testing and testing for security weaknesses are necessary.
This paper is structured as follows: First, we give a short overview of state-of-the-art security testing in Sect. 2. Afterwards, we categorize the attacks on systems and point out the important test cases in Sect. 3. Thereafter, the virtual instrumentation and processor interception are explained in Sect. 4. The interception leads to the security testing framework shown in Sect. 5. At the end, we conclude with a summary and give an outlook on future work.

2 State of the Art for Security Testing

2.1 Theoretical Security Analysis

In theoretical security analysis, one must distinguish between high-level design analyses and detailed analyses. In design analysis, protocols, interfaces, and specifications are examined by reviewers to find and resolve systematic vulnerabilities such as weak encryption or short keys. While only a theoretical description of the system has to be available for design analyses, explicit knowledge about the implementation of the algorithms is needed for the detailed analyses.
Theoretical security analyses cannot detect implementation errors caused by misinterpretation of the specification or errors in third-party software. To protect the system against this type of error, Bayer recommends in [4] secure software development standards. These can be achieved by means of the various standards and coding guidelines. Even if errors are reduced in this way, errors caused by the specification or errors in third-party software can only be found by explicit tests of the functions in the overall system.

2.2 Static Code Analysis

For static code analysis, the source code is automatically analyzed by means of formal criteria in order to identify potential weaknesses. Static code analysis can identify implementation errors, but functional errors or design errors cannot be found by this analysis. In addition, Knechtel described in [5] that these kinds of analyses are unreliable. He suggests the use of explicit code reviews for sensitive functions. Another option to find weaknesses is the system test on the real platform or a hardware prototype.


2.3 Functional Security Testing


Functional testing serves to ensure the correct execution of algorithms. Spillner describes in [6] how software can be tested. A careful execution of the tests can detect implementation errors and the resulting security vulnerabilities at an early stage. In order to ensure fault-freeness, official security test cases are usually carried out, which also cover typical limits of the algorithms.
The algorithms are tested not only for correct behavior according to the specification, but also for robustness. In addition, the performance of security algorithms is tested to identify potential bottlenecks that could affect overall security performance, according to Bayer [4].
However, functional security testing tests the security algorithms as standalone functions. Their interaction with other functions of the system is not exercised. Therefore, a weakness of an outer or sub function can compromise the security of the system even if the security algorithms themselves are well tested. This means the security test should always include functions of the overall system.

2.4 Fuzzing for Security Testing

Fuzzing is a technique that has been used for some time to test software in IP networks.
To do this, the implementations are subjected to unexpected, invalid, or random input, with the hope that the target reacts unexpectedly and thereby reveals new vulnerabilities. The responses to such inputs range from strange interface behavior and unspecified system behavior to complete crashes.
As a rule, the fuzzing can be divided into three steps. First, the input data is generated.
This can either be structured according to the specification or completely random. The
data are then fed into the system interfaces and the output is monitored. As a last step,
the recorded behavior must be analyzed by experienced programmers in order to identify
potential weaknesses. A disadvantage of fuzzing is that only the interfaces of the system are monitored; faulty states within the system cannot be detected.
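The following self-contained sketch illustrates such a loop: it generates random inputs, feeds them to a deliberately simplified stand-in for a system interface (parse_message() is invented here for illustration, not taken from any real stack), and records suspicious reactions for later analysis. A production fuzzer would additionally run the target in a separate process and watch for crashes and hangs.

```c
/* Minimal random ("dumb") fuzzing loop: generate input, feed the interface
 * under test, record suspicious reactions. parse_message() is a toy target. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Toy target: expects [length byte][payload]; returns <0 for rejected input. */
static int parse_message(const unsigned char *msg, size_t len) {
    if (len < 1) return -1;
    unsigned int payload_len = msg[0];
    if (payload_len + 1 > len) return -1;           /* bounds check on length  */
    unsigned int sum = 0;
    for (unsigned int i = 0; i < payload_len; i++) sum += msg[1 + i];
    return (int)(sum & 0x7f);
}

int main(void) {
    unsigned char buf[64];
    srand((unsigned)time(NULL));

    for (int run = 0; run < 100000; run++) {
        /* step 1: generate a random input of random length */
        size_t len = (size_t)(rand() % (int)sizeof(buf));
        for (size_t i = 0; i < len; i++) buf[i] = (unsigned char)(rand() & 0xff);

        /* step 2: feed it to the interface under test */
        int rc = parse_message(buf, len);

        /* step 3: record unexpected reactions for later manual analysis;
         * with the bounds check in place this never triggers, but removing
         * the check in parse_message() makes the fuzzer report it. */
        if (rc >= 0 && len > 0 && (size_t)buf[0] + 1 > len)
            printf("run %d: input with bogus length field was accepted\n", run);
    }
    puts("fuzzing campaign finished");
    return 0;
}
```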

2.5 Penetration Tests

While the above tests can be automated, the penetration test is a test method using human
testers. These tests attempt to exploit known vulnerabilities and gain access to the
system. The appropriate approach is usually based on years of experience by human
testers who perform these tests. Typical penetration tests exploit undocumented debugging interfaces to gain access to buses and internal signals; testers also open the controller and directly access the silicon to look for information on possible attack vectors, according to Bayer in [4]. The knowledge provided to the testers usually ranges from no information, via access to the specification, to all information about the source code. Therefore, these tests can be used for black-box, gray-box, and white-box tests.
State-of-the-art security testing can usually only be automated for independent functions, without the interaction of all functions in the complete system. Knechtel writes


in [5] that attacks are rarely due to weaknesses of individual keys or algorithms but rather to weaknesses of the entire system. That is, for security testing of the overall system, including third-party software, the overall system needs to be present. Furthermore, the internal state of this system needs to be monitored in addition to the external interfaces. This leads us to use virtual instrumentation of a processor running the software under test.
Finally, it should be noted that practical security tests cannot guarantee complete coverage. Therefore, a compromise between test effort, time, and completeness must be made. That is, practical security tests serve only as a complement to theoretical security analyses and to the consideration of security in the design phase.

3 Categorization of Attacks

As Radzyskewycz writes in [7], it is not a question of whether systems are attacked, but
when. Therefore it is important to implement security mechanisms according to the state
of the art. In addition, Wheatley [8] reports that 44% of all attacks are carried out via known vulnerabilities.
The Symantec Corporation describes in [9] the loss or theft of passwords, incidental ties, and insider knowledge as other important causes of intrusion into systems. Only a very small part of the attacks on systems exploit vulnerabilities that were unknown at the time of the attack. A distribution of the given causes for attacks is shown in Fig. 1.

Fig. 1. Classification of attack causes

Especially because of the large number of attacks that use known vulnerabilities, it is important to design new software in such a way that known vulnerabilities are no longer present. To ensure this, software must be regularly tested against known vulnerabilities during the development cycle. This must include all known security gaps, because the old wisdom from project management is even more important in the field of security: “The later a problem is detected, the higher the cost to fix it.”
In the PC world, vulnerabilities are stored in a database maintained by the MITRE Corporation on behalf of the U.S. Department of Homeland Security. This Common Vulnerabilities and Exposures database [10] records all known security gaps in existing applications. By the year 2016, about 100,000 attacks on various systems were recorded in this database. In addition, the MITRE Corporation provides a database for the overview of all known

zamfira@unitbv.ro
Testing Security of Embedded Software 89

vulnerabilities in the Common Weakness Enumeration Database (CWE) [11]. There are
currently about 1,000 different vulnerabilities in this database. In the CWE, the weak
spots are divided into different categories. A classification of the attacks from the year
2015 leads to the distribution shown in Fig. 2. The most common attacks that are listed
in the CWE are so called Denail of Service (DoS) attacks. These attacks make 33% of
all known attacks on today’s systems. The goal is to get the attacked system to crash
and thus destroy the functionality of the system.

Fig. 2. Categorization of weaknesses according to CWE

Even more critical than DoS attacks are attacks in which an attacker can gain control over
the entire system. Buffer overflows with 22% and code execution with 24% are of special
significance here. In a so-called buffer overflow, a memory area is written with data that is
too long, so that the following data records in memory are overwritten and the contents of
these variables are manipulated. For an overview of attacks by buffer overflows, see
Foster in [12].
The principle of buffer overflows is also used for code execution, whereby not only
variables are overwritten, but the return address is redirected to malicious, injected code.
This not only influences the behavior by changing variables, but can also take control
over the entire system and execute malicious code.
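
To make these two weaknesses concrete, the following sketch contrasts an unchecked copy
into a fixed-size stack buffer with a bounded variant; the routine handle_id is a hypothetical
example and is not taken from any of the systems discussed here.

```cpp
#include <cstring>
#include <cstdio>

// Hypothetical message handler: copies a received identifier into a fixed
// stack buffer. If 'id' is longer than 15 characters, strcpy writes past the
// end of 'buf' and overwrites adjacent stack contents - in the worst case the
// saved return address, which enables the code execution attacks described above.
void handle_id_unsafe(const char* id) {
    char buf[16];
    std::strcpy(buf, id);                 // no length check: classic buffer overflow
    std::printf("id: %s\n", buf);
}

// Bounded variant: the copy is limited to the buffer size and the string is
// always terminated, so adjacent memory cannot be overwritten.
void handle_id_checked(const char* id) {
    char buf[16];
    std::strncpy(buf, id, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    std::printf("id: %s\n", buf);
}

int main() {
    handle_id_checked("sensor_42");
    // Calling handle_id_unsafe() with a long argument would corrupt the stack.
    return 0;
}
```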
The reason for the above attacks is usually poorly implemented software. In particular,
consistently checking the memory bounds of dynamic variables can prevent overflows in
most cases. However, due to runtime and memory constraints in embedded systems, these
checks are often omitted. One common cause of DoS attacks is division by zero, whose
behavior is not uniformly specified across microcontrollers and can therefore lead to
differing behavior or even program termination.
In addition, undefined behavior frequently arises in software development when
dereferencing so-called null pointers that do not point to any memory, when using memory
or objects after calling "free", or when reading unallocated memory. In most cases, the
aforementioned problems can be avoided by means of consistent checks in the code, but
these checks are rarely implemented for runtime and memory reasons.
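
The checks mentioned above are inexpensive to express in code; the sketch below shows
guarded variants for division by zero, null-pointer dereferencing and use-after-free, using
made-up example data.

```cpp
#include <cstdio>
#include <cstdlib>

int main() {
    // Division by zero: the behavior is not uniformly specified on microcontrollers,
    // so the denominator is checked before dividing.
    int numerator = 100, denominator = 0;
    int ratio = (denominator != 0) ? numerator / denominator : 0;
    std::printf("ratio = %d\n", ratio);

    // Null pointer: dereferencing a pointer that points to no memory is
    // undefined behavior, so the pointer is checked first.
    int* sensor_value = nullptr;
    if (sensor_value != nullptr) {
        std::printf("sensor = %d\n", *sensor_value);
    }

    // Use-after-free: the pointer is set to nullptr directly after free(),
    // so a later (erroneous) access is caught by the same null check.
    int* buffer = static_cast<int*>(std::malloc(4 * sizeof(int)));
    if (buffer != nullptr) {
        buffer[0] = 42;
        std::free(buffer);
        buffer = nullptr;                 // prevents a silent use-after-free
    }
    if (buffer != nullptr) {
        std::printf("%d\n", buffer[0]);   // never reached after the free above
    }
    return 0;
}
```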
Not all problems can be found by testing the independent modules individually.
Security weaknesses most often arise from the interaction of different modules, and
therefore the overall system needs to be tested as a whole.


4 Interception of Software Running on Virtual ECUs

Instruction set simulators like Open Virtual Platforms (OVP) [13] can be used to model
a processor with the corresponding peripherals and run the cross-compiled application.
Running the cross-compiled application inside an instruction set simulator gives the
same behavior as on the target platform. The virtual prototyping of embedded systems
for OVP is described by Werner in [14]. Werner also compares OVP with other platforms
for virtual prototyping for embedded systems. He further explains in [15] the usage of
OVP for debugging cross-compiled applications to build a virtual test environment.
The Imperas binary interception tool as described in [16] can cause the simulation to
stop the application and run the interception library at any point in time. This includes,
among others, intercepting the virtual platform before each instruction is morphed, when
specific instructions are executed, or when a specific address range is read or written.
The interception technology is usually used for verification, analysis and profiling,
including the detection of memory corruption, deadlocks, data races or memory usage. As
Imperas Software Limited writes in [16], this is especially useful when many different
data scenarios have to be executed.
With the binary interception tool, we can use our own library to examine the state of
the internal registers, instructions, memory, and other periphery. Furthermore, a
replacement of the simulated behavior with a behavior defined in the interception library
is possible. This means that if the interception library detects a specific behavior during
simulation, the corresponding instruction is either replaced or extended by the one defined
in the library.
The advantages of using the novel framework with interception libraries compared
to other debug interfaces are that no additional code needs to be inserted into the
application and no special access to the processor is needed, i.e. there is no resource
overhead in the application and no additional instructions are executed. The application
is cross-compiled exactly as for the real hardware platform, without any additions. Another
advantage is that all parts of the interception technology run in parallel to the simulation
of the virtual platform.
As mentioned above we need to monitor the memory in order to find overflows and
the instructions to find zero divisions. Both can be done by running an interception
library in parallel to the main application.
An overview of the test framework can be found in Fig. 3. The platform for the virtual
processor will be described in a platform model file as described by Werner in [14]. The
virtual processor will consist of a processor with registers and memory for heap and
stack, local memory for code and variables, as well as peripherals. The interception
library will have direct access to these registers, memory, instructions, and peripherals.
The location of the variables inside the registers and memory will be configured in a
configuration file. Further, this file holds information about the intercepted instructions
and functions. We are generating this file with information of the source files from the
application. Therefore the source files are parsed and variables will be detected. The
supervision of instructions will be done during run time with the disassembled applica‐
tion code, searching for divisions and illegal memory access.


Fig. 3. Overview of security test-framework with virtual processor

4.1 Monitoring of Instructions

With the interception library [16] we can monitor all instructions on assembler level and
check for each of them whether the corresponding instruction needs to be observed. The
behavior of the interception depends on the instruction. Our approach intercepts only
potential vulnerabilities and directly executes all other instructions. For example, we look
at the different assembler instructions; if we find a division (either udiv for unsigned or
sdiv for signed division), the corresponding registers are checked for a zero denominator.
If the denominator is zero, the execution of this instruction is stopped and an error message
is displayed. If the instruction is not a division, the interception library is not executed.
To find potential vulnerabilities by zero divisions the observation of instructions can
be implemented as static interceptions, because the instructions are well known at
compile time and will be constant for all applications.
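
A conceptual sketch of such a static instruction interception is given below. The callback
on_instruction and the register access are hypothetical placeholders chosen for
illustration; they do not reproduce the Imperas interception API, but only the decision
logic described above.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <string>

// Stand-in for the simulated register file; in the real framework the values
// would be read from the virtual processor via the interception interface.
std::array<std::uint32_t, 16> regs{};

std::uint32_t read_register(int index) { return regs.at(index); }

// Hypothetical interception callback (not the Imperas API): assumed to be
// invoked before each instruction is morphed. Returns false if the instruction
// must not be executed because a weakness was detected.
bool on_instruction(const std::string& mnemonic, int rn_denominator) {
    // Only divisions are intercepted; all other instructions run directly.
    if (mnemonic != "udiv" && mnemonic != "sdiv") return true;

    if (read_register(rn_denominator) == 0) {
        std::printf("security warning: division by zero intercepted\n");
        return false;                     // stop execution of this instruction
    }
    return true;
}

int main() {
    regs[3] = 0;                          // denominator register r3 is zero
    on_instruction("add",  3);            // not a division -> executed directly
    on_instruction("sdiv", 3);            // division by zero -> reported
    return 0;
}
```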

4.2 Monitoring of Memory Access

The same concept can be applied to the memory. Each memory access (read and write)
is monitored and an error message is displayed if data is written to the wrong memory
range. The address ranges of the variables are stored in the interception configuration
(see Fig. 3) and accesses to these ranges are observed. If a write access across the
variable borders occurs (buffer overflow), an error message is displayed.
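
The range check itself can be sketched as follows, assuming that the variable ranges have
been read from the interception configuration; the hook on_memory_write is again a
hypothetical placeholder for the simulator's write callback.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// One monitored variable as read from the interception configuration:
// start address and size of the memory range that belongs to it.
struct WatchedVariable {
    std::string name;
    std::uint32_t address;
    std::uint32_t size;
};

std::vector<WatchedVariable> watched = {
    {"rx_buffer", 0x2000'0100, 16},
    {"speed",     0x2000'0110, 4},
};

// Hypothetical write hook: called by the simulator for every memory write.
// A write that starts inside a variable but extends beyond its range is
// reported as a potential buffer overflow.
void on_memory_write(std::uint32_t address, std::uint32_t length) {
    for (const auto& v : watched) {
        bool starts_inside = address >= v.address && address < v.address + v.size;
        bool stays_inside  = address + length <= v.address + v.size;
        if (starts_inside && !stays_inside) {
            std::printf("security warning: write of %u bytes overflows '%s'\n",
                        static_cast<unsigned>(length), v.name.c_str());
        }
    }
}

int main() {
    on_memory_write(0x2000'0100, 8);    // inside rx_buffer -> no warning
    on_memory_write(0x2000'0108, 16);   // crosses the border of rx_buffer -> warning
    return 0;
}
```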

4.3 Heap and Stack Monitoring

For the heap and stack monitoring we need the memory tracing, because the local and
dynamic variables are stored in the local memory. Further, we need the function tracing
to trigger the interception whenever a function is called and new variables are stored on
the stack or heap.
The local variables are pushed to the stack; therefore, the instruction monitoring needs
to add these variables to the dynamic monitoring so that the interception library knows
the address and range of the new variables. The same is done for dynamically allocated
or deallocated memory inside the heap. This memory is usually allocated or deallocated
with malloc and free. Another observer detects write accesses to the function return
address in order to detect illegal code executions.
Both the dynamic memory observation and the observation of the return address need
to be done during runtime. Therefore, a dynamic part of the interception library is
necessary that can be extended during the simulation.
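
The dynamic part of this monitoring can be sketched as follows, assuming hypothetical hooks
that are triggered when malloc, free, a heap access or a write to the saved return address
is intercepted; the addresses used in main are made-up example values.

```cpp
#include <cstdint>
#include <cstdio>
#include <map>

// Currently allocated heap blocks: start address -> size. Entries are added
// when malloc is intercepted and removed again when free is intercepted.
std::map<std::uint32_t, std::uint32_t> heap_blocks;

void on_malloc(std::uint32_t address, std::uint32_t size) {
    heap_blocks[address] = size;            // start observing this range
}

void on_free(std::uint32_t address) {
    heap_blocks.erase(address);             // stop observing: later accesses are illegal
}

// Hypothetical hook for heap accesses; an access outside every live block is
// either a use-after-free or an access to unallocated memory.
void on_heap_access(std::uint32_t address) {
    for (const auto& [start, size] : heap_blocks) {
        if (address >= start && address < start + size) return;   // valid access
    }
    std::printf("security warning: access to unallocated/freed heap memory\n");
}

// Hypothetical hook for writes to the saved return address of the current
// function; any such write is reported as a possible code execution attempt.
void on_return_address_write(std::uint32_t address) {
    std::printf("security warning: return address at 0x%08x overwritten\n",
                static_cast<unsigned>(address));
}

int main() {
    on_malloc(0x2000'0200, 32);
    on_heap_access(0x2000'0210);            // inside the block -> ok
    on_free(0x2000'0200);
    on_heap_access(0x2000'0210);            // use-after-free -> warning
    on_return_address_write(0x2000'7ffc);
    return 0;
}
```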

5 Virtual Instrumentation for Security Testing

The security testing is based on the cross-compiled code for the target platform, i.e. the
instruction order and the behavior are the same as on the real platform, since no further
or different optimization by other compilers is performed. Furthermore, the Executable and
Linkable Format (elf) file that is used for the testing can be flashed to the target device
without any additional changes. Current state-of-the-art tests (see above) focus on the
source code without compiler optimization.
Even though the testing framework can check the source code using a static analysis before
cross-compiling and running the application on the virtual processor, we are not focusing
on this, since static code analysis is state of the art. This novel approach can even be
used to run the compiled application without having access to the source code of the
application, i.e. black box tests for security can be executed. Nevertheless, information
about functions and variables is needed in order to build the configuration file.
In the next step, after static code analysis, the application is checked for variables
and functions. The static variables and functions are added to the interception
configuration. With this information the interception library is built and passed to the
instruction set simulator. If the defined interceptions occur, the simulator stops the
execution of the application and runs the functions provided by the interception library.
The Imperas instruction set simulator is used to run the defined test cases and the
interception library. For this step a model of the target platform is needed. This should
include all necessary processors, memories, registers, and peripherals (see above). The
interception library stops the running cross-compiled application in the simulator at
every predefined interception. Further, if new memory is dynamically allocated, the
interception library is extended to observe this memory area as well. After deallocation
of the memory, the corresponding entry in the interception library is deleted.
Finally, the results of the simulation and the test process are presented for
documentation. The total workflow of the virtual instrumentation for security testing can
be seen in Fig. 4.


Fig. 4. Workflow of the virtual instrumentation for security testing

6 Conclusion and Future Work

In this paper we analyzed different security weaknesses and derived from the CWE that
the most common ones are buffer overflows, code execution and division by zero.
Based on this knowledge, we made a conceptual design for a security test framework
based on virtual instrumentation. We built an interception library that monitors the
memory and the instructions and reports security weaknesses if they occur.
Future work will investigate concepts to automate the tests using virtual platforms.
Further, the memory observer for the variables and the test cases should be generated
automatically. The interception library should be used to generate test cases
according to the interfaces and a weakness database (CWE). These test cases will be based
on the information about variables (including dynamic variables) and functions from the
application.
Further work will be done to use the framework for black box testing, that is, without
any knowledge of the source code. Especially the observation of dynamic variables has
to be researched.
Protection of shared memories and multi-processor systems can be tested as well.
The virtual framework will be extended for the usage of multi-processor systems in the
future.

Acknowledgement. This publication was written in the framework of the Profilregion
Mobilitätssysteme Karlsruhe, which is funded by the Ministry of Science, Research and the Arts
in Baden-Württemberg.


References

1. Kramer, J., Hillenbrand, M., Müller-Glaser, K.D., Sax, E.: Connected efficiency–a paradigm
to evaluate energy efficiency in tactical vehicle-environments. In: Bargende, M., Reuss, H.C.,
Wiedemann, J. (eds.) 16. Internationales Stuttgarter Symposium. Proceedings, pp. 1451–
1463. Springer, Wiesbaden (2016). doi:10.1007/978-3-658-13255-2_107
2. Koscher, K., et al.: Experimental security analysis of a modern automobile. In: IEEE
Symposium on Security and Privacy, pp. 447–462 (2010)
3. Checkoway, S., et al.: Comprehensive experimental analyses of automotive attack surfaces.
In: USENIX Security Symposium (2011)
4. Bayer, S., Enderle, T., Oka, D.-K., Wolf, M.: Automotive security testing—the digital crash
test. In: Langheim, J. (ed.) Energy Consumption and Autonomous Driving. LNM, pp. 13–22.
Springer, Cham (2016). doi:10.1007/978-3-319-19818-7_2
5. Knechtel, H.: Methoden zur Umsetzung von Datensicherheit und Datenschutz im vernetzten
Steuergerät. ATZ Elektronik 10(1), 26–31 (2015)
6. Spillner, A., Linz, T.: Basiswissen Softwaretest: Aus- und Weiterbildung zum Certified
Tester; Foundation Level nach ISTQB-Standard, 4th edn. dpunkt.verlag (2010)
7. Radzkewycz, T.: Automotive networks can benefit from security. In: Connected Vehicle
Journal: Designing for Next-Generation Connected and Autonomous Vehicles (2016)
8. Wheatley, M.: Known vulnerabilities cause 44 percent of all data breaches. http://
siliconangle.com/blog/2016/01/12/known-vulnerabilities-cause-44-percent-of-all-data-
breaches/. Accessed 31 Oct 2016
9. Symantec Corporation: Internet Security Threat Report. 2013 Trends, vol. 19 (2014)
10. MITRE Corporation: Common Vulnerabilities and Exposures (CVE). https://cve.mitre.org/.
Accessed 31 Oct 2016
11. MITRE Corporation: Common Weakness Enumeration (CWE). https://cwe.mitre.org/.
Accessed 31 Oct 2016
12. Foster, J.C., Osipov, V., Bhalla, N.: Buffer Overflow Attacks: Detect, Exploit, Prevent.
Syngress Publishing Inc., Rockland (2005)
13. Imperas Software Limited: Open Virtual Platforms: The source of Fast Processor Models &
Platforms. http://www.ovpworld.org/. Accessed 15 Dec 2016
14. Werner, S., et al.: Cloud-based design and virtual prototyping environment for embedded
systems. Int. J. Online Eng. (IJOE) 12(9), 52–60 (2016)
15. Werner, S., Lauber, A., Becker, J., Sax, E.: Cloud-based remote virtual prototyping platform
for embedded control applications: cloud-based infrastructure for large-scale embedded
hardware-related programming laboratories. In: Proceedings of 2016 13th International
Conference on Remote Engineering and Virtual Instrumentation (REV). IEEE (2016)
16. Imperas Software Limited: Imperas Binary Interception Technology: User Guide, no. V1.5.3
(2016)

Virtual and Remote Laboratories

LABCONM: A Remote Lab for Metal Forming
Area

Lucas B. Michels(&), Luan C. Casagrande, Vilson Gruber,


Lirio Schaeffer, and Roderval Marcelino

Florianópolis, Brazil
lucasboeira@ifsc.edu.br, lucasccasagrande@gmail.com,
{vilson.gruber,roderval.marcelino}@ufsc.br,
schaefer@ufrgs.br

Abstract. This paper aims to describe the LABCONM, that is an educational


laboratory that provides remote access to one remote educational compression
testing machine (MDTEC). This laboratory was developed specifically to help in
the teaching/learning process of metal flow curves in the metal forming area.
Two different types of analysis were defined to validate the laboratory, where
the first one considers the teaching approach and the second one the operation
of the laboratory. In the technical analysis, the researchers conducted 20
remotely operated tests, in which the quality and repeatability of the
data for demonstrating metal flow were verified. The data showed sufficient quality and
repeatability, so that the MDTEC could be used as an educational experiment. In the
pedagogical analysis, two classes, which were attending the Metal Forming course,
participated. In the group that did not access the LABCONM, 17% of the students
had an unsatisfactory result in the mathematical question. In the group that
accessed the LABCONM, 100% of the students had a satisfactory or excellent
result in the same question. Consequently, it was possible to conclude that there
is an influence of the laboratory, especially for those students who have more dif-
ficulties in theoretical learning.

1 Introduction

Conceptual learning in the metal forming area is complex and difficult because it
involves knowledge of symbols, equations, theories, principles and procedures. This
complexity is natural, because knowledge is basically an abstraction of reality, the result of
experiments, analyses, studies, and standards.
It is believed that observation, interaction, practice, and experimentation are edu-
cational practices that can complement and enhance the learning process in engineer-
ing, making theory more meaningful for the student. Without interaction, students are
passive and the learning process becomes slower [1]. Doubtless, experimentation
establishes a relationship between practice and theory [2]. For this reason, experiments
are essential in the teaching process, especially in engineering and experimental sci-
ences [3].
A solution to provide experimentation and interaction as a way to aid learning has
been the development of experiments and remote laboratories [4]. With this new


concept of laboratory, it is possible to share the same equipment between users [3] and
geographically distant universities.
Considering the wide range of advantages and possibilities, research on remote
laboratories has grown steadily in recent years. This trend aims to take advantage
of new resources and technologies to improve technological education. Many of the
recent publications on remote laboratories are related to various engineering areas, such
as: "electrical and electronics", "telecommunications" [5], "basic physics" [6], "bio-
medicine" [7], "automation/robotics/classic control" [8, 9], while it is rare to find
something related to the study of mechanical properties of materials, as in [10–13], and
even rarer on plastic deformation, as in [14].
Despite current advances in Information and Communication Technologies (ICTs),
there are few publications about remote laboratories developed for the metal forming
area and the representation of phenomena linked to the plastic deformation of a metal.
It is believed that some factors are responsible for this current situation, such as:
(a) maintenance of the mechanical parts of the remote experiment; (b) production of metal
specimens; (c) creation of a replacement mechanism for the metal specimens; (d) use of
higher precision and higher cost sensors; (e) use of higher strength and greater precision
actuators; (f) high cost of maintenance. Despite these obstacles, the challenge of
developing a remote laboratory in the metal forming area means a big step for research in
remote experimentation, thus generating knowledge and innovation in the teaching process
in engineering courses related to this area.
This paper represents a continuation of the outcomes that have been already pub-
lished about the LABCONM (Metal Forming Online Lab) research project. The initial
results were presented in [15, 16]. The LABCONM is an educational laboratory that
provides remote access to one remote educational compression testing machine. This
laboratory was developed specifically to help in the teaching/learning process of metal
flow curves in the metal forming area.
In contrast to previous publications, this paper describes the LABCONM in more
detail and presents new outcomes based on applications of this lab with engineering
students. This paper aims to describe the LABCONM, which is an educational
laboratory that provides remote access to one remote educational compression testing
machine.

2 Theory About Metal Forming Area

To plan the production of a piece that will be formed, it is necessary to know basic
common aspects of metal forming. Without these aspects, it is not possible to predict or
calculate overall costs, amount of metal necessary, the total energy used in the process,
maximum strength, or total time used in manufacturing process. The flow curve is one
of these fundamentals aspects of metal forming, which in its complexity carries the-
oretical and practical elements, including concepts, equations, symbols and procedures
which are indispensable for the designer.
Flow curves are one of the main ways to analyze the mechanical behavior of a
metal and are critical to design pieces using metal forming. These curves are defined
with data from the plastic zone (above the yield stress) of the stress-strain diagram and


they can express the mechanical behavior of the metal during the plastic phase. Defor-
mation is a permanent change in the geometric shape of a metal that occurs after
applying a stress (σ) with an intensity above the yield point. The deformation is visible
because it changes the height, length, and depth of the metal.
Mechanical strength/compression/torsion tests are performed on metal specimens to
obtain data on the stress and deformation of a metal. However, when the designer wants
to do calculations and simulations of the metal forming process, the flow curve itself is
not appropriate, and as a consequence, it is necessary to represent the same data as in
Eq. (1):

k_f = C \cdot \varphi^n                                                  (1)

where
k_f = yield stress [N/mm^2];
\varphi = \ln(h/h_0) = true strain [-];
C = resistance coefficient [N/mm^2];
n = work hardening coefficient [-].
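
As an illustration of how Eq. (1) can be obtained from recorded data, the following sketch
converts force/height pairs into yield stress and true strain (assuming volume constancy of
the specimen) and fits C and n by least squares in log-log space; the specimen dimensions
and readings are made-up example values, not MDTEC data.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Reading { double force_N; double height_mm; };

int main() {
    // Made-up example values for illustration only.
    const double pi = 3.14159265358979;
    const double h0 = 12.0;                         // initial specimen height [mm]
    const double d0 = 10.0;                         // initial specimen diameter [mm]
    const double A0 = pi * d0 * d0 / 4.0;           // initial cross section [mm^2]

    std::vector<Reading> readings = {
        {9500.0, 11.0}, {12800.0, 10.0}, {15600.0, 9.0}, {18900.0, 8.0}};

    // Convert each force/height pair into true strain (magnitude) and yield
    // stress, assuming volume constancy (A * h = A0 * h0).
    std::vector<double> lnPhi, lnKf;
    for (const auto& r : readings) {
        double phi = std::log(h0 / r.height_mm);    // |true strain| in compression [-]
        double A   = A0 * h0 / r.height_mm;         // current cross section [mm^2]
        double kf  = r.force_N / A;                 // yield stress [N/mm^2]
        lnPhi.push_back(std::log(phi));
        lnKf.push_back(std::log(kf));
    }

    // Fit ln(kf) = ln(C) + n * ln(phi) by ordinary least squares.
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double m = static_cast<double>(lnPhi.size());
    for (std::size_t i = 0; i < lnPhi.size(); ++i) {
        sx += lnPhi[i]; sy += lnKf[i];
        sxx += lnPhi[i] * lnPhi[i]; sxy += lnPhi[i] * lnKf[i];
    }
    double n = (m * sxy - sx * sy) / (m * sxx - sx * sx);   // work hardening coefficient
    double C = std::exp((sy - n * sx) / m);                 // resistance coefficient

    std::printf("C = %.1f N/mm^2, n = %.3f\n", C, n);
    return 0;
}
```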

3 LABCONM Description

The LABCONM is an educational laboratory that was created to support the theoretical
learning of metal forming with remote experiments. As shown in Fig. 1, LABCONM is
distributed in four main parts connected to the internet, which are:
• Learning Management System (LMS);
• Remote Experiments;
• Web Server;
• Device with internet access for the student;
The LABCONM central portal is a Learning Management System (LMS), which is
actually a web page. The remote experiments are physical experiments that are connected
to the Web Server and are managed by the LMS. It is important to note that, although
the "Student Device" is not an item developed for the laboratory, without it the
LABCONM would not be "complete" or "formed".
In this version of the LABCONM, only one remote experiment (Remote Experiment 1)
was implemented, called the remote educational compression testing machine
(MDTEC).

3.1 Remote Educational Compression Testing Machine (MDTEC)


The MDTEC (Remote Educational Compression Testing Machine) is capable of
generating flow curves remotely. As described in Fig. 1, the MDTEC is composed of
four main parts: (a) Compression Structure; (b) Hydraulic Unit; (c) Motor Control
Panel; (d) Control Panel and Data Processing. In addition to these described items,
the MDTEC uses a Web Server, which is a device shared with the other experiments of the
LABCONM.


Fig. 1. Overview of the LABCONM


Compression Structure - MDTEC


This subsection details item (a) from Fig. 1. The Compression Structure (see
Fig. 2) is actually a set of electromechanical devices that have been joined to form a
system capable of performing a compression test remotely.

Fig. 2. Compression structure

The main features of this equipment are: storage, positioning, measurement, com-
pression and disposal of metal specimens. All these features and processes are auto-
mated and remotely operated through preprogrammed commands in the minicomputer
Raspberry Pi and managed and operated by the virtual control panel in the access page.
In the MDTEC, the operating speed vf depends on the machine characteristics and the
resistance that the machine meets during the test, making it impossible to test with a
constant strain rate (φ̇). The MDTEC performs the compression of the specimen and stores
about 8 readings per second (strength and height variation of the metal specimen).
In the MDTEC, there is a series of elements essential for its operation (see Fig. 3),
which can be classified into three mechanisms (according to their function):


Fig. 3. Remote educational compression testing machine detailing

• Compression Mechanism: It is composed of a hydraulic actuator, compression
region (base) and upper die. Its function is to deform the specimen;
• Measurement System: It is composed of a displacement sensor (potentiometric ruler
model) and force sensor (load cell model). The main function of these sensors is to
provide data about the compression process of the metal specimens, as well as assist
in determining the specimen height and in the positioning of the hydraulic cylinder.
• Feeding, positioning and disposing mechanism: These mechanisms are composed
of an electromechanical actuation structure driven by belts, which is in turn driven by a
stepper motor.
The most important data from the experiment are the height variation Δh (mm) and the
force (N) applied to the metal specimen. They form the basis for the construction of the
flow curves.


3.2 Learning Management System


The Learning Management System (LMS) used in the LABCONM is the virtual part of
the laboratory (see Fig. 4). It is a web page developed for learning management, which
allows the access to the laboratory from anywhere via the internet. The main menus are
detailed below:

Fig. 4. Overview of the metal forming remote laboratory webpage

• Schedule: This menu link serves to define the user’s time in the experimentation
page.
• Use: This menu displays a page with a summary of the student’s situation. It is a
report of the mandatory learning tasks and online exams proposed.
• Activities: On this web page there are several activities with videos, texts and
images related to the MDTEC experiment and the flow curves.
• Experiments: In the experiments menu the users will find submenus to access the
control panel of the experiments. The control panel is where the students
perform/control/monitor the experimentation process. Currently there is only the
panel of MDTEC (see Fig. 5).
• Scores: In this item, questionnaires are used by students to evaluate the laboratory
or to send the requested activities in the Activities Menu;


Fig. 5. View from a smartphone of the MDTEC virtual control panel


4 Methodology

Two different types of analysis were defined to validate the laboratory, considering the
teaching approach and the operation of the laboratory. In the technical analysis, the
researchers conducted 20 remotely operated tests, in which the quality and repeatability
of the data for demonstrating metal flow were verified. In the pedagogical analysis, two
classes, which were attending the Metal Forming course, participated. During four weeks,
the LABCONM was available for one of these two groups to use the remote laboratory and
to complete the proposed activities related to the experiment. After this period, both
groups took a calculus exam related to the course. The group that had the
opportunity to access the remote laboratory also completed a satisfaction questionnaire
with questions about various aspects of the laboratory.

4.1 Technical Analysis


In the technical analysis, the results of the flow curves from 20 trials were plotted and
overlaid using the yield stress and true strain data calculated for each metal specimen,
as shown in Fig. 6. The data showed sufficient quality and repeatability, so it is possible
to conclude that the MDTEC can be used as an educational experiment.

Fig. 6. Results of 20 trials in MDTEC

4.2 LABCONM Analysis by Students


A group of students that accessed the LABCONM answered a questionnaire composed
of objective questions, which were to be answered with scores from 0 to 10. This
questionnaire evaluated the laboratory and the experiment with the following ques-
tions: (a) What is your satisfaction level in using the remote lab for the metal forming
area? (b) How easy was it to use and operate the LABCONM? (c) How was the experi-
mentation process? (d) How objective and understandable were the proposed ques-
tions? (e) How did the activities help to understand the content of the Metal
Forming course? The answers are presented in Fig. 7.

Fig. 7. Average evaluation scores from questionnaires

The mean values of the answers presented in Fig. 7 were excellent,
ranging between 8.8 and 9.33. The maximum standard deviation reached among the
five questions was 1.9, for the objectivity and understandability of the activities, which
also achieved the lowest average. This was somewhat expected, because the students were
presented with a new environment containing a lot of information. The idea was
that they should try the whole process of interpretation alone and ask questions via
email, thus stimulating the exchange of information. It is believed that this difficulty
was not an issue for the students, because they did not raise it in the
questionnaire section for complaints and suggestions.
The most important result to highlight with respect to Fig. 7 is the average
score of 9.07 obtained by the item related to "How did the activities help to understand
the content of the Metal Forming course". This score confirms once again the focus of the
remote laboratory for the metal forming area on aiding the learning process of Metal
Forming theory, and it offers good arguments to validate the LABCONM as an educational
laboratory.

4.3 Validation Based on Exam Results


To perform the analysis and comparison between the two groups of students, it was first
defined that the answers would be classified into three types according to the correctness
of the answer: unsatisfactory, satisfactory and excellent. Based on this criterion, the
students were grouped into three subgroups in order to map the range of scores, mean
values and standard deviations, resulting in the data presented in Table 1.

Table 1. Results of the question about extrusion calculation

Groups                      Representation  Unsatisfactory  Satisfactory   Excellent       Total  Average  Standard deviation
                                            (below 50%)     (50% to 75%)   (75% to 100%)
Without access to LABCONM   Amount          9               11             32              52     6.6      2.46
                            Percentage      17%             21%            62%
With access to LABCONM      Amount          0               10             8               18     7.1      2.43
                            Percentage      0%              56%            44%

From this table, it can be observed that the group that accessed the LABCONM had an
average score of 7.1, which is 7% higher than the average grade of the group that did not
access the remote laboratory. It is important to explain that the class that did not access
the LABCONM is formed by a group of students with a better performance than the group
that accessed the LABCONM, as shown in Fig. 8. This figure shows the results of both
groups for the calculus question in the first semester exam. From this figure, it is
possible to conclude that 17% of the students did not achieve a passing score (above 50%).

[Bar chart comparing the groups without and with LABCONM access across the grade categories
unsatisfactory (less than 50%), satisfactory (between 50% and 75%) and excellent (between
75% and 100%).]

Fig. 8. Results from calculus question – 1st semester exam

However, Table 1, which shows the results of the second exam that was applied after the
use of the remote laboratory, makes it possible to conclude that none of the students from
the group that accessed the LABCONM had an unsatisfactory performance.
On the other hand, in the group that did not access the LABCONM, 17% of the students
had problems solving this kind of mathematical question.
Therefore, there is an influence of the laboratory, especially for those students who have
more difficulties in theoretical learning.
In a general analysis, it may thus be affirmed that the class with access to the
laboratory had a larger group of students at the satisfactory and excellent levels.

5 Final Considerations

The development of a remote laboratory in the metal forming area faces some difficulties,
such as financial, technological, and maintenance issues, which are obstacles to the
expansion of this field of study. However, in this article an innovative remote laboratory
was described, which has a Remote Educational Compression Testing Machine capable of
performing real remotely operated compression tests on metal specimens.
The LABCONM validation showed good quality and repeatability of the MDTEC
results, and as a consequence, it is possible to conclude that the machine can be used
for the generation of flow curves in an educational setting. Furthermore, considering the
educational nature of the proposed experimental activity, it was verified through the
students' opinions that the laboratory has the potential to improve learning in the Metal
Forming course. This fact was shown more clearly in the exam scores of the group of
students that had access to the LABCONM, because despite the difficulties that they
have in calculus, none of them obtained an unsatisfactory result. Therefore, based on
these analyses, it is considered that the LABCONM is a great advance in the field of
remote engineering and virtual instrumentation and supports learning in the metal
forming area.

References
1. Fabregas, E., et al.: Development a remote laboratory for engineering education. Comput.
Educ. 57 (2011)
2. Pimentel, A.: A Teoria da Aprendizagem Experiencial como alicerce de estudos sobre
Desenvolvimento Profissional. Estudos de Psicologia, 159–168 (2007)
3. Ikhlef, A., et al.: Online temperature control system. In: International Conference on Interactive
Mobile Communication Technologies and Learning (IMCL). [S.l.]: [s.n.], pp. 75–78 (2014)
4. Valls, M.G., Val, P.B.: Usage of DDS data-centric middleware for remote monitoring and
control laboratories. IEEE Trans. Indus. Inf. 9(1), 567–574 (2013)
5. Vlasov, I., et al.: Global navigation satellite systems (GNSS) remote laboratory at BMSTU.
In: 2013 2nd Experiment@ International Conference (exp.at 2013), Exp.at. 2013, Coimbra,
pp. 64–67 (2013)
6. Ožvoldová, M., Špiláková, P., Tkac, L.: Archimedes' principle remote experiment. In: 11th
International Conference on Remote Engineering and Virtual Instrumentation (REV),
Porto-Portugal: [s.n.] (2014)
7. Barros, C., et al.: Remote physiological signals acquisition: didactic experiments. In: 11th
International Conference on Remote Engineering and Virtual Instrumentation (REV),
Porto-Portugal: [s.n.] (2014)


8. Ghorbel, H., et al.: Remote laboratory for control process practical course in eScience
project. In: International Conference on Interactive Mobile Communication Technologies
and Learning (IMCL), Thessaloniki, Greece: [s.n.] (2014)
9. Ayodele, K.P., Inyang, I.A., Kehinde, L.O.: An iLab for teaching advanced logic concepts
with hardware descriptive languages. IEEE Trans. Educ. 58(4), 262–268 (2015)
10. Restivo, M.T., et al.: A Remote Laboratory in Engineering Measurement, vol. 56, no. 12,
pp. 4836–4843, December 2009
11. Marcelino, R., et al.: Extended immersive learning environment: a hybrid remote/virtual
laboratory. Int. J. Online Eng. (IJOE) 6, 46–51 (2010)
12. Michels, L.B., et al.: Using remote experimentation for study on engineering concepts
through a Didactic press. In: 2nd Experiment@ International Conference - Exp’at, Coimbra:
[s.n.], pp. 191–193 (2013)
13. Nasri, I., Ennetta, R.: Determination of resonance frequency and estimation of damping ratio
for forced Vibrations modules using remote lab. In: International Conference on Interactive
Mobile Communication Technologies and Learning (IMCL), Thessaloniki, Greece: [s.n.]
(2014)
14. Terkowsky, C., et al.: Developing tele-operated laboratories for manufacturing engineering
education. In: International Conference on Remote Engineering and Virtual Instrumentation,
REV2010, Stockholm, pp. 60–70 (2010)
15. Michels, L.B., et al.: Educational compression testing machine for teleoperated teaching of
the metals flow curves. In: Exp’at 2015, Ponta Delgada: [s.n.] (2015)
16. Michels, L.B., et al.: Remote compression test machine for experimental teaching of
mechanical forming. Int. J. Online Eng. 12(04), 20–22 (2016)

A Virtual Proctor with Biometric Authentication
for Facilitating Distance Education

Zhou Zhang, El-Sayed Aziz, Sven Esche ✉ , and Constantin Chassapis



Department of Mechanical Engineering, Stevens Institute of Technology, Hoboken, NJ, USA


{zzhang11,eaziz,sesche,cchassap}@stevens.edu

Abstract. The lack of efficient and reliable proctoring for tests, examinations
and laboratory exercises is slowing down the adoption of distance education. At
present, the most popular solution is to arrange for proctors to supervise the
students through a surveillance camera system. This method exhibits two short‐
comings. The cost for setting up the surveillance system is high and the proctoring
process is laborious and tedious. In order to overcome these shortcomings, some
proctoring software that identifies and monitors student behavior during educa‐
tional activities has been developed. However, these software solutions exhibit
certain limitations: (i) They impose more severe restrictions on the students than
a human proctor would. The students have to sit upright and remain directly in
front of their webcams at all times. (ii) The reliability of these software systems
highly depends on the initial conditions under which the educational activity is
started. For example, changes in the lighting conditions can cause erroneous
results.
In order to improve the usability and to overcome the shortcomings of the
existing remote proctoring methods, a virtual proctor (VP) with biometric authen‐
tication and facial tracking functionality is proposed here. In this paper, a two-
stage approach (facial detection and facial recognition) for designing the VP is
introduced. Then, an innovative method to crop out the face region from images
based on facial detection is presented. After that, in order to render the usage of
the VP more comfortable to the students, in addition to an eigenface-based facial
recognition algorithm, a modified facial recognition method based on a real-time
stereo matching algorithm is employed to track the students’ movements. Then,
the VP identifies suspicious student behaviors that may represent cheating
attempts. By employing a combination of eigenface-based facial recognition and
real-time stereo matching, the students can move forward, backward, left, right
and can rotate their head in a larger range. In addition, the modified algorithm
used here is robust against changes in lighting, thus decreasing the possibility of false
identification of suspicious behaviors.

Keywords: Distance education · Virtual proctor · Face detection · Facial


recognition · Stereo matching

© Springer International Publishing AG 2018


M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_11

zamfira@unitbv.ro
A Virtual Proctor with Biometric Authentication 111

1 Introduction

The distance education market keeps growing rapidly [1]. While several research threads
(i.e. on real-time creation of virtual environments for virtual laboratories [2, 3], augmen‐
tation of virtual laboratories [4], creation of smart sensor networks [5], etc.) have
contributed to the continued adoption of distance education approaches, the lack of
efficient and reliable proctoring is slowing this adoption process down.
At present, the most popular solutions in distance education for monitoring an
experiment or an examination are human proctors. Human proctors used in distance
education can be teaching assistants, instructors, laboratory administrators and faculty
members. There are also companies that provide the service of monitoring examinations
from a distance (e.g. ProctorU [6]). In most remote proctoring cases, the students take
examinations and perform experiments on a computer and the proctor(s) watch(es) them
from another computer through video cameras. The human proctors must monitor a
screen throughout the entire process. The basic requirement for operating a remote
proctor is that it needs a remote surveillance camera system mounted at the student’s
site. The advantage of this method is that it is similar to traditional classroom education
and therefore provides fewer challenges than technology-assisted methods such as VPs.
However, this method also has two shortcomings. One disadvantage is that the opera-
tional costs are high, since such proctoring services currently charge over $60 per student
per course and the cost for setting up the surveillance system is also high. Another
disadvantage is that the proctoring process is laborious and tedious.
With the further development of computer vision technology, VPs appeared. VPs
are integrated software-hardware solutions that have the potential to contribute to
bringing academic integrity to distance education. They were enabled by the prolifera‐
tion of the high-speed Internet and advanced computer peripherals. They first perform
the authentication of the students by scanning either their faces [7] or their fingerprints
[8]. Then, a camera monitors the environment and/or a microphone records the sounds
within it. Virtual proctoring software used in distance education includes Remote
Proctor Pro [9], Instant-InTM Proctor [10], Proctortrack [7], Proctorfree [9] and
Securexam Remote Proctor [11]. Virtual proctoring has three advantages over human
proctors. First is its low fixed cost. The students only need to set up a webcam and install
the VP software which then performs the authentication of the student and the proctoring
of the educational activity. Typically, the cost of the VP (including webcam, microphone
and software kit) will not exceed $15 per student. The second advantage of virtual proc‐
toring lies in its convenience. There is no need for human proctors, and thus the educa‐
tional activity to be proctored can take place at anytime and anywhere. The third
advantage is in the accurate authentication of the students. The utilization of biometric
technologies enables the accurate recognition of the students, thus ensuring a reliable
authentication [10]. It should be noted that virtual proctoring is still evolving and current
systems are often attracting complaints from the students, mostly because of two short‐
comings exhibited by these systems. First, they impose more severe restrictions on the
students than a human proctor would. The students have to sit upright and remain directly
in front of their webcams at all times. Second, the reliability of the VP highly depends
on the initial conditions under which the proctoring is started. For example, changes in


the lighting conditions can cause mistakes in the verification of suspicious behav‐
iors [12].
In order to improve the usability and to overcome the shortcomings of the existing
remote proctoring methods, a VP with biometric authentication and facial tracking
functionality is proposed here. This VP is designed to authenticate the students and
capture suspicious behaviors based on facial recognition and facial tracking. The work‐
flow of this VP is depicted in Fig. 1 and is composed of two main parts: authentication
and supervision. In addition, there is a database which stores the enrolled students’ face
templates indexed with their campus ID. When they use this VP, the students are first
required to scan their face using a webcam. Second, the scanned frame is processed, and
the part of the frame containing the face is cropped out. Third, the face is then compared
with the face template that was stored in the face database and could be retrieved by the
index of student’s campus ID. If the face and the template match, the student is authen‐
ticated and can continue the educational activity. Otherwise, the student is logged out.
After authentication, the frame used for authentication is stored in a newly allocated
memory address and is taken as the new template. Then, the subsequent matching is
based on this new template instead of the template stored in the face database. Then, the
educational activities are monitored by the webcam. During the monitoring period, the
VP samples the live video of the student with a sample rate of 30 frames per second. If
the mismatching percentage between the sampled face and the new template exceeds a
pre-configured threshold value, a suspicious behavior is identified and a video clip is
recorded, which is then used for further verification by the instructor of the examinations
or experiments.
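
A strongly simplified sketch of this monitoring loop is shown below using OpenCV; the face
detection and cropping are omitted, the mismatch measure is reduced to a plain mean absolute
difference against the template, and the file name and threshold are illustrative
assumptions rather than the actual implementation.

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Illustrative assumptions: the authenticated face template is stored as a
    // grayscale image on disk and the mismatch threshold is chosen empirically.
    cv::Mat face_template = cv::imread("template_face.png", cv::IMREAD_GRAYSCALE);
    const double mismatch_threshold = 0.25;

    cv::VideoCapture webcam(0);
    if (face_template.empty() || !webcam.isOpened()) return 1;

    cv::Mat frame, gray;
    while (webcam.read(frame)) {                    // sampled at the camera frame rate
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::resize(gray, gray, face_template.size());

        // Very simple mismatch measure: mean absolute intensity difference
        // between the current sample and the template, normalized to [0, 1].
        double mismatch = cv::norm(gray, face_template, cv::NORM_L1) /
                          (255.0 * face_template.total());

        if (mismatch > mismatch_threshold) {
            // In the full system a video clip would be recorded here for
            // later verification by the instructor.
            std::cout << "suspicious behavior, mismatch = " << mismatch << "\n";
        }
    }
    return 0;
}
```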

Fig. 1. Workflow of virtual proctor based on facial recognition

2 Design of Virtual Proctor Based on Facial Recognition

2.1 Overview of Proposed Virtual Proctor


The proposed VP was designed based on facial recognition techniques. In order to reduce
the computational cost of the facial recognition while keeping its reliability, the process


was divided into two stages. The first stage is the detection of the face in a sampled
frame of the student’s live video. Once a face has been detected, the area of the detected
face is cropped out from that frame and used for the following facial recognition. The
second stage is the recognition of the detected face. The cropped face area is compared
with the face template as illustrated in Fig. 1. Because the cropped face area is much
smaller than the whole image area, the computational cost of recognizing the face is
reduced considerably.
Below, the reason why facial recognition was selected instead of other biometric
methods is explained first. Following that, various facial detection algorithms are
discussed, including the algorithm selected here and the method employed to crop out
the face area. Subsequently, a modified facial recognition algorithm based on stereo
matching is introduced that overcomes some of the shortcomings of other facial recog‐
nition algorithms based on template matching or eigenfaces. Finally, the results of some
benchmarks are presented to confirm that the proposed facial recognition algorithm is
reliable.

2.2 Advantages of Facial Recognition for Virtual Proctor


A VP should have the functions of both authentication and real-time monitoring. In the
authentication process, the method used to verify the students’ identity can be based on
their biometric information (such as face snapshot [10], fingerprint [8], palm print [13],
hand geometry [14], iris [15] and/or retina [16]). Following the authentication, the
students' activities are monitored by consecutively sampling the webcam video.
Compared with other biometrics-based methods, the facial recognition method
employed here has three advantages:
• The hardware is available and affordable. A common webcam, instead of special
biometrics data readers, can meet the hardware requirements for real-time facial
recognition and tracking.
• The algorithms used to implement facial recognition are much simpler than those
used in biometrics-based methods, thus allowing for a higher sampling frequency.
The features used for facial recognition are so notable that they can be identified very
easily. Therefore, the algorithms for the facial recognition are more robust and
simpler than other biometrics-based algorithms.
• Facial recognition is more practical than iris or retina tracking. Facial recognition is
macroscopic in scale while iris recognition and retina scanning are based on micro‐
cosmic features which have strict requirements related to the distance between the
scanners and the eyes.
Based on the above discussion, facial recognition was used to design the proposed VP.


2.3 Facial Detection


2.3.1 Selection of Facial Recognition Algorithm
Facial detection is the first step that precedes facial recognition. In fact, there are many
facial detection algorithms with different complexities. The most common methods in
facial detection include [17]:
• Detecting faces in images with a controlled background. The most common approach
in this method is to use the green screen algorithm [18] to crop out the faces. Although
this method is the simplest one, it is not practical to employ it in VPs because one
cannot expect the students to provide a green background.
• Detecting faces by color. This method uses a typical skin color to find face segments.
Obviously, it is not robust when the environment lighting condition is changed. In
addition, it is not universally effective for all kinds of skin colors.
• Finding faces by motion. This method assumes that the face is the only moving object
in consecutively acquired images. Thus, it is not effective in scenarios where there
are other moving objects in the background.
• Finding faces in unconstrained scenes. This method removes the constraints imposed
on the background (for example an intended green background) or the face itself (for
example the markers on faces). Hence, it represents a general and convenient method.
In addition, it can be further divided into tracking based on models (e.g. model-based
facial detection [19], edge-orientation matching based facial detection [20], Hausdorff
distance facial detection [21]) and weak classifier cascades (e.g. boosting classifier
cascades [22], asymmetric AdaBoost and detector cascades [23]).
Obviously, unconstrained methods are more appropriate for VPs. Tracking based on
models is not robust because of the lack of generalization in the definition of human
facial expressions [24]. On the other hand, facial detection using weak classifier cascades
is based on the analysis of human expressions, and hence, it is more general and robust.
The algorithms used here represent a modification of weak classifier cascades.

2.3.2 Innovative Method to Crop Out Face Based on Facial Detection


A basic cascade is a degenerate decision tree. The training process is implemented by
going through a sequence of weak classifiers expressed as functions (h1, h2, …, hn) with
binary outputs (true = 1 and false = 0) as illustrated in Fig. 2 [25]. The set of training
samples X1 is sent to classifier h1, and a large number of samples which make the output
of h1 equal to zero are rejected. Subsequently, the remaining samples are sent to h2, and
so on. After n stages, the number of samples is significantly smaller. The new classifier
is composed of (h1, h2, …, hn). Then, the remaining samples can be taken as the input of
other cascade processing or another detection system. After the training process
described above, a series of strong and accurate classifiers was obtained. For facial
detection, Haar features were used to train the classifier [26, 27]. In addition, the
Adaboost (adaptive boosting) algorithm was employed to find the best threshold for the
Haar features. For convenience reasons, the pre-trained classifier from OpenCV [28]
was used to implement the facial detection. More details can be found elsewhere [29].
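
Since the code in Fig. 4 is not reproduced here, the following sketch illustrates the same
idea with OpenCV's CascadeClassifier and pre-trained Haar cascades; the face and eye cascade
file names correspond to the data files shipped with OpenCV, the mouth cascade is assumed to
be available from the contrib data, and the input image is an illustrative assumption.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Pre-trained Haar cascades; the mouth cascade is assumed to come from the
    // OpenCV contrib data files.
    cv::CascadeClassifier face_cascade("haarcascade_frontalface_default.xml");
    cv::CascadeClassifier eye_cascade("haarcascade_eye.xml");
    cv::CascadeClassifier mouth_cascade("haarcascade_mcs_mouth.xml");

    cv::Mat frame = cv::imread("sampled_frame.png");
    if (frame.empty()) return 1;

    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    // Detect the face first, then search for eyes and mouth only inside the
    // detected face region to reduce the computational cost.
    std::vector<cv::Rect> faces, eyes, mouths;
    face_cascade.detectMultiScale(gray, faces, 1.1, 3);
    for (const cv::Rect& face : faces) {
        cv::Mat face_roi = gray(face);
        eye_cascade.detectMultiScale(face_roi, eyes, 1.1, 3);
        mouth_cascade.detectMultiScale(face_roi, mouths, 1.1, 3);
        // The centers of the eye and mouth rectangles (relative to face_roi)
        // serve as the landmarks E1, E2 and M used for cropping below.
    }
    return 0;
}
```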


Fig. 2. Schematic depiction of detection cascade

The process described above only finds an estimated area of the faces. It renders the
recognition process difficult since the information provided by the face area is insuffi‐
cient for the subsequent facial recognition. In fact, the estimated area of the faces results
in the loss of the entire background and part of the outline of the face. In order to
compensate for the loss of information and to facilitate the following facial recognition
process, a modification for the facial detection algorithms based on the localization of
the mouth and eyes was implemented. First, the coordinates of the eyes are set as
E1(x1, y1) and E2(x2, y2), and the coordinates of the mouth are set as M(x3, y3). All
coordinates represent the centers of the areas of the eyes and the mouth. The cross product
of the vectors E1E2 and E1M is positive if M lies above the line E1E2. Based on the golden
ratio of the human face [30], the face area forms a golden rectangle with the eyes at its
midpoint. The vertical ratio equals the distance between the pupils and the mouth in
relation to the distance from the hairline (H) to the chin (C), i.e. E1M/HC = 0.36. The
horizontal ratio equals the distance between the pupils in relation to the width of the face
from its left (L) to its right (R) boundary, i.e. E1E2/LR = 0.46 (see Fig. 3 [30, 31]). The
obtained rectangle should be LR × HC, but
the rectangle actually used to facilitate the face recognition is increased by 10 pixels in
each direction in order to avoid loss of the facial information. The part of the code based
on the cascade of OpenCV is listed in Fig. 4.
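
The crop computation described above can be sketched as follows; the eye and mouth centers
are assumed to be known from the detection step, and centering the rectangle on the eye
midpoint is an assumption made for this illustration.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>

// Computes the crop rectangle for facial recognition from the detected
// landmarks; e1, e2 are the eye centers and m is the mouth center. The 0.46
// and 0.36 ratios and the 10-pixel margin follow the description above.
cv::Rect face_crop(cv::Point2f e1, cv::Point2f e2, cv::Point2f m, cv::Size image) {
    cv::Point2f eye_center((e1.x + e2.x) * 0.5f, (e1.y + e2.y) * 0.5f);
    float eye_dist   = std::hypot(e2.x - e1.x, e2.y - e1.y);
    float mouth_dist = std::hypot(m.x - eye_center.x, m.y - eye_center.y);

    float width  = eye_dist / 0.46f;       // pupil distance / width      = 0.46 -> L-R extent
    float height = mouth_dist / 0.36f;     // pupil-mouth distance / height = 0.36 -> H-C extent

    // Rectangle centered on the eye midpoint, enlarged by 10 pixels per side.
    cv::Rect rect(static_cast<int>(eye_center.x - width / 2.0f) - 10,
                  static_cast<int>(eye_center.y - height / 2.0f) - 10,
                  static_cast<int>(width) + 20,
                  static_cast<int>(height) + 20);

    // Keep the enlarged rectangle inside the image boundaries.
    return rect & cv::Rect(0, 0, image.width, image.height);
}

int main() {
    cv::Rect crop = face_crop({220.f, 180.f}, {300.f, 180.f}, {260.f, 260.f},
                              cv::Size(640, 480));
    std::printf("crop: x=%d y=%d w=%d h=%d\n", crop.x, crop.y, crop.width, crop.height);
    return 0;
}
```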

Fig. 3. Golden ratio of human face


Fig. 4. Part of code used to detect face, eyes and mouth

2.4 Facial Recognition

2.4.1 Limitations of Template Matching and Eigenfaces in Facial Recognition


Face recognition methods for the images can be divided into feature-based methods [32]
and holistic methods [33]. Feature-based methods lead to robust results, but they make
the automatic detection of the features difficult to achieve. Obviously, these methods
are inappropriate for the VPs. Holistic methods have the advantage that they concentrate
on the limited regions or points of interest without the distortion of the information of
the images [34]. Their shortcoming is the hypothesis of equal importance of all pixels
in the image. These methods are not only costly but also sensitive to the relationship
between the training samples and the test data, to changes of the pose and to the illu‐
mination conditions. Typical holistic methods include template matching, eigenfaces, eigen‐
features, the combination of eigenfaces and eigenfeatures [35] and 2D matching.
In the template matching algorithm, the selected patch that is taken as the template
traverses the target image. Then, an error function is defined as:

R(x, y) = f [T(x′ , y′ ), I(x + x′ , y + y′ )] (1)

where R is the resulting error, T is the template, I is the target image, (x, y) are the
coordinates of the image in pixels, and (x′, y′) are the coordinates of the template in
pixels.
Different error functions can be specified depending on the prevailing conditions.
After comparison between the template and the target image, the best matches can be
found as global minima or maxima [36].
The eigenface method is an efficient approach for recognizing a face [37]. A high
recognition rate can be achieved with a low dimension d of the eigenvector space since
the recognition rates are stable when the dimension of the eigenvector space equals 8.


In order to identify the eigenvectors, a principal component analysis was used to find
the directions with the greatest variances of the components of a given dataset. These
variances are called principal components (and are also the eigenvalues associated with
the eigenvectors used in the eigenfaces). Then, a high-dimensional dataset is described
by such a series of correlated variables. The algorithm can be described as follows.
Let X = {x_1, x_2, …, x_n}, x_i ∈ R^d be a random vector wherein the x_i are the observations.
The expected value μ of the observations is:

$\mu = \frac{1}{n} \sum_{i=1}^{n} x_i \qquad (2)$

The covariance matrix S can be expressed as:

$S = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T \qquad (3)$

The eigenvalues λi and eigenvectors νi of S are defined by:

$S v_i = \lambda_i v_i, \quad i = 1, 2, \ldots, n \qquad (4)$

If there are k principal components and the corresponding eigenvectors are labelled
in descending order based on the values of the principal components, then the k principal
components of the observed vector x are given by:

$y = W^T (x - \mu), \quad W = \{v_1, v_2, \ldots, v_k\} \qquad (5)$

The reconstruction of the eigenvectors from the principal component analysis (PCA)
is given by:

$x = W y + \mu \qquad (6)$
Following the procedures outlined above, three more steps realize facial recognition.
In the first step, all training samples are projected into the PCA subspace composed of
eigenvectors. In the second step, the query image (i.e. the target image that will be
identified) is projected into the PCA subspace. In the last step, the nearest neighbor
between the projected training images (i.e. the former training samples) and the projected
query image is determined.
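The following NumPy sketch summarizes Eqs. (2)–(6) and the three recognition steps; the eigenvector dimension k = 8 follows the observation above, while the data layout (one flattened face per row) and the use of an SVD are assumptions of this sketch rather than the authors' implementation.

import numpy as np

def train_eigenfaces(X, k=8):
    # X: n x d matrix of flattened training faces (one image per row).
    mu = X.mean(axis=0)                              # Eq. (2)
    Xc = X - mu
    # Eigenvectors of the covariance matrix (Eqs. 3-4), obtained via an SVD for stability.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                                     # d x k matrix of the top-k eigenvectors
    Y = Xc @ W                                       # Eq. (5): projected training samples
    return mu, W, Y

def recognize(x, mu, W, Y, labels):
    y = (x - mu) @ W                                 # project the query image into the PCA subspace
    nearest = np.argmin(np.linalg.norm(Y - y, axis=1))   # nearest neighbour in the subspace
    return labels[nearest]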
Template matching is reliable and simple when the contexts of the images are
constrained. The eigenface method is designed based on a generalization of the faces,
and therefore, it is robust and accurate under conditions that vary only mildly.
Unfortunately, both the template matching method and the eigenface method are
vulnerable to changes in the environment and pose variations of the face. Therefore, the
method of facial recognition employed here is a modification of stereo matching based
on the facial detection results.


2.4.2 Facial-Detection-Based Facial Recognition with Stereo Matching


The proposed method used to recognize and track faces is based on stereo matching.
Stereo matching refers to comparing two images taken by nearby cameras and
attempting to map every pixel in one image to the corresponding location in the other
image. The proposed method can be described as follows:
• Detect the facial, eye and mouth area in the captured frame
• Crop out the facial area
• Mark the eyes, mouth as landmarks
• Rectify the captured frame based on the landmarks both in the template and the
captured frame
• Compare the template stored in the face database to the rectified frame (i.e. the frame
with the same row coordinates as the corresponding points in the template) using the
stereo matching algorithm
• Compute stereo correspondence which illustrates the relationship of the points
between the pair of images, and the matching cost which measures the similarity of
pixels
• Identify the students using a pre-set threshold on the matching cost; the value of this
threshold is selected according to the desired accuracy.
The core of these procedures is the stereo matching algorithm. The basic approach of
the matching algorithm can be described as follows:
The template image and the captured images are expressed in grayscale instead of
in R, G, B values. Given the intensity I(x, y) of a point in the template and the intensity
I′(x, y) of the assumed corresponding point in the captured image, the absolute intensity
disparity d(x, y) of the two points can be computed as follows:

$d(x, y) = \left\| I(x, y) - I'(x, y) \right\| \qquad (7)$

Then, the sum of the absolute intensity differences (SAD) of the intensity in the
template and captured image is:


$\mathrm{SAD}(d) = \sum_{j=-W}^{W} \sum_{i=-W}^{W} \left\| I(x+i, y+j) - I'(x+i+d, y+j) \right\| \qquad (8)$

If the SAD is used directly to obtain the disparity maps (which refer to the apparent
pixel difference or motion between a pair of stereo images), the noise in the disparity
maps is very large since the signal-to-noise ratio is too low. In order to optimize the
stereo matching accuracy, a box filter based on the cross-correlation in the window areas
(of size 2W × 2W) around the landmarks is used:


$S_C(x, y, d) = \sum_{j=-W}^{W} \sum_{i=-W}^{W} \left[ I(x+i, y+j) - I'(x+i+d, y+j) \right] \qquad (9)$


$S_C(x, y+1, d) = \sum_{j=-W-1}^{W+1} \sum_{i=-W}^{W} \left[ I(x+i, y+1+j) - I'(x+i+d, y+1+j) \right] \qquad (10)$

Denoting the sum of a row in the windows as:


$A_C(x, y, d, j) = \sum_{i=-W}^{W} \left[ I(x+i, y+1+j) - I'(x+i, y+1+j-d) \right] \qquad (11)$

Then, the cross-correlation in a 2W × 2W window centered at point (x, y) becomes:

$S_C(x, y+1, d) = S_C(x, y, d) + A_C(x, y, d, W+1) - A_C(x, y, d, -W) \qquad (12)$

Here, d is the possible offset of the actual location in the captured image (i.e. the
lag in the definition of the cross-correlation).
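A minimal NumPy sketch of the SAD cost of Eq. (8), evaluated in a window around a landmark for a range of candidate disparities, is given below; the window half-width W, the disparity range and the absence of boundary handling are assumptions made for demonstration.

import numpy as np

def sad_disparity(template, captured, x, y, W=7, max_disp=16):
    # Return the disparity d that minimizes the SAD cost of Eq. (8) at landmark (x, y).
    costs = []
    t = template[y - W:y + W + 1, x - W:x + W + 1].astype(np.int32)
    for d in range(max_disp):
        c = captured[y - W:y + W + 1, x - W + d:x + W + 1 + d].astype(np.int32)
        costs.append(np.abs(t - c).sum())            # sum of absolute intensity differences
    return int(np.argmin(costs))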
In Fig. 5, the comparison of the disparity maps obtained with SAD and SAD with
window filter is depicted. It can be seen that the noise can be reduced efficiently with
the window filter.

Fig. 5. Comparison of SAD and SAD with window filter

After obtaining the disparity between the template image and the captured image,
the facial recognition of the users can be performed.
In order to test the proposed method used to recognize the students, a benchmark
analysis was conducted which compares the proposed method with the template
matching and eigenface methods. In this benchmark, “the Sheffield (previously UMIST)
face database” [38, 39] was used to test the performance of different methods with
respect to the pose translation. 20 sample sets from this database were tested from
different viewpoints (0° to 90° in 10° intervals). The frontal view was defined as the 0°
viewpoint, and it was taken as the template. The test results are shown in Fig. 6. It is


seen that the stereo matching method provides for a higher reliability than the other two
methods when the pose translation is large.

Fig. 6. Template matching, eigenfaces and stereo matching with respect to pose translation

In order to test the reliability of the proposed method under variable illumination
conditions, a benchmark analysis with the “extended Yale face database B” [40, 41] was
conducted. The frontal faces of 15 individuals under 50 different illumination conditions
were examined. The results of the test are illustrated in Fig. 7. The eigenface method is
better than the template matching method, the reliability of which is critically affected

Fig. 7. Template matching, eigenfaces and stereo matching under various illumination conditions


by the illumination conditions. In contrast, the stereo matching method ensures a high
reliability even under poor illumination conditions.
As mentioned above, the final implementation of the stereo matching method focuses
on three areas: two eye-centered areas and one mouth-centered area. Decreasing the
sizes of the areas used for stereo matching not only increases the efficiency of the
stereo matching algorithm but also improves the reliability of the facial recognition. In
addition, ‘C++ accelerated massive parallelism’ [42] was used in the implementation
of the stereo matching algorithm, and therefore the speed of the execution of the program
was increased significantly.

3 Definition of Suspicious Behaviors and Test Results

It is difficult to define all possible suspicious behaviors that may represent cheating
attempts. Therefore, a small set of such behaviors was defined in the pilot implementa‐
tion of the proposed VP.
In order to understand this definition, a coordinate system was chosen as depicted in
Fig. 8. First, rotations of the head about the Z axis are considered normal. Suspicious
behaviors of “rotating head” correspond to rotations about either the X or Y axes.
Second, “moving relative to webcam” corresponding to a translation along the X, Y or
Z directions is also suspicious. The suspicious behavior of “rotating head” is judged by
the matching percentage between the captured frame and the template stored in the face
database. The essence of this method is that the face in frontal view is taken as the facial
recognition object. Thus, rotations about the X and Y axes generate obvious differences
between the captured frame and the template. In addition, these differences cannot be
eliminated by face alignment. Therefore, the rotation angle can be estimated according
to the face matching percentage. The calculation of translations is much easier than that
of rotations. The method used here is to track the location of the face and the size of the
face area. Then, the relative location of the face can be determined.
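A simple illustration of how the two criteria might be combined in code follows; the thresholds below are assumed values chosen for demonstration and are not the ones used in the pilot implementation.

def is_suspicious(match_pct, face_box, ref_box,
                  min_match=0.76, max_shift=40, max_scale=0.25):
    # "Rotating head": the matching percentage against the frontal template drops.
    if match_pct < min_match:
        return True
    # "Moving relative to webcam": the face bounding box (x, y, w, h) drifts or changes size.
    dx = abs(face_box[0] - ref_box[0])
    dy = abs(face_box[1] - ref_box[1])
    area_change = abs(face_box[2] * face_box[3] - ref_box[2] * ref_box[3]) / (ref_box[2] * ref_box[3])
    return dx > max_shift or dy > max_shift or area_change > max_scale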

Fig. 8. Definition of suspicious behaviors


In order to evaluate the performance of the VP, 50 intentional cheating attempts per
criterion were tested. The test results corresponding to different criteria are listed in
Table 1. These results show that the proposed VP is reliable with respect to pose trans‐
lations and illumination changes.

Table 1. Test results corresponding to different criteria based on 50 cheating attempts


Head rotation Rθ (deg)     |Rθ| ≤ 10   |Rθ| ≤ 30   |Rθ| ≤ 50   |Rθ| ≤ 60   |Rθ| > 70
Translation Td (cm)        |Td| ≤ 20   |Td| ≤ 40   |Td| ≤ 60   |Td| ≤ 70   any Td
Accuracy                   94%         ≤87%        ≤76%        ≤54%        ≤30%

For further assessment of the proposed algorithms used in the VP, a pilot test with
three volunteers was conducted. The details can be found elsewhere [43]. The difference
between this pilot test and the prior experiment involving 50 cheating attempts is that
the illumination conditions were changed while the volunteers were asked to bow their
head. Despite the changing conditions, the VP based on the proposed algorithms worked
well with respect to recognizing and tracking the users.

4 Conclusions and Future Work

In this paper, a VP based on facial recognition was introduced. In order to overcome the
shortcomings of existing facial recognition methods, a stereo matching method was
proposed to improve the reliability of the facial recognition. In order to evaluate the
reliability of this method, two benchmark analyses were conducted. The first analysis
was designed to determine the impact of the stereo matching method on the reliability
for large pose translations. The second analysis was aiming to test the performance of
the stereo matching method under variable illumination conditions. The results proved
that the proposed method has a higher reliability than the existing facial recognition
methods (template matching and eigenfaces).
In the future, the performance of the VP will be enhanced by adding voice identifi‐
cation and recognition functions, adding screen monitoring functionality, targeting more
complicated suspicious behaviors and optimizing the recognition algorithms.
Although the proposed VP still has certain limitations, it performed well under labo‐
ratory conditions. In addition, it has the potential to replace human proctors in both
distance education and traditional classroom settings.

References

1. http://wcet.wiche.edu/initiatives/research/WCET-Distance-Education-Enrollment-Report-2016.
Accessed Nov 2016
2. Zhang, Z., Zhang, M., Chang, Y., Esche, S.K., Chassapis, C.: A smart method for developing
game-based virtual laboratories. In: Proceedings of the ASME International Mechanical
Engineering Conference and Exposition, IMECE 2015, Houston, Texas, 13–19 November
2015


3. Zhang, Z., Zhang, M., Chang, Y., Esche, S.K., Chassapis, C.: Real-time 3D reconstruction
for facilitating the development of game-based virtual laboratories. Comput. Educ. J. 7(1),
85–99 (2016)
4. Zhang, Z., Zhang, M., Tumkor, S., Chang, Y., Esche, S.K., Chassapis, C.: Integration of
physical devices into game-based virtual reality. Int. J. Online Eng. 9, 25–38 (2013)
5. Qureshi, F., Terzopoulos, D.: Smart camera networks in virtual reality. In: Proceedings of
First ACM/IEEE International Conference on Distributed Smart Cameras, Vienna, Austria,
25–28 September 2007
6. http://www.proctoru.com/. Accessed Oct 2016
7. http://www.proctortrack.com/. Accessed Oct 2016
8. http://www.softwaresecure.com/remote-proctor-pro-faq/. Accessed Oct 2016
9. http://proctorfree.com/. Accessed Oct 2016
10. http://www.biomids.com/proctoring/. Accessed Oct 2016
11. http://remoteproctor.com/rpinstall/orgselector/orgselector.aspx. Accessed Oct 2016
12. http://www.nytimes.com/2015/04/06/technology/online-test-takers-feel-anti-cheating-
softwares-uneasy-glare.html. Accessed Sept 2016
13. Rasmussen, K.B., Roeschlin, M., Martinovic, I., Tsudik, G.: Authentication using pulse-
response biometrics. In: Proceedings of Network and Distributed System Security
Symposium 2014, San Diego, California, USA, 23–25 February 2014
14. Bača, M., Grd, P., Fotak, T.: Basic principles and trends in hand geometry and hand shape
biometrics. In: New Trends and Developments in Biometrics. INTECH Open Access
Publisher (2012)
15. http://proctorfree.com/blog/the-future-is-now-iris-recognition-technology-makes-id-cards-
redundant. Accessed Oct 2016
16. Proctor, R.W., Lien, M.C., Salvendy, G., Schultz, E.E.: A task analysis of usability in third-
party authentication. Inf. Secur. Bull. 5(3), 49–56 (2000)
17. https://facedetection.com/algorithms/. Accessed Oct 2016
18. Horprasert, T., Harwood, D., Davis, L.S.: A robust background subtraction and shadow
detection. In: Proceedings of 4th Asian Conference on Computer Vision, Taipei, Taiwan, 5–
8 January 2000
19. http://www.cs.rutgers.edu/~decarlo/facetrack.html. Accessed Oct 2016
20. Fröba, B., Külbeck, C.: Real-time face detection using edge-orientation matching. In:
Proceedings of International Conference on Audio- and Video-Based Biometric Person
Authentication, Halmstad, Sweden, 6–8 June 2001
21. Jesorsky, O., Kirchberg, K.J., Frischholz, R.W.: Robust face detection using the Hausdorff
distance. In: Proceedings of International Conference on Audio- and Video-Based Biometric
Person Authentication, Halmstad, Sweden, 6–8 June 2001
22. Vasconcelos, N., Saberian, M.J.: Boosting classifier cascades. In: Proceedings of Advances
in Neural Information Processing Systems 23, Vancouver, British Columbia, Canada, 6–9
December 2010
23. Viola, P., Jones, M.: Fast and robust classification using asymmetric adaboost and a detector
cascade. In: Proceedings of Advances in Neural Information Processing System 14,
Vancouver, British Columbia, Canada, 3–8 December 2001
24. Gokturk, S.B., Bouguet, J.Y., Tomasi, C., Girod, B.: Model-based face tracking for view-
independent facial expression recognition. In: Proceedings of Fifth IEEE International
Conference on Automatic Face and Gesture Recognition, Washington D.C., USA, 20–21 May
2002
25. Viola, P., Jones, M.: Robust real-time object detection. Int. J. Comput. Vis. 57(2), 137–154
(2004)


26. Wilson, P.I., Fernandez, J.: Facial feature detection using Haar classifiers. J. Comput. Sci.
Coll. 21(4), 127–133 (2006)
27. http://docs.opencv.org/master/d7/d8b/tutorial_py_face_detection.html. Accessed Nov 2016
28. http://opencv.org/. Accessed Nov 2016
29. Zhang, Z., Zhang, M., Chang, Y., Esche, S.K., Chassapis, C.: A virtual laboratory system
with biometric authentication and remote proctoring based on facial recognition. In:
Proceedings of the 2016 ASEE Annual Conference and Exposition, New Orleans, LA, USA,
26–29 June 2016
30. http://www.goldennumber.net/face/. Accessed Nov 2016
31. http://www.goldennumber.net/facial-beauty-new-golden-ratio/. Accessed Nov 2016
32. Brunelli, R., Poggio, T.: Face recognition: features versus templates. IEEE Trans. Pattern
Anal. Mach. Intell. 15, 1042–1052 (1993)
33. Turk, M., Pentland, A.: Eigenfaces for recognition. J. Cogn. Neurosci. 3, 71–86 (1991)
34. Jafri, R., Arabnia, H.: A survey of face recognition techniques. J. Inf. Proces. Syst. 5(2), 41–
68 (2009)
35. Pentland, A., Moghaddam, B., Starner, T.: View-based and modular eigenspaces for face
recognition. In: Proceedings of IEEE Computer Society Conference on Computer Vision and
Pattern Recognition, Seattle, WA, 21–23 June 1994
36. http://docs.opencv.org/2.4/modules/imgproc/doc/object_detection.html. Accessed Nov 2016
37. Menezes, P., Barreto, J.C., Dias, J.: Face tracking based on Haar-like features and eigenfaces.
In: Proceedings of IFAC/EURON Symposium on Intelligent Autonomous Vehicles, Técnico,
Lisboa, Portugal, 5–7 July 2004
38. https://www.sheffield.ac.uk/eee/research/iel/research/face. Accessed Nov 2016
39. Graham, D.B., Allinson, N.M.: Characterising virtual eigensignatures for general purpose
face recognition, pp. 446–456. Springer, Heidelberg (1998). doi:10.1007/978-3-642-72201-1_25
40. http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html. Accessed Nov 2016
41. Georghiades, A.S., Belhumeur, P.N., Kriegman, D.J.: From few to many: illumination cone
models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach.
Intell. 23(6), 643–660 (2001)
42. https://msdn.microsoft.com/en-us/library/hh265136.aspx. Accessed Nov 2016
43. Zhang, Z., Zhang, M., Chang, Y., Esche, S.K., Chassapis, C.: A virtual laboratory combined
with biometric authentication and 3D reconstruction. In: Proceedings of the ASME
International Mechanical Engineering Conference and Exposition, IMECE 2016, Phoenix,
Arizona, USA, 11–17 November 2016

From a Hands-on Chemistry Lab to a Remote Chemistry
Lab: Challenges and Constraints

San Cristobal Elio ✉ , J.P. Herranz, German Carro, Alfonso Contreras,



Eugenio Muñoz Camacho, Felix Garcia-Loro, and Manuel Castro Gil


UNED, DIEEC, Madrid, Spain
elio@ieec.uned.es

Abstract. The spread of remote labs in universities is a current reality. They are
strong e-learning tools which allow students to carry out online experiments on
real equipment, and they give universities e-learning tools for methodologies
such as blended learning and distance learning. Such remote labs have been
developed for many science fields such as electronics, robotics and physics. Never-
theless, it is very difficult to find chemistry remote labs. This paper shows the
difficulties of choosing a chemistry lab which can become a remote chemistry
lab, and a first approach to converting a hands-on chemistry lab into a remote one.

Keywords: Blended and distance learning · E-learning tools · Hands-on and


remote labs

1 Introduction

Traditionally, students learnt theoretical knowledge in face-to-face classrooms and
acquired skills from hands-on laboratories. But in the last decades, this model has
shifted from face-to-face classrooms and hands-on labs to online courses and virtual
and remote labs.
Nowadays, many universities provide virtual and remote labs where their students can
carry out experiments from any place and at any time.
• Virtual labs are simulation programs which allow students to carry out online experi-
ments. A great number of them are available on the Internet (Fig. 1), such as:
• Chemistry labs. For instance, acid–base solutions from https://phet.colorado.edu
• Physics labs. For instance, Newton’s Cradle from http://www.myphysicslab.com/
• Electronics labs. For instance, basic digital labs from http://meteo.ieec.uned.es/
www_Usumeteog/
• Remote labs are software programs which allow students to carry out experiments
with real equipment at any time and from any place. In contrast to virtual labs, remote labs
work with real instruments [1–3]; therefore, the vast majority of them must control
access to the lab (only one person at a time). To do this, a set of services is
created around the remote labs, such as control of users and a calendar.



Fig. 1. Virtual labs or simulation web programs

These remote labs also cover a great number of science fields (Fig. 2), such as:
• Robotics labs. For instance, the i-robot lab from The Labshare Institute in Australia. This
remote lab was designed to allow students to explore the concepts of teleoperation
of robots, accuracy of sensors, localization and mapping [4]. Or the robotic arm
from UNED, which allows students to work with a real robotic arm [5].
• Physics remote labs. For instance, the Archimedes remote lab from Deusto University,
where secondary school students learn Archimedes’ principle [6].
• Electronics remote labs. There are a lot of remote labs in this field, but only two of
them are going to be described:
– The first one is VISIR. This lab is really interesting for several reasons: more than
one user can access it at the same time, and several universities have implemented this
lab and are working together in projects such as VISIR+ [7] and PILAR [8]. VISIR
allows wiring and measuring of electronic circuits remotely on a virtual work-
bench that replicates physical circuit breadboards.
– The second one is the Microelectronics Device Characterization remote lab. This
measures the DC current–voltage characteristics of microelectronic devices such
as diodes and transistors [9].
This section showed several examples of virtual and remote labs in different science
fields. The next sections focus on briefly describing the architecture of remote
labs and the difficulties of creating chemistry remote labs due to their nature.


Fig. 2. Remote labs (real equipment)

2 Architecture of Remote Labs and Chemistry Experiments

The vast majority of remote labs are based on the same architecture. It is composed of:
• A web server, which contains services such as control of users, a calendar and user interfaces.
It also communicates with the user and the lab server.
• A lab server, which contains the program that acts on the real equipment and sends the results
to the web server.
• Real equipment, which depends on the remote lab. The above section showed real
instrumentation such as a robotic arm, electronic circuits, motors and pipettes.
• A web cam, which allows students to see the results of acting on the real equipment.
Depending on the web cam, students can zoom in or out on the real instrumentation.
This is the hardware architecture, but there are also global phases that a remote lab should
fulfill (Fig. 3). These phases are:
• Initial state. Students must find the instrumentation in an initial state. For instance, in
the Archimedes lab of Fig. 2 the balls must be out of the water, or in the VISIR lab the
inputs of the circuits must be in their initial state.
• Experimentation. This phase can be divided into others, such as acting on the lab,
storing the student’s actions on the equipment, storing the results of these actions, etc.
• Results. The lab should show a report of the results of the experiments.
• Visualization. Everything that happens during the experimentation process must be watched
by students through web cams, user interfaces, etc.


Fig. 3. Simplification of phases of remote lab

In the case of chemistry experimentation, several constraints are found in some of
these phases. The following subsection describes them briefly.

2.1 Constraints on Creating Chemistry Remote Labs

Chemistry labs work with liquids, solids and gases. These resources are combined to
create new ones. Such experiments have a set of requirements that are really difficult to
meet in the phases of a remote lab.
• Initial state:
– Many chemistry laboratories work with fluids. These fluids are mixed and some-
times evaporated; therefore, when students finish their experiments the fluids must
be replaced and the instrumentation must be cleaned.
– Many chemistry labs work with solids and liquids. These can vary in weight or
volume. They can also mix, giving as a result another chemical compound. There-
fore, it is really difficult to restore the initial state without human help.
• Experimentation:
– Chemistry labs need to handle and weigh solid material. For instance, the experiment
on the reaction of zinc with iodine needs zinc powder, about 0.5 g, sulfuric acid,
about 20 cm3, etc. Implementing the mechanics to do these measurements in an
automatic way is really complicated.
• Visualization:
– Chemistry labs which work with gases and transparent liquids are difficult to
watch with web cams.
– In some chemistry labs, odors are also important for students. Remote labs are
not able to provide this sense, although it is possible to use gas sensors.


All these reasons show the difficulties of designing and developing a chemistry
remote laboratory.

3 Selecting Chemical Experiment

Keeping all these constraints in mind, the Electrical and Computer
Engineering Department and the Chemistry Applied to Engineering Department
of UNED decided to start the conversion of the hands-on hydrogen-solar
equipment.
This equipment allows students to carry out hydrogen-solar energy cycle experi-
ments. To do this, the equipment provides a set of elements to convert water into hydrogen
and oxygen, to store these in graduated cylinders and to consume them in a fuel cell,
producing electrical energy and water. This energy can be used to switch on a bulb or
start a motor.

Fig. 4. Hydrogen-solar equipment.

As mentioned above, the equipment is a set of hardware elements which allow
performing this chemical process (Fig. 4). Among them:
• Light source. The sun is replaced by a lamp. This lamp simulates renewable energy.
Students and teachers can move the lamp closer to or farther from the solar panel. This allows
simulating the variation of light radiation on the solar panel.
• Solar panel. It converts the luminous energy supplied by the lamp into
electrical energy. Students and teachers can vary the solar panel orientation and simu-
late different inclinations.
• Electrolyzer. It decomposes water into hydrogen and oxygen by using the electrical
energy supplied by the solar panel.


• Fuel cell. It consists of two PEM fuel cells that can be connected in series or in
parallel. They are used to generate electricity from the hydrogen and oxygen
produced by the electrolyzer.
• Load module. It consists of an engine, a lamp and a set of resistors that allow using
the electric energy generated by the fuel cell.
• Measuring devices. These comprise a voltmeter and an ammeter to visualize the
different voltages and currents of the electric energy produced and consumed in
each of the processes.
Although this lab has to be filled with water for the initial state, the rest of the
experimentation can be automated for blended and distance learning.

4 Hands-on Hydrogen-Solar Equipment to Remote Lab

This hands-on lab can be converted into a remote lab. In this first step, the Electrical and
Computer Engineering Department of UNED has focused on the load module, which can be
replaced by an IoT device, such as an Arduino and/or a Raspberry Pi. These devices can
manage a dimmer which controls the intensity of a lamp (Fig. 5).

Fig. 5. Remote control of load module

The Arduino and Raspberry Pi allow remote lab programmers to create a web page where
students can change the intensity of the lamp, as sketched below.
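As a sketch of this idea, the snippet below exposes a single web endpoint on a Raspberry Pi that sets the PWM duty cycle driving the dimmer. The GPIO pin, the PWM frequency and the use of Flask are assumptions made for illustration only; the actual UNED implementation may differ.

from flask import Flask, request
import RPi.GPIO as GPIO

LAMP_PIN = 18                      # assumed GPIO pin wired to the dimmer input
GPIO.setmode(GPIO.BCM)
GPIO.setup(LAMP_PIN, GPIO.OUT)
pwm = GPIO.PWM(LAMP_PIN, 1000)     # 1 kHz PWM chosen as an example frequency
pwm.start(0)

app = Flask(__name__)

@app.route("/lamp", methods=["POST"])
def set_lamp():
    # Expected form field: intensity in percent (0-100), sent from the student web page.
    level = max(0.0, min(100.0, float(request.form["intensity"])))
    pwm.ChangeDutyCycle(level)
    return {"intensity": level}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)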
Along with the modification of the load module, a web cam connected directly to
Ethernet will allow students to watch the real instrumentation and the chemical process.

5 Conclusion

This paper shows the difficulties of creating remote chemistry labs. To do this, the paper describes:
• A state of the art of virtual and remote labs and some of the science fields where they are
applied.
• The global architecture of remote labs and the phases of a remote lab.
• The constraints that have to be considered if someone wants to develop a chemistry remote
lab.


• The selection of a chemistry lab that can minimize these constraints and will become
a remote lab.
• And finally, the initial steps taken by the Electrical and Computer Engineering
Department and the Chemistry Applied to Engineering Department of UNED to
create a remote chemistry lab.
Although a long road lies ahead, the first steps have been taken.

Acknowledgement. The authors acknowledge the support of the eMadrid project (Investigación
y desarrollo de tecnologías educativas en la Comunidad de Madrid) - S2013/ICE-2715, VISIR+
project (Educational Modules for Electric and Electronic Circuits Theory and Practice following
an Enquiry-based Teaching and Learning Methodology supported by VISIR) Erasmus+ Capacity
Building in Higher Education 2015 nº 561735-EPP-1-2015-1-PT-EPPKA2-CBHE-JP and PILAR
project (Platform Integration of Laboratories based on the Architecture of visiR), Erasmus+
Strategic Partnership nº 2016-1-ES01-KA203-025327.

References

1. García-Zubia, J., Orduña, P., López-de-Ipiña, D., Alves, G.R.: Addressing software impact in
the design of remote laboratories. IEEE Trans. Industr. Electron. 56(12), 4757–4767 (2009)
2. Gomes, L., Bogosyan, S.: Current trends in remote laboratories. IEEE Trans. Industr. Electron.
56(12), 4744–4756 (2009)
3. Tawfik, M., Sancristobal, E., Martin, S., Diaz, G., Peire, J., Castro, M.: Expanding the
boundaries of the classroom: implementation of remote laboratories for industrial electronics
disciplines. Ind. Electron. Mag. 7(1), 41–49 (2013). IEEE
4. Labshare Labs. http://www.labshare.edu.au/catalogue/rigtypedetail/?id=42&version=1.3.
Accessed 9 Nov 2016
5. Carro, G., Plaza, P., Sancristobal, E., Castro, M.: A wireless robotic educational platform
approach. In: 13th International Conference on Remote Engineering and Virtual
Instrumentation (REV) (2016)
6. Garcia-Zubia, J., et al.: Archimedes remote lab for secondary schools. In: 3rd Experiment@
International Conference, exp.at 2015 (2015)
7. VISIR+ Project: http://www2.isep.ipp.pt/visir/. Accessed 16 Nov 2016
8. PILAR Project. http://ec.europa.eu/programmes/erasmus-plus/projects/eplus-project-details-
page/?nodeRef=workspace://SpacesStore/2d88ecb1-3db1-4a29-93c1-dd2802eec4f6.
Accessed 16 Nov 2016
9. Microelectronics Device Characterization Lab (MIT). http://ceci.mit.edu/projects/iLabs/
Accessed 16 Nov 2016

Advanced Intrusion Prevention for Geographically
Dispersed Higher Education Cloud Networks

C. DeCusatis1 ✉ , P. Liengtiraphan1, and A. Sager2



1
Marist College, Poughkeepsie, NY, USA
{casimer.decusatis,Piradon.Liengtiraphan1}@marist.edu
2
BlackRidge Technologies, Reno, NV, USA
tsager@blackridge.us

Abstract. We present the design and implementation of a novel cybersecurity


architecture for a Linux community public cloud supporting education and
research. The approach combines first packet authentication and transport layer
access control gateways to block fingerprinting of key network resources. Exper‐
imental results are presented for two interconnected data centers in New York.
We show that this approach can block denial of service attacks and network
scanners, and provide geolocation attribution based on a syslog classifier.

Keywords: Authentication · Identity management · Attribution

1 Introduction

Higher education institutions in the U.S. are expected to spend about $10.8 billion on
information technology (IT) in 2016 (up from $6.6 billion last year), primarily driven
by investments in enterprise networks [1]. Globally, the higher education market is
expected to spend over $38.2 billion on IT in 2016 alone [2]. According to EduCause,
a nonprofit organization of IT leaders from higher education [3], the leading issue driving
upgrades for these organizations is information security. Security concerns among
higher education institutions appear to be well justified; the environment in which higher
education institutions operate, and the data which they store, has made them prime
targets for cyberattacks. Recent survey data indicates that 35% of all security breaches
take place in higher education [3]. Among those institutions suffering a breach, over
46% verified advanced persistent threat (APT) activity taking place in their environment
[4]. Higher education institutions collect and retain valuable data such as student,
alumni, and faculty personally identifiable information (PII) including medical records;
research data which may be subject to export control regulations; financial and
accounting data including student tuition, loans, and institution accounting records; and
critical infrastructure or intellectual property information including analytic systems
used for grading and research. This type of information is subject to various local,
national, and international security and privacy compliance regulations, including the
NIST 800 series of security guidelines [5]. In some ways, higher education can be
considered a large enterprise; despite this, higher education is not currently classified as



a “mission critical” application by the U.S. federal government [5]. In fact, many large
enterprises employ security policies based on the principle “exclude everything, allow
specific”, while the nature of higher education is just the opposite, and often implements
policies such as “allow everything, exclude specific” in an attempt to promote shared
academic research and education. This can make it particularly challenging to develop
effective security policies for higher education institutions.
A recent example involves the Linux One Community Cloud, a collaboration
between industry and academia to provide free access to an open source Linux devel‐
opment environment [6]. In August 2015, IBM announced a series of enterprise-class
servers which run only the Linux operating system. The Linux One platforms currently
support SUSE, Red Hat, and Ubuntu Linux distributions, along with a variety of
supporting tools such as Apache Spark, MongoDB, and Chef. In order to promote
development and research on this platform, the Linux One Community Cloud makes it
possible for anyone to request a free instance of the Linux One servers and toolsets. This
environment is hosted at the New York State Center for Cloud Computing and Analytics
(CCAC) at Marist College (a private, 6,000 student institution in upstate New York),
and is managed from an IBM development location in Poughkeepsie, NY. However, this
open innovation initiative also means that the cloud hosting Linux One is subject to
continuous cyberattacks from bad actors who attempt to exploit the open access privi‐
leges in this environment. There is a need for an intrusion prevention and authentication
solution which limits access to the cloud development code to only authorized users,
while at the same time preventing malicious reconnaissance attempts to fingerprint the
cloud infrastructure or launch denial of service (DoS) attacks.
In this paper, we present results of a cybersecurity testbed deployed in production
for the Linux One community cloud. Our research addresses the unique cybersecurity
requirements of this environment, including improved authentication as well as identity
and access management within a cloud data center. The key points of novelty for this
work include the use of network-based identities in a hybrid public/private cloud;
specifically, we demonstrate a combination of BlackRidge Technology first packet
authentication and transport layer access control (TAC) technologies. We experimen‐
tally demonstrate user identity management in the Linux One community cloud,
including the novel ability to prevent unauthorized fingerprinting of key network
resources. Further, we have developed original software to parse the logs from these
appliances and related honeypots, performing geolocation and botnet classification. This
work is intended to address the leading concerns expressed in recent surveys of chief
information security officers in academia, and enable replication of our security solution
at other colleges and universities. We deploy BlackRidge Technology TAC virtual
appliances throughout the network which manage user identity based on the first packet
used in transport connection requests. This solution includes software developed
specifically for this project which performs geolocation and attribution for all unau‐
thorized access attempts, and enables collection of analytic data on attempted attacks
which can be processed into actionable threat intelligence. Experimental results are
presented, demonstrating that our approach detected and blocked 1,161 unauthorized
access attempts in the first twelve hours of production deployment. Over a period of ten
days, our approach successfully blocked over 18,000 attacks, which we have attributed


to locations in China, Korea, Brazil, Vietnam, and elsewhere. We also demonstrate the
ability to identify insider threats by running our authentication technology inside the
college firewall (an essential enabling feature for a NIST zero trust network [7]). We
present data demonstrating that this approach successfully prevents IP Spoofing and
Denial of Service attacks, and identifies network scanners such as Nessus if they are
operating on the cloud network. This functionality was not possible using conventional
network security approaches.
The paper is organized as follows. Section 1 provides an introduction and motivation
for this work, and an overview of our novel contributions. Section 2 describes TAC and
first packet authentication technologies in more detail. Section 3 provides experimental
results obtained from the Linux One higher education cloud deployment over a 30 day
period. Section 4 includes a summary and conclusions.

2 BlackRidge Technology Transport Access Control (TAC)


Architecture

Our approach is based on a novel combination of two technologies, namely transport


access control and first packet authentication. In our proposed explicit trust model, each
network session is independently authenticated at the transport layer before any access
to the network or protected servers is granted. Unauthorized traffic is simply rejected
from the network, and there is no feedback to a potential attacker attempting to finger‐
print the system. Explicit trust is established by generating a network identity token
during session setup. The network token is a 32 bit, cryptographically secure, single use
object which expires after four seconds. Tokens are associated with identities from
existing Identity Access Management (IAM) systems and credentials, such as Microsoft
Active Directory or the IAM system used by Amazon Web Services [8]. Explicit trust
is established by authenticating these identity tokens on the first packet of a TCP
connection request, before the conventional 3-way TCP handshake is completed and
before sessions with cloud or network resources are established.
Tokens are generated for each unique entity requesting access to a network resource;
these entities are generally a user or device. An in-line virtual security gateway is then
implemented between the equipment being protected and the rest of the network. The
approach is illustrated in Figs. 1 and 2, which show a conventional security architecture
before addition of the TAC gateways and our new approach following addition of the
TAC gateways. In Fig. 1, a conventional security architecture would simply place a
commercially available intrusion prevention system (IPS) such as the Juniper 3600
platform between the untrusted Internet and resources connected to an education
network (for example, the three Linux servers). However, conventional IPS systems
cannot block network reconnaissance and scanning attempts, or perform first packet
authentication when a user requests a secure session. To improve on the conventional
approach, Fig. 2 shows the placement of two BlackRidge Technologies TAC gateways
within a higher education cloud network architecture. A TAC gateway appliance is
connected in the path between this user and the remaining network, and a second gateway
is positioned before the protected resources. The first gateway inserts an identity token


in the first packet of the TCP connection request. The second gateway enforces the
network access policy by extracting the token, resolving the token to an identity, and
determining the identity’s authorizations. Trusted users (attempting to access the educa‐
tion network) have identity tokens inserted by Gateway A; untrusted users receive no
such authentication tokens. The TAC gateways are configured to protect sensitive
resources, such as the cluster of Linux servers. When the second gateway receives a
connection request, it extracts and authenticates the inserted identity token and then
applies a security policy (such as forward, redirect, or discard) to the connection request
based on the received identity. This gateway acts as a policy enforcement point trans‐
parent to the rest of the system architecture and backwards compatible with existing
network technologies. Trusted users will be authenticated by Gateway B, allowing them
full access to the Linux server cluster. Untrusted users are not recognized by Gateway
B, and their first packet requesting a new session is dropped, along with all responses
at or below the transport layer. In this manner, the untrusted user is unable to determine
that the Linux server cluster exists, and cannot begin to mount an attack. The attempted
access is logged in an external syslog server, which allocates enough memory to avoid
wrapping and over-writing log entries. Existing security information and event manage‐
ment (SIEM) tools can still be used to analyze the logs or generate alerts of suspicious
activity. We note that continuous logging of all access attempts is consistent with the
approach of a zero trust network (i.e. not allowing any access attempts to go unmoni‐
tored). Conventional denial of service (DoS) and port scanner attacks from an untrusted
user are similarly blocked, effectively cloaking the presence of the Linux server cluster
in this example. Note that the conventional IPS platform is no longer required, but may
remain in place since it is transparent to the TAC gateway authentication process. We
may also add features such as honeypots which accept redirect requests from a failed
access attempt at the TAC gateway (for example, SSH honeypots may be configured in
this manner). This enables the collection of attack data which may subsequently be used
to craft actionable threat intelligence, such as attack signatures. Both the identity inser‐
tion gateway and identity authentication gateway appliances can be implemented as
virtual network functions (VNFs) hosted on a virtual server, router, or similar platform.
This approach has several advantages, including separation of security policy from
the network design (i.e. network addresses and topologies) [7]. This approach works for
any network topology or addressing scheme, including IPv4, IPv6, and networks which
use the Network Address Translation (NAT) protocol and is compatible with dynamic
addressing often used with mobile devices. This approach extracts, authenticates, and
applies policy to the connection requests, not only protecting against unauthorized

Fig. 1. Conventional network IPS


Fig. 2. Deployment of TAC gateways in the education network

external reconnaissance of the network devices but also stopping any malware within
the protected devices from calling home (exfiltration). Security policies can be easily
applied at the earliest possible time to conceal network attached devices from unau‐
thorized awareness. By preventing unauthorized scanning and reconnaissance, TAC
disrupts the attacker’s kill chain, blocks both known and unknown attack vectors, and
stops lateral attack spreading within a data center. This approach is low latency and high
bandwidth since packet content is not inspected. Since the network tokens are embedded
in the TCP session request, they do not consume otherwise useful data bandwidth. The
combination of transport access control and a segmented, multi-tenant network imple‐
ments a layered defense against cybersecurity threats, and contributes to non-repudiation
of archival data. These techniques are also well suited to protecting public and hybrid
cloud resources, or valuable, high performance cloud resources such as enterprise-class
mainframe computers and higher education data centers. Further, this approach can be
applied to software defined networks (SDN), protecting the centralized SDN network
controller from unauthorized access, and enabling only authorized SDN controllers to
manage and configure the underlying network. Further, our implementation of TAC uses
an innovative identity token cache to provide high scalability and low, deterministic
latency. The token cache is tolerant of packet loss and enables TAC deployments in low
bandwidth and high packet loss environments.

3 Experimental Results

The Linux One geographically distributed community cloud (Phase One production
environment) created for these experiments is shown in Fig. 3. This cloud interconnects
two physical data centers, namely the Linux One cloud data center hosted at Marist
College near Poughkeepsie, NY; the IBM data center hosted in their Poughkeepsie, NY
facility. The Marist College and IBM Poughkeepsie data centers are located approxi‐
mately 8.5 km apart in upstate New York.


Fig. 3. Linux one community cloud architecture

Users connect to the Linux One Community Cloud via a secure Internet portal to
an Apache web server at the Marist College data center. Content management
servers in this data center host instances of OpenStack (Liberty and Juno releases),
Maria database server, IBM Java Development Kit (JDK), and IBM BlueMix
DevOps Build Engine. These applications are hosted on virtual machines (VMs)
partitioned in an IBM z Systems z13 enterprise server. It is necessary to securely
authenticate the long distance connection between the Marist College data center and the
IBM Poughkeepsie data center (which houses a processing server and content fulfill-
ment engine). To authenticate traffic between these two data centers, BlackRidge
appliances implementing TAC and first packet authentication were deployed
between these locations as shown in the figure. A physical appliance was installed
at the edge of the IBM Poughkeepsie data center network, and a virtual appliance
hosted in an IBM z13 enterprise server z/VM virtual partition was installed at the
corresponding edge of the Marist College data center network.
To determine the effectiveness of the TAC appliances at cloaking attached systems,
we performed nmap scans of both the Marist College and IBM Poughkeepsie data center
networks before and after implementing the TAC appliances. Representative scans from the
Marist College and IBM Poughkeepsie data centers before implementing TAC are shown
in Figs. 4 and 5, respectively. From these scans, an attacker can clearly see the open port
22 on the Marist network, running OpenSSH 6.6.1, and a traceroute showing network hops
within the IBM network, among other reconnaissance data that would be useful in plan‐
ning an attack on these systems.
A representative scan after implementing TAC on this network is shown in Fig. 5
(results are equivalent for both the IBM and Marist network segments). Note that we
can no longer detect any open ports, including the exposure previously reported on port
22. All attempts to scan these hosts were successfully blocked by first packet authenti‐
cation, and all responses from the host due to these scans were successfully blocked by
TAC. The scan is now unable to determine the host operating systems, port or IP
addresses, or services running on the host. These results show that we can effectively
block fingerprinting of all devices located behind the TAC gateway.


Fig. 4. Marist network scan prior to implementing TAC

In order to better understand the attack vectors being used against this higher educa‐
tion cloud, we created a script in Python 2.7 to parse the syslog from a TAC appliance.
This script uses the Python regular expression operator ReGex to retrieve data from the
syslog including source and destination IP address and port numbers. This data is subse‐
quently processed through a geolocation module which we created for a related project
[7] to generate a report of the ISP, ASN, hostname, latitude, longitude, country, state/
province, and city of each attacker in JSON format. The TAC appliance was
programmed to automatically blacklist any IP address which attempted more than 100
accesses to the network within 30 s. The log parser which we have created also classifies
blacklisted IP addresses as potential DoS attacks or port scanners. We also collect data
on the number of attacks generated from unique IP addresses. All of this data is used to
create a profile of the attacker, which can be correlated with known botnets or hacker
groups.
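The parser itself is not reproduced in this paper; the fragment below is a simplified sketch (written for Python 3 rather than the authors' Python 2.7) of the kind of regular-expression extraction and blacklist classification described above. The log line layout, the field names and the classification thresholds are assumptions, and the geolocation step is omitted.

import re
from collections import Counter, defaultdict

# Assumed syslog layout: "... SRC=203.0.113.7 DST=10.1.2.3 SPT=51515 DPT=22 ..."
LINE_RE = re.compile(r"SRC=(?P<src>\d+\.\d+\.\d+\.\d+).*?DPT=(?P<dport>\d+)")

def parse_syslog(lines, max_hits=100):
    # Count unauthorized attempts per source IP and flag potential DoS sources and port scanners.
    hits = Counter()
    ports = defaultdict(set)
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        hits[m.group("src")] += 1
        ports[m.group("src")].add(int(m.group("dport")))
    flagged = {}
    for src, n in hits.items():
        if n > max_hits:  # blacklist threshold (the 30 s time window is omitted for brevity)
            flagged[src] = "port scanner" if len(ports[src]) > 10 else "possible DoS"
    return hits, flagged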


Fig. 5. Marist/IBM network scan after implementing TAC gateways

For example, during the first 12 h of monitoring the Linux One cloud after installing
the TAC appliance, there were numerous unauthorized attempts to access the system.
At this point the TAC system was placed into enforce mode, and successfully blocked
all subsequent unauthorized access attempts. The TAC appliance remained in enforce
mode for the next 10 days; a list of the top attacking IP addresses, and the top 10 attacking
countries, is shown in Figs. 6 and 7, respectively.
For example, analysis of the TAC appliance logs revealed a DoS attack against port
23 (originating from the Shandong province in China). We configured the TAC appli-
ance to block unauthorized access attempts after 10 s of continuous attempts from a
given site, and to keep these sites blacklisted for one hour. Using this technique, we
successfully blacklisted the DoS attacker while continuing to collect log information on
the attack. In this manner, we have demonstrated that the TAC appliance provides
improved protection by identifying and blocking attacks which were previously unde‐
tected on the education network.


Fig. 6. Number of attacks attempted from the top attacking source IP addresses

Fig. 7. Number of attacks attempted by each of the top attacking nations

Further, we assessed the performance logs of the IBM z Systems enterprise server
in the Marist College data center before and after these attacks. Prior to implementing
the TAC appliance, the server attempted to block unauthorized attacks using network
appliances (such as intrusion prevention systems). This approach was replaced with a
single TAC gateway, protecting all VM’s on the server at the point of entry. We further
demonstrated that the TAC appliance was able to block IP spoofing on the network. By
comparing nmap scans of the network before and after implementing the TAC appliance,
we can show that attempts to perform IP spoofing are effectively blocked by the TAC
appliance. A scan of the network using the Spoofer tool (part of BCP-38 recommended


by the National Science Foundation [7]) confirmed that both IPv4 and IPv6 packets
attempting to spoof the network were blocked (including private and routable addresses).
In a related test of egress filtering depth, the BCP-38 tracefilter test found the network
unable to spoof valid, non-adjacent source addresses through even the first IP hop.
Additional statistical data on attacks against this system was obtained using Long‐
Tail, an open source botnet classifier which we developed for a related project at Marist
College [7]. This classifier was used to identify SSH brute force botnet attacks against
the Linux One educational network, and to evaluate the effectiveness of blocking these
attacks using a conventional intrusion prevention system and the TAC appliance. For
this test, we first monitored the total number of attacks against an SSH honeypot
deployed in the Marist College network ingress from the IBM Poughkeepsie site; stat‐
istical analysis of these attacks is shown in Table 1. We then evaluated a commercially
available intrusion prevention system, the Juniper SRX 3600, under the same conditions;
results are shown in Table 2. We can see that the IPS helped reduce the number of attacks,
but did not eliminate them completely. Finally, we deployed the TAC appliance under
the same conditions; results are shown in Table 3. In this case, the combination of first
packet authentication and transport access blocking was able to successfully block all
brute force SSH attacks against the network, and demonstrated a significant improve‐
ment over the commercial IPS system alone.

Table 1. Total number of attacks against the Marist education network


Time frame    Number of days   Total SSH attempts   Average per day   Standard deviation   Median   Max     Min
Past day      1                4394                 N/A               N/A                  N/A      N/A     N/A
This month    18               183649               10202.72          5447.31              7022.5   20352   5294
Last month    30               165593               5519.77           7196.19              1194     24666   0

Table 2. Attacks against the Marist education network mitigated by conventional IPS
Time frame    Number of days   Total SSH attempts   Average per day   Standard deviation   Median   Max   Min
Past day      1                30                   N/A               N/A                  N/A      N/A   N/A
This month    18               897                  49.83             39.75                35.5     124   0
Last month    30               369                  12.30             12.22                10       43    0

Table 3. Attacks against the Marist education network mitigated by TAC gateways
Time frame    Number of days   Total SSH attempts   Average per day   Standard deviation   Median   Max   Min
Past day      1                0                    N/A               N/A                  N/A      N/A   N/A
This month    18               0                    0                 0                    0        0     0
Last month    30               0                    0                 0                    0        0     0


We have also demonstrated that a TAC gateway placed just inside the Marist College
firewall is useful in nonrepudiation of insider threats. When a bad actor inside the Marist
firewall is detected, efforts to trace the source of the attack traditionally stop at the Marist
NAT gateway. It can be a difficult, time consuming process to trace the IP address which
originated such an attack. However, a TAC gateway placed behind the Marist firewall
(on the Marist side of the NAT) can be used to authenticate the attacker’s source IP
address much more quickly and efficiently. This new functionality should be helpful not
only in discouraging insider threats, but also in helping the college comply with requests
and subpoenas from law enforcement agencies investigating such attacks.

4 Conclusions

Recognizing the importance of cybersecurity for higher education, we have developed


a novel approach to intrusion prevention and authentication for multi-site, multi-tenant
educational cloud computing environments. In particular, we have designed, tested, and
implemented this approach for a Linux community public cloud supporting education
and research, spanning two locations in New York. The approach combines BlackRidge
Technology first packet authentication and transport layer access control gateways to
block fingerprinting of key network resources. We have shown experimentally that this
approach can block denial of service attacks and network scanners, and provide geolo‐
cation attribution based on a syslog classifier. Further, this design offers lower server
utilization compared with conventional alternatives. We have also demonstrated that a
TAC gateway placed just inside the higher education institution’s network firewall is
useful in nonrepudiation of insider threats.

Acknowledgments. The authors gratefully acknowledge support of the National Science


Foundation grant Cloud Computing – Data, Networking, Innovation (CC-DNI), area 4, 15-535,
also known as “SecureCloud”.

References

1. McCarthy, S.: Pivot Table: U.S. Education IT Spending Guide, version 1, 2013–2018. IDC
publication GI255747, April 2015. http://www.idc.com/getdoc.jsp?containerId=GI255747
2. Lowendahl, J., Thayer, T., Morgan, G.: Top ten business trends impacting higher education.
Gartner Group white paper, January 2016. https://www.gartner.com/doc/3186325/top–
business-trends-impacting
3. Grama, J.: Data breaches in higher education. Educause Center for Analysis and Research,
May 2014. https://library.educause.edu/resources/2014/5/just-in-time-research-data-
breaches- in-higher-education
4. FireEye white paper: Cyber threats to the education industry, March 2016. https://
www.fireeye.com/content/dam/fireeye-www/current-threats/pdfs/ib-education.pdf
5. Stoneburner, G., Goguen, A., Feringa, A.: Risk management guide for IT systems. NIST
special publication 800-30, September 2012. http://csrc.nist.gov/publications/
PubsSPs.html#800-30

zamfira@unitbv.ro
Advanced Intrusion Prevention 143

6. Guilen, A., Rutten, P.: Driving Digital Transformation through Infrastructure Built for Open
Source: How IBM LinuxONE Addresses Agile Infrastructure Needs of Next Generation
Applications. IDC white paper, December 2016. https://public.dhe.ibm.com/common/ssi/ecm/
lu/en/lul12345usen/LUL12345USEN.PDF. Last accessed 22 Oct 2016
7. DeCusatis, C., Liengtiraphan, P., Sager, A., Pinelli, M.: Implementing zero trust cloud
networks with transport access control and first packet authentication. In: Proceedings of IEEE
International Conference on Smart Cloud, New York, NY, 18–21 November 2016
8. Amazon Web Services Identity and Access Management, April 2016. https://
aws.amazon.com/iam/. Last Accessed 20 May 2016
9. BlackRidge white paper: Dynamic network segmentation, August 2012. http://www.
blackridge.us/images/site/page-content/BlackRidge_Dynamic_Network_Segmentation.pdf

Remote Laboratory for Learning Basics
of Pneumatic Control

Brajan Bajči(✉), Jovan Šulc, Vule Reljić, Dragan Šešlija, Slobodan Dudić, and Ivana Milenković

Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia
brajanbajci@uns.ac.rs

Abstract. In this paper, a remote laboratory for learning the basic principles of
pneumatic control and realizing pneumatic control schemes is described. The goal is
to develop a remote system for our laboratory through which remote participants
(students, engineers, etc.) can learn the basic principles of pneumatic control. The
first stage of development, a single complex pneumatic scheme with which several
smaller, simpler tasks can be realized, is shown, together with a user interface for
the remote laboratory.

Keywords: Distance learning of pneumatics · Remote pneumatic control laboratory

1 Introduction

Following constant technological progress and the increasing electronic and information
literacy of new generations of students, a growing number of faculties and universities
around the world are introducing distance learning [1]. Distance learning improves the
quality of teaching activities [2] because students have the opportunity to organize their
own timetable and activities. The paper [3] describes one example of a system that enables
distance learning in the field of electrical engineering; in that system, several smaller
electronic circuits are connected by means of one complex scheme. The aim of this paper
is to develop a remote laboratory that enables remote participants to learn the basic
principles of pneumatic control.
Pneumatic systems find application in various branches of industry owing to the many
advantages of compressed air [4]. For this reason, the drawing of pneumatic schemes and
pneumatic control are studied both in secondary schools and at universities. The remote
participants (clients) can be students or engineers from industry. The great advantage of a
remote laboratory for students is that they can complete a practical exercise even if they
are absent from regular classes. The advantage for engineers, who are the driving force of
modern industry, is that they can improve their skills throughout their working lives
(Life-Long Learning, LLL) without being absent from work. In addition, descriptions of
the basic principles of pneumatic control, as well as of the individual components, will be
available to all clients.



At the Faculty of Technical Sciences in Novi Sad, the basics of pneumatic control are
studied in the Mechatronics and Industrial Engineering study programs. In addition to
its core activities, the faculty has been the authorized didactic center of the German
company FESTO for the Western Balkans since 1984. During this period, the Faculty of
Technical Sciences has organized a number of licensed seminars in the field of pneumatic
and electro-pneumatic control, as well as programmable logic controller (PLC)
programming, for participants from industry. In the last few years, a slight decrease in
the number of participants in these seminars has been observed. An analysis of the causes
of this trend concluded that, due to increasing work obligations, potential participants
are less able to be absent from work and attend seminars organized in this center. It
therefore became necessary to develop a remote laboratory that makes experiments
available to those users who, for these reasons, are unable to attend seminars in person.
On the other hand, in the last three years the Faculty of Technical Sciences was a
member of the TEMPUS project Building Network of Remote Labs for Strengthening
University-Secondary Schools Collaboration (NeReLa), one of whose aims was the
development of remote laboratories. Thanks to the knowledge acquired during this
project, and bearing in mind the need to make pneumatic-control experiments available
to remote users, a laboratory for distance learning of the basic principles of pneumatic
control, described below, has been developed.

2 Basics of Pneumatic Control

Pneumatic systems consist of an interconnection of various groups of components.
These groups form a control path for compressed-air flow, starting from the input or
signal components (such as push-button valves), through processing components, up to
the actuating or power components (such as pneumatic cylinders). Pneumatic control
schemes are composed of five basic levels (Fig. 1): 1. power components (actuators);
2. control elements; 3. processing elements; 4. input elements; 5. energy supply elements.
A special group of elements, not necessarily present in every pneumatic system, are
elements that regulate the velocity or pressure of the actuators; they are marked as a
special level 1a.
The course on the basics of pneumatic control at our faculty consists of eleven short
examples. All of these examples are represented within the scope of a simple pneumatic
system. In this way, students have the opportunity to learn the direct and indirect control
of single-acting and double-acting pneumatic cylinders, the application of 2/2, 3/2, 4/2
and 5/2 command valves (mechanically, pneumatically or electrically actuated, etc.),
and the application of logic components such as AND and OR modules. In this paper,
all eleven examples are combined in one pneumatic control scheme, shown in Fig. 1.
In traditional hands-on learning, practitioners connect the system components to each
other using pneumatic tubes. In order to transform such a system into a remote learning
system, clients activate or deactivate 2/2 electrically actuated command valves, marked
red in the control scheme (Fig. 1), and thereby simulate the physical interconnection of
the components.


Fig. 1. Developed pneumatic scheme


For better understanding, one example is described in more detail. If a client wants to
indirectly control a single-acting pneumatic cylinder, the following steps are necessary
(Fig. 1); a software sketch of this sequence follows the list:
1. Activating the 0V1 valve (2/2) allows the supply of compressed air to the service
unit (0Z);
2. Activating the 1V2 and 1V3 valves (2/2) allows the flow of compressed air from the
service unit to the electrically actuated 3/2 valve (1S2) and to the pneumatically
actuated 3/2 valve (1V6);
3. Activating the 1V5 valve (2/2) allows the flow of compressed air from the electrically
actuated 3/2 valve (1S2) to control connector 12 of the pneumatically actuated 3/2
valve (1V6);
4. Activating the 1V9 valve (2/2) allows the flow of compressed air from the pneumatically
actuated valve (1V6) to the single-acting pneumatic cylinder (1A);
5. After this simulated physical interconnection of the components, activating the
electrically actuated 3/2 valve (1S2) makes the single-acting pneumatic cylinder (1A)
extend;
6. Deactivating the electrically actuated 3/2 valve (1S2) makes the single-acting
pneumatic cylinder (1A) retract to its initial position.
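As an illustration only, the following sketch shows how such an exercise could be represented on the software side: the routing valves that emulate the tubing (steps 1–4) and the command valve toggled by the client (steps 5–6). This is a hypothetical sketch, not the authors' implementation, and the helper that drives the controller outputs is only stubbed.

```typescript
// Hypothetical sketch: one exercise = routing valves that emulate tubing,
// plus the command valve the client actuates during the experiment.
interface Exercise {
  name: string;
  routingValves: string[]; // 2/2 valves that simulate tube connections
  commandValve: string;    // valve the client actuates (e.g. 1S2)
  actuator: string;        // cylinder driven in this exercise
}

const indirectSingleActing: Exercise = {
  name: "Indirect control of a single-acting cylinder",
  routingValves: ["0V1", "1V2", "1V3", "1V5", "1V9"],
  commandValve: "1S2",
  actuator: "1A",
};

// In a real system this would map a valve tag to one digital output of the
// controller; here it only logs the action.
function setDigitalOutput(valve: string, on: boolean): void {
  console.log(`${valve} -> ${on ? "ON" : "OFF"}`);
}

function prepareExercise(ex: Exercise): void {
  ex.routingValves.forEach((v) => setDigitalOutput(v, true)); // steps 1-4
}

function extendCylinder(ex: Exercise): void {
  setDigitalOutput(ex.commandValve, true);  // step 5: cylinder 1A extends
}

function retractCylinder(ex: Exercise): void {
  setDigitalOutput(ex.commandValve, false); // step 6: cylinder 1A retracts
}

prepareExercise(indirectSingleActing);
extendCylinder(indirectSingleActing);
retractCylinder(indirectSingleActing);
```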

3 Remote Control of the System

As already mentioned, the pneumatic scheme developed in this paper makes it possible
to realize eleven different smaller exercises related to pneumatic control. As can be seen
in Fig. 1, a large number of electro-pneumatic command valves are used. For this reason,
a controller with a large number of digital outputs is required for the remote control of
the system. In this work, a modular controller, a CompactRIO, is used for this purpose.
A client accesses our laboratory through the CEyeClon platform [5, 6] and only needs
to have the CEyeClon Viewer software installed. An access key for the experiment must
be requested from the administrators. Figure 2 shows the communication paths in our
system. When the client logs into the system, he or she connects over the Internet to a
remote computer that is physically connected to the controller. Communication between
the PC and the CompactRIO controller uses the TCP/IP protocol. The electro-pneumatic
command valves are connected to the digital outputs of the controller, and live monitoring
is provided via a web camera. After logging in, the client launches a file called “Remote
Laboratory for Learning Basics of Pneumatic Control” from the desktop.
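For illustration only, the sketch below shows how the PC could forward a valve command to the controller over TCP/IP. The paper does not specify the message format, so the text protocol, port and address used here are assumptions; the CompactRIO side is assumed to run a LabVIEW TCP listener that accepts such commands.

```typescript
// Hypothetical sketch of the PC-to-controller link (Node.js "net" module).
import * as net from "net";

function sendValveCommand(host: string, port: number,
                          valve: string, on: boolean): Promise<void> {
  return new Promise((resolve, reject) => {
    const socket = net.createConnection({ host, port }, () => {
      // Illustrative text protocol: "SET <valve tag> <0|1>"
      socket.write(`SET ${valve} ${on ? 1 : 0}\n`);
      socket.end();
      resolve();
    });
    socket.on("error", reject);
  });
}

// Example: energize routing valve 1V5 on the lab controller (address assumed).
sendValveCommand("192.168.0.10", 5000, "1V5", true)
  .catch((err) => console.error("controller unreachable:", err));
```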
The homepage of the user interface then opens in the Internet browser with a list of
exercises. The exercises on this page are divided into two groups. The first group relates
to the direct control of a pneumatic actuator and consists of two exercises, one for a
single-acting and one for a double-acting cylinder. The second group relates to the
indirect control of a pneumatic actuator and contains nine different exercises. The user
can choose between two languages, English and Serbian. Selecting an exercise opens a
new window in the browser. Figure 3 shows the user interface for the first exercise of
indirect control.

Fig. 2. Ways of communications

The first thing that the client notices on this page is the title of the exercise. Below the
title, the text of the exercise and a sketch of its physical realization are located.
On the left side of the user interface, below the text of the exercise, there is an area for
drawing the pneumatic scheme. At the beginning, only the basic components necessary
for the realization of the selected exercise, such as cylinders, valves and sensors, are
placed in this area.

Fig. 3. User interface – indirect control: the first exercise


Pressing the mark of one of the components opens a pop-up window containing the
description and a picture of the selected component, which helps the client to better
understand its basic function. In the top left corner, a legend explains the meaning of the
colors of the pneumatic tubes in the scheme. The tubes are represented by lines: a line is
red when the tube is under pressure and black when it is not; a green line represents a
control signal, and a blue line represents an exhausted tube. Below the legend is an area
with the commands for connecting certain components. These commands appear and
disappear, one after another, as they are pressed. Pressing a command such as “Connect
the push-button with the command valve” draws a line between these components and,
at the same time, sends a command to the controller. The corresponding 2/2 valve is
activated (in this example 1V6, shown in Fig. 1) and the push-button and the command
valve are physically connected. The light turned on at that valve can then be seen on the
camera.
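A hypothetical sketch of the browser-side handling of such a connect command is given below: the line is drawn on the scheme and a request is sent to energize the corresponding 2/2 valve. The command-to-valve mapping and the endpoint are assumptions for illustration; the real JavaScript interface is not described in this detail.

```typescript
// Hypothetical sketch of a "connect" command handler in the browser UI.
const connectionValveMap: Record<string, string> = {
  "Connect the push-button with the command valve": "1V6",
};

function drawConnectionLine(command: string): void {
  // In the real UI this would draw the tube on the scheme; here it only logs.
  console.log(`drawing line for: ${command}`);
}

async function onConnectCommand(command: string): Promise<void> {
  const valve = connectionValveMap[command];
  if (!valve) {
    return;
  }
  drawConnectionLine(command);
  // Assumed server endpoint that forwards the request to the controller.
  await fetch("/valve", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ valve, state: true }),
  });
}
```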
Once the components are connected, the client presses the 1S1 button at the bottom of
the user interface. A simulation is then executed on the pneumatic scheme: all the
components are activated and the cylinder extends, and the color of the pneumatic tubes
changes depending on the flow of the compressed air. The extension of the cylinder can
also be seen on the camera. By releasing the push-button, in this exercise, the cylinder
retracts. The user interface works on the same principle for the other exercises. At the top
of the interface there are two buttons for changing the exercise. The user interface was
implemented in the JavaScript programming language, while LabVIEW was used to
program the controller.

4 Conclusions

In this paper, a remote laboratory for learning the basics of pneumatic control has been
presented. A single pneumatic control scheme, used for the realization of several smaller
exercises, was developed, and the use of the developed user interface was explained. The
CEyeClon platform is used for remote control, which provides complete control over
client access to the system. This system can be used as an integral part of distance
learning. The development of remote laboratories like the one described in this paper is
very important for the improvement of learning activities: it makes it possible to attract
the attention of a large number of new students and engineers to the study programs in
which the laboratories are used, as well as to enable further education for anyone
interested in pneumatic control.

References

1. Horvat, A., Dobrota, M., Krsmanovic, M., Cudanov, M.: Student perception of Moodle
learning management system: a satisfaction and significance analysis. Interact. Learn. Environ.
23(4), 515–527 (2015)
2. Rodríguez-Sevillano, A.A., Barcala-Montejano, M.A., Tovar-Caro, E., López-Gallego, P.:
Evolution of teaching tools and the learning process: from traditional teaching to edX courses. In:
13th International Conference on Remote Engineering and Virtual Instrumentation (REV), UNED,
Madrid, 24–26 February 2016, pp. 42–49. IEEE (2016). ISBN 978-1-4673-8245-8


3. Bjekić, M., Božić, M., Rosić, M., Antić, S.: Remote experiment: serial and parallel RLC circuit.
In: 3rd International Conference on Electrical, Electronic and Computing Engineering,
IcETRAN 2016, Zlatibor, Serbia, 13–16 June 2016. ISBN 978-86-7466-618-0
4. Šešlija, D., Milenković, I., Dudić, S., Šulc, J.: Improving energy efficiency in compressed air
systems – practical experiences. Thermal Sci. (2016). ISSN 0354-9836
5. Zurcher, T.: Distance education in energy efficient drive technologies by using remote
workplace. In: 11th International Conference on Remote Engineering and Virtual
Instrumentation (REV), 26–28 February 2014. IEEE, ISBN 978-1-4799-2024-2
6. Zurcher, T., Rojko, A., Hercog, D.: Education in industrial automation control by using remote
workplaces. In: 3rd Experiment@ International Conference Online Experimentation (exp.at
2015), University of the Azores, 2–4 June 2015, Ponta Delgada. IEEE, ISBN
978-989-20-5753-8

The Augmented Functionality of the Physical
Models of Objects of Study for Remote
Laboratories

Mykhailo Poliakov1(✉), Karsten Henke2, and Heinz-Dietrich Wuttke2

1 Zaporizhzhya National Technical University, Zaporizhia, Ukraine
polyakov@zntu.edu.ua
2 Ilmenau University of Technology, Ilmenau, Germany
{karsten.henke,dieter.wuttke}@tu-ilmenau.de

Abstract. Remote laboratories are an important and rapidly growing component of
distance learning systems for engineering specialties. These labs allow remote users to
enter the data of a technical experiment, which is transmitted to the server where it is
converted into control signals of a physical and/or virtual model of the object of the
experiment. The level of remote laboratories in engineering education largely depends on
the level of the models of the objects of study that they use. The use of physical models
in remote laboratories has raised a number of issues for their creators and operators: a
limited range of experiments with the physical model, the complexity of modernization,
the high cost of new models, and others. The aim of the present work is to extend the
scope of existing physical models. This goal is achieved by increasing or adding
functionality of the physical models through the use of augmented reality, augmented
virtuality and augmented behavior of the object of study. The work describes the variety
and the advantages of hybrid models and of interfaces that enhance the functionality, and
lists examples of added functionality.

Keywords: Remote laboratories · Physical models · Augmented functionality

1 Introduction

The advantages of distance education stimulate the improvement of its components [1],
among which the rapid development of remote laboratories (RL) has taken place in the
last decade [2–4]. These laboratories include a server with a set of physical models of
the object of study. For example, the laboratories of the Grid of Online Laboratory
Devices Ilmenau (GOLDi) contain physical models (PM) of elevators, a 3-axis portal
and a production cell, together with devices for their control [5]. Users enter the data of
a technical experiment from a remote computer. This information is sent over the
Internet to the server of the laboratory, where it is converted into control signals of the
physical and/or virtual model of the object of the experiment. The progress and results
of the experiment are perceived by the user by means of the user perceptual interface
[6]. The outputs of this interface are a user perceptual image and the flow of user
commands. An example of the User Perceptual Image (UPI) is a computer screen with
a web image of the physical model and the visual part of the virtual model of the object
of the experiment [6].

However, both from the user’s point of view and from the point of view of the designer
of a remote laboratory, the object of the experiment is represented by a system with
physical and virtual elements that interact with each other and with the environment.
For the description of such objects, the term CPS (Cyber-Physical System) is used in a
number of cases; it denotes the connection of physical objects with computational
algorithms [7]. Despite the importance of solving the issues of managing remote users
of an RL and of students’ real-time interaction with the model of the object, a necessary
condition for effective use of an RL is a sufficient number of experiments with models
of the object of study, as well as the quality and informative value of the User Perceptual
Image.
Section 2 of the article gives an overview of publications on augmented reality
technologies that the authors use to extend the functionality of the PM of an RL. Section 3
describes the models of the object of study and the interfaces involved in the formation
of the image perceived by the user. Conclusions are set forth in Sect. 4, followed by the
Acknowledgment section.

2 State of the Art

Contemporary research is aimed primarily at improving the quality and informational
content of the User Perceptual Image. For this, Augmented Reality (AR) technology is
used. AR is a scientific discipline whose essence is disclosed in numerous publications
(e.g. [8–12]); it holds regular scientific conferences [13] and is covered by a dedicated
journal [14].
AR themes are reflected in previous sessions of the International Conference on
Remote Engineering and Virtual Instrumentation. In [15, 16], the change of the LEDs’
state of a physical model of a traffic light is displayed on the remote user’s screen in a
video window. The RL software detects certain changes in the frames of this image,
which are classified as events. The events control an overlay video superimposed on
elements of the physical model’s video; in this case, images of moving vehicles are
superimposed.
The focus of this work is to use the added functionality of the RL models to increase the
quantity and quality of the experiments without significant changes in the physical
models.
The concept of functional models of objects of study was mentioned in [17] in
connection with the analysis of the structure of the hybrid model. That work presents an
example of the behavior of the physical model of the traffic light in the RL RELDES [18].
The experiment is carried out with a hybrid model, which includes a physical model of
a traffic light, complemented by fault-diagnosis behavior and a simulation of defects of
the traffic light lamps.
In [19] it is proposed to add a virtual physical model of the elevator in the GOLDi lab [5].
This model was originally used to explore the construction of FSM controls. By adding
a virtual model of the flow of user commands and a virtual queue of users waiting for
service, experiments with more complex control algorithms, experiments on queueing
theory, and performance evaluation of real-time systems become possible (a hypothetical
sketch of such a call stream is shown below). New results in this direction are presented
in the following sections.
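To make the idea concrete, the following sketch generates a random stream of floor calls that such a virtual environment module could feed to the elevator control FSM under test. It is not part of GOLDi; the call rate, the four-floor assumption and the distribution are illustrative only.

```typescript
// Hypothetical sketch of an augmented command stream for an elevator model.
interface FloorCall {
  floor: number;                // 0..3 for a 4-floor elevator model
  direction: "up" | "down";
  timestamp: number;            // ms since the start of the experiment
}

function generateCallStream(durationMs: number, meanIntervalMs: number): FloorCall[] {
  const calls: FloorCall[] = [];
  let t = 0;
  while (t < durationMs) {
    // exponential inter-arrival times give a simple Poisson-like call flow
    t += -meanIntervalMs * Math.log(1 - Math.random());
    const floor = Math.floor(Math.random() * 4);
    const direction: "up" | "down" =
      floor === 0 ? "up" : floor === 3 ? "down" : Math.random() < 0.5 ? "up" : "down";
    calls.push({ floor, direction, timestamp: Math.round(t) });
  }
  return calls;
}

// Example: a 10-minute experiment with one call every 20 s on average.
console.log(generateCallStream(10 * 60 * 1000, 20 * 1000).length, "calls generated");
```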


3 Added Functionality for Improving the User Perceptual Image

The models and interfaces of the RL involved in the formation of the User Perceptual
Image are shown in Fig. 1.

Fig. 1. Models and interfaces RL involved in the formation of the user perceptual image

The physical interface includes the flow of information from the physical external
environment and the physical model of the object of study to their hybrid model, as well
as the flow of control actions on the object of study by the hybrid model.
The virtual interface includes information from the virtual external environment and
the virtual models of the object of study to their hybrid model, the flow of control actions
on virtual objects of study, and the flow of information used to synchronize the physical
and virtual models of the object of study.
The role of the hybrid model is the selection, switching and integration of the
information coming through the physical and virtual interfaces, depending on the
selected RL operating mode. In addition, the hybrid model generates the streams of
source data for the media model of the experiment and receives a stream of user
commands for the control of the experiment.
The network interface performs the standard functions of information exchange
between the server and the remote RL user’s computer.
On the remote user’s computer, the media (visual, audio, etc.) model of the experiment
runs, and the user commands to manage the hybrid/virtual/physical model of the object
of study are processed.
Finally, the user interface implements the interactions “user’s senses – output device”
and “user – input device”.
A physical model of the object of study is the means for the study of the control system,
which also includes the control device and the environment.
The interface of the physical model with the rest of the system (the physical
interface) is implemented using sensors and actuators, whose composition is shown in
Fig. 2. We distinguish physical parameter sensors and image sensors. Examples of the
first kind are sensors of electric currents and voltages, of the speed of movement of the
object of study, and of the temperatures of object elements; an example of the second
kind is a web camera.


Fig. 2. Structure of the physical interface

The camera receives an image of the physical model and the environment during the
experiment, although other images are also possible: for example, acoustic, thermal or
magnetic field images.
If environmental parameters are significant for control, environment parameter sensors
are applied.
Actuators form control actions on the control object. As a result of these control
actions, the values of the signals at the inputs of the sensors and/or the image of the
control object are changed. For example, if the actuator is a heater, it will change the
temperature and possibly other parameters of the control object (CO); if the actuator
(e.g., a lighting device or a web-camera rotate/move device) operates in conjunction with
image sensors, it will change the image of the control object.
In control theory, the values obtained from the sensors are called observed variables,
and the signals at the outputs of the actuators are referred to as controlled variables. In
general, the control object may have a multitude of unobservable and uncontrollable
variables.
The hardware of the physical model control unit, such as programmable logic
controllers (PLC) and microprocessor (MP) boards, performs analog-to-digital
conversion of the parameters and controls the actuators.
The hardware of the image control unit performs a similar operation relative to the
image of the object of study.
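A minimal sketch, under assumptions of our own, of how the physical interface described above could be represented in software is given below; the type and member names are illustrative and are not taken from the GOLDi implementation.

```typescript
// Hypothetical model of the physical interface: parameter sensors, image
// sensors and actuators behind one common abstraction.
interface ParameterSensor {
  kind: "parameter";
  name: string;                 // e.g. "carriage position"
  read(): number;               // observed variable, already A/D converted
}

interface ImageSensor {
  kind: "image";
  name: string;                 // e.g. "web camera"
  capture(): Uint8Array;        // encoded frame of the physical model
}

interface Actuator {
  name: string;                 // e.g. "hoist motor", "camera pan/tilt"
  write(value: number): void;   // controlled variable
}

interface PhysicalInterface {
  sensors: Array<ParameterSensor | ImageSensor>;
  actuators: Actuator[];
}

// The hybrid model only sees observed and controlled variables; anything the
// physical model does not expose stays unobservable or uncontrollable.
function observe(pi: PhysicalInterface): Record<string, number> {
  const snapshot: Record<string, number> = {};
  for (const s of pi.sensors) {
    if (s.kind === "parameter") {
      snapshot[s.name] = s.read();
    }
  }
  return snapshot;
}
```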
A virtual interface is a set of services that complement, and in some cases replace,
information about the object of study and the environment received through the physical
interface. There are also services that generate a stream of user commands. A virtual
interface is implemented as a set of software modules running on the RL server or the
remote user’s computer.


Each module implements a specific set of virtual functions in the course of the
experiment on the object of study.
A module of the virtual interface is described by the object and the form of the generated
functionality. The object of the functionality may be the object of study, its physical
model, or the external environment (including the flow of user commands), as well as the
hardware and software of the RL. The types of generated functionality are the image of
the object, the UPI, parameters, and the behavior of the object.
The scale for evaluating the extent to which the interface module’s functionality
conforms to the object’s functionality can have the following gradations: absent (f0),
reduced (f1), equivalent to the model (f2), advanced (f3), equivalent to the object (f4),
added (f5), and new (f6). A comparison of functionality is shown in Fig. 3.

Fig. 3. A comparison of the functionality of the virtual interface: fCO, fPM - functionality of the
controlled object and its physical model

The bases of coordinates of the simulated object parameters of the generated
functionality are: observable and controlled variables; unobservable and uncontrollable
variables; and the full range of observed, unobserved, controlled and uncontrollable
variables.
The bases of simulated behavior regarding the objectives of the experiment and the
modes of use of the object of study are: behavior in normal mode; behavior in emergency
mode; control of the technical state of the elements of the object of study; and control of
the external environment.
The bases of simulated behavior with respect to the selected type of control are:
discrete control behavior on the basis of an FSM; continuous control behavior based on
the structure of the control system, the transfer functions of the elements of the object of
study (or its physical model) and the regulators; and hybrid control behavior, in which
the current state of the system is formed by means of discrete control while the actions
in this state are determined by the model of continuous control.
The time basis of simulated images, parameters, and behaviors includes the current
time, historical trend and forecast.
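The enumerations above can be summarized in a compact descriptor for a virtual-interface module. The following sketch is only an illustration of that classification; the field and value names are assumptions, not an existing GOLDi data structure.

```typescript
// Hypothetical descriptor of one virtual-interface module.
type Gradation = "f0" | "f1" | "f2" | "f3" | "f4" | "f5" | "f6"; // absent .. new

type FunctionalityObject =
  | "object of study" | "physical model" | "external environment"
  | "user command stream" | "RL hardware/software";

type FunctionalityType = "image" | "UPI" | "parameters" | "behavior";

type BehaviorBasis =
  | "normal mode" | "emergency mode"
  | "technical state control" | "environment control";

type TimeBasis = "current" | "historical trend" | "forecast";

interface VirtualInterfaceModule {
  object: FunctionalityObject;
  type: FunctionalityType;
  gradation: Gradation;
  behaviorBasis?: BehaviorBasis;
  timeBasis: TimeBasis[];
}

// Example: a defect simulator that adds emergency-mode behavior (gradation f6).
const defectSimulator: VirtualInterfaceModule = {
  object: "physical model",
  type: "behavior",
  gradation: "f6",
  behaviorBasis: "emergency mode",
  timeBasis: ["current"],
};
```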
The virtual image generated by the modules of the virtual interface depends on the
modules that define the current virtual behavior and on the parameters of the object of
study and the external environment.


The main categories that characterize a virtual image are: realism/metaphor; degree of
coverage (whole/part) and scale; dimension (2D/3D, mono/stereo) and format in pixels;
type of media (text/photo/video); view direction (towards the object/from the object);
illumination angle and the number of viewpoints; visualization object (object of
study/visual model/external environment/trend/design models such as FSM graph,
chart, UML diagram, control program text); and consistency with the user’s senses
(“visible”/“invisible” visualization).
The virtual image must satisfy the requirements of ergonomics and technical
aesthetics.
As mentioned above, the functionality generated by the virtual interface is transmitted
to the hybrid model. The structural scheme of the hybrid model is shown in Fig. 4.

Fig. 4. Structural scheme of the hybrid model

The configuration of the hybrid model is controlled by the RL. Images, parameters, and
states are connected through the source selector. Via the destination selector setting, the
involved modules receive the input information necessary for initialization and
synchronization.
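As an illustration of the source/destination selection just described, the following hypothetical sketch shows how a hybrid model could switch or merge parameter streams depending on the configured operating mode; all names are assumptions.

```typescript
// Hypothetical sketch of stream selection in the hybrid model.
type StreamSource = "physical" | "virtual";

interface ParameterFrame {
  source: StreamSource;
  values: Record<string, number>;
}

interface HybridModelConfig {
  imageSource: StreamSource;
  parameterSource: StreamSource | "merged";
}

function selectParameters(config: HybridModelConfig,
                          physical: ParameterFrame,
                          virtual: ParameterFrame): ParameterFrame {
  if (config.parameterSource === "physical") {
    return physical;
  }
  if (config.parameterSource === "virtual") {
    return virtual;
  }
  // "merged": virtual values complement or override the physical ones
  return { source: "virtual", values: { ...physical.values, ...virtual.values } };
}
```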
The physical model (PM) and the software of the image control units, with the
corresponding FSM, take up important places in the structure of the hybrid model. The
presence of archived experiment data on a machine-readable carrier can expand the
information basis of the researcher and allows the use of statistical research methods.
The results of the hybrid model, in the form of a stream of images and of values of the
media model’s tags, are transmitted via the network interface to the remote user’s
computer. The standard network is the Internet; in the context of its use in the RL, it must
meet the requirements on data exchange rate, especially if the UPI contains complex
images. The details of the transformation of the tag values of the displayed objects of
the model into the elements of the UPI image are given in [17].
Examples of additional features that provide augmented functionality are given in
Table 1 for the known physical models of an elevator and a traffic light.


Table 1. Examples of additional features that provide augmented functionality

The physical model/RL | UPI type | Functionality type | The object of study
Elevator 4 floors/GOLDi | Video: the elevator model moving (UPI1); animation: the lamp indication, the cabin door opening/closing (UPI2) | Functionality in the normal mode (f2) | FSM control for normal mode
Elevator 4 floors/GOLDi | UPI1 + (UPI2 + animation of the emergency brake triggering and the cable termination process) (UPI3) | Functionality in the normal and emergency modes (f3 or f4), defects simulator (f6) | FSM control for normal and emergency modes
Elevator 4 floors/GOLDi | UPI1, UPI2 + augmented visuality of the users’ command stream (UPI4): thread of calls of the elevator from the floors | Functionality in the normal mode (f2) and the simulation | Technical state of the equipment of the physical model; FSM for the optimal operation of the elevator in the call flow; queues of passengers and capacity; the interaction of software components in real time
Traffic light/RELDES | The sequence of switching the LEDs (video and animation) (UPI5) | Functionality in the normal mode (f2) | C programming of the control FSM
Traffic light/RELDES | UPI5 + augmented visuality with the time displays (UPI6) | Functionality in the normal mode (f3) | C programming of the control FSM
Traffic light/RELDES | UPI5 + emergency modes indication (UPI7) | Functionality in the normal and emergency modes (f3 or f4), defects simulator (f6) | FSM control for normal and emergency modes
Traffic light/RELDES | UPI5 + augmented virtuality of the cars stream [16] | Functionality in the normal mode (f2) + the functionality of highlighting events of the video (f6) | Traffic light FSM based on the cars’ position and stream

4 Conclusions

1. The specificity of the RL experiments is that their results are perceived by the user
remotely from the object of study. Therefore, the content and technology of creating
the UPI are key to improving the quality and diversity of experiments. Today the main
varieties of UPI are a “live” web picture of the object in the experiment and an
animated image controlled by the tags of the virtual model.
2. The key technology for improving the UPI is “augmented” technology. The varieties
of this technology used so far, augmented reality and augmented visuality, as a rule do
not affect the behavior of the object of study. The behavior of the object implies the
dependence of the output response on the internal state. Discrete behavior is specified
using an FSM; continuous behavior is specified using the structure of feedback loops
and the transfer functions of the regulators. A new behavior leads to new functionality
of the object of study, which allows us to speak about “augmented functionality”.
3. The implementation of augmented functionality in an RL involves the interaction of
a number of interfaces (physical, virtual, network, and perceptual user interface) and
models (physical, virtual, hybrid and visual). Added functionality is synthesized by
the modules of the virtual interface, managed by the hybrid model.
4. The following concepts are associated with added functionality: object functionality,
gradations of addition, the basis of the coordinates of the simulated parameters, the
basis of simulated behavior regarding the objectives of the study/use of the object, the
basis of simulated behavior with respect to the selected type of control, and the time
basis of the simulated images, parameters, and behaviors. These and other categories
of functionality were analyzed.
5. It is proposed to use the term “media model of the object of study in the external
environment” instead of the term “visible model”. The categories of images that are
reflected in the UPI when added functionality is used were analyzed.
The proposed methods of enhancing the functionality of the models of the object of
study (Table 1) are to be used for expanding the range of experiments with the models
of the GOLDi remote laboratories at Ilmenau University of Technology and
Zaporizhzhya National Technical University.

Acknowledgment. This work was partially carried out within the European Community Project
“Tempus” ICo-op: Industrial Cooperation and Creative Engineering Education based on Remote
Engineering and Virtual Instrumentation 530278-TEMPUS-1-2012-1-DE-TEMPUS-JPHES.
The authors are grateful to Ilmenau University of Technology (Germany) and the Zaporizhzhya
National Technical University (Ukraine) for the opportunity to work with remote laboratory
GOLDi.

References
1. Azad, A.K.M., Auer, M.E., Harward, V.J. (eds.): Internet Accessible Remote Laboratories:
Scalable E-Learning Tools for Engineering and Science Disciplines, Engineering Science
Reference, 645 p. (2012)
2. Gravier, C., et al.: State of the art about remote laboratories paradigms - foundations of
ongoing mutations. Int. J. Online Eng. (iJOE) 4(1), 1–9 (2008)
3. Remote and virtual tools in engineering: monograph/general editorship, Karsten Henke,
Dike Pole, Zaporizhzhya, Ukraine, 250 p. (2015). ISBN 978–966–2752–74–8


4. Gomes, L., Bogosyan, S.: Current trends in remote laboratories. IEEE Trans. Industr.
Electron. 56(12), 4744–4756 (2009). doi:10.1109/TIE.2009.2033293
5. GOLDi-labs cloud Website: http://goldi-labs.net
6. Richir, S., Fuchs, P., Lourdeaux, D., Millet, D., Buche, C., Querre, R.: How to design
compelling virtual reality or augmented reality experience? Int. J. Virtual Reality 15(1), 35–
47 (2015)
7. Terkowsky, C., Jahnke, I., Pleul, C., Licari, R., Johannssen, P., Buffa, G., Heiner, M.,
Fratini, L., Valvo, E.L., Nicolescu, M., Wildt, J., Tekkaya, A.E.: Developing tele-operated
laboratories for manufacturing engineering education. Platform for eLearning and Telemetric
Experimentation (PeTEX). Int. J. Online Eng. (iJOE) 6, 60–70 (2010). http://dx.doi.org/10.
3991/ijoe.v6s1.1378. REV2010, Vienna, IAOE, Special Issue 1
8. Wikipedia 2016: http://en.wikipedia.org/wiki/Augmented_reality
9. Cao, M., Li, Y., Pan, Z., Csete, J., Sun, S., Li, J., Liu, Y.: Creative educational use of virtual
reality: working with second life. IEEE Comput. Graph. Appl. 34(5), 83–87 (2014)
10. Hughes, C.E., Stapleton, C.B., Hughes, D.E., Smith, E.M.: Mixed reality in education,
entertainment, and training. IEEE Comput. Graph. Appl. 25(6), 24–30 (2005)
11. Schaf, F.M., Pereira, C.E.: Integrating mixed-reality remote experiments into virtual learning
environments using interchangeable components. IEEE Trans. Industr. Electron. 56, 4776–
4783 (2009)
12. Milgram, P., Colquhoun, H.: A taxonomy of real and virtual world display integration. In:
Ohta, Y., Tamura, H. (eds.) Merging Real and Virtual Worlds, pp. 5–30. Ohmsya Ltd.,
Springer (1999)
13. Vlada, M., Albeanu, G.: The potential of collaborative augmented reality in education. In:
The 5th International Conference on Virtual Learning, ICVL 2010. Targu – Mure, Romania,
29–31 October 2010, pp. 39–43 (2010)
14. Int. J. Virtual Reality. http://www.ijvr.org
15. Maiti, A.A.K., Maxwell, A.: Variable interactivity with dynamic control strategies in remote
laboratory experiments. In: International Conference on Remote Engineering and Virtual
Instrumentation, REV2016, Madrid, Spain, 24–26 February 2016, pp. 399–407 (2016)
16. Smith, M., Maiti, A., Maxwell, A.D., Kist, A.A.: Augmented and mixed reality features and
tools for remote laboratory experiments. In: Int. J. Online Eng. (iJOE) 7, 45–52 (2016).
http://dx.doi.org/10.3991/ijoe.v12i07.5851. Vienna, IAOE
17. Poliakov, M., Larionova, T., Tabunshchyk, G., Parkhomenko, A., Henke, K.: «Hybrid
models of studied objects using remote laboratories for teaching design of control systems».
Int. J. Online Eng. (iJOE) 9, 7–13 (2016). http://dx.doi.org/10.3991/ijoe.v12i09.6128. IAOE,
Vienna
18. Parkhomenko, V., Gladkova, O., Ivanov, E., Sokolyanskii, A., Kurson, S.: Development and
application of remote laboratory for embedded systems design. Int. J. Online Eng. (iJOE) 11
(3), 27–31 (2015). IAOE, Vienna
19. Poliakov, M., Larionova, T., Wuttke, H.-D., Henke, K.: Automated testing of physical
models in remote laboratories by control event streams. In: 2016 International Conference on
Interactive Mobile Communication, Technologies and Learning (IMCL), 17–19 October
2016, San Diego, CA, USA. 94 p., pp. 10–13. IEEE 978-1-5090-1197-1/16/$31.00 ©2016

More Than “Did You Read the Script?”
Different Approaches for Preparing Students for Meaningful
Experimentation Processes in Remote and Virtual Laboratories

Daniel Kruse1(✉), Robert Kuska1, Sulamith Frerich1, Dominik May2, Tobias R. Ortelt2, and A. Erman Tekkaya2

1 Ruhr Universität Bochum, Bochum, Germany
kruse@vvp.rub.de
2 TU Dortmund University, Dortmund, Germany

Keywords: Preparational activities · Remote lab · Interactive training · Online learning

1 Introduction

Project ELLI (Excellent Teaching and Learning in Engineering Science) is a joint project
of the three German universities RWTH Aachen, TU Dortmund University and Ruhr-
University Bochum. Considering teachers’ and learners’ perspectives, the project aims
to improve existing concepts in higher engineering education and to develop new
innovative approaches. In the past years, a pool of remote and virtual labs has been
developed and set up in order to gain flexibility in the usage of experimental equipment
in different pre-set scenarios. Teachers can use these virtual and remote laboratories in
class for demonstrating engineering practice, while the labs can also support students in
individually discovering scientific concepts.

2 Virtual Learning Environment

The use of labs in general can be divided into research, development and training
purposes. In engineering education, labs are often used to introduce students to
experimental work or to explain a phenomenon in a realistic way. The project ELLI aims
for several improvements in the field of teaching and learning in engineering science; a
main aspect is to establish remote learning experiences.

2.1 Remote Labs Setting Local Bochum


The Project ELLI started a virtual and remote lab project in 2011. At the Ruhr Universität
Bochum, a call for ideas enabled interested professors to hand in their ideas for a lab
using remote or virtual technology. Out of all suggested ideas, 10 were selected by an
independent jury and received investment support. These ideas had to describe a concept
in which the lab would be used; the selected ideas usually focused on a specific target
group with a well-known educational background.



Nowadays, the Project ELLI at the Ruhr Universität Bochum provides a pool of more
than 10 remote or virtual labs in different disciplines and teaching environments [1].
Each lab was built under the responsibility of a scientific chair, with differences in the
curricula of the three participating faculties at the three universities. Therefore, the
authors defined the remote learning process in its different steps to allow a more modular
and interchangeable development of the provided resources. With labs in materials
science, e-mobility and process technology, each lab was built around at least one
specific idea of its usage, usually aimed at a specific target group of users with a specific
relation to the lab’s discipline.

2.2 Remote Labs Setting Local Dortmund


At TU Dortmund University, another approach has been followed. In a strong cooperation
between the Institute for Forming Technology and Lightweight Construction and the
Center for Higher Education, a remote lab for manufacturing technology has been
developed. This work is based on successful outcomes achieved within a prior project
called PeTEX [2]. The developed laboratory gives both students and teachers the
opportunity to conduct experiments in the field of manufacturing technologies,
especially for material characterization. Figure 1 (right) shows the laboratory with two
testing machines for sheet metal forming and tensile tests. In addition, the lab is equipped
with an industrial robot with several grippers for specimen handling and the equipment
needed for the experiment’s automation and control. In recent years, the tensile test has
been the focus of implementation into educational contexts. This test is one of the most
common and efficient tests to obtain the material properties of the tested specimen [3].
The determined properties describe the behavior of the material. Furthermore, the
properties can be used in forming applications like FEM simulations (e.g. simulation of
forming processes or production processes). This is why it is a very basic but also an
important test in the context of manufacturing technology. The developed remote lab
has been introduced in several educational contexts so far. Now it is used in lectures as
well as in practical training courses and even in completely online delivered courses.

Fig. 1. Remote laboratory at TU Dortmund (right) with the graphical user interface for user-
experiment-interaction (left)


In order to perform experiments in the remote lab, the user can use the specially designed
graphical user interface. Using this interface, it is possible to prepare, start, pause, stop,
watch, and even analyze the ongoing experiment (Fig. 1, left part).

3 Learning Processes in Remote and Virtual Experimentation Environments

Introducing experimentation exercises with the help of remote or virtual equipment into
educational processes differs from instruction in classical hands-on labs. Whereas in
hands-on labs a scientific assistant normally guides or supervises the experimentation
process (and, in some of its parts, the learning process too), the essence of virtual or
remote lab learning is a non-guided and non-supervised process. This process can be
seen in the following stages:
(1) Orientation
(2) Preparation
(3) Performing an experiment
(4) Report experimental results.
Before performing any type of experiment, preparation is needed [4]. Classic hands-on
lab preparation is often based on a scriptum or some other kind of document that has to
be read by the students before coming to the lab. Such a scriptum contains the theoretical
background and the methodology used, as well as technical characteristics and, not least,
the task that should be performed. The students have to become familiar with the content
and be prepared to be tested on the experimental content. In virtual or remote labs, things
are a bit different. As the whole experience is meant to be highly independent, all aspects
of the process must work in an intuitive and helpful way, without lowering the necessary
effort for the student’s performance. One of the main differences is the feedback on the
student’s preparation. In a classic hands-on lab, this is ‘assessed’ by a supervisor during
a short interview, a discussion or the observation of the physical preparation of the
experiment. Whereas these aspects fit the hands-on lab, the lack of a supervisor in a
remote or virtual setting leads to new challenges [5]. Here the two main challenges are
the examination of the necessary preparation and the option of giving feedback on the
process of flexibly setting up an experiment. The following approaches deal with these
challenges. As the remote laboratories were developed independently at the two
locations, the students’ preparation will be explained separately.

3.1 Preparation for VRL (Bochum)

While offering remote learning resources, the question arises whether a scriptum is still
the best way of preparing students for a remote experiment. The balance between a
challenging task and a guided experience is crucial for the whole remote learning
process; therefore, the preparation phase has to be rethought. A setup for performing
experiments in the field of process technology can contain several apparatus and
instruments. For gaining experience in setting up an experimental plant, it must be
possible to try out different designs.
zamfira@unitbv.ro
More Than “Did You Read the Script?” 163

In a process technology experiment, a pressure drop should be measured in different
flow states. If this task is performed in a classic hands-on lab, the students may be able
to choose the necessary equipment such as pumps, pipes and pressure gauges. Offering
a similar experiment in a remote environment leads to the problem that the plant has to
be put together in advance and be on standby until it is accessed [6]. In this simple case,
the students would not be able to work out their own setup, possibly make mistakes, and
learn about the physical relations between the separate parts of the setup. To remove this
lack of experience in a remote scenario and to allow the students to reflect on their
knowledge about the upcoming task, a virtual workbench on which a virtual process
scheme is developed was considered to be helpful.
Process schemes contain a lot of information about a technical setup. There are several
disciplines and different layers that can lead to one main scheme, which describes the
whole setup. As the users of the ELLI remote lab pool are engineering students, they are
familiar with process schemes from different lectures or training courses. Even if they
do not create a scheme on their own, they know the symbols, the connection types and
how to read flow schemes [5]. Based on the remote scenario and the assumption that a
student is already familiar with the task of measuring the pressure drop of a fluid, the
virtual process scheme is used to reflect the student’s state of understanding and
preparation.
The virtual process scheme is built to be a virtual experiment developing environment
(see Fig. 2), like an interactive content object described in [7]. Devices and equipment
can be chosen by their schematic symbols. The number of available pieces of equipment
is larger than required by the assigned task. First, the students have to identify the devices
necessary for their respective task. In the case of a flow testing rig, a pump, pipes and
pressure gauges are needed, as well as a device to cause the pressure drop needed to
observe different states of operation. If a schematic symbol is chosen to be used in the
virtual process scheme, it can be located anywhere on the virtual workbench. At least
two symbols need to be placed on the virtual workbench before a connection can be
created. The symbols can accept four connections, but only one on each side.

Fig. 2. A virtual workbench for developing the scheme, with the repository area at the bottom
and the connection control area on the right side.


A connection is created by choosing the type of connection, choosing its starting point
and selecting its end point. During the creation of a connection, the symbols placed on
the workbench indicate their ability to accept or decline a connection on each of the four
sides with green or red dots, respectively (see Fig. 3).

Fig. 3. Symbols on the virtual workbench during the connection creation process.

When a student needs assistance or has finished the setup of a virtual process scheme, a
consistency test runs and checks the flow scheme created. For a virtual process scheme
of a flow testing rig, there should be at least one suitable pump, pressure gauges and a
regulation valve connected in one loop. Open connections or missing equipment is
recognized and can be displayed to the student with a hint on how to complete the setup.
A flow scheme containing all necessary equipment with correct connections is reviewed
and reported as complete.
This consistency test can be adjusted in its complexity by adding more information
to each available symbol. Parameters like flow direction, generated pressure, pressure
drop or process fluid parameters can be reviewed in the consistency test. The more
information is respected, the more complex the review process becomes; however, the
accuracy of the resulting feedback can be enhanced and individualized with this enlarged
information [5]. The results of such a consistency test can be used to grant the student
access to the real remote lab control or to advise them to review certain parts of the
experiment’s documentation [4, 8].
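As an illustration of how such a consistency test could be organized, the following sketch checks a placed set of symbols for missing equipment and open connections; the component names, the data structure and the hint texts are assumptions, not the ELLI implementation, and a full closed-loop check could be added on top of it.

```typescript
// Hypothetical consistency test for the flow-testing-rig scheme.
type ComponentKind = "pump" | "pipe" | "pressure gauge" | "regulation valve" | "tank";

interface PlacedSymbol {
  id: string;
  kind: ComponentKind;
  connections: string[];   // ids of symbols connected to this one (max 4)
}

interface ConsistencyResult {
  complete: boolean;
  hints: string[];
}

function checkScheme(symbols: PlacedSymbol[]): ConsistencyResult {
  const hints: string[] = [];
  const required: ComponentKind[] = ["pump", "pressure gauge", "regulation valve"];

  // required equipment must be present
  for (const kind of required) {
    if (!symbols.some((s) => s.kind === kind)) {
      hints.push(`Missing equipment: add a ${kind} to the scheme.`);
    }
  }

  // no symbol may be isolated or left with an obviously open connection
  for (const s of symbols) {
    if (s.connections.length === 0) {
      hints.push(`${s.kind} "${s.id}" is not connected to anything.`);
    } else if (s.connections.length === 1 && s.kind !== "tank") {
      hints.push(`${s.kind} "${s.id}" has an open connection.`);
    }
  }
  return { complete: hints.length === 0, hints };
}
```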
The virtual process scheme is created using an HTML5 framework called phaser
(www.phaser.io). This framework is more commonly used to develop computer games
for web or mobile applications; its functionality was therefore highly useful for
developing the virtual process scheme. As the code works well on mobile devices, the
virtual process scheme can easily be adapted to mobile use for even more flexibility.
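For illustration, a minimal sketch of a draggable schematic symbol is given below, written against the current Phaser 3 API; the project may have used an earlier phaser version, and the asset names and dimensions are assumptions.

```typescript
// Hypothetical Phaser 3 sketch: one draggable symbol on the virtual workbench.
import Phaser from "phaser";

class WorkbenchScene extends Phaser.Scene {
  preload(): void {
    this.load.image("pump", "assets/symbols/pump.png"); // schematic symbol
  }

  create(): void {
    // place the symbol in the repository area; the student drags it onto the bench
    const pump = this.add.image(100, 550, "pump").setInteractive();
    this.input.setDraggable(pump);
    this.input.on("drag",
      (_pointer: Phaser.Input.Pointer, obj: Phaser.GameObjects.Image, x: number, y: number) => {
        obj.setPosition(x, y);
      });
  }
}

new Phaser.Game({
  type: Phaser.AUTO,
  width: 900,
  height: 650,
  scene: WorkbenchScene,
});
```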
The explicit example of preparing the remote learning experiment on the measurement
of the pressure drop at a flow testing rig can easily be adapted to other experiments. The
idea of the virtual process scheme works in every discipline that uses schemes or
drawings to show interaction and connectivity. Examples of use can be electrical circuit
drawings or drawings of the mechanical balance of forces.


The virtual process puzzle eliminates some of the drawbacks of remote experiments with
regard to independent experiment development and reflection on the state of the
student’s preparation. While eliminating some challenges, it also creates new ones,
especially concerning the consistency test and its use to create meaningful feedback.

3.2 Preparation for VRL (Dortmund)


The remote lab of TU Dortmund University was developed on the basis of a classical
hands-on lab. As all these achievements are based on this existing lab, its preparation
procedure is explained in the following in order to show how preparation for non-remote
labs has worked so far. In this lab, the students had to determine material parameters for
different materials (steel or aluminum) with a uniaxial tensile test. The students were
divided into groups of four, and each group was supported by a research assistant. The
lab experience was divided into four steps. In the first step, “Preparation”, a script (up to
20 pages) was given to the students. This script provided a short repetition of the basic
facts and the theoretical background of material characterization. With this in mind, the
students were able to conduct the experiment and understand its context as well as its
application. The second step, “Experiment”, started with a short oral assessment in the
form of a discussion. During this discussion, the students’ knowledge of the basic facts
was tested; in addition, the students were questioned about the safety concepts of the
machines used. After this oral exam, the students were introduced to the machine and
the software used. With this information, the students conducted their experiments
basically on their own, if needed with the help of a student assistant. During the
experimentation, they tested different materials in different rolling directions to
determine material parameters; afterwards, the data was stored on a USB stick. In the
next step, “Analysis and Interpretation”, the students determined the material parameters
on their own using their personal devices, such as a PC or laptop. The calculated material
parameters and their interpretation were the basis for the next task, a lab report. This
written report consisted of up to ten pages plus an appendix with different plots, for
example, and had to be handed in to the supervisor three weeks after the lab session. The
last step, “Examination – Presentation”, started with a check of the lab report by the
supervisor. In a short presentation, the students presented the main output of the
experiments to the supervisor and a second examiner. After a final discussion, the lab
was over and the final grades were announced to the group.
The explained procedure could not be adopted one-to-one in the remote lab context. In
particular, the face-to-face contact between the students and the supervisor is missing in
the remote context or has to be organized differently. Therefore, new procedures were
developed: on the one hand, a purely online scenario; on the other hand, a combination
of online and offline preparation.
As indicated above, the remote lab has been used in different educational settings so
far. The above-explained type of preparation was put into practice in the context of
classical on-campus training courses. The combination of online and offline preparation
is divided into several steps.


In this case, the use of the remote lab is first shown in the lecture. The task of this lab is
the determination of material parameters, which are needed to conduct an FEM
simulation in the next step. Therefore, a first run of the experiment using the remote lab
is conducted in a lecture or exercise: the lecturer controls the experiment during the
lecture in front of the audience, and the students can ask questions and discuss their
needs in interaction with the lecturer. In a second step, the students need to book a time
slot to conduct the experiment using the iLab server. In order to help the students and
make a smooth start with the remote lab possible, an online video explains the most
important steps. This video is available without any registration to the iLab server (see:
http://iul.eu/remotelabs/). With this information and help, the students can conduct their
experiments using the remote lab. After conducting the experiments, the data can be
downloaded and the material parameters can be calculated on the students’ own devices.
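As an illustration of this post-processing step, the following sketch converts recorded force/elongation pairs of a tensile test into true stress and true strain using the standard relations σ_true = σ_eng(1 + ε_eng) and ε_true = ln(1 + ε_eng), valid up to necking. The data format and the specimen dimensions are assumptions, not the format delivered by the Dortmund lab.

```typescript
// Hypothetical post-processing of downloaded tensile-test data.
interface TensileSample {
  force: number;       // N
  elongation: number;  // mm
}

function trueStressStrain(samples: TensileSample[],
                          area0: number,          // initial cross-section in mm^2
                          gaugeLength0: number    // initial gauge length in mm
                         ): Array<{ strain: number; stress: number }> {
  return samples.map((s) => {
    const engStrain = s.elongation / gaugeLength0;
    const engStress = s.force / area0;             // MPa (N/mm^2)
    return {
      strain: Math.log(1 + engStrain),             // true (logarithmic) strain
      stress: engStress * (1 + engStrain),         // true stress, valid up to necking
    };
  });
}

// Example: one data point of a hypothetical specimen (A0 = 20 mm^2, L0 = 80 mm).
console.log(trueStressStrain([{ force: 7000, elongation: 4 }], 20, 80));
```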
Another course context in which the remote lab plays a crucial role is a completely
online delivered course for international students, which is taken in advance of their stay
in Germany for the master study program [9]. Part of this course is to conduct online
experiments in internationally mixed student groups using the universal testing machine
in the remote lab for a tensile test. As the students come from all over the world, one of
the challenges is that their knowledge and competence in experimentation theory and
practice may differ significantly. Whereas for some of the students independently
performed experimentation processes may be normal and largely trained, this is not the
case at all for others; some students may even be introduced to experimentation
equipment for the first time in their lives. Nevertheless, for the experimentation with the
remote lab at TU Dortmund, it is important to bring the students to an adequate level of
competence in experimentation and material characterization. The authors decided to
make use of these differences and to build heterogeneous, internationally mixed student
groups for the important preparation phase. Within this phase, the students did not
receive a written script with all the important information; instead, they were asked to
do their own individual research on material characterization, based on guiding
questions. Figure 4 shows the pictures given to the students as a starting point for their
research.

Fig. 4. Pictures used to guide students in their research process for material characterization

Taking the pictures shown in Fig. 4 as a starting point, the students are asked to answer questions such as the following:


1. In the first picture, you see the universal testing machine used at the IUL.
1.1. What are important parts?
1.2. How does such a machine work?
1.3. What is the theoretical background of the tensile test?
1.4. What is it used for?
2. The following pictures show stress strain diagrams.
2.1. What do they show?
2.2. What is the difference between the two diagrams?
2.3. How are they worked out?
2.4. What are important areas?
2.5. Which material properties can be gained through the connected data and how?
Using this approach for experimental preparation, the students can, on the one hand, directly develop knowledge about the respective experimentation process themselves. On the other hand, they see and learn where they may have important gaps in their knowledge, especially in comparison to other students. Furthermore, and this may be the most important aspect, they can learn from each other. As they have very different educational backgrounds, they recognize while answering these questions in their respective groups how far their personal concepts of experimentation differ from those of the others. With each other’s help, the students can leverage their individual knowledge about tensile testing and finally reach the common level needed for successful experimentation. To make sure that all students really are on the same level, they have to present their research results in the following course meeting, and the most important aspects are discussed again in the whole group. Observing the students during the subsequent experimentation process and assessing their results, it becomes clear that they are well prepared for the experimentation by going through the procedure explained above. Especially during the discussion of the experiment’s results, they benefit from their research conducted in advance of the experimentation. Furthermore, they show good abilities to connect their results with the explanations given in the literature.

4 Actual and Anticipated Outcomes

Using remote learning processes in higher engineering education allows flexible and individual learning processes. Due to the boundary conditions of the physically pre-set setup, a creative discovery of the scientific concepts behind the experimentation process is limited. With different approaches for student activation and preparation, such as virtual process schemes (VPS), static remote laboratory setups can be used in scenarios that give a more flexible experience. With this, students are asked to take more personal responsibility for their research, their personal learning process and the knowledge gained. They can prove it with several creations in the VPS, getting feedback on their ideas from the system. In a next step, such approaches can even be used to organize the access to the laboratory environment based on the students’ performance during the preparation process. For example, access to the remote lab could be granted only to those students who received adequate reflection/feedback on the tasks before the experimentation.


Even if there is existing research on the usage of preparation activities, there is still work to be done. Since the ELLI project started its second five-year phase in 2016, the presented approaches will be put into practice and evaluated within the next two years. Hence, research results are expected on the question of how different students react to different preparation activities and to what extent different kinds of such activities are more or less suitable for different types of remote labs.

5 Summary

With a combination of different preparation activities, the experience of pre-set remote experiments can be improved in the area of individual, flexible and/or research-based learning. Tools like virtual process schemes allow flexible usage and individual feedback on the student’s process of learning and understanding. The absence of a procedural manual for the experiment with all information ready to use triggers a scientific way of approaching the necessary information through research and self-learning. The paper presented different remote laboratories at the ELLI universities and explained the different preparation activities, their respective degrees of implementation and first evaluation results about their success.

References

1. Frerich, S., Kruse, D., Petermann, M., Kilzer, A.: Virtual labs and remote labs: practical
experience for everyone. In: Proceedings: IEEE Global Engineering Education Conference
(EDUCON), pp. 312–314 (2014)
2. Terkowsky, C., Jahnke, I., Pleul, C., May, D., Jungmann, T., Tekkaya, A.E.: Pe-TEX@Work:
designing CSCL@Work for online engineering education. In: Goggins, S.P., Jahnke, I., Wulf,
V. (eds.) Computer-Supported Collaborative Learning at the Workplace - CSCL@Work,
Computer-Supported Collaborative Learning Series, vol. 14, pp. 269–292. Springer, New York
(2013). ISBN 978-1-4614-1739-2
3. Tekkaya, A.E.: Metal forming. In: Grote, K.-H., Antonsson, E.K. (eds.) Handbook of
Mechanical Engineering, Chap. 7.2, pp. 554–606. Springer, Heidelberg (2009)
4. Bochicchio, M.A., Longo, A.: The importance of being curricular: an experience in integrating
online laboratories in National Curricula for High Schools. In: Proceedings of 11th
International Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 450–
456 (2014)
5. Graven, O.H., Samuelsen, D.A.H.: Remote laboratories with automated support for learning.
In: Proceedings of 10th International Conference on Remote Engineering and Virtual
Instrumentation (REV), pp. 1–5 (2013)
6. Kruse, D., Frerich, S., Petermann, M., Ortelt, T.R., Tekkaya, A.E.: Remote labs in ELLI: lab
experience for every student with two different approaches. In: Proceedings of IEEE Global
Engineering Education Conference (EDUCON), pp. 469–475 (2016)
7. Wuttke, H.D., Hamann, M., Henke, K.: Integration of remote and virtual laboratories in the
educational process. In: Proceedings of 12th International Conference on Remote Engineering
and Virtual Instrumentation (REV), pp. 157–162 (2015)


8. Dias, F., Matutino, P.M., Barata, M.: Virtual laboratory for educational environments. In:
Proceedings of 11th International Conference on Remote Engineering and Virtual
Instrumentation (REV), pp. 191–194 (2014)
9. May, D., Tekkaya, A.E.: Using transnational online learning experiences for building
international student working groups and developing intercultural competences. In:
Proceedings of American Society for Engineering Education’s 123rd Annual Conference and
Exposition “Jazzed about Engineering Education”, 26th–29th June 2016, New Orleans,
Louisiana, USA (2016). doi:10.18260/p.27171

Collecting Experience Data from Remotely
Hosted Learning Applications

Félix J. Garcı́a Clemente1(B) , Luis de la Torre2 , Sebastián Dormido2 ,


Christophe Salzmann3 , and Denis Gillet3
1
Department of Computer Engineering and Technology,
University of Murcia, Murcia, Spain
fgarcia@um.es
2
Department of Informatics and Automatics, Computer Science School,
UNED, Madrid, Spain
{ldelatorre,sdormido}@dia.uned.es
3
Institute of Electrical Engineering,
Swiss Federal Institute of Technology Lausanne (EPFL),
Lausanne, Switzerland
{christophe.salzmann,denis.gillet}@epfl.ch

Abstract. The ability to integrate multiple learning applications from


different organizations allows sharing resources and reducing costs in the
deployment of learning systems. In this sense, Learning Tools Interop-
erability (LTI) is the main current leading technology for integrating
learning applications with platforms like Learning Management Systems
(LMS). On the other hand, the integration of learning applications also
benefits from data collection, which allows learning systems to implement
Learning Analytics (LA) processes. Tin Can API is a specification for
learning technology that makes this possible. Both learning technologies,
LTI and Tin Can API, are supported by today’s LMSs, either natively
or through plugins. However, there is no seamless integration between
these two technologies that would provide learning systems with expe-
rience data from remotely hosted learning applications. Our proposal
defines a learning system architecture ready to apply advanced LA tech-
niques on experience data collected from remotely hosted learning appli-
cations through a seamless integration between LTI and Tin Can API. In
order to validate our proposal, we have implemented an LRS proxy plug-in
in Moodle that stores learning records in a SCORM Cloud LRS service,
and a basic online lab based on Easy JavaScript Simulations (EjsS). More-
over, we have tested our implementation using resources located in three
European universities.

Keywords: Learning Management System · Learning Tools Interoperability · Experience API · Learning Analytics

1 Introduction
Nowadays, organizations, companies and universities are collaborating in the
deployment and integration of learning applications. These applications range

from simple tools like interactive assessment applications, to others for domain-
specific learning environments like remote laboratories. Thus, students may com-
monly access learning tools which are hosted in other universities or organiza-
tions, and rarely use applications that are actually deployed in their university’s
servers. The ability to integrate multiple learning applications from different
organizations allows sharing resources and reducing costs in the deployment
of learning systems. In this sense, Learning Tools Interoperability (LTI) [7] is
the main current leading technology for integrating learning applications with
platforms like Learning Management Systems (LMS), portals, learning object
repositories, and other educational environments, including Massive Open Online
Course (MOOC) platforms.
On the other hand, the integration of learning applications also requires data
collection as well as tools interoperability. Data collection allows learning sys-
tems to implement Learning Analytics (LA) processes. LA are useful to measure,
analyze and report about learners in order to optimize their learning. For exam-
ple, interactions and steps followed by learners in a remote lab could be used
to analyze the learning experience. Regarding the data collection, Tin Can API
(sometimes known as Experience API or xAPI) [10] is a specification for learn-
ing technology that makes this possible. This API captures data in a consistent
format about learners’ activities and enables dynamic tracking of activities from
any learning system. In addition, Tin Can API uses a Learning Record Store
(LRS), a data store system that serves as a repository for learning records.
Both learning technologies, LTI and Tin Can API, are supported by today’s LMSs, either natively or through plugins. Several works have previously used Tin Can API in order to apply LA, for example SmartKlass [8], a multi-platform solution that enables data tracking through a dashboard and that can be embedded in Moodle or any other LMS. Other works have used LTI to achieve interoperability between LMSs and remote tools; for example, [11] shows how to develop an external tool for e-Assessment. We can also find specific solutions that provide ad hoc integration of both technologies; among them, [1] describes an e-learning architecture with analytic capabilities aimed at training Unmanned Autonomous Vehicles (UAV) operators. However, there is no seamless integration between these technologies in order to provide learning systems with experience data from remotely hosted learning applications.
In this sense, our proposal defines a learning system architecture ready to
apply advanced LA techniques on experience data collected from remotely hosted
learning applications through a seamless integration between LTI and Tin Can
API. The key outcomes of our proposal are:

– Collecting learning experience data from remotely hosted learning applications using well-known standard learning technologies.
– Seamless integration of the learning technologies Learning Tools Interoperability (LTI) and Tin Can API in a Learning Management System (LMS).
– Deployment of the proposed architecture with a remote lab example in Moodle, using a SCORM Cloud LRS service for storing the experience data, and Easy JavaScript Simulations (EjsS) to build the remote lab.


Following this Sect. 1 on the objective and structure of this paper, Sect. 2 presents the motivating example, which is used throughout the paper to introduce the concepts related to our proposal. In Sect. 3, we describe our proposal and how to achieve a seamless integration of the learning technologies. Based on this, Sect. 4 shows an implementation of our proposal. Subsequently, the fifth section discusses a specific deployment, which shows the integration using resources located in three European universities. Finally, conclusions and future work are drawn in Sect. 6.

2 LMS Interoperability and Experience Data


LTI and Tin Can API technologies are widely used to incorporate advanced functionality into e-learning systems. The LTI standard aims to deliver a single framework for integrating any LMS (which takes the role of the so-called Tool Consumer) with any learning application (the Tool) remotely hosted by another LMS or learning system (called the Tool Provider). The nature of the relationship established between a Tool Consumer and a Tool Provider is that the Tool Provider delegates responsibility for the authentication and authorization of users to the Tool Consumer. The Tool Consumer provides the Tool Provider with data about the user, the user’s current context and the user’s role within that context. This data is secured by the OAuth protocol, so that the Tool Provider may trust its authenticity. The Tin Can API standard provides a REST/JSON web service that allows software clients to read and write learning experience data in the form of statement objects. In their simplest form, statements read like “I did this”, or more generally actor–verb–object. Learning experiences are recorded in a Learning Record Store (LRS) that can exist within an LMS or on its own.
In order to illustrate how our solution integrates both technologies, we present a use case composed of two LMSs, where one takes the role of the Tool Provider
Fig. 1. Use case where the LTI and Tin Can API technologies are used.


and the other acts as the Tool Consumer, as shown in Fig. 1. The Tool Provider shares learning applications ranging from simple JavaScript-based physics simulations to complex remote laboratories. The Tool Consumer uses LTI services to provide local learners with access to the remote tools. Learners use their learning space through a web browser, and their interactions are stored in an LRS via the Tin Can API.
The learner’s LMS can store learning experience data such as the time at which the learner logged in and out, the time she spent connected, or a session count in the LRS. The data collected by the LMS could be analyzed by LA software to report useful information in order to assess and evaluate the learning experience. However, the data collected by the LMS is of no use if we want to analyze the learning experience in depth. Other learning data is absolutely necessary, such as the learner’s interactions with the learning applications: mouse and keyboard events, button clicks or changes in input elements. Advanced LA software based on data mining and data analysis can process this kind of interaction for classification and/or clustering. For example, LA software could automatically group learners and identify those who find it more difficult to interact or to solve a task defined in the learning application.
Tin Can API provides a mechanism to collect learners’ interactions in learning applications, but it is not currently supported by LMSs in this way. LMSs usually use Tin Can API to store data related to the learner’s experience extracted from their own databases. Therefore, LMSs lack a mechanism to collect experience data from learning applications. Moreover, the collection of learning experience data becomes more complex when learning applications are remotely hosted. In particular, proper authentication and rights management mechanisms between the LMS, the tools, and the LRS are missing.

3 Seamless Integration via LRS Proxy


Our solution allows the learner’s LMS to store its collected data as well as the learner’s interactions by using an LRS proxy. We define an LRS Proxy in charge of receiving the learner’s interactions, translating them into Tin Can statements and sending those statements to the LRS. In this sense, the tool must include the parameters required to connect to the LRS proxy in order to send the learner’s interactions. These parameters are included in the LTI configuration of the Tool Consumer as custom parameters. The Tool Provider supplies the tools with these LTI parameters, and the Tool can then access the LRS proxy without any additional authentication process. The LRS stores all learners’ interactions, so that any LA software can use its learning records to analyze the learning experiences.
Considering the previous use case, Fig. 2 shows the proposed architecture
where the user can see the integration between the LTI launch process and the
storage of the experience data via the LRS proxy. We introduce below the key
aspects to manage seamless integration between LTI and Tin Can API.


Fig. 2. Proposal for seamless integration between LTI and Tin Can API.

3.1 Custom Parameters into LTI Configuration

The LTI link requires a manual configuration process that consists of the exchange of OAuth credentials. Teachers or course managers are in charge of the LTI configuration, and so they can decide when a tool is included or shared in the LMS. When a tool is included in a course, a learner can launch it. The LMS internal process consists of a Basic LTI Launch Request, where the Tool Consumer provides the learner’s browser with all LTI parameters (OAuth parameters, context information, user identification and other learning information); the browser then uses them to get access to the tool delivered by the Tool Provider. When the Tool is loaded in the learner’s browser, it can connect to a remote lab if necessary. In this case, the tool must include the access credentials required to get camera images and to interact with actuators and sensors.
The LTI parameters below are taken from Basic LTI sample launch data.

user_id = 288816824
resource_link_id = 18551-bb669-e1e416
resource_link_title = System Activity
context_id = 456434513
launch_presentation_document_target = iframe
launch_presentation_return_url = http://unilabs.dia.uned.es/lab32/return.php
lis_person_name_full = “Felix J. Garcia”
lis_person_contact_email_primary = fgarcia@um.es
lti_message_type = basic-lti-launch-request
lti_version = LTI-1p0
tool_consumer_instance_guid = unilabs.dia.uned.es
tool_consumer_instance_name = UNILABS
tool_consumer_instance_description = UNILABS (LMS Moodle)
tool_consumer_instance_url = http://unilabs.dia.uned.es


oauth_consumer_key = 12345
oauth_signature = QWgJfKpJNDrpncgO9oXxJb8vHiE=
oauth_signature_method = HMAC-SHA1

The parameter user_id uniquely identifies the user, while the lis_* parameters contain information about the user account that is performing the LTI launch request. The specific meaning of the content in these fields is defined by the Learning Information Services (LIS) specification [6]. The launch_presentation_* parameters describe the kind of browser window/frame in which the Tool Consumer has launched the Tool. The tool_consumer_instance_* fields give details of the Tool Consumer, and the oauth_* fields are produced by the signing process. The oauth_consumer_key parameter identifies which Tool Consumer is sending the message, allowing the Tool Provider to look up the appropriate secret for validation.
In addition to the standard LTI parameters, the creator of an LTI link can add custom key/value parameters, which are included with the launch of the LTI link. When there are custom parameters, each custom parameter is included in the POST data when a Basic LTI launch is performed. Creators of LTI links should limit their parameter names to lower case and use no punctuation other than underscores.
Our solution proposes to include a set of key/value pairs in the optional custom section in the LMS that originally authored the link (i.e. the Tool Consumer). These custom parameters define the Tin Can connection between the Tool and the LRS Proxy. For example, the following LTI custom parameters complete the previous Basic LTI sample launch data.

custom_TinCan_base_endpoint: http://unilabs.dia.uned.es/xapi/
custom_TinCan_activity_id: http://unilabs.dia.uned.es/xapi/ActivitySystem
custom_TinCan_verbs: changed, clicked, moved

The field custom_TinCan_base_endpoint contains the LRS proxy endpoint service, while custom_TinCan_activity_id identifies the Tin Can object and custom_TinCan_verbs defines the verbs that must be used in the Tin Can statements. These fields are required. Note that the Tin Can actor is identified unequivocally by the parameter lis_person_contact_email_primary.
These parameters are sent back to the external tool when the tool is launched. If the LTI link is imported and then exported, the custom parameters should be maintained across the import/export process unless the intent is to redefine the link.

3.2 Seamless Access to LRS Proxy


Tools, as considered here, are web applications that run within a browser. Therefore, a learner only needs a browser to authenticate against his/her


local LMS (typically with a username and password) and then get access to the learning space, where he/she can find Tools directly, without an additional authentication process in other remote LMSs. This LTI process, based on the OAuth protocol and called single sign-on (SSO), allows users to enter their credentials just once to gain access to multiple systems.
In the same way, our solution proposes that the learner gains access to the LRS proxy using an SSO mechanism, thus avoiding a new authentication process. Specifically, when the Tool is launched, it is presented in a browser window/frame (identified by the LTI field launch_presentation_document_target) embedded in the learner’s learning space. That allows the Tool to access the LRS Proxy service located in the learner’s LMS using the current session.

3.3 Collecting Experience Data

The following Tin Can statement is an example of a learner interaction captured by an external application and sent to the LRS proxy. This statement means “the user fgarcia@um.es updated the value of the element processes to 10 in the activity ActivitySystem”.

{
  "actor": {
    "mbox": "mailto:fgarcia@um.es"
  },
  "verb": {
    "id": "http://unilabs.dia.uned.es/xapi/verbs/changed",
    "display": {"en-US": "changed"}
  },
  "object": {
    "id": "http://unilabs.dia.uned.es/xapi/ActivitySystem"
  },
  "result": {
    "extensions": {
      "http://unilabs.dia.uned.es/xapi/extensions/name": "processes",
      "http://unilabs.dia.uned.es/xapi/extensions/value": "10"
    }
  }
}

The actor object could have two properties, “name” and “mbox”, but only
“mbox” uniquely identifies the user. Verbs in Tin Can are URIs, and should be
paired with a short display string. Typically, the object will be a tool and the
result will include the extensions fields in order to provide a complete description
about the learner interaction.
Additionally, our solution proposes that the allowed verbs are set by the LTI custom field custom_TinCan_verbs. Therefore, the external application should


only send statements with these verbs. In this sense, the application must be aware of the possible verbs that can be requested. Considering that the learning application runs in a web browser, we propose that valid verbs must be associated with HTML events. For example, the onchange event is related to the changed verb. In addition, the event is triggered by actions inside an HTML element and may even include relevant action values. These event elements are included in the statements using the extensions fields, as shown in the previous example.
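As an illustration of this mapping, a client-side sketch could listen for HTML change events and turn each one into a statement like the example above, sending it to the LRS proxy within the current LMS session. In the sketch below, the object ltiParams (holding the parsed launch parameters) and the relative path "statements" on the proxy endpoint are assumptions of the example, not part of the described implementation.

// Illustrative sketch: build a Tin Can statement from an HTML "change" event
// and POST it to the LRS proxy, reusing the current LMS session (SSO).
const lrsEndpoint = ltiParams.custom_TinCan_base_endpoint; // e.g. ".../xapi/"
const activityId  = ltiParams.custom_TinCan_activity_id;
const actorMbox   = "mailto:" + ltiParams.lis_person_contact_email_primary;

document.addEventListener("change", function (event) {
  const statement = {
    actor:  { mbox: actorMbox },
    verb:   { id: lrsEndpoint + "verbs/changed", display: { "en-US": "changed" } },
    object: { id: activityId },
    result: {
      extensions: {
        [lrsEndpoint + "extensions/name"]:  event.target.name || event.target.id,
        [lrsEndpoint + "extensions/value"]: event.target.value
      }
    }
  };
  fetch(lrsEndpoint + "statements", {              // relative path is an assumption
    method: "POST",
    credentials: "include",                        // reuse the current LMS session
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(statement)
  });
});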

4 Implementation

In order to validate our proposal, we have implemented an LRS proxy plug-in in Moodle [12] that stores learning records in a SCORM Cloud LRS service [9], and an LRS proxy JavaScript client based on Easy JavaScript Simulations (EjsS) [4].

4.1 LRS Proxy Moodle Plug-In

Plugins enable the addition of new features and functionality to Moodle, such as new activities, new quiz question types, new reports, integration with other systems and many more. Specifically, the LRS proxy is a web service implemented as a local plugin.
The following declaration defines the service, including the name that identifies the plugin, the web service functions and internal properties.

$services = array(
    'LRS Proxy' => array(
        'shortname' => 'lrsproxy',
        'functions' => array(
            'lrsproxy_echo_text', 'lrsproxy_store_statement',
            'lrsproxy_store_statements', 'lrsproxy_retrieve_statement',
            'lrsproxy_fetch_statements', 'lrsproxy_store_activity_state',
            'lrsproxy_retrieve_activity_state', 'lrsproxy_fetch_activity_states',
            'lrsproxy_delete_activity_state', 'lrsproxy_clear_activity_states'
        ),
        'restrictedusers' => 1,
        'enabled' => 0
    )
);

The function lrsproxy_echo_text is only for testing purposes. The rest of the functions are divided into two groups. One is related to the functions for storing, retrieving and fetching statements. These functions are used by Tools and might also be used by other applications that can manage Tin Can statements. The other group is for storing, retrieving and fetching states. These functions might be used by Tools that want to save arbitrary documents in the context of a particular learner and a particular Tool, for example a snapshot of the learner’s experience.


In relation to the internal implementation, the plugin was deployed using TinCanPHP [10], which provides a PHP library for implementing the Tin Can API. Moreover, this library includes examples that show how to use the Tin Can endpoint services available to a SCORM Cloud account.

4.2 LRS Proxy JavaScript Client


In order to use the LRS proxy, Tools must include a client that provides functions to send Tin Can statements, the capability to listen to user events and build statements, and the capability to parse the LTI fields in order to extract the LTI link parameters.
In relation to the internal implementation, the client was deployed using TinCanJS [10], which provides a JavaScript library for implementing the Tin Can API. In addition, the client was integrated into the EjsS library in order to catch HTML events and then create the Tin Can statements. The following code shows how the move events are caught when the moved verb is set in the LTI link parameters.

model.addLRSListeners = function(verbs) {
  ...
  if (verbs.indexOf('moved') > -1) {
    document.addEventListener('mousemove', model.sendMovedInteraction);
    document.addEventListener('touchmove', model.sendMovedInteraction);
  }
  ...
};

However, a user who creates a simulation or a remote laboratory with EjsS does not need to worry about how the Tin Can statements are sent or how the LTI fields are captured, since the EjsS library does this automatically, in a way that is transparent to the author.

5 Tests and Discussion

In order to validate our proposal, we have deployed a basic remote laboratory.


Moreover, we have tested our implementation using resources located in three
European universities, as shown in Fig. 3.
The main elements of our scenario are distributed as follows: the local learning system is located in the UNILabs Moodle server at UNED, the remote learning system in a different Moodle server at UMU, and the remote applications at EPFL. The implementation of this basic remote laboratory was an application that gets the load average of a computer over one minute and its number of processes in a runnable state, and allows turning CPU cores off/on as well as adding/removing processes.
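For context, the two quantities this application exposes can be read on a Linux host roughly as follows; this is an illustrative sketch, not the EjsS lab code.

// Illustrative sketch: read the 1-minute load average and the number of
// runnable processes, the quantities shown by the example remote lab.
const fs = require('fs');
const os = require('os');

function readSystemActivity() {
  const oneMinuteLoad = os.loadavg()[0];            // load averaged over one minute
  // /proc/loadavg looks like "0.52 0.58 0.59 1/389 12345";
  // the fourth field is "runnable/total" processes.
  const fields = fs.readFileSync('/proc/loadavg', 'utf8').trim().split(/\s+/);
  const runnable = parseInt(fields[3].split('/')[0], 10);
  return { load: oneMinuteLoad, runnableProcesses: runnable, cpuCores: os.cpus().length };
}

console.log(readSystemActivity());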


Fig. 3. Testing scenario with a basic online laboratory.

Figure 4 presents the user interface of the application, which was deployed with EjsS in order to show real-time graphics of the system activity and two form inputs with the number of CPU cores online and the number of processes running, as well as buttons to increase or decrease both input values.

Fig. 4. Graphics interface for the application.

In relation to this remote laboratory, note that the load average is a mea-
sure of system activity, calculated by the operating system and expressed as a
fractional number. In order to ensure adequate performance, the load average
should ideally be less than the number of CPU cores in the system. However,
learners can change the number of processes or CPU cores online to visualize
how the load average evolves.


The Moodle configuration for the LTI link is shown in Sect. 3.1, and an example of a Tin Can statement generated by this application is shown in Sect. 3.3. SCORM Cloud provides a simple interface for LRS endpoint configuration and a statement viewer. However, it could be replaced by another LRS, for example Learning Locker [5].
Other existing remote laboratories can be included in the architecture if the LRS proxy client is integrated in the application, i.e. if the application processes the custom LTI fields and sends the Tin Can API statements to the LRS proxy. Although this functionality is implemented in the EjsS library, it could be extracted and used independently.
Moreover, an LRS could be shared by several Learning Management Systems, which could then even share LA tools in the future. In this way, organizations can further their goals of sharing resources and reducing costs in the deployment of learning systems.
Finally, while our implementation uses Moodle, the same elements could be deployed with other LMSs, for example Open edX [2] or Graasp [3]. In fact, since our proposal is based on standard technologies, the integration between different LMSs is supported.

6 Conclusions and Future Directions

Current learning technologies permit the deployment and sharing of learning applications in different learning systems, but it is necessary to find a correct way to integrate all these technologies in order to avoid complex architectures or confusing authentication processes. In this sense, our proposal shows how to achieve a seamless integration of the main learning technologies in a Learning Management System.
As future work, we plan to consider the deployment of learning analytics tools in order to obtain online and offline feedback. Specifically, we are working on tools based on data mining and data analysis that will be available to provide teachers with just-in-time feedback.

References
1. Dodero, J.M., González-Conejero, E.J., Gutiérrez-Herrera, G., Peinado, S., Tocino,
J.T., Ruiz-Rube, I.: Trade-off between interoperability and data collection perfor-
mance when designing an architecture for learning analytics. Future Gener. Com-
put. Syst. 68, 31–37 (2017)
2. edX. Open edX: Open Courseware Development Platform. https://open.edx.org/.
Accessed 31 Oct 2016
3. EPFL React Group: Graasp project. http://graasp.eu/. Accessed 31 Oct 2016
4. Clemente, F.J.G., Esquembre, F.: EjsS: A JavaScript library and authoring tool
which makes computational-physics education simpler. In: Poster Presented at the
XXVI IUPAP Conference on Computational Physics (CCP), Boston, USA (2014)
5. HT2 Labs: Learning locker. https://learninglocker.net/. Accessed 31 Oct 2016


6. IMS Global Learning Consortium: IMS global learning information services best
practice and implementation guide. http://www.imsglobal.org/lis/. Accessed 31
Oct 2016
7. IMS Global Learning Consortium: Learning tools interoperability. https://www.
imsglobal.org/activity/learning-tools-interoperability. Accessed 31 Oct 2016
8. Learning Analytics Technologies for Education: KlassData. http://klassdata.com/.
Accessed 31 Oct 2016
9. Rustici Software: SCORM cloud. https://cloud.scorm.com/. Accessed 31 Oct 2016
10. Rustici Software: Tin Can API. https://tincanapi.com/. Accessed 31 Oct 2016
11. Sierra, A.J., Martı́n-Rodrı́guez, A., Ariza, T., Muñoz-Calle, J., Fernández-Jiménez,
J.J.: LTI for interoperating e-Assessment tools with LMS. In: Methodologies and
Intelligent Systems for Technology Enhanced Learning, 6th International Confer-
ence, pp. 173–181. Springer, Switzerland (2016)
12. UNED Labs: Moodle LRS proxy. https://github.com/UNEDLabs/moodle-local
lrsproxy. Accessed 31 Oct 2016

“Remote Wave Laboratory” with Embedded
Simulation – Real Environment
for Waves Mastering

Franz Schauer1,2(&), Michal Gerza1, Michal Krbecek1,


and Miroslava Ozvoldova1,2
1
Faculty of Applied Informatics, Tomas Bata University in Zlin,
760 05 Zlin, Czech Republic
fschauer@fai.utb.cz
2
Faculty of Education, University of Trnava, 918 43 Trnava, Slovak Republic

Abstract. The paper describes a new remote experiment in REMLABNET - “Remote Wave Laboratory” - built on the ISES (Internet School Experimental System). The remote experiment contributes to the understanding of the concepts of harmonic waves, their parameters (amplitude, frequency, period and phase velocity) and the dependence of the instantaneous phase on the time elapsed and the path covered. It also serves for the measurement and understanding of the concepts of phase-sensitive interference and the superposition of parallel/perpendicular waves.

Keywords: ISES · Remote Wave Laboratory · Embedded multiparameter simulation · Wave phenomena · Parameters of waves · Interference · Superposition

1 Introduction

Waves and their phase-sensitive interference and superposition are important phenomena that constitute a major problem in the teaching of waves and optics, due to the imagination required of students. As a consequence, the phenomena of wave interference and superposition are difficult to understand. The proposed “Remote Wave Laboratory” is aimed at real measurements of the phase and of the most frequent phenomena of phase-sensitive wave superposition on real physical instrumentation with multiple uses and applications. The embedded multiparameter simulation of the observed phenomena, introduced for the first time in our remote experiments, serves as a teaching tool for a better understanding of the real measurements.

2 Purpose or Goal

The whole system of the remote experiment (RE) “Remote Wave Laboratory” is conceived to enable demonstration of the basic concepts of wave phenomena, such as:
– The concept of the basic parameters of harmonic waves - the amplitude, the frequency, the period, the initial phase, the phase velocity and the wavelength,


– The concept of the instantaneous phase (corroborated with our electronic phase
laboratory) as the function of the elapsed time and the path covered by the wave (in
relation to two periodicities of waves - in time and in space),
– The concept of the phase sensitive interference and the superposition of
parallel/perpendicular waves.

3 Approach and Schematic Arrangement


3.1 Theory of Wave Laboratory
Let us suppose the signal detected by both acoustic detectors is $u_1 = a\sin(\omega t)$ and $u_2 = a\sin(\omega t + \Delta\varphi)$.
• Parallel interference of both signals gives in general

  $x_1 + x_2 = a\sin(\omega t) + a\sin(\omega t + \Delta\varphi) = A\sin(\omega t + \Delta\phi)$,   (1)

where the amplitude $A$ and the initial phase $\Delta\phi$ of the resulting signal are

  $A = a\sqrt{2\,(1 + \cos\Delta\varphi)}$,   (2)

  $\tan(\Delta\phi) = \dfrac{\sin\Delta\varphi}{1 + \cos\Delta\varphi}$.   (3)

• Perpendicular superposition of both signals, $x = a\sin(\omega t)$ and $y = a\sin(\omega t + \Delta\varphi + \pi/2)$, gives in general

  $y^2 - 2xy\cos(\Delta\varphi) + x^2 = a^2\sin^2(\Delta\varphi)$,   (4)

which reduces in the particular cases $\Delta\varphi = 0$ and $\pi/2$ to a straight line and a circle, respectively; other values of $\Delta\varphi$ give an ellipse.
• Phase measurements - it is then possible to determine the phase shift $\Delta\varphi$ of both signals (waves) from the position of the ellipse, as shown in Fig. 1:

  $\Delta\varphi = \arcsin\!\left(\dfrac{Y}{H}\right)$.   (5)
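For orientation, the limiting cases of Eq. (2) that are typically checked against the measured interference signal follow directly from the formula:

\[
\begin{aligned}
\Delta\varphi = 0:\quad & A = a\sqrt{2(1+1)} = 2a && \text{(fully constructive interference)}\\
\Delta\varphi = \pi/2:\quad & A = a\sqrt{2(1+0)} = a\sqrt{2} &&\\
\Delta\varphi = \pi:\quad & A = a\sqrt{2(1-1)} = 0 && \text{(fully destructive interference)}
\end{aligned}
\]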

3.2 Students’ Results in Computer Oriented Hands-On Laboratory


A computer-oriented hands-on laboratory exercise on acoustic waves has been part of the students’ laboratory for a considerable time. Its arrangement is similar to that in Fig. 3, with the sound detector positioned manually. The students’ results are shown in Fig. 2, where (a) depicts the dependence of the phase shift Δφ of both detector signals on the wave path difference Δx for the wavelength λ = 35.3 cm, and (b)–(d) show the superpositions of the two waves’ signals for phase shifts Δφ = 0 rad (b), Δφ = π/2 rad (c) and Δφ = π rad (d). The upper panel shows the signals of both the

Fig. 1. Scheme for determining the phase shift Δφ of two waves using Eqs. (4) and (5)

Fig. 2. Examples of students’ work on the hands-on experiment: (a) dependence of the phase shift Δφ on the wave path difference Δx, (b)–(d) superposition of two waves for phase shifts Δφ = 0 rad (b), Δφ = π/2 rad (c) and Δφ = π rad (d), all for the wavelength λ = 35.3 cm; the upper panel shows the signals of both waves, the middle panel the phase-sensitive superposition (Lissajous figures, for perpendicular waves) and the bottom panel the phase-sensitive interference signal (for parallel waves)


[Fig. 3 component labels: microphones, ISES display, loudspeaker positioning, ISES microphones, AC generator, ISES plate and ISES relay modules, motor and positioning board, power supply]

Fig. 3. Schematic arrangement (upper panel) and the real RE “Remote Wave Laboratory” (lower panel) with the loudspeaker as the acoustic wave source, two acoustic detectors 1 and 2, and the driving motor for moving detector 2, producing the phase shift Δφ of both signals, corresponding to the detectors’ distance Δx

sound waves, the middle panel the phase-sensitive superposition - Lissajous figures (for
perpendicular waves) and the bottom panel shows the phase-sensitive interference
signal (for parallel waves).


3.3 Arrangement of Remote “Remote Wave Laboratory”


The arrangement of the remote experiment (RE) “Remote Wave Laboratory” is shown in Fig. 3, both as a schematic and as the real experimental arrangement. The acoustic wave source generates a planar wave, which is detected by two detectors; one of them is movable by the motor drive in a controlled way, producing the phase shift Δφ of both coherent signals, corresponding to the detectors’ distance Δx. Both signals are phase-sensitively added/superimposed in parallel/perpendicular directions.
The ISES USB module with the controlling PC runs the remote experiment and serves both the RE and the embedded simulation (ES) control. The system is built on the Internet School Experimental System (ISES) components [1]. The whole arrangement is placed on an optical bench and consists of the loudspeaker as the wave source and two miniature microphones as the signal detectors, one of which is movable by the step drive. Both signals are displayed, together with their phase-sensitive interference signal (for parallel, linearly polarized progressive waves) and the phase-sensitive superposition Lissajous figures (for perpendicular, linearly polarized progressive waves), with the corresponding data outputs for data processing (see Fig. 2). The controlling .psc program and the web page were built using the ISES environment Easy Remote ISES (ER ISES) for compiling the RE control programs [1].

Fig. 4. Example view of the RE web page “Remote Wave Laboratory”: measured data (left) and simulation of the observed phenomenon (right); from the top graph downwards: both signals, perpendicular superposition, interference; the position of the movable detector is visible in the live stream


3.4 Embedded Simulation of the Wave Laboratory


Figure 4 shows the web page of the RE “Remote Wave Laboratory” (from Fig. 3) with the measured data output (left) and the output of the embedded simulation (right).
As part of the solution for embedded simulations in our ISES REs we used the mathematical solver built into the RE Measureserver and its .psc file. The solver handles a wide range of arithmetic operations and solves differential equations [4]. An example of its use for the response of an RLC circuit to a voltage perturbation is shown in Fig. 5 [2]; here it was used for simple plotting of the calculated quantities according to Eqs. (1)–(4).
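To illustrate what such a plotting step involves, a sketch like the following evaluates the quantities of Eqs. (1)–(4) for a given phase shift so that they can be compared with the measured signals; it is an illustrative example, not the Measureserver solver code.

// Illustrative sketch: evaluate Eqs. (1)-(4) for amplitude a, angular frequency
// omega and phase shift dphi, producing samples for plotting.
function simulateWaves(a, omega, dphi, samples, dt) {
  const A    = a * Math.sqrt(2 * (1 + Math.cos(dphi)));        // Eq. (2)
  const dPhi = Math.atan2(Math.sin(dphi), 1 + Math.cos(dphi)); // Eq. (3)
  const points = [];
  for (let i = 0; i < samples; i++) {
    const t = i * dt;
    points.push({
      t: t,
      signal1: a * Math.sin(omega * t),
      signal2: a * Math.sin(omega * t + dphi),
      sum: A * Math.sin(omega * t + dPhi),                     // Eq. (1): parallel interference
      lissajousX: a * Math.sin(omega * t),                     // Eq. (4): perpendicular superposition
      lissajousY: a * Math.sin(omega * t + dphi + Math.PI / 2)
    });
  }
  return points;
}

// Example: 440 Hz signals with a phase shift of pi/2, sampled every 0.1 ms.
const data = simulateWaves(1.0, 2 * Math.PI * 440, Math.PI / 2, 1000, 1e-4);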
The Measureserver unit is a significant software part of the ISES RE concept. It is the processing and communicating server located between the physical hardware and the connected clients. The Measureserver core is designed as an advanced finite-state machine to set up and process the logical instructions solving the prescribed activities. Its functioning is based on the control program that comes from the .psc file loaded into the Measureserver before its startup.

Fig. 5. The general mathematical unit for ISES remote experiments, enabling both arithmetic operations and the solution of differential equations


When the client starts the RE, the Measureserver begins the communication with the ISES hardware. The RE is then ready to perform all the required measurements according to the web page instructions given by the client. The Measureserver then obtains the experimental data from the ISES modules (meters, sensors and probes) and transports them to the client’s web page for analysis [3].
The embedded ES works in a similar way, replacing the ISES module with the mathematical solver and providing the data for graphical comparison with the measured data. A difficult problem was the synchronization of the measured and simulated data into one time-dependent graphical representation, in order to study the role of the model parameters on the resulting signals.

4 Conclusions

The remote environment “Remote Wave Laboratory” provides the following knowledge about waves:
– To measure the phase of the wave as a function of the distance covered (with respect to the reference signal) and its linearity,
– To examine the parameters of the wave - the phase velocity and the wavelength in a medium, the amplitude, the frequency and the period of the wave,
– To examine the concept of the coherence of two acoustic waves,
– To show the phase-sensitive interference of two parallel waves and find the conditions for extremes,
– To show the phase-sensitive superposition of two perpendicular waves and to find the phase shift and amplitude relative to the reference wave,
– To find the integer quotient of the frequencies of unknown waves.

Acknowledgement. The support of the project of the Swiss National Science Foundation
(SNSF) - “SCOPES”, No. IZ74Z0_160454 is highly appreciated. The support of the Internal
Agency Grant of the Tomas Bata University in Zlin No. IGA/FAI/2016 for PhD students is
acknowledged.


References
1. Ozvoldova, M., Schauer, F.: Remote laboratories in research-based education of real world.
In: Frankfurt, F.S. (ed.), p. 157. Peter Lang International Academic Publisher (2015) ISBN
978-80-224-1435-7
2. Gerza, M., Schauer, F., Dostal, P.: Embedded simulations in real remote experiments for ISES
e-Laboratory. In: EUROSIM 2016, Oulu, Finland, pp. 653–658. ISBN 978-1-5090-4119-0
3. Gerza, M., Schauer, F.: Intelligent processing of experimental data in ISES remote laboratory. Int. J. Online Eng., 58–63 (2016). ISSN 1861-2121. Austria
4. Inspiration of Prof. F. Esquembre in Solver Compiling is Appreciated

Remote Laboratories: For Real Time Access to Experiment
Setups with Online Session Booking, Utilizing a Database
and Online Interface with Live Streaming

B. Kalyan Ram1 ✉ , S. Arun Kumar1, S. Prathap1, B. Mahesh2,



and B. Mallikarjuna Sarma2


1
Electrono Solutions Pvt. Ltd., #513, Vinayaka Layout, Immadihalli Road, Whitefield,
Bangalore 560066, India
{kalyan,arun,prathap}@electronosolutions.com
2
Independent Consultants, Bangalore, India
maheshryu1@yahoo.com, sarma.mallikarjuna@gmail.com

Abstract. This paper discusses the physical implementation of lab experiments that are designed to be accessed from any web browser using the clientless remote desktop gateway Apache Guacamole with the support of the Remote Desktop Protocol. The system also facilitates live streaming of the experiments using the Axis CGI API, online slot booking for students to book their respective sessions, and an Apache Cassandra database for storing user details.
Here, we address all aspects related to the system architecture and infrastructure needed to establish a real-time remote access system for a given machine (in this case electric machines, which could otherwise be extended to any machine). This is being built to evaluate the feasibility of implementing a complete machine health monitoring system with remote monitoring and control capability, though the current implementation is aimed at students being able to perform the experiments related to the machines lab.

Keywords: Remote labs · Engineering laboratory experiments · Apache Guacamole · Remote Desktop Protocol · Live streaming · Axis CGI API · Online slot booking · Apache Cassandra database

1 Introduction

Laboratory experiments are an integral part of engineering education. The main focus is to gain access to these lab experiments over the Internet using various integration tools. A remote laboratory (also known as an online laboratory or remote workbench) is the use of telecommunications to remotely conduct real (as opposed to virtual) experiments at the physical location of the operating technology, enabling the students to utilize this technology from a separate geographical location. Supported by resources based on new information and communication technologies, it is now possible to remotely control a wide variety of real laboratories.


2 Architecture of Guacamole

In a cloud computing environment, there are various important issues, including standards, virtualization, resource management, information security, and so on. Among these issues, desktop computing in a virtualized environment has emerged as one of the most important ones in the past few years. Currently, users no longer need powerful, more-than-required hardware but instead share a powerful remote machine using a lightweight thin client. A thin client is a stateless desktop terminal that has no hard drive. All features typically found on a desktop PC, including applications, sensitive data, memory, etc., are stored back on the server when using a thin client. These thin clients need not be totally different hardware but can also take the form of PCs. Thin clients, software services and backend hardware make up thin client computing, a remote desktop computing model [1]. Guacamole is not a self-contained web application and is made up of many parts. The web application is intended to be simple and minimal, with the majority of the grunt work performed by lower-level components. Users connect to a Guacamole server with their web browser. The Guacamole client, written in JavaScript, is served to users by a web server within the Guacamole server. Once loaded, this client connects back to the server over HTTP using the Guacamole protocol. The web application deployed to the Guacamole server reads the Guacamole protocol and forwards it to guacd, the native Guacamole proxy. This proxy interprets the contents of the Guacamole protocol, connecting to any number of remote desktop servers on behalf of the user [2] (Fig. 1).

Fig. 1. Guacamole architecture

2.1 Guacamole Protocol


The web application does not understand any remote desktop protocol at all. It does not
contain support for VNC or RDP or any other protocol supported by the Guacamole


stack. It actually only understands the Guacamole protocol, which is a protocol for
remote display rendering and event transport. While a protocol with those properties
would naturally have the same abilities as a remote desktop protocol, the design prin‐
ciples behind a remote desktop protocol and the Guacamole protocol are different: the
Guacamole protocol is not intended to implement the features of a specific desktop
environment. As a remote display and interaction protocol, Guacamole implements a
superset of existing remote desktop protocols [1].
Adding support for a particular remote desktop protocol (like RDP) to Guacamole
thus involves writing a middle layer which “translates” between the remote desktop
protocol and the Guacamole protocol. Implementing such a translation is no different
than implementing any native client, except that this particular implementation renders
to a remote display rather than a local one.

2.2 GUACD
• guacd is the heart of Guacamole which dynamically loads support for remote desktop
protocols (called “client plug-ins”) and connects them to remote desktops based on
instructions received from the web application.
• guacd is a daemon process which is installed along with Guacamole and runs in the
background, listening for TCP connections from the web application. guacd also does
not understand any specific remote desktop protocol, but rather implements just
enough of the Guacamole protocol to determine which protocol support needs to be
loaded and what arguments must be passed to it. Once a client plug-in is loaded, it
runs independently of guacd and has full control of the communication between itself
and the web application until the client plug-in terminates (Fig. 2).

Fig. 2. Guacamole server


2.3 Remote Desktop Gateway


A remote desktop gateway provides access to multiple operating environments using an HTML5-capable browser without the use of any plug-ins. As described above, the browser loads the JavaScript Guacamole client, which connects back over HTTP using the Guacamole protocol; the web application forwards this protocol to guacd, which in turn connects to any number of remote desktop servers on behalf of the user. Remote Desktop Protocol (RDP) provides remote login and desktop control capabilities that enable a client to completely control and access a remote server. The protocol is implemented by Microsoft Corporation based on the ITU-T T.120 family of protocols. The major advantage distinguishing RDP from other remote desktop schemes, such as the frame-buffer approach, is that the protocol preferably sends graphics device interface (GDI) information from the server instead of full bitmap images [3].
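To make the client side of this architecture concrete, the following minimal sketch shows how a web page could use the guacamole-common-js library to attach the remote display and forward mouse input. It follows the pattern shown in the Guacamole manual; the servlet path "tunnel" and the element id "display" are assumptions of this example, not details of the lab described here.

// Minimal sketch based on the guacamole-common-js API: create a client over an
// HTTP tunnel, attach its display to the page and forward mouse events.
var tunnel = new Guacamole.HTTPTunnel("tunnel");      // servlet path is an assumption
var client = new Guacamole.Client(tunnel);

// Add the remote display element to the page (element id "display" is assumed).
document.getElementById("display").appendChild(client.getDisplay().getElement());

// Forward local mouse activity to the remote desktop.
var mouse = new Guacamole.Mouse(client.getDisplay().getElement());
mouse.onmousedown = mouse.onmouseup = mouse.onmousemove = function (state) {
  client.sendMouseState(state);
};

// Disconnect cleanly when the page is closed, then connect.
window.onunload = function () { client.disconnect(); };
client.connect();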

3 Remote Labs Implementation

The implementation of the remote lab involves designing a hardware infrastructure that supports the remote access feature through the technology infrastructure described here (Fig. 3).

Fig. 3. Remote labs architectural block diagram

This specific remote laboratory setup is made up of motor–generator setups, a PLC trainer setup and a process control trainer setup.


These setups are designed to be accessed remotely by an authorized user through a browser interface. Currently, the system has been tested with different browsers, namely Google Chrome, Microsoft Internet Explorer and Mozilla Firefox, and has been found to be compatible with these browsers. The architecture is designed to support the most commonly used browsers.

4 Cassandra Database

Apache Cassandra is a highly scalable, high-performance distributed database designed


to handle large amounts of data across many commodity servers, providing high avail‐
ability with no single point of failure. It is a type of NoSQL database.
Cassandra has become popular because of its outstanding technical features. Given
below are some of the features of Cassandra:
– Elastic scalability - Cassandra is highly scalable; it allows adding more hardware to accommodate more customers and more data as required.
– Always on architecture - Cassandra has no single point of failure and it is contin‐
uously available for business-critical applications that cannot afford a failure.
– Fast linear-scale performance - Cassandra is linearly scalable, i.e., it increases your
throughput as you increase the number of nodes in the cluster. Therefore it maintains
a quick response time.
– Flexible data storage - Cassandra accommodates all possible data formats including:
structured, semi-structured, and unstructured. It can dynamically accommodate
changes to your data structures according to your need.
– Easy data distribution - Cassandra provides the flexibility to distribute data where
you need by replicating data across multiple data centers.
– Transaction support - Cassandra supports properties like Atomicity, Consistency,
Isolation, and Durability (ACID).
– Fast writes - Cassandra was designed to run on cheap commodity hardware. It
performs blazingly fast writes and can store hundreds of terabytes of data, without
sacrificing the read efficiency.
The design goal of Cassandra is to handle big data workloads across multiple nodes without any single point of failure [4]. Cassandra has a peer-to-peer distributed architecture, and data is distributed among all the nodes in a cluster.
– All the nodes in a cluster play the same role. Each node is independent and at the
same time interconnected to other nodes.
– Each node in a cluster can accept read and write requests, regardless of where the
data is actually located in the cluster.
– When a node goes down, read/write requests can be served from other nodes in the
network.
In Cassandra, one or more of the nodes in a cluster act as replicas for a given piece
of data. If it is detected that some of the nodes responded with an out-of-date value,
Cassandra will return the most recent value to the client. After returning the most recent value, Cassandra performs a read repair in the background to update the stale values.
The following figure shows a schematic view of how Cassandra uses data replication
among the nodes in a cluster to ensure no single point of failure (Fig. 4).

Fig. 4. Structure of Cassandra database
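As a brief illustration of how a lab application might talk to such a cluster, the following sketch uses the Python cassandra-driver; the contact points, keyspace and table are assumptions made only for this example.

from cassandra.cluster import Cluster
from datetime import datetime

# Connect to several nodes of the cluster; any node can serve a request.
cluster = Cluster(["10.0.0.21", "10.0.0.22", "10.0.0.23"])
session = cluster.connect()

# A keyspace replicated on three nodes, so that no node is a single point of failure.
session.execute(
    "CREATE KEYSPACE IF NOT EXISTS remotelab "
    "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}")
session.execute(
    "CREATE TABLE IF NOT EXISTS remotelab.sessions ("
    "user_id text, start_time timestamp, end_time timestamp, "
    "PRIMARY KEY (user_id, start_time))")

# Record a lab session booked by a user.
session.execute(
    "INSERT INTO remotelab.sessions (user_id, start_time, end_time) VALUES (%s, %s, %s)",
    ("student42", datetime(2017, 1, 15, 10, 0), datetime(2017, 1, 15, 11, 0)))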

5 Single Sign-On Application

The first time that a user seeks access to an application, the Login Server:
– Authenticates the user by means of user name and password
– Passes the client’s identity to the various applications
– Marks the client being authenticated with an encrypted login cookie
In subsequent user logins, this login cookie provides the Login Server with the user’s
identity, and indicates that authentication has already been performed. If there is no login
cookie, then the Login Server presents the user with a login challenge. To guard against
sniffing, the Login Server can send the login cookie to the client browser over an
encrypted SSL channel. The login cookie expires with the session, either at the end of
a time interval specified by the administrator, or when the user exits the browser. It is
never written to disk. A partner application can expire its session through its own explicit
logout.
1. Single Sign-On Application Programming Interface (API)
(a) The Single Sign-On API enables:
(i) Applications to communicate with the Login Server and to accept a user’s
identity as validated by the Login Server
(ii) Administrators to manage the application’s association to the Login Server
(b) There are two kinds of applications to which Single Sign-On provides access:
(i) Partner Applications
(ii) External Applications


2. Partner Applications
Partner applications are integrated with the Login Server. They contain a Single Sign-
On API that enables them to accept a user’s identity as validated by the Login Server.
3. External Applications
External applications are web-based applications that retain their authentication
logic. They do not delegate authentication to the Login Server and, as such, require a
user name and password to provide access. Currently, these applications are limited to
those which employ an HTML form for accepting the user name and password. The user
name may be different from the SSO user name, and the Login Server provides the
necessary mapping (Fig. 5).

Fig. 5. Single Sign-On

6 Port Forwarding

In computer networking, port forwarding or port mapping is an application of network address translation (NAT) that redirects a communication request from
one address and port number combination to another while the packets are traversing a
network gateway, such as a router or firewall. This technique is most commonly used
to make services on a host residing on a protected or masqueraded (internal) network
available to hosts on the opposite side of the gateway (external network), by remapping
the destination IP address and port number of the communication to an internal host.
Port forwarding allows remote computers (for example, computers on the Internet) to
connect to a specific computer or service within a private local-area network (LAN). In
a typical residential network, nodes obtain Internet access through a DSL or cable modem connected to a router or network address translator (NAT/NAPT). Hosts on the private network are connected to an Ethernet switch or communicate via a wireless LAN.
The NAT device’s external interface is configured with a public IP address. The
computers behind the router, on the other hand, are invisible to hosts on the Internet as
they each communicate only with a private IP address [6]. When configuring port
forwarding, the network administrator sets aside one port number on the gateway for
the exclusive use of communicating with a service in the private network, located on a
specific host. External hosts must know this port number and the address of the gateway
to communicate with the network-internal service. Often, the port numbers of well-
known Internet services, such as port number 80 for web services (HTTP), are used in
port forwarding, so that common Internet services may be implemented on hosts within
private networks.
Typical applications include the following:
– Running a public HTTP server within a private LAN
– Permitting Secure Shell access to a host on the private LAN from the Internet
– Permitting FTP access to a host on a private LAN from the Internet
– Running a publicly available game server within a private LAN
Usually only one of the private hosts can use a specific forwarded port at one time,
but configuration is sometimes possible to differentiate access by the originating host’s
source address.
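Port forwarding is normally configured on the gateway itself (router or firewall), but the idea can be sketched in user space. The following Python sketch relays connections arriving on one gateway port to a host inside the private network; the addresses and ports are assumptions chosen only for illustration.

import socket
import threading

LISTEN_PORT = 8022             # port exposed on the gateway (assumed)
TARGET = ("192.168.1.10", 22)  # internal host and service (assumed)

def pipe(src, dst):
    # Copy bytes from one socket to the other until the connection closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def serve():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", LISTEN_PORT))
    listener.listen(5)
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(TARGET)
        # Relay traffic in both directions between the external client and the internal host.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    serve()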

7 A Record

An A record maps a domain name to the IP address (IPv4) of the computer hosting the
domain. Simply put, an A record is used to find the IP address of a computer connected
to the internet from a name. The A in A record stands for Address. Whenever you visit
a web site, send an email, connect to Twitter or Facebook or do almost anything on the
Internet, the address you enter is a series of words connected with dots. For example, to access a website you enter a URL, for instance www.google.com. At the name server
there is an A record that points to the IP address 8.8.8.8. This means that a request from
your browser to www.google.com is directed to the server with IP address 8.8.8.8. A
Records are the simplest type of DNS records, yet one of the primary records used in
DNS servers [7]. You can actually do quite a bit more with A records, including using
multiple A records for the same domain in order to provide redundancy. Additionally,
multiple names could point to the same address, in which case each would have its own A record pointing to that same IP address.
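The lookup itself can be observed from any client; for instance, a minimal Python call that asks the resolver for the IPv4 address stored in a domain's A record:

import socket

# Resolve a host name through DNS; the returned IPv4 address comes from the A record.
print(socket.gethostbyname("www.google.com"))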

8 Video Streaming API

The HTTP-based video interface provides the functionality for requesting single and
multipart images and for getting and setting internal parameter values. The image and
CGI requests are handled by the built-in web server. The mjpg/video.cgi is used to
request a Motion JPEG video stream with specified arguments. The arguments can be specified explicitly, or a predefined stream profile can be used. Image settings saved in
a stream profile can be overridden by specifying new settings after the stream profile
argument [8].
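As an illustration, the stream can be requested over HTTP from any client. In the following Python sketch the camera address, credentials and argument names are assumptions used only to show the shape of the request.

import requests

# Request a Motion JPEG stream from the camera's built-in web server.
url = "http://192.168.1.20/mjpg/video.cgi"
params = {"resolution": "640x480", "fps": 15}   # assumed argument names

with requests.get(url, params=params, auth=("viewer", "password"), stream=True) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=4096):
        # Each chunk carries part of the multipart JPEG stream to be parsed and displayed.
        pass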

9 Tomcat Web Application Deployment

Deployment is the term used for the process of installing a web application (either a 3rd
party WAR or your own custom web application) into the Tomcat server. Web appli‐
cation deployment may be accomplished in a number of ways within the Tomcat server.
– Statically: the web application is set up before Tomcat is started
– Dynamically: by directly manipulating already deployed web applications (relying on the auto-deployment feature) or remotely by using the Tomcat Manager web application
The Tomcat Manager is a web application that can be used interactively (via HTML
GUI) or programmatically (via URL-based API) to deploy and manage web applica‐
tions. There are a number of ways to perform deployment that rely on the Manager web
application. Apache Tomcat provides tasks for the Apache Ant build tool. The Apache Tomcat
Maven Plug-in project provides integration with Apache Maven. The desired environment should define a JAVA_HOME value pointing to your Java installation. Additionally, you should ensure that the Java javac compiler command can be run from the command shell that your operating system provides.
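For instance, a WAR file can be deployed programmatically through the Manager's URL-based interface. In this hedged Python sketch the manager URL, credentials and context path are placeholders, and a user with the manager-script role is assumed to be configured.

import requests

# Deploy a WAR file through the Tomcat Manager text interface (URL-based API).
manager_url = "http://localhost:8080/manager/text/deploy"

with open("remotelab.war", "rb") as war:
    response = requests.put(manager_url,
                            params={"path": "/remotelab", "update": "true"},
                            data=war,
                            auth=("admin", "secret"))
print(response.text)   # e.g. "OK - Deployed application at context path /remotelab"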

10 Network Architecture

The network architecture consists of the ISP connection, firewall, load balancer, switch, server system and thin clients.

10.1 Firewall

A firewall is a network security system designed to prevent unauthorized access to or from a private network. Firewalls can be implemented in hardware, in software, or in a combination of both. Network firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.

10.2 Load-Balancer
A load balancer is a device that acts as a reverse proxy and distributes network or appli‐
cation traffic across a number of servers. Load balancers are used to increase capacity
(concurrent users) and reliability of applications. They improve the overall performance of applications by decreasing the burden on servers associated with managing and maintaining application and network sessions, as well as by performing application-
specific tasks. Load balancers are generally grouped into two categories: Layer 4 and
Layer 7. Layer 4 load balancers act upon data found in network and transport layer
protocols (IP, TCP, FTP, UDP). Layer 7 load balancers distribute requests based upon
data found in application layer protocols such as HTTP. Requests are received by both
types of load balancers and they are distributed to a particular server based on a config‐
ured algorithm (Fig. 6).

Fig. 6. Network architecture

Some industry standard algorithms are:


– Round robin
– Weighted round robin
– Least connections
– Least response time
Layer 7 load balancers can further distribute requests based on application specific
data such as HTTP headers, cookies, or data within the application message itself, such
as the value of a specific parameter. Load balancers ensure reliability and availability
by monitoring the “health” of applications and only sending requests to servers and
applications that can respond in a timely manner.
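A round-robin policy, the simplest of these algorithms, can be sketched in a few lines of Python; the server addresses are assumptions used only to illustrate the rotation.

import itertools

# Rotate incoming requests over the pool of remote lab servers.
servers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
next_server = itertools.cycle(servers)

for request_id in range(6):
    print("request %d -> server %s" % (request_id, next(next_server)))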


10.3 Managed Switches


Switches are crucial network devices, so being able to manipulate them is sometimes
important in dealing with information flow. Traffic may need to be controlled so that
information is transmitted according to its level of importance, urgency and any opera‐
tional requirements. This is the key reason for including managed switches alongside
unmanaged switches. Whereas an unmanaged switch is sufficient to deal with normal
networking, where traffic is managed solely by servers, a managed switch becomes
useful when it becomes important to filter traffic more precisely.

10.4 Remote Lab Server

The server machine runs Windows Server 2012 and makes use of the Remote Desktop Service to configure and host the software developed to control the hardware systems from the server machine.

10.5 Thin Clients

A thin client is a lightweight computer that is purpose-built for remote access to a server
(typically cloud or desktop virtualization environments). It depends heavily on another
computer (its server) to fulfill its computational roles. The specific roles assumed by the
server may vary, from hosting a shared set of virtualized applications, a shared desktop
stack or virtual desktop, to data processing and file storage on the client’s or user’s behalf.
This is different from a desktop PC (fat client), which is a computer designed to take
on these roles by itself.
Thin clients occur as components of a broader computing infrastructure, where many
clients share their computations with a server or server farm. The server-side infrastruc‐
ture makes use of cloud computing software such as application virtualization, hosted shared desktop (HSD) or desktop virtualization (VDI). This combination forms what is known today as a cloud-based system where desktop resources are centralized into one
or more data centers. The benefits of centralization are hardware resource optimization,
reduced software maintenance, and improved security.

10.6 Heartbeat/Health Information System with SMS Alert


The status of the systems is unknown to the system administrator unless he or she monitors them physically. We therefore developed a service that sends packets to each system and receives an acknowledgement that the packet was received, similar to a two-way handshake algorithm. If these packets cannot be sent from any of the systems, or none of the systems receives them, an SMS alert is sent to the system administrator's phone.
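A hedged Python sketch of such a heartbeat service is given below; the host list and the administrator's number are placeholders, and send_sms() stands in for whichever SMS gateway is available in the deployment.

import subprocess

HOSTS = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]   # lab systems (assumed)
ADMIN_PHONE = "+910000000000"                               # placeholder number

def is_alive(host):
    # Send a single ICMP echo request with a two-second timeout (Linux ping options).
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def send_sms(number, message):
    # Placeholder: a real deployment would call its SMS gateway here.
    print("SMS to %s: %s" % (number, message))

def check_all():
    for host in HOSTS:
        if not is_alive(host):
            send_sms(ADMIN_PHONE, "Remote lab system %s is not responding" % host)

if __name__ == "__main__":
    check_all()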


11 User Statistics of Remote Labs

Statistics is the study of numerical information, which is called data. People use statistics
as tools to understand information. Learning to understand statistics helps a person react
intelligently to statistical claims. Statistics are used in the fields of business, math,
economics, accounting, banking, government, astronomy, and the natural and social
sciences. Overall session statistics are presented in the admin portal, where the admin has the privilege to check the overall user sessions and how many sessions are booked and cancelled. The scheduler helps users book a slot at the required time, and the lab can then be accessed during the particular time slot booked by the user (Fig. 7).

Fig. 7. Scheduler

Recently, many educational institutions have acknowledged the importance of making laboratories available on-line, allowing their students to run experiments from
a remote computer. While usage of virtual laboratories scales well, remote experiments,
based on scarce and expensive rigs, i.e. physical resources, do not and typically can only
be used by one person or cooperating group at a time. It is therefore necessary to admin‐
ister the access to rigs, where we distinguish between three different roles: content
providers, teachers and students [10]. A scheduler is a software product that allows an
enterprise to schedule and track computer batch tasks. These units of work include
running a security program or updating software [11]. A scheduler starts and handles
jobs automatically by manipulating a prepared job control language algorithm or through
communication with a human user.
Based on the scheduler designed, the time at which the user starts to access the lab and the duration for which the session is used are recorded in the Cassandra database. Using the scheduler, we are also able to track the overall sessions booked, and from these data statistical graphs are plotted as shown below (Figs. 8 and 9).


Fig. 8. Session portal

Fig. 9. Session statistics

The graphical representations in the admin portal consist of:

– new sessions per week
– average sessions per day
– cancelled sessions this week

System-based usage statistics are also recorded using the scheduler, i.e., the number of times a system has been accessed between a particular start date and time and a particular end date and time, as can be seen in the image below. These statistics are very useful for user monitoring, and the system usage can also be recorded.


12 Resource Allocation and Utilization

In computing, resource allocation is necessary for any application to be run on the system. When the user opens any program, it is counted as a process and therefore requires the computer to allocate certain resources for it to be able to run. Such resources may include access to a section of the computer's memory, data in a device interface buffer, one or more files, or the required amount of processing power.
a single processor can only perform one process at a time, regardless of the amount of
programs loaded by the user (or initiated on start-up). Computers using single processors
appear to be running multiple programs at once because the processor quickly alternates
between programs, processing what is needed in very small amounts of time. This
process is known as multitasking or time slicing. The time allocation is automatic,
however higher or lower priority may be given to certain processes, essentially giving
high priority programs more/bigger slices of the processor’s time. On a computer
with multiple processors different processes can be allocated to different processors so
that the computer can truly multitask. System resources should be allocated in such a way that the conflicts described above, which might affect the performance of the software, do not occur. Proper maintenance of each of the systems must also be ensured to provide appropriate uptime for this system (Fig. 10).

Fig. 10. Resource utilization

13 Conclusion

Remote labs are the natural choice for accessing physical laboratories online to enhance
the accessibility of both Software and Hardware infrastructure in Engineering colleges
[12]. In the context of India, the data shows that the utilization of Laboratory resources
is very low and the accessibility of laboratory resources to the students is sparse [13].
The topics presented in this paper address the technological architecture and the tools needed to implement an effective remote lab infrastructure from the perspective of an OS-independent, browser-independent and application-independent solution.


References

1. Wang, S.-T., Chang, H.-Y.: Development of web-based remote desktop to provide adaptive
user interfaces in cloud platform. World Acad. Sci. Eng. Technol. Int. J. Comput. Electr.
Autom. Control Inf. Eng. 8(8), 1572–1577 (2014)
2. http://guacamole.incubator.apache.org
3. Tsai, C.-Y., Huang, W.-L.: Design and performance modeling of an efficient remote
collaboration system. Int. J. Grid Distrib. Comput. 8(4) (2015)
4. Cassandra. https://www.tutorialspoint.com/cassandra/cassandra_introduction.htm
5. SSO. https://docs.oracle.com/cd/A97337_01/ias102_otn/portal.12/a86782/concepts.htm
6. Port Forwarding. https://en.wikipedia.org/wiki/Port_forwarding
7. Introduction to A-record. https://support.dnsimple.com/articles/a-record/
8. VideoAPI. http://www.axis.com/files/manuals/vapix_video_streaming5237_en_1307.pdf
9. Apache Tomcat. http://tomcat.apache.org/
10. Gallardo, A., Richter, T., Debicki, P., et al.: A rig booking system for on-line laboratories.
In: IEEE EDUCON Education Engineering– Learning Environments and Ecosystems in
Engineering Education Session T1A, p. 6 (2011)
11. Scheduler. https://www.techopedia.com/definition/25078/scheduler
12. Kalyan Ram, B., Arun Kumar, S., Mallikarjuna Sarma, B., Bhaskar, M., Chetan Kulkarni,
S.: Remote software laboratories: facilitating access to engineering softwares online. In: 13th
International Conference on Remote Engineering and Virtual Instrumentation (REV), p. 394
(2016)
13. Kalyan Ram, B., Hegde, S.R., Pruthvi, P., Hiremath, P.S., Jackson, D., Arun Kumar, S.: A
distinctive approach to enhance the utility of laboratories in Indian academia. In: 12th
International Conference on Remote Engineering and Virtual Instrumentation (REV), p. 235
(2015)

Web Experimentation on Virtual and Remote
Laboratories

Daniel Galan1(B) , Ruben Heradio2 , Luis de la Torre1 , Sebastián Dormido1 ,


and Francisco Esquembre3
1
Departamento de Informática y Automática, Facultad de Informática,
UNED, Madrid, Spain
{dgalan,ldelatorre,sdormido}@dia.uned.es
2
de Ingeniería de Software y Sistemas Informáticos, Computer Science School,
UNED, Madrid, Spain
rheradio@issi.uned.es
3
Department of Computer Engineering and Technology,
University of Murcia, Murcia, Spain
fem@um.es

Abstract. Laboratory experimentation is essential in any educational field. Existing software allows two options for performing experiments:
(1) Interacting with the graphic user interface (it is intuitive and close
to reality, but it has certain constraints that cannot be easily solved), or
(2) scripting algorithms (it allows more complex instructions, however,
users have to handle a programming language). This paper presents the
definition and implementation of a generic experimentation language for
conducting automatic experiments on existing online laboratories. The
main objective is to use an existing online lab, created independently,
as a tool in which users can perform tailored experiments. To achieve
this, the authors present the Experiment Application. Not only does it unify the two conceptions of performing experiments; it also allows the user to define algorithms for interactive laboratories in a simple way, without the disadvantages of traditional programming languages. It is composed of Blockly, to define and design the experiments, and Google Charts, to analyze and visualize the experiment results. This tool offers benefits to
students, teachers and, even, lab designers. For the moment, it can be
used with any existing lab or simulation created with the authoring tool
Easy Java(script) Simulations. Since there are hundreds of labs created
with this tool, the potential applicability of the tool is considerable. To
illustrate its utility a very well-known system is used: the water tank
system.

Keywords: Experimentation language · Experiments · Virtual labora-


tories · Remote laboratories · Easy java(script) Simulations · Javascript ·
Blockly

1 Introduction
Students need to understand the theoretical and practical fundamental concepts
in order to achieve a quality education in any field; hence, experimentation in traditional laboratories is essential [5]. The high costs associated with equipment,
space, and maintenance staff, impose certain constraints on resources. Virtual
and Remote laboratories (VRLs) try to overcome these limitations [4,14]. Different empirical studies [3,19] have shown that both VRLs and traditional laboratories can obtain similar learning outcomes. Furthermore, VRLs provide inter-
esting additional advantages: they support experimentation about unobservable
phenomena and avoid health risks, such as radioactivity, chemical reactions, or
electricity [6,9].
A laboratory is meant to offer experimentation possibilities. Experimentation
can be defined as the process of extracting data from a system by exerting it, not
only through its inputs, but also through the model parameters. Traditionally,
users of VRLs were expected to perform experiments by scripting algorithms in
a certain simulation language or by interacting with the controls and buttons of
the application's graphical user interface (GUI).
Most of the modern modeling or simulation tools already include script-
ing facilities that allow users to script certain types of experiments [12,15,16].
Among them one can find ACSL [2], EcosimPro [1], and Dymola [8]. Advanced
Continuous System Language (ACSL) was one of the first commercially available
modeling and simulation tools designed for simulating continuous systems. ACSL
includes a programming language that supports creating experiments. Dymola
also supports a script facility that makes it possible to load model libraries, set
parameters, set start values, simulate and plot variables by executing scripts.
However, the major drawback is that, if these tools are used to create a laboratory for educational purposes, final users (mainly students) will have to know how the laboratory was implemented and be fluent in a specific programming language just to perform any experiment.
Due to these disadvantages, most of the VRLs for educational purposes are geared towards performing experiments by interacting with the GUI. For these labs, visualization and interactivity are features of special importance [11,13]. The use of images or animations is highly recommended in order to help users understand the system under study more easily. Current developments in interactivity allow users to visualize the response of the system to any external or internal change [10,18]. These features, rich visual content and the possibility of instantaneous visualization of the system response, make VRLs a human-friendly learning tool, helping users to achieve practical experience.
Despite all these improvements, there are certain limitations that need to be
solved. Consider, for example, a VL with a PI control of the level of water in
a tank. A typical process for an experiment in which several PIs are compared
could be:

1. Set initial conditions.


2. Let the system evolve until the exact moment when the level reaches the
initial set point with a 5% tolerance.
3. Determine the time elapsed in step 2.
4. Repeat steps 1 through 3 one hundred times with different sets of PI para-
meters.
5. Perform an analysis of the results thus obtained.


This set of actions cannot be executed with the accuracy needed or in reason-
able time by just interacting with the GUI. For example, pausing the simulation
at an exact moment is practically impossible. Other repetitive tasks, such as taking tens of measurements to perform an analysis of the results, are tedious and provide no educational value, so it is preferable not to ask for them.
Alternatively, it would be preferable to code the experiment using a flexible,
intuitive and user-friendly experimentation language, then run it automatically
and finally visualize the results or the plots. In other words, the solution would be to join the two conceptions of how to perform experiments in a lab (interaction
and script programming).
The main goal of this work is to enrich existing VRLs with an application
that enables the creation and execution of automated experiments. To achieve this objective, a new Application Programming Interface (API), together with a set of functions to which VRLs should conform in order to provide the desired experimentation capabilities, has been designed. On the basis of the general specifications obtained from the most commonly used simulation languages, the authors have added some new requirements to achieve a universal, full-fledged specification that provides more general and flexible features.
In order to test the viability of the proposed experimentation application,
authors’ implementation uses JavaScript labs developed with the modeling tool
Easy java(script) Simulations (EjsS), [11]. EjsS is a software tool that helps the
user with the creation of interactive simulations in Java, or JavaScript. EjsS has
been designed to be used by scientists without special programming skills, and
has proven to simplify the creation of simulations for scientific and engineering
purposes. An excellent proof of the EjsS potential is the ComPADRE repository
[7], which hosts free online resource collections, supporting students and teachers.
Among these resources, users can find more than 500 applications created with
EjsS. These labs are enriched with the capability to execute experiments, which,
in the presented approach, are scripts coded with Blockly, [17]. Blockly is an
easy and intuitive graphical programming language.
Despite the huge advantages and great utilities offered by EjsS, there was no way to use them to allow users to create simulation experiments. This limitation is not restricted to EjsS; PhET simulations [20], also available to download for free, present the same problem.
The paper is structured as follows. Section 2 presents the Experiment Appli-
cation and its benefits. Section 3 discusses the implementation of the language
and the blocks needed to represent experiments. Section 4 shows an example
that uses the experimentation language in practice. Finally, Sect. 5 discusses the
results and describes further work.


2 The Experiment Application


Four elements compose the Experiment Application (ExApp):

1. Blockly Editor to design the experiment.


2. Google Charts visualization to analyze the experiment results.
3. The API to share information between the VRL and the experiment.
4. The experimentation language.

The first two elements (Blockly and Google Charts) that comprise the appli-
cation GUI (see Fig. 1) are explained in this section. The other two, the API
and the experimentation language, are explained in Sect. 3. Notice that the lab
is not part of the application. If the lab, whether a virtual lab, a remote lab,
a hybrid lab or a simulation, implements the API proposed by the authors, the ExApp can be used.
In the first place, Blockly is the selected tool for the design of the experiments.
It is a free and open source library that adds a visual code editor to web and
Android apps. The Blockly editor uses graphical puzzle-like blocks to represent
concepts like variables, logic expressions, loops, and any element of a traditional
programming language. It allows users to apply programming principles with-
out having to worry about syntax or the laboratory structure.

Fig. 1. ExApp GUI (Blockly Code and Google Charts) with a virtual lab modeling a bouncing ball

Blockly is used in many learning applications, such as: Blockly Games (a set of educational games that teach programming concepts), MIT's App Inventor (to create applications for
Android), Code.org (to teach introductory programming to millions of students
in their Hour of Code program), Wonder Workshop (to control their Dot and
Dash educational robots), the Open Roberta project (to program Lego Mind-
storms EV3 robots), or ScratchyCAD (a web based parametric 3D modeling tool
which allows users to create 3D objects). In the authors' experience, using Blockly rather than other programming languages to create experiments for VRLs is a valuable asset. As VRLs can be used by any person, with or without
programming skills, Blockly is the easiest way to start creating algorithms to
conduct experiments. Furthermore, this code editor offers interesting features
that favor web use while maintaining the power of traditional languages (imple-
mented with JavaScript, minimal type checking supported, easy to extend with
custom blocks, localized into 50+ languages, ...).
The data analysis is provided by Google Charts, [21]. This free and open
source library is used to visualize data on a website. Google Charts provide a
large number of ready-to-use chart types. It is able to represent anything from simple line charts to complex hierarchical tree maps. It is highly customizable and supports dynamic data and controls to create interactive dashboards. It also offers functions to import and export data to other formats. Like Blockly, it is a JavaScript library, so its incorporation into an online tool is simple and clean.

2.1 Benefits from Using ExApp


The beneficiaries of ExApp, from an educational role perspective, can be divided
into three groups:

1. Lab designers. They are in charge of creating the model, the view, deciding
which variables are going to be visualized in charts and adding some interac-
tive elements to control the execution of the lab by changing some variables
or internal functions. If the designer uses a tool that implements the API
proposed in this paper (EjsS, for example), he/she will not need to change a
single thing in the lab implementation in order to use ExApp. Furthermore,
the designer could focus only in the model definition and the view, charts or
any interactive element are not longer needed to control the lab. Since ExApp
has access to every variable, the final user can decide the way to work with
the lab and the data to show in the charts. This means that the time needed
to create a lab is reduced and the experiences proposed to the students are
not limited by the design.
2. Teachers. They have to define the lab experiences for the students. If the
lab is open and not restricted by the designer's intentions, the teacher will have plenty of possibilities to propose different kinds of experiments to the
students, ranging from simple algorithms to discover the important variables of a system to creating from scratch a controller for the level control of a water
tank. Deploying ExApp and the lab on a web page is as simple as preparing
an HTML with the two elements. Authors’ next step is to include ExApp
as a Moodle plugin. In this way, the lab, the ExApp, the experiment files
and the results would be managed by Moodle. The correction of these types of interactive experiments using Blockly is as easy as running the student's file and evaluating the results obtained. Regarding the evaluation, teachers may give value to whether the correct result is obtained as well as to how the student reached that solution. Teachers have the possibility to analyze the
experiments' structure, to study the algorithms used and to perform the students' experiments as many times as needed with just one click. An additional advantage is that the time needed by teachers to explain how to use the tool is extremely low compared with other simulation tools that allow the creation
of experiment scripts. Moreover, Blockly and other similar tools such as Scratch are currently being used in elementary schools. This means near-future users will not need any extra explanation about how to use it, because students will already be familiar with these tools.
3. Students. They are the final users of the lab and ExApp. Currently, Blockly
is the first step to start learning programming skills, so even students with no
programming knowledge will find ExApp an easy tool to code their scripts.
Blockly offers visualization features such as highlighting the blocks being executed at a certain time, so it is very easy to follow the execution flow of the experiment and correct possible mistakes. For the same lab, students can face different assignments depending on their skills, which promotes, among others, imagination to solve the assignments, learning interest, critical thinking, being challenged and inquiry-based learning. Also, by scripting the experiment, students avoid tedious or repetitive tasks that lack any educational value. They are able to exchange, compare and contrast experiments with teachers or other students. Visualizing, collecting and analyzing results is easier
thanks to Google Charts.

3 Implementation
To achieve the objective of controlling every aspect of a VRL, an interface between ExApp and the VRL is needed. Such an API should then contain the following
elements:

1. Elements to initialize and configure ExApp.


2. Elements to access VRLs’ variables.
3. Elements to specify algorithms.
4. Elements to control the execution of the VRL.
5. Elements to analyze the results.

These elements, how they form the experimentation language and the
way they are implemented in ExApp, are described in the following subsections.

3.1 Elements to Initialize and Configure ExApp


An initialization process is needed to configure ExApp correctly and to link it
to a lab. This means that ExApp has to receive the object that contains the
variables and lab functionality (in EjsS labs, the model variable). Once ExApp
and the lab are linked, all variables from the lab system are classified by type
and prepared for their use in the code. Optionally, an XML file can be configured
to show more or less Blockly blocks in order to create from the most simple
to the most complex algorithm. By default, the XML is configured with all the
possible blocks.

3.2 Elements to Access VRLs’ Variables


The lab has to implement two functions to set and get the variables
of the model. EjsS labs, for example, implement these functions using a
JSON Object, model.userUnserialize({variable,value}) to set and the function
model.userSerialize() to get the values of the variables.
The experimentation language implements two blocks using these two func-
tions (see Fig. 2). The one at the top shows a message with the value of the selected variable (t), and the one at the bottom sets the value of the statement linked to it.
In this case, it assigns a value of 3 to the variable g. Model variables appear in
the variable chooser of the block automatically, as seen in Fig. 2.

Fig. 2. Setting and getting variables from the lab

3.3 Elements to Specify Algorithms


Experiments usually require specific algorithms that use variables from the lab,
but also additional functions and variables defined by users, so the API should allow this. Thanks to JavaScript's features, this is easy to implement.
To do this, the experimentation language provides different blocks to create
standard algorithmic constructions to allow users to write complex algorithms,
if required. Figure 3 shows declaration of a new function in the code, how to call
it and a few blocks to create different types of statements.


Fig. 3. Defining and calling a function using Blockly

3.4 Elements to Control the Execution of the VRL

The API should implement different ways to control the lab execution. If it is a lab whose evolution depends on time, instructions to start, pause or stop the lab are necessary. The In every step do block can be used to execute code
in every step of the simulation. Also, more complex functions are needed, like
events (do something when a given condition is met). For example, “run the sim-
ulation until the level of the tank is greater than 10” or “run the simulation and
increase the set point by 50% when t = 10”. The lab should implement the func-
tion model.addEvent(conditionCode, actionCode) and model.addFixedRel(Code)
in order to allow these type of statements.
Figure 4 shows how the experimentation language implements the functions
to add code to every step and to add events to the lab. First of all, the lab is
reset, and then the code, print variable z, is added to the step. After that, an event is added. The condition is 3 minus the variable t from the lab, and the action consists of pausing the execution of the VRL. After this, the lab is started.
When the variable t equals 3, the lab will pause.

Fig. 4. Controlling the lab execution using Blockly


3.5 Elements to Analyze the Results


The API should provide a function to visually compare output data from lab
variables, produced in the form of charts, graphs, etc. For instance, users can be
interested in comparing the plots of the evolution in time of the response of a
controller with different tuning parameters. Google Charts is the tool used to
visualize data. These data are tables with the values of the selected variable in
one column and the time variable of the model in another column.
The experimentation language provides three blocks to implement this func-
tionality. Figure 5 shows the three of them (left part of the image) and the chart
obtained (at the right part). The first block is to declare which model variables
are going to be recorded, the sample period for it and the function names. It
is possible to declare as many variables as needed. Once the recording variables
have been declared, the start recording and the stop recording blocks are used
to define the time intervals within which those variables will be recorded.
Once the experiment starts, the chart will visualize the selected variables.

Fig. 5. Analyzing data with Blockly

4 Example of Use
This section presents several examples to show the usefulness of ExApp and the
advantages of its use described throughout the paper. Each of these examples contains a brief description of the experiment, the code of the experiment, its results and the advantages of working with the experiment editor. Experiments
have been developed for the water tank system VL. Their general features and
functionality are detailed below.

4.1 The Water Tank System


The water tank has two valves simulating the input flow (Qin) and the output
flow (Qout). The mathematical model of the plant is shown in Eq. 1 (Fig. 6).


Fig. 6. Simulation of the water tank

dh/dt = (Qin − Qout)/A        (1)
where h represents the current tank water level and A the cross section of the
tank. The input flow, Qin, and the output flow, Qout, are given by Eqs. 2 and 3
respectively.

Qin = K1 ∗ a1 (2)

Qout = K2 ∗ a2 ∗ √(2 ∗ g ∗ h) (3)

where K1 represents the first valve input flow, a1 the first valve perturbation, K2 the second valve output flow, a2 the second valve perturbation, g the gravity, and h the current tank water level.
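A plain textual rendering of Eqs. 1–3, integrated with a simple Euler scheme, may help to follow the model. The following Python sketch is only illustrative; all parameter values are assumptions and do not correspond to the values used in the virtual lab.

from math import sqrt

# Illustrative parameters: cross section, valve constants and openings, gravity.
A, K1, K2, g = 1.5, 1.0, 1.0, 9.8
a1, a2 = 0.5, 0.1
h, dt = 0.0, 0.1            # initial water level and integration step

for _ in range(2000):
    q_in = K1 * a1                       # Eq. 2
    q_out = K2 * a2 * sqrt(2 * g * h)    # Eq. 3
    h += dt * (q_in - q_out) / A         # Eq. 1

print("water level after %.0f s: %.3f" % (2000 * dt, h))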

4.2 The Virtual Lab

The virtual laboratory was developed using Easy java(script) Simulations (EjsS).
EjsS simulations are created by specifying a model to be run by the EjsS engine
and by building a view to visualize a graphical representation of the system
modeled and to interact with it. The Department of Computer Science and Automatic Control of UNED commonly uses EjsS simulations as virtual or remote laboratories. The main intention of these examples is to show the power and usefulness of ExApp; for this reason, the virtual lab is as simple as possible. The GUI only shows
the tank, the in and out pipes and the level of water in the tank. Notice that
there are no interactive controls, plots or variable indications. The lab imple-
ments Eqs. 1, 2 and 3; consequently, the parameters and variables of these equations are the only ones implemented in the lab.

4.3 Example 1: Obtaining the Equilibrium Point of the System


Obtaining the equilibrium point of the system is a basic experiment which can
be proposed to the lab users. Given a system f such that:
ẋ(t) = f(x(t))        (4)
A particular state xe is called an equilibrium point if
f (xe ) = 0 (5)
Applying this to the water tank system, Eq. 1 must be equal to 0, which implies
that the input flow has to be equal to the output flow. The equilibrium point is
determined by the level of water in that moment. ExApp offers an easy way to
visualize and obtain values from the system variables. Figure 7 shows an experi-
ment to obtain the equilibrium point by checking with an event if the input flow
(Qin) is equal to the output flow (Qout). When this occurs, the virtual lab is stopped and the value of the water level is displayed. Two charts are shown: the one on the left shows the evolution over time of the water level, and the one on the right the evolution of Qin and Qout over time.

Fig. 7. Example 1: obtaining the equilibrium point


4.4 Example 2: Creating a Level Controller for the Water Tank

A typical experiment proposed to lab users is to study and compare different controllers for the water tank system. For that purpose, without the ExApp, the
designer has to implement each controller in the virtual lab, which means that
the learning experience is limited to observing the behavior of those controllers. However, using the ExApp, it is possible for any user to design their own controller and to experiment with it, even in a lab without controller implementations, as is the case in question.
To illustrate this, a simple Proportional-Integral (PI) controller is imple-
mented using the experimentation blocks. The PI algorithm computes and trans-
mits an output signal, U, every sample time, T, to the final control element (the
inlet flow in this case). The computed U from the PI algorithm is influenced by the controller tuning parameters and the controller error, e(t). PI controllers have two tuning parameters to adjust, K and Ti. PI controllers provide a balance of complexity and capability that makes them by far the most widely used algorithm in process control applications. The PI controller implemented for this experiment has the form shown in Eq. 6.

U = K ∗ e(t) + (K/Ti) ∗ ∫ e(t) dt        (6)

Fig. 8. Example 2: creating a PI controller

Figure 8 shows the experiment script and the charts obtained at the end of the
experiment. The experiment is divided into five parts for better readability. The first one is used to create and define the charts. The initialization part sets the initial values to prepare the lab for the experiment. The controller implementation part is the main script of the experiment. By using the “In every step do” block, the input flow is calculated using Eq. 6. Because of the limitations of the valve, some conditions are added to prevent negative input flows and to set
to finish the experiment at 100 s. Once everything is defined, the experiment is
executed. The chart at the left part shows the level and the set-point over time
and the one at the right part shows the input flow and the output flow over time.
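The same control law (Eq. 6) can also be written textually. The following Python sketch applies it to the tank model of Sect. 4.1; the tuning values, set point and valve limits are assumptions chosen only for illustration, and no anti-windup is included to keep the code minimal.

from math import sqrt

K, Ti, T = 2.0, 10.0, 0.1                 # controller gain, integral time, sample time
A, K1, K2, g, a2 = 1.5, 1.0, 1.0, 9.8, 0.1
set_point, h, integral = 2.0, 0.0, 0.0

for _ in range(2000):
    e = set_point - h                     # controller error e(t)
    integral += e * T
    u = K * e + (K / Ti) * integral       # Eq. 6
    a1 = min(max(u, 0.0), 1.0)            # the valve allows neither negative nor excessive flow
    q_in = K1 * a1                        # Eq. 2
    q_out = K2 * a2 * sqrt(2 * g * h)     # Eq. 3
    h += T * (q_in - q_out) / A           # Eq. 1

print("final level: %.2f (set point %.2f)" % (h, set_point))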

5 Conclusion
To expand the experimental activities that can be carried out in virtual and
remote laboratories, a new web application and its corresponding implementa-
tion has been presented in this paper. Current alternative approaches to automating experiments require interacting with, and consequently using, the same code that implements the laboratory, which implies using the same language in which the lab is written. In contrast, in the authors' approach, it is possible to:
access and modify all the laboratory variables, create algorithms and functions,
and control the execution of the experiment. Additionally, users can execute the
experiment step by step or run the whole script with a modifiable interval of time
between code sentences. The developed web application will be implemented as
a Moodle plugin in the near future. From the authors' point of view, these types of plugins can change the way of performing experiments, creating new experiences
in Learning Management Systems (LMS).
To illustrate the potential and ease of use of the language, an example has
been described in detail: the water tank system.
The authors are in the process of testing the initial design, creating different types of experiments of practical use in teaching Automatic Control and Physics. Ini-
tial results show that this implementation is both simple and flexible, supplying
users with a great deal of control over the running simulation. The combination of
JavaScript and Blockly has been crucial in making the proposed implementation
very natural. The way EjsS implements the VRLs allows external applications
to easily access all its variables without any required modification in the lab
applications already developed. In a more general context, authors believe the

zamfira@unitbv.ro
218 D. Galan et al.

API proposed in this work can effortlessly be adapted to different lab implemen-
tations or to any future standard protocol.

Acknowledgments. This work has been funded by the Spanish Ministry of Econ-
omy and Competitiveness under the projects EUIN2015-62577, DPI-2013-44776-R and
DPI2016-77677-P.

References
1. EA Internacional: EcosimPro website. http://www.ecosimpro.com/
2. MGA software ACSL reference manual, version 11 (1995)
3. Brinson, J.R.: Learning outcome achievement in non-traditional (virtual and
remote) versus traditional (hands-on) laboratories: a review of the empirical
research. Comput. Educ. 87, 218–237 (2015)
4. Brodersen, A.J., Bourne, J.R.: Virtual engineering laboratories. J. Eng. Educ. 83,
279–285 (1994)
5. Cellier, F.E., Greifeneder, J.: Continuous System Modeling. Springer, New York
(2013)
6. Chiu, J.L., DeJaegher, C.J., Chao, J.: The effects of augmented virtual science
laboratories on middle school students’ understanding of gas properties. Comput.
Educ. 85, 59–73 (2015)
7. Christian, W., Esquembre, F., Barbato, L.: Open source physics. Science
334(6059), 1077–1078 (2011)
8. Mattsson, S.E., Brück, D., Elmqvist, H., Olsson, H.: Dymola for multi-engineer-
ing modeling and simulation. In: Proceedings of the 2nd International Modelica
Conference (2002)
9. de Jong, T., Linn, M.C., Zacharia, Z.C.: Physical and virtual laboratories in science
and engineering education. Science 340, 305–308 (2013)
10. Dormido, S., Dormido-Canto, S., Dormido, R., Sánchez, J., Duro, N.: The role of
interactivity in control learning. Int. J. Eng. Educ. 21(6), 1122 (2005)
11. Esquembre, F.: Adding interactivity to existing Simulink models using Easy Java
simulations. Comput. Phys. Commun. 156, 199–204 (2004)
12. Feisel, L., Peterson, G.D.: A colloquy on learning objectives for engineering edu-
cational laboratories. In: ASEE Annual Conference and Exposition, Montreal,
Ontario, Canada (2002)
13. Heck, B.S.: Future directions in control education [guest editorial]. IEEE Control
Syst. 19(5), 36–37 (1999)
14. Heradio, R., de la Torre, L., Galan, D., Cabrerizo, F.J., Herrera-Viedma, E.,
Dormido, S.: Virtual and remote labs in education: a bibliometric analysis. Com-
put. Educ. 98, 14–38 (2016)
15. Law, A.M., Kelton, W.D.: Simulation Modeling and Analysis, 2nd edn.
McGrawHill, New York (1991)
16. Law, A.M., Kelton, W.D.: Simulation Modeling and Analysis. McGrawHill,
New York (2001)
17. Marron, A., Weiss, G., Wiener, G.: A decentralized approach for programming
interactive applications with javascript and blockly. In: Proceedings of the 2nd
Edition on Programming Systems, Languages and Applications Based on Actors,
Agents, and Decentralized Control Abstractions, pp. 59–70. ACM (2012)

zamfira@unitbv.ro
Web Experimentation on Virtual and Remote Laboratories 219

18. Sánchez, J., Morilla, F., Dormido, S., Aranda, J., Ruipérez, P.: Virtual and remote
control labs using Java: a qualitative approach. IEEE Control Syst. 22(2), 8–20
(2002)
19. Sun, K.T., Lin, Y.C., Yu, C.J.: A study on learning effect among different learning
styles in a web-based lab of science for elementary school students. Comput. Educ.
50(4), 1411–1422 (2008)
20. Wieman, C.E., Adams, W.K., Perkins, K.K.: PhET: simulations that enhance
learning. Science 322(5902), 682–683 (2008)
21. Zhu, Y.: Introducing Google chart tools and google maps API in data visualization
courses. IEEE Comput. Graph. Appl. 32(6), 6 (2012)

How to Leverage Reflection in Case of Inquiry
Learning? The Study of Awareness Tools
in the Context of Virtual and Remote
Laboratory

Rémi Venant, Philippe Vidal, and Julien Broisin(B)

Institut de Recherche en Informatique de Toulouse,


Université Toulouse III Paul Sabatier,
118 route de Narbonne, 31062 Toulouse Cedex 04, France
{remi.venant,philippe.vidal,julien.broisin}@irit.fr

Abstract. In this paper we design a set of awareness and reflection


tools aiming at engaging learners in the deep learning process during
a practical activity carried out through a virtual and remote labora-
tory. These tools include: (i) a social awareness tool revealing to learn-
ers their current and general levels of performance, but also enabling
the comparison between their own and their peers’ performance; (ii) a
reflection-on-action tool, implemented as timelines, allowing learners to
deeply analyze both their own completed work and the tasks achieved
by peers; (iii) a reflection-in-action tool acting as a live video player to
let users easily see what others are doing. An experimentation involv-
ing 80 students was conducted in an authentic learning setting about
operating system administration; the participants evaluated the system
only slightly higher than traditional computational environments when it comes to leveraging reflection and critical thinking, even though they rated the system as good in terms of usability.

Keywords: Virtual and remote laboratory · Computer science · Aware-


ness tool · Reflection

1 Introduction
In the context of inquiry learning that leads to knowledge building and deep
learning [11], Virtual and Remote Laboratories (VRL) gain more and more interest from the research community, as the Go-Lab European project, which involved more than fifteen partners, demonstrates. However, research in this area mainly focuses on the technical and technological issues instead of emphasizing the pedagogical expectations to enhance learning. Yet, some research conducted around
remotely controlled track-based robots [17] showed that, among other benefits,
reflection and metacognition could emerge [21].
On the other hand, during the last decade, a significant number of researchers
studied how awareness tools could be used to promote reflection. A wide variety of ideas and initiatives emerged, from dashboards exposing various statistical


data about the usage of the learning environment by learners [14] to visual
reports about physiological data to foster learners’ self-understanding [12].
Our previous works introduced Lab4CE, a remote laboratory for computer
education. The main objective of this environment is to supply remote collabora-
tive practical sessions in computer science to learners and educators. It provides
them with a set of tools and services aiming at improving pedagogical capabili-
ties while hiding the technical complexity of the whole system. In this paper, we
design new awareness and reflection tools to investigate the following research
question: how the design of both individual and group awareness tools could
leverage reflective thinking and peer support during a practical activity?
To tackle the above research question, our methodology consists of (i) design-
ing and integrating a set of awareness and reflection tools into our existing remote
lab, (ii) setting up an experimentation of the enhanced Lab4CE environment in
an authentic learning context, and (iii) analyzing the experimentation results.
These three steps constitute the remainder of the paper. They are preceded by
a brief presentation of our remote lab and followed by conclusions.

2 Lab4CE: A Remote Laboratory for Computer


Education

Our previous research on computer supported practical learning introduced


Lab4CE [5], a remote laboratory for computer education. In this section, we
focus on its main learning features and expose the learning analytics capabilities
that represent the basis for the awareness tools presented later in this article.

2.1 Educational Features

The Lab4CE environment stands on virtualization technologies to offer to users


virtual machines hosted by a cloud manager (i.e., OpenStack¹). It exposes a set of scaffolding tools and services to support various educational processes. Learners
and tutors are provided with a rich learning interface illustrated on Fig. 1 and
integrating the following artifacts: (i) a web Terminal gives control on the remote
virtual resources required to complete the practical activity, (ii) a social pres-
ence tool provides awareness about the individuals working on the same practical
activity, (iii) an instant messaging system ensures exchanges of text messages
between users, and (iv) an invitation mechanism allows users to initiate collab-
orative sessions and to work on the same virtual resources; learners working on
the same machine can observe each other’s Terminal thanks to a streaming sys-
tem. Finally, the Lab4CE environment includes a learning analytics framework
in which all interactions between users and the system are recorded.

¹ http://www.openstack.org


Fig. 1. The Lab4CE learning interface

2.2 Learning Analytics Features

The Lab4CE learning analytics framework is detailed in [29]. Basically, it man-


ages data about interactions between users, between users and remote resources,
and between users and the Lab4CE learning interface.
We adopted the “I did this” paradigm suggested by ADL [8] to represent and
store the tracking data. The data model we designed allows us to express, as JSON-
formatted xAPI statements, interactions occurring between users and the whole
set of artefacts integrated into the Lab4CE GUI. For instance, interactions such
as “the user rvenant accessed its laboratory on 11 of November, 2016, within the
activity Introducing Shell ”, or “the user rvenant executed the command rm-v
script.sh on his resource Host01 during the activity Introducing Shell ” can be
easily represented by the data model.
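To make this representation concrete, the sketch below shows how such an interaction could be encoded as an xAPI-style statement. The verb and activity URIs are hypothetical placeholders, since the actual Lab4CE identifiers are not given in this paper.

```python
import json
from datetime import datetime, timezone

# Hypothetical xAPI-style statement for:
# "rvenant executed 'rm -v script.sh' on Host01 during the activity 'Introducing Shell'".
# The URIs are illustrative only, not the actual Lab4CE identifiers.
statement = {
    "actor": {"account": {"name": "rvenant", "homePage": "https://lab4ce.example.org"}},
    "verb": {"id": "https://lab4ce.example.org/verbs/executed",
             "display": {"en-US": "executed"}},
    "object": {"id": "https://lab4ce.example.org/commands/rm",
               "definition": {"name": {"en-US": "rm -v script.sh"}}},
    "context": {"extensions": {
        "https://lab4ce.example.org/ext/resource": "Host01",
        "https://lab4ce.example.org/ext/activity": "Introducing Shell"}},
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# The JSON-formatted statement is what would be pushed to the learning record store.
print(json.dumps(statement, indent=2))
```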
Traces are stored in a learning record store (LRS) so they can be easily reused either by learning dashboards and awareness tools for visualization purposes, or by other components to compute indicators. Within our framework, an enriching engine is able to generate, starting from the datastore, valuable information that makes sense from an educational point of view. To enrich a trace with valuable indicators, this component relies on an inference model composed of a solver and a set of rules, where the former applies the latter, which specify how a given indicator must be inferred.


On the basis of this framework, the next section introduces (self-)awareness


tools aiming at initiating reflective learning within the Lab4CE environment.

3 The Awareness Tools


The visualization tools exposed here are based on instructions carried out by
learners, and aim at making learners aware of their learning experience.

3.1 The Social Comparison Tool


Theoretical Basis and Objectives. The analysis by learners of their own
performance can be supported by self-awareness tools exposing to learners, on
the basis of their learning paths within instructional units, various information
about their level of knowledge. These learning analytics tools build dashboards
to return feedback about students’ overall results [24], their global level of per-
formance [1], strengths and weaknesses [15], or about precise concepts through
computer-based assessments [23]. These tools all evaluate learners’ performance
by addressing acquisition of theoretical concepts and knowledge. However, in the
context of practical activities, such evaluation techniques become inappropriate
as they do not evaluate how learners are able to reuse and apply their theoretical
knowledge when they are faced with a concrete and practical situation (i.e., level
of practice).
In addition, recent research shows that learners should also become engaged
in a social analysis process to enhance their reflection [30]. Comparative tools are
designed to make each learner’s performance identifiable, and therefore to allow
individuals to compare their own and their partners’ activity. Such types of tools
consist of social comparison feedback that allows group members to see how they
are performing compared to their partners [22]. These social awareness tools
present awareness information in various ways and bring students the feeling of
being connected with and supported by their peers [19].

Design and Learning Scenario. Evaluating learners’ level of practice implies


the evaluation of the interactions between users and the learning artifacts of
the Lab4CE environment. In the present study, we focus on the evaluation of
interactions between users and remote resources, since this type of interaction is
highly representative of the learners’ level of practice. In particular, we address
the syntactic facet so as to identify whether a command carried out by a learner
has been successfully executed on the target resource. The technical rightness
indicator should be evaluated as right (respectively wrong) if it has been (respec-
tively has not been) properly executed on the resource; in that case, the value
of the indicator is set to 1 (respectively 0). In a Shell Terminal, the output of
a command can be used to detect the success or failure of its execution; the
implementation details are given in the next section.
The social comparison tool we designed thus reuses the technical rightness
indicator to reflect to users their level of practice. Previous research showed that


visualization tools dealing with such data must require very little attention to be understood and beneficial for learners [27]. We adopted a simple color code
(i.e., green if the indicator is set to 1, red if it is set to 0) to represent, as progress
bars, learners’ performance. The tool distinguishes the learners’ level of practice
during the session within the system (i.e., since they logged in the system - see
progress bar My current session in Fig. 2), and their level of practice taking into
account the whole set of actions they carried out since they started working
on the given practical activity (i.e., not only the current session, but also all
previous sessions related to the given activity - see progress bar My practical
activity in Fig. 2). This tool also comprises a progress bar to reflect the level of
practice of the whole group of learners enrolled in the practical activity (i.e., all
the sessions of all users - see progress bar All participants in Fig. 2). Each time a
command is executed by a learner, the progress bars are automatically updated
with a coloured item (see next section). Finally, the social presence tool (see
Sect. 2.1) exposing the users currently working on the same practical activity
has been enhanced: the session level of practice of each user is displayed using a
smaller progress bar (see bottom right corner of Fig. 1).

Fig. 2. The social comparison tool exposing learners’ performance

Through the current and general progress bars, learners can become aware of the progression of their level of practice regarding a given activity; they are also able to compare their current level with their average level. In conjunction with
the group progress bar, learners can position themselves in relation to peers and
become more engaged in learning tasks [18]. In addition, the progress bars of
the social presence tool allow learners to identify peers that perform better, and
thus to get support from them using other awareness tools (see further).
Let us note that the indicator on which the social comparison tool stands, i.e., the technical rightness, is not specific to computer science. In most STEM disciplines, such an indicator may be captured: a given instruction is executed (respectively not executed) by a piece of equipment if it is (respectively is not) technically/semantically well-formulated.


Implementation. To infer the technical rightness indicator, our approach consisted of identifying the various error messages that may occur within a Shell Terminal when a technically wrong command or program is executed. According
to our findings, we specified four rules to detect an error: R1 reveals errors arising
when the argument(s) and/or option(s) of a command are incorrect; R2 triggers
the error occurring when a command entered by the user does not exist; R3 and
R4 indicate if the manual of a command that does not exist has been invoked.
Finally, the indicator is computed according to a mathematical predicate based on these rules, which returns 0 if no errors were detected for a given command.
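As an illustration of this rule-based inference, the sketch below applies four regular-expression rules to a command's output. The exact error patterns used in Lab4CE are not listed in the paper, so the patterns here are assumptions based on typical Bash and man error messages; the function directly returns the indicator value (1 when no rule fires), following the definition given in Sect. 3.1.

```python
import re

# Hypothetical patterns approximating the four rules described above.
RULES = {
    "R1": re.compile(r"invalid option|unrecognized option|usage:", re.IGNORECASE),  # bad arguments/options
    "R2": re.compile(r"command not found"),                                         # unknown command
    "R3": re.compile(r"No manual entry for"),                                       # man invoked on an unknown command
    "R4": re.compile(r"man: .* not found", re.IGNORECASE),                          # alternative manual error message
}

def technical_rightness(command_output: str) -> int:
    """Return 1 if no rule detects an error in the output, 0 otherwise."""
    for name, pattern in RULES.items():
        if pattern.search(command_output):
            return 0
    return 1

# Commands whose errors fall outside the four modeled classes are not flagged.
print(technical_rightness("bash: foo: command not found"))  # -> 0 (rule R2)
print(technical_rightness("script.sh  notes.txt"))          # -> 1 (no modeled error)
```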
Once this indicator is inferred by the enriching engine, the enriched raw trace
is decoded and stored into the LRS (see Sect. 2.2). The social comparison tool
then adopts the publish-subscribe messaging pattern to retrieve and deliver this information. The server side of the Lab4CE system produces messages composed
of a pair timestamp-technical rightness as soon as a new trace is stored into the
LRS, and publishes these messages into various topics; the progress bars act as
subscribers of these topics. The current and general progress bars are updated
in near real time (i.e., just after a user executes a command), whereas the group
artifact is updated on an hourly basis only.
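The paper does not name the messaging middleware used to push these pairs to the progress bars, so the sketch below uses MQTT (via the paho-mqtt client) purely as an illustrative broker: a subscriber accumulates timestamp/rightness pairs for one topic and recomputes the ratio displayed by a progress bar. Broker host and topic names are placeholders.

```python
import json
import paho.mqtt.client as mqtt  # assumed transport; the actual middleware is not specified

executed, correct = 0, 0  # counters backing a single progress bar

def on_message(client, userdata, msg):
    """Each message carries a (timestamp, technical rightness) pair for one command."""
    global executed, correct
    payload = json.loads(msg.payload)      # e.g. {"timestamp": "...", "rightness": 1}
    executed += 1
    correct += payload["rightness"]
    print(f"level of practice: {100 * correct / executed:.0f}% ({correct}/{executed})")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.org", 1883)  # hypothetical broker host
# One topic per progress bar, e.g. the current session of a given user.
client.subscribe("lab4ce/introducing-shell/rvenant/session")
client.loop_forever()
```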
The social comparison tool allows learners to self-analyze their levels of per-
formance, as well as those of their peers, but the visualization approach we
adopted prevents them from deeply analyzing their own and their peers’ actions. In addition to exposing performance, the tool presented below thus provides details about the actions carried out by users on resources.

3.2 The Reflection-on-Action Tool

Theoretical Basis and Objectives. According to [4], reflection is a complex


process consisting of returning to experiences, re-evaluating the experiences, and
learning from the (re)evaluation process in order to adapt future behaviour. This
model makes learners self-aware of their learning progress, and capable of taking
appropriate decisions to improve their learning [10]. It is also in line with the
research conducted by [30] who found that analyzing and making judgements
about what has been learned and how learning took place are involved in the
reflective process. These tasks can only be achieved by learners themselves, but
their engagement in reflection can be initiated and fostered by technology in the
context of online learning through reflection-on-action tools [30].
Reflection-on-action can be defined as the analysis of process after the actions
are completed [10], or as “standing outside yourself and analyzing your perfor-
mance” [16]. [9] recommends various strategies to engage learners in reflection-
on-action such as imitation by learners of performance especially modeled for
them, or replay of students’ activities and performance by teachers. Since some
approaches consider that reflective thinking implies something other than one’s own thinking [30], the tool presented here acts at both the individual and social lev-
els, and aims at supporting reflection-on-action by offering users the opportunity
to return to what they and their peers have learned, and how.


Design and Learning Scenario. The tool features visualization and analysis
of detailed information about interactions between users and remote resources.
Users are able to consult the commands they carried out during a particular
session of work, or since the beginning of a given practical activity. The tool has
been designed to let users easily drill down into deeper and fine-grained analysis
of their work, but also to let them discover how peers have solved a given issue.
Figure 3 shows the graphical user interface of this tool: the top of the interface
exposes a form to allow users to refine the information they want to visualize,
whereas the main panel exposes the selected data. To facilitate the projection
of the information, the filtering features include the possibility to select a given
user, a particular session of work and, if applicable, one or several resources
used during the selected session. The actions matching with the selected criteria
are then exposed to users as timelines. Each node of a timeline represents a
command, and is coloured according to its technical rightness. In addition, the details of a command can be visualized by hovering the mouse over the matching node; in that case, the date the command was carried out, the action and the output are displayed in the area shown in Fig. 3.
This reflection-on-action tool allows users to browse the history of the actions
they carried out, and thus brings learners into a reflective learning situation
where they can analyze their practical work sessions in details. In addition,

Fig. 3. The reflection-on-action tool


learners can easily focus, thanks to the colour-coded artifact, on the difficulties they experienced. Also, combined with the social presence tool, learners are able to easily seek immediate help from peers by analyzing the commands executed by users currently performing well in the system.

Implementation. The reflection-on-action tool relies on the traditional client-server architecture. Based on the configuration of the data to analyze, the tool builds the matching query and sends a request to the Lab4CE server side. The set of commands contained in the response, encoded in JSON format, is then parsed to display green or red nodes according to the value of the technical rightness indicator.
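As a sketch of this client-server exchange, the snippet below queries a hypothetical REST endpoint for the commands of a selected user and session and maps each one to a green or red timeline node. The endpoint path and field names are assumptions, since the actual Lab4CE API is not detailed in the paper.

```python
import requests  # plain HTTP client used here for illustration; the real tool runs in the browser

# Filter chosen by the user in the form at the top of the interface (Fig. 3).
params = {"user": "rvenant", "session": "2016-11-11", "resource": "Host01"}

# Hypothetical endpoint exposing the enriched traces stored in the LRS.
resp = requests.get("https://lab4ce.example.org/api/commands", params=params, timeout=10)
resp.raise_for_status()

timeline = []
for cmd in resp.json():                      # assumed JSON array of command traces
    color = "green" if cmd["rightness"] == 1 else "red"
    timeline.append({
        "date": cmd["timestamp"],
        "action": cmd["command"],
        "output": cmd["output"],
        "color": color,                      # drives the node colour on the timeline
    })

for node in timeline:
    print(node["date"], node["color"], node["action"])
```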

3.3 The Reflection-in-Action Tool


Theoretical Basis and Objectives. In contrast with reflection-on-action,
which occurs retrospectively [20], reflection-in-action occurs in real-time [26].
This concept was originally introduced by [25]: when a practitioner fails, (s)he analyzes his or her own prior understandings and then “carries out an experiment
which serves to generate both a new understanding of the phenomenon and a
change in the situation” [25] (p. 68). Skilled practitioners often reflect-in-action
while performing [16]. [13] successfully experimented with a test-driven development approach to make computer science students move toward reflection-in-action. In our context, users can reflect in action thanks to (web) Terminals: they can scroll up and down the Terminal window to analyse what they have just done, and
then run a new command and investigate the changes, if any.
However, as stated earlier, research suggested that collaboration, and more
especially interaction with peers, supports reflection in a more sustainable way
[3]. The objective of the tool presented below is to strengthen reflection-in-action
through peer support by letting users be aware of what others are doing. When
students face difficulty, uncertainty or novelty, we intend to let them know how
their peers complete tasks. Even if synchronous communication systems might contribute to this process, users also need a live view of both the actions being carried out by peers and the remote resources being operated, in order to correlate both sources of information and make proper judgements and/or decisions.

Design and Learning Scenario. The reflection-in-action tool we designed is illustrated in Fig. 4, and acts as a Terminal player where interactions occurring between users and remote resources during a session and through the web Terminal can be watched as a video stream: both inputs from users and outputs returned by resources are displayed character by character. The tool features play, pause, resume and stop capabilities, while the filtering capabilities of
the reflection-on-action tool are also available: users can replay any session of
any user to visualize what happened within the web Terminal. When playing
the current session stream of a given user, one can get aware, in near real time,
of what the user is doing on the resources involved in the practical activity.


During a face-to-face computer education practical session, learners are used to looking at the screens of their partners in order to get the exact syntax of source
code or to find food for reflection. Our awareness tool aims to reproduce this
process in a remote setting. In Fig. 4, the user connected to the system is watch-
ing the current session of the learner jbroisin. Since the stream of data played
by the tool is updated just after an action is executed by a user through the web
Terminal, the user is provided with a near-live view of what jbroisin is doing on the remote resource, and how it reacts to instructions. Also, combined with
the tools presented before, the reflection-in-action tool leverages peer support:
learners can easily identify peers performing well, and then look at their web
Terminal to investigate how they are solving the issues of the practical activity.

Fig. 4. The reflection-in-action tool

Implementation. This awareness tool implements both the publish-subscribe


messaging pattern and the client-server architecture, depending on the practical
session to process: the former is used in case of current sessions (i.e., live video
streams), whereas the latter is dedicated to completed sessions. When a live
session is requested, the matching topic is dynamically created on the Lab4CE
server side, and messages are published as soon as commands are carried out by
the user being observed. The process used to retrieve a completed session has been described in Sect. 3.2: a query is sent to the server side, and then the results are parsed and interpreted by the tool.
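A rough sketch of the player logic is given below: a completed session is fetched over HTTP and replayed with its original timing, while a live session would instead be fed by the publish-subscribe channel. The endpoint, field names and timing granularity are assumptions.

```python
import sys
import time
import requests

def replay_completed_session(user: str, session: str) -> None:
    """Fetch a finished session and re-render its Terminal I/O character by character."""
    url = "https://lab4ce.example.org/api/terminal-stream"   # hypothetical endpoint
    events = requests.get(url, params={"user": user, "session": session}, timeout=10).json()
    previous = None
    for event in events:                                     # assumed: [{"t": secs, "chars": "..."}]
        if previous is not None:
            time.sleep(min(event["t"] - previous, 2.0))      # keep long pauses short
        previous = event["t"]
        sys.stdout.write(event["chars"])                     # inputs and outputs interleaved
        sys.stdout.flush()

# A live session would be handled differently: subscribe to the session topic created
# on the server side and write each published chunk to the screen as it arrives.
replay_completed_session("jbroisin", "2016-11-11")
```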


The three tools presented in this section have been designed, coded and integrated into the existing Lab4CE environment. An experiment based on the enhanced system was then set up; the design, results and analysis of this study are presented below.

4 Experimentation

The experimentation presented here investigates the impact of the awareness


and reflection tools designed in the previous sections on students’ perception of
learning during a practical activity, according to the following five scales: relevance, reflection, interactivity, peer support and interpretation. Our objective
was to compare students’ perception of learning while using two different envi-
ronments: the enhanced Lab4CE system and the traditional computers usually
available to students to perform practical activities.

4.1 Design and Protocol

The experiment took place in the Computer Science Institute of Technology


(CSIT), University of Toulouse (France), and involved 80 first-year students (with a gender distribution of 9 women and 71 men, which reflects the distribution
of CSIT students) enrolled in a learning unit about the Linux operating system
and Shell programming. The experiment was conducted over three face-to-face practical sessions lasting 90 min each. These sessions were all related to Shell
programming: students had to test Shell commands into their Terminal, and
then to write Shell scripts to build interactive programs. Students also had to submit two reports: one about the first session, and the other about the second and third sessions (the work assigned to students required two practical sessions to be completed). These reports had to be posted on a Moodle server four days after the corresponding session, so that students could work during weekends and have extra time to complete their tasks.
Two groups of students were randomly created. One group of students (i.e.,
the control group: N = 48, 6 women, 42 men, mean age = 18.8) had access, as
usual, to the Debian-based computers of the institution to carry out the practical
activities. The other group (i.e., the Lab4CE group: N = 32, 3 women, 29 men,
mean age = 18.6) was provided with the enhanced Lab4CE environment; each
student had access to a Debian-based virtual machine during each practical
session, and their interactions with the remote lab were recorded into the LRS.
Two different teachers made a live demo of the Lab4CE features to the Lab4CE
group during the first 10 min of the first session.
At the end of the last practical session, both groups of students were asked to fill in the Constructivist Online Learning Environment Survey (COLLES). This questionnaire [28] includes twenty-four items using a five-point Likert scale (i.e., almost never, seldom, sometimes, often, almost always) to measure students’ perception of their learning experience. The COLLES was originally designed to compare the learners’ preferred experience (i.e., what they expect from the


learning unit) with their actual experience (i.e., what they actually received from the learning unit). In our experiment, the actual experience of learners in both groups was compared: the control group evaluated the Linux computers, whereas the Lab4CE group had to evaluate our system. In addition, the System Usability Scale (SUS), recognized as a quick and reliable tool to measure how users perceive the usability of a system [6], was administered to students.

4.2 Results and Analysis


COLLES. Among the Lab4CE group, 22 students completed the questionnaire, while 36 learners of the control group answered the survey. The box-and-whisker plot in Fig. 5 shows the distribution of answers for five of the six scales evaluated through the COLLES and also shows, for each of them, the class mean score and the first and third quartiles of each group of users.

Fig. 5. COLLES survey summary

The first scale (i.e., relevance) expresses the learners’ interest in the learning
unit regarding future professional practices. The Lab4CE group evaluated the
system with a slightly higher mean score and a more concentrated score distribution. Since this category deals more with the topic of the learning unit itself than with the supporting environment, large differences were not expected.
The second scale relates to reflection and critical thinking. Even if the tradi-
tional environment assessed by the control group does not provide any awareness


and/or reflection tools, the plots do not show a significant difference between the two groups, but only a slightly higher mean score and median for the Lab4CE group. We make here the hypothesis that learners did not realize they were engaged
in the reflection process while consulting the Lab4CE awareness tools. Indeed,
according to the system usage statistics, a mean of almost 42% of the students of
the Lab4CE group have used the reflection-on-action tool to review each of their
own sessions. On the other hand, we think that students of the control group
have considered the reflection processes occurring within the classroom instead
of considering the processes generated through the computer system only.
Feedback from both groups is quite similar regarding the interaction scale, which measures the extent of learners’ educative dialogue and exchange of
ideas. Here, results from the Lab4CE assessment were expected to be higher than
those returned by the control group as Lab4CE provides a chat where students
can exchange instant text messages, and a total of 166 messages have been posted
during the 3 sessions. In addition, almost 30% of the Lab4CE students have
worked at least once with a peer using the collaborative feature (see Sect. 2.1).
Again, we think that students are not aware of being involved in an interaction
task when exchanging ideas with peers.
Results about peer support are also quite similar for both groups, even slightly lower in the Lab4CE group. Besides our previous hypothesis, which can explain such unexpected results (here again, 47% of the Lab4CE students have
used the reflection-on-action tool), this scale reveals a potential improvement of
our platform. Learners have significantly used the reflection tools to analyze the work done by peers, but the system does not currently make learners aware that their own work has been analyzed. The peer support scale concerns learners’ feeling of how peers encourage their participation, or praise or value their contributions.
We believe that providing students with awareness information about analysis
performed by peers on their work would increase that perception.
The last scale evaluates how messages exchanged between students, and
between students and tutors, make sense. Scores from the Lab4CE group are characterized by a more concentrated distribution and a slightly higher class mean. These results tend to confirm that providing students with reflection tools helps them to gain a better comprehension of their interactions with each other.
In addition to the statistics commented on in the previous paragraphs, an interesting figure is the number of peer session analyses on the day the first report had to be submitted: almost 43% of the Lab4CE students analyzed at least one session of a peer using the reflection-on-action tool. We assume that these learners did not know how to achieve the objectives of the practical work, and thus sought help in peers’ sessions: the mean level of performance of users whose sessions were analyzed is 90 (out of a highest score of 100).
Finally, the social comparison tool, which is hidden by default within the user interface (see Fig. 1), has been displayed by most users at each session, even if this rate slightly decreases as the level of performance increases. This
finding is in line with research about social comparison tools. Their impact on
cognitive and academic performance has been thoroughly examined, and main


results showed that informing learners of their own performance relative to others
encourages learning efforts and increases task performance [18].

System Usability Scale. The SUS score has been computed according to [7]. The SUS score was 62.4 for the control group, while a SUS score of 73.6 was attributed to the Lab4CE system. According to [2], the Linux-based computers have been evaluated as below acceptable in terms of usability, while Lab4CE has been qualified as good regarding this criterion.
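For reference, the standard SUS scoring procedure of [7] can be summarized as follows: odd-numbered (positive) items contribute their score minus 1, even-numbered (negative) items contribute 5 minus their score, and the sum is multiplied by 2.5 to obtain a 0-100 score. The answers in the sketch below are invented solely to illustrate the computation; they are not data from the study.

```python
def sus_score(answers):
    """Compute the System Usability Scale score from ten 1-5 Likert answers."""
    assert len(answers) == 10
    total = 0
    for i, a in enumerate(answers, start=1):
        total += (a - 1) if i % 2 == 1 else (5 - a)  # odd items positive, even items negative
    return total * 2.5

# Invented answers for one respondent (for illustration only).
print(sus_score([4, 2, 4, 2, 3, 2, 4, 2, 4, 3]))  # -> 70.0
```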

5 Conclusions and Perspectives


We designed a set of awareness and reflection tools aiming at engaging learners
in the deep learning process. These tools have been successfully integrated into
the Lab4CE system, our existing remote laboratory environment dedicated to
computer education, before being experimented in an authentic learning context.
The objectives of this experimentation were to evaluate, in a face-to-face prac-
tical learning setting, students’ perception of learning when performing tasks
using the enhanced Lab4CE system, and to compare these measures with their
perception of learning when using traditional practical environments. Even if
the face-to-face setting might have had a negative impact on the Lab4CE envi-
ronment evaluation, students rated both environments at the same levels of
relevance, reflection and interpretation.
From this experimentation, we identified new awareness tools that might be important to leverage reflection, such as a notification system alerting learners that peers are analyzing their work, or dashboards highlighting analyses of their work based on their performance. Finally, the analysis of the experimentation results also emphasizes the low levels of interactivity and peer support within our system. We will dive into these broader areas of investigation through the design and integration of scaffolding tools and services such as private message exchanges, recommendation of peers who may bring support, or help seeking.

References
1. Arnold, K.E., Pistilli, M.D.: Course signals at Purdue: using learning analytics to
increase student success. In: Proceedings of the 2nd International Conference on
Learning Analytics and Knowledge, pp. 267–270. ACM (2012)
2. Bangor, A., Kortum, P., Miller, J.: Determining what individual SUS scores mean:
adding an adjective rating scale. J. Usability Stud. 4(3), 114–123 (2009)
3. Boud, D.: Situating academic development in professional work: using peer learn-
ing. Int. J. Acad. Dev. 4(1), 3–10 (1999)
4. Boud, D., Keogh, R., Walker, D.: Reflection: Turning Experience into Learning.
Routledge, New York (2013)
5. Broisin, J., Venant, R., Vidal, P.: Lab4CE: a remote laboratory for computer edu-
cation. Int. J. Artif. Intell. Educ. 25(4), 1–27 (2015)
6. Brooke, J.: SUS: a retrospective. J. Usability Stud. 8(2), 29–40 (2013)


7. Brooke, J., et al.: SUS-a quick and dirty usability scale. Usability Eval. Ind.
189(194), 4–7 (1996)
8. Advanced Distributed Learning (ADL) Co-Laboratories: Experience API. https://
github.com/adlnet/xAPI-Spec/blob/master/xAPI-About.md. Accessed 21 Nov
2016
9. Collins, A., Brown, J.S.: The computer as a tool for learning through reflection.
In: Learning Issues for Intelligent Tutoring Systems, pp. 1–18. Springer, New York
(1988)
10. Davis, D., Trevisan, M., Leiffer, P., McCormack, J., Beyerlein, S., Khan, M.J.,
Brackin, P.: Reflection and metacognition in engineering practice. In: Using Reflec-
tion and Metacognition to Improve Student Learning, pp. 78–103 (2013)
11. De Jong, T., Linn, M.C., Zacharia, Z.C.: Physical and virtual laboratories in science
and engineering education. Science 340(6130), 305–308 (2013)
12. Durall, E., Leinonen, T.: Feeler: supporting awareness and reflection about learn-
ing through EEG data. In: The 5th Workshop on Awareness and Reflection in
Technology Enhanced Learning, pp. 67–73 (2015)
13. Edwards, S.H.: Using software testing to move students from trial-and-error to
reflection-in-action. ACM SIGCSE Bull. 36(1), 26–30 (2004)
14. Govaerts, S., Verbert, K., Klerkx, J., Duval, E.: Visualizing activities for self-
reflection and awareness. In: International Conference on Web-Based Learning,
pp. 91–100. Springer, Heidelberg (2010)
15. Howlin, C., Lynch, D.: Learning and academic analytics in the realizeit system. In:
E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare,
and Higher Education, pp. 862–872 (2014)
16. Jonassen, D.H.: Instructional design theories and models: a new paradigm of
instructional theory. Des. Constr. Learn. Environ. 2, 215–239 (1999)
17. Kist, A.A., Maxwell, A., Gibbings, P., Fogarty, R., Midgley, W., Noble, K.: Engi-
neering for primary school children: learning with robots in a remote access lab-
oratory. In: The 39th SEFI Annual Conference: Global Engineering Recognition,
Sustainability and Mobility (2011)
18. Kollöffel, B., de Jong, T.: Can performance feedback during instruction boost
knowledge acquisition? Contrasting criterion-based and social comparison feed-
back. Interact. Learn. Environ. 24(7), 1–11 (2015)
19. Lowe, D., Murray, S., Lindsay, E., Liu, D.: Evolving remote laboratory architectures
to leverage emerging internet technologies. IEEE Trans. Learn. Technol. 2(4), 289–
294 (2009)
20. Matthew, C.T., Sternberg, R.J.: Developing experience-based (tacit) knowledge
through reflection. Learn. Individ. Differ. 19(4), 530–540 (2009)
21. Maxwell, A., Fogarty, R., Gibbings, P., Noble, K., Kist, A.A., Midgley, W.: Robot
RAL-ly international-promoting stem in elementary school across international
boundaries using remote access technology. In: The 10th International Conference
on Remote Engineering and Virtual Instrumentation, pp. 1–5. IEEE (2013)
22. Michinov, N., Primois, C.: Improving productivity and creativity in online groups
through social comparison process: new evidence for asynchronous electronic brain-
storming. Comput. Hum. Behav. 21(1), 11–28 (2005)
23. Miller, T.: Formative computer-based assessment in higher education: the effective-
ness of feedback in supporting student learning. Assess. Eval. High. Educ. 34(2),
181–192 (2009)
24. Prensky, M.: Khan academy. Educ. Technol. 51(5), 64 (2011)
25. Schön, D.A.: The Reflective Practitioner: How Professionals Think in Action. Basic
Books, New York (1983)


26. Seibert, K.W.: Reflection-in-action: tools for cultivating on-the-job learning condi-
tions. Org. Dyn. 27(3), 54–65 (2000)
27. Sweller, J.: Cognitive load theory, learning difficulty, and instructional design.
Learn. Instr. 4(4), 295–312 (1994)
28. Taylor, P., Maor, D.: Assessing the efficacy of online teaching with the construc-
tivist online learning environment survey. In: The 9th Annual Teaching Learning
Forum, p. 7 (2000)
29. Venant, R., Vidal, P., Broisin, J.: Evaluation of learner performance during prac-
tical activities: an experimentation in computer education. In: The 14th Interna-
tional Conference on Advanced Learning Technologies, ICALT, pp. 237–241. IEEE
(2016)
30. Wilson, J., Jan, L.W.: Smart Thinking: Developing Reflection and Metacognition.
Curriculum Press, Carlton (2008)

Role of Wi-Fi Data Loggers in Remote Labs
Ecosystem

Venkata Vivek Gowripeddi1(&), B. Kalyan Ram2, J. Pavan1,


C.R. Yamuna Devi1, and B. Sivakumar1
1
Dr. Ambedkar Institute of Technology, Bangalore 560056, KA, India
vivek.vg@hotmail.com, pavanj278@gmail.com,
yamuna_devicr@yahoo.com, sivabs2000@yahoo.co.uk
2
BITS-Pilani KK Birla Goa Campus, Goa 403726, India
kalyanram.b@gmail.com

Abstract. All data are important and useful but what is more important is the
way this data is used. Wi-Fi Data-logger is a major step towards making use of
data for effective management of a remote lab. The purpose is to build a
real-time data-logger with Wi-Fi capabilities to remotely monitor the equipment
status and environmental conditions inside a remote lab containing high-end
electrical and electronic machinery. This device should be adaptive, flexible,
easy to use and should give deterministic results to take action.
The structure of the Wi-Fi data logger consists of two zones: (a) a device-level hardware zone and (b) a server-level software zone.
(a) A micro-controller is connected to various sensors (temperature, humidity, gas and motion sensors) and to fault-testing lines of the equipment and peripherals. The data, obtained continuously in real time, is pumped through Wi-Fi over TCP/IP or UDP protocols to a server computer.
(b) It consists of a simple program running on the server computer to receive the data from the micro-controller through Wi-Fi and organize it. This program also runs a script which raises a warning in case of malfunctioning and displays a possible solution with step-wise instructions.
Key outcomes include: (a) seamless integration of the device with the existing machinery requiring minimal effort, (b) protection of components, and (c) over 40% reduction in the time required to detect and fix an issue, achieved by the coordinated operation of the device and software.
Thus, these Wi-Fi data-loggers enhance the way remote labs operate by
taking care of safety issues and increasing the stability of the whole remote labs
architecture. This technology can pave the way for more complex remote lab architectures, and the evolution of Wi-Fi data-logger technology will result in the evolution of remote labs.

Keywords: Remote labs · Internet of Things (IoT) · Wireless monitoring · Real time · Safety · Revolutionary

© Springer International Publishing AG 2018


M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_22

1 Introduction

Remote labs are undoubtedly the future of laboratory education, as they offer effective and holistic learning opportunities to students and researchers with limited access to laboratories, giving them the extra flexibility and time required to complete that experiment, make that breakthrough, or pull out an amazing research paper like this one [1]. Remote labs have been increasing in number day by day with advancing technology, with new labs set up in places spanning all domains [2]. Remote labs serve as a bridge between virtual and real labs, as they can be used not only in the field of education, but also for carrying out any measurement task with real laboratory instruments [3].
The general architecture of a remote lab consists of an experiment set up inside a room whose structure includes a computer with hardware connected to it, a webcam, a microphone, and feedback of experimental results back to the computer [4]. More importantly, everything inside is connected to the outside world through the internet or the intranet, accessed through a web server, as shown in Fig. 1.

Fig. 1. Architecture of remote labs

Data loggers are small devices with the capability to accumulate large amounts of data through acquisition cards and store them in memory before dumping them to a mass storage device. Data loggers can be made more purposeful with the advancement of technology, and their adaptive implementation can lead to the collection of critical data which can be of immense importance [5]. However, data loggers are generally overlooked by most researchers due to their simplicity and deemed by industrialists to be an extra feature rather than a required feature. We wish to change that by showing the important role that data loggers play in the remote labs ecosystem and the significant boost they give to the system.


2 Approach

In this section, the build of the data logger, introduced towards the end of the introduction, is discussed in depth. Figure 2 shows the general architecture of the data logger with the two branches of its build: (a) the hardware level and (b) the software side. Each of the two branches will be dealt with in detail in this section, and the whole approach to building a data logger and implementing it in the remote lab ecosystem will be discussed.

Fig. 2. Wi-Fi data logger architecture

2.1 Preliminary Steps

Choosing the Laboratory. The key to choosing a laboratory is analyzing its downtime and checking the feasibility of installing the data logger with minimal cost and infrastructure changes. Our primary criteria for choosing a laboratory are that it should have a sufficiently high downtime and a low installation cost.
For this, different laboratories were identified and their downtimes (Downtime Percentage = Total Downtime/Total Time) as well as their approximate cost factors (Cost of installation of data logger/Cost of remote lab setup) were computed. Based on this data, the best of the lot was chosen by the Factor of Decision, which is a weighted sum of the DTP and the Cost Factor according to their proportionality, as shown in Fig. 3.
The factor of decision is 25% when the downtime is 25% and the cost factor is 25%. The lower the cost and the higher the downtime, the higher the factor of decision. So, for a cost-effective installation the factor should be greater than 25%, and for a more effective installation a threshold above 35% was chosen. As shown in Fig. 3, five out of the eight laboratories have a factor above 35%, and these laboratories were the first choice for data logger installation. This understanding gives us a clear picture of the laboratories and a head start by helping us identify the ones on which the data logger can have the most impact.
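The precise weighting behind the factor of decision is not spelled out in the text, so the sketch below only illustrates the idea: downtime percentage and cost factor are computed per laboratory and combined so that the factor grows with downtime and shrinks with cost. The weights, the offset and the sample figures are assumptions.

```python
def downtime_percentage(total_downtime_h: float, total_time_h: float) -> float:
    return 100.0 * total_downtime_h / total_time_h

def cost_factor(datalogger_cost: float, remote_lab_cost: float) -> float:
    return 100.0 * datalogger_cost / remote_lab_cost

def factor_of_decision(dtp: float, cf: float, w_dtp: float = 0.5, w_cf: float = 0.5) -> float:
    """Illustrative weighted combination: higher downtime raises the factor, higher cost lowers it."""
    # Offset chosen so that (25%, 25%) maps to 25%, as stated in the text.
    return w_dtp * dtp + w_cf * (100.0 - cf) - 25.0

# Hypothetical laboratory: 300 h of downtime out of 1200 h, a 900$ logger for a 10000$ lab.
dtp = downtime_percentage(300, 1200)   # 25.0
cf = cost_factor(900, 10000)           # 9.0
print(round(factor_of_decision(dtp, cf), 1))  # -> 33.0, above the 25% cost-effectiveness line
```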


Fig. 3. Choosing the laboratory using the factor of decision

Identifying the Failures. Failures can be due to varied reasons ranging from simple
overheating to equipment malfunction. In this section, some of the failures across the
whole remote labs ecosystem are listed out.
1. Sudden Variation in power can cause the system to fail.
2. Faulty machine lines can lead to malfunctioning.
3. Failure of temperature maintenance system can cause severe damage to compo-
nents due to overheating or overcooling.
4. Increased humidity and water deposition might brick the system.
5. Poor maintenance of infrastructure is an important cause of damage.
6. Use of components or hardware products which are not rated sufficiently high for
parameters like current, voltage, temperature can cause burning out of the com-
ponents resulting in a major failure.
7. Other failures can be attributed to rapid, violent, and unexpected changes that can
occur.
8. Loose connections can also be an issue.
9. Mechanical stress between components can lead to damage of critical moving
parts.
10. Errors in software code can put the system into an infinite loop.
11. If proper security measures are not in place, the system can be misused.
12. Memory overflow can hang the system.

Finding the Suitable Solution


1. Providing constant power supply to enhance the performance.
2. Constant monitoring of the fault lines for quick error detection can solve the issue.
3. By installing proper safety measures and cooling systems to avoid failure.
4. For better performance, the hardware components must be rated high on param-
eters like voltage, humidity and temperature.


5. To understand unexpected failures, an error detection system must be designed and


controlled using feedback elements.
6. Quality of the solder materials must be good and the solder joints must be strong
enough to avoid physical damage or broken connection.
7. Advanced temperature and humidity sensors should be used to give precise data.
8. Software code should be developed with proper techniques.
9. Memory issues should be taken care of by clearing the buffer regularly.
10. Software should be maintained and updated regularly.

2.2 Hardware Configuration

Identifying the Suitable Hardware. It is important to choose hardware that works across most types of laboratories and laboratory equipment, so that the only changes required are in the implementation of the code and the connections (Table 1).

Table 1. Hardware compatibility table


Laboratory type Type of microcontroller
Type A Type B Type C Type D
Microcontroller Lab 2 ✔ ✔
Electrical DC machines lab ✔ ✔ ✔
Measurement Lab 2 ✔ ✔ ✔
Process Trainer Kit (PTK) ✔ ✔

As can be seen in the table, Type B is adaptable to more labs than Type A, so Type B is preferred over Type A. Similarly, Type C is preferred over Type D. Even if the cost of Type B is slightly greater than that of Type A, it is worth using as it saves on the cost of spares [6]. Figure 4 shows an example of a Wi-Fi based microcontroller.1

Fig. 4. Adafruit – Cortex M3 with Wi-Fi microcontroller

1 The mentioned microcontroller was used for the microcontroller lab and the product contains a Cypress WICED™ chip. It is sold by Adafruit Industries based in New York.


Choosing Necessary Components


See Table 2.

Table 2. List of components used (basic idea).


Components list Range
Temperature sensor −80 °C to +70 °C
Humidity sensor 0 to 130 g/m3 Output: 0–13 mV
Infrared sensor 760 nm wavelength
Passive infrared sensor Up to 20 m
Voltage sensors 0–30 V
Current sensors 0.2–1.6 A & 2–10 A
Microcontroller 32 bit
Battery 12 V
Box case with heat sink Special PVC material (Resistant up to 150 °C)
Buck boost converter 12–24 V and 12–5 V
Mains supply 240 V, 10 A
SD card, hard disk 32–128 GB, 1 TB

Adding Components to Board. This is a 3-stage process wherein the components are first put in a circuit on a breadboard to test their working, as illustrated in Fig. 5. The components are then soldered manually onto the PCB, as shown.

Fig. 5. Adding components to board: (a) Testing the hardware design on breadboard
(b) Soldering the components onto a circuit (c) Design of PCB and production

Encapsulating and Enclosing the Hardware Platform. This step involves packing the whole hardware side in a high-grade case, making the required openings for inputs and outputs through the device. Figure 6(a) and (b) clearly describe the packaging and its features.


Fig. 6. (a) Hardware packaged in a IP60 box (b) Openings through the case are well sealed.

2.3 Software Configuration

Choosing the Right Software for Hardware Side as Well as Client Side. This step
involves choosing the software that is most suited to embedded programming [7] and
client side software application [8].
Arduino is an open-source electronics platform based on easy-to-use hardware and
software.
LabVIEW is an integrated development environment designed specifically for
engineers and scientists building measurement and control systems.
Arduino is chosen for:
• Simplicity
• Strong Hardware – Software interaction
• Code at an Embedded C level
• Open Source and a huge Community for support
• Large database of libraries and binaries
LabVIEW is chosen for:
• Excellent Design in form of front panel and block diagram
• Built in Libraries and tools
• Precision measurement reading
• Highly Customizable

Programming the Hardware. The Arduino IDE was used to program the microcontroller by embedding C code onto the device. Figure 7 illustrates how the same device can be adapted to read different parameters, which makes the device universal and adaptive.
Programming the Client Side. Client-side programming is done through LabVIEW, in which the various loops and conditions are designed. All the conditions, restrictions and boundary conditions are set in the LabVIEW block diagram, with indicators and user output data on the Front Panel. Figure 8 illustrates the LabVIEW programming logic in the block diagram window of the LabVIEW software.


Fig. 7. (a), (b) and (c) show how with a few lines of modification in the code different
parameters can be read.

Fig. 8. LabVIEW programming

2.4 Server Configuration


To facilitate remote monitoring, the LabVIEW front panel can be made into a standalone program running on a server, which can be accessed through a remote desktop connection on an operator's PC [9]. The options of using an existing remote lab server or setting up an exclusive server for the datalogger are available.
Local Server. This is the simplest approach, where an already existing remote lab server can be used for running the monitoring VI. This option requires no additional cost but is not recommended, as failure of the remote server might lead to failure of the whole system.
Exclusive Server. An exclusive server for the data logger monitoring can be installed to provide a more robust architecture, as a failure in the main server will not affect monitoring; this is recommended for long-term installations.
Auto-Alerting through SMS and Email. Using MQTT, conditions can be defined such that email and SMS notifications are sent to the concerned personnel [10], as depicted in later sections.
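A minimal sketch of such an auto-alerting path is shown below, assuming a paho-mqtt subscriber on the server and SMTP for e-mail delivery. The broker address, topic name, threshold and mail settings are placeholders, and SMS delivery (e.g., through a gateway) is omitted for brevity.

```python
import json
import smtplib
from email.message import EmailMessage
import paho.mqtt.client as mqtt

TEMP_LIMIT_C = 45.0  # hypothetical threshold for an over-temperature alert

def send_email(subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "datalogger@lab.example.org"
    msg["To"] = "operator@lab.example.org"
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.org", 587) as smtp:  # placeholder SMTP relay
        smtp.starttls()
        smtp.send_message(msg)

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)                    # e.g. {"temp": 48.2, "fault": 0}
    if reading.get("fault") or reading.get("temp", 0) > TEMP_LIMIT_C:
        send_email("Remote lab alert",
                   f"Abnormal condition reported by the datalogger: {reading}")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.org", 1883)               # placeholder broker
client.subscribe("remotelab/datalogger/readings")
client.loop_forever()
```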


3 Working
3.1 Different Cases of Operation
The following set of figures illustrates how the monitoring front panel looks in different cases of operation.
Figure 9 shows a typical remote lab monitoring screen where everything looks okay: temperature and humidity are under control, the fault lines are off, and a message indicating the same is displayed.

Fig. 9. Normal state of operation

Figure 10 shows a warning state of operation where the temperature is higher than usual but not high enough to cause damage to components; a message indicating this is displayed for the operator, along with simple instructions for resolving the issue.

Fig. 10. Warning state of operation


Figure 11 illustrates the error state of operation, where the lines are faulty and a red light indicates that the machine has stopped running. A message indicating the same is displayed and a solution is provided for the operator. Since this condition may require expertise, a notification is sent to the concerned personnel using auto-alerting.

Fig. 11. Error state of operation
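The normal/warning/error logic shown on the front panel can be summarized by a simple threshold check. The sketch below is a text-only Python analogue of that logic (the actual implementation is a LabVIEW VI), and the threshold values are assumptions.

```python
def classify(temp_c: float, humidity_pct: float, fault_line: bool) -> tuple:
    """Map sensor readings and fault-line status to the three monitoring states."""
    if fault_line:
        return ("ERROR", "Fault line active: machine stopped. Notify maintenance personnel.")
    if temp_c > 50 or humidity_pct > 80:                 # assumed hard limits
        return ("ERROR", "Readings out of safe range. Shut down and inspect the equipment.")
    if temp_c > 40 or humidity_pct > 70:                 # assumed warning band
        return ("WARNING", "Temperature/humidity higher than usual. Check the cooling system.")
    return ("NORMAL", "Everything is under control.")

print(classify(32.0, 55.0, False))   # normal state (Fig. 9)
print(classify(43.5, 60.0, False))   # warning state (Fig. 10)
print(classify(36.0, 58.0, True))    # error state (Fig. 11)
```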

3.2 Different Labs in Operation


Figure 12 illustrates how parameters vary from lab to lab and how they can be displayed on the monitoring screen. This particular example shows the graph of power consumption along with other necessary parameters. It can be seen that there is a sudden spike in power consumption [11], and this is depicted in real time, with a warning displayed as well.

Fig. 12. Monitoring different kinds of labs by displaying related information


3.3 Viewing the Recordings


In case of failure, logs are generated and can be viewed later on. The logs are stored on local storage such as an SD card or hard disk. These can be retrieved later, when the system is back online, through TCP/IP or physically [12]. This storage serves as a black box, and the data can be analyzed to find the reason for the failure.

4 Outcomes

4.1 Seamless Integration


Figure 13 gives an overview of how the cost of the datalogger compares against the total cost of the remote lab, and how the man-hours required to install a data logger compare against the man-hours required to build a remote lab. From the figure, which has a logarithmic Y axis, it can be observed that the cost of a datalogger typically varies between 8–14% of the total cost of the remote lab, and the man-hours required to set up a datalogger are less than 1/15th of those needed for the remote lab. This proves the motto of “Seamless Integration” [13]. With minimal cost incurred and minimum effort, a data logger can be installed in most remote labs.

Fig. 13. Shows how cost of datalogger and its installation fares against total remote lab cost for
different kinds of labs.


4.2 Reduced Time to Detect a Failure and Rectify It


Since the laboratory is continuously monitored for various physical parameters and
fault lines, in case of an error, warning or error is shown which was depicted earlier in
the Working Section earlier, the time required to detect an error is drastically lower
than before. As illustrated in Fig. 14, for most labs it just takes 1/3rd of the usual time
required to detect and fi failure unlike without datalogger, I think this is the most
important feature of the whole architecture as it drastically improves the availability of
lab for use and helps in easy maintenance of infrastructure.

Fig. 14. Compares the time required to detect a failure and correct it with datalogger (in red) vs.
Without datalogger (in blue)

4.3 Increased Working Efficiency


As the time required to detect the failure and correct it is minimized, the efficiency of the system goes up. In Fig. 15 it can be seen that the downtime for most of the labs is less than 4%, which translates to an efficiency of more than 96%, compared to 86–91% earlier. This is a significant result which proves the need for the datalogger.

4.4 Cost Efficiency


Finally, the important outcome measure for the sustained use of dataloggers is the savings due to the installation over a certain period, ranging from one year to five years and extending to ten years [14, 15]. According to the statistics, the savings over five years are generally more than the cost of the datalogger itself, as can be seen in Fig. 16. Our estimates show that the breakeven point occurs two to three years from the time of installation. This proves that the data logger is a profitable venture from both a qualitative and a quantitative perspective.


Fig. 15. Downtime Comparison with and without dataloggers for different labs

Fig. 16. Compares cost of data logger, yearly savings due to datalogger and savings made over
five years


5 Conclusion

The discussion in this paper started with the importance of remote labs in the current context, and the need for a Wi-Fi datalogger for the efficient functioning of remote labs was well established. The approach to building the data logger was discussed from scratch: the foundation of the datalogger, from choosing the lab to choosing the components and hardware, was presented, and the construction of the datalogger in its hardware, software and server aspects is illustrated in Sect. 2. The working of the datalogger, with live screens from different labs and different states of operation, is shown in the Working section. The data logger was judged on the parameters of integration cost and effort, time to detect and rectify a failure, efficiency and, finally, cost. The results clearly prove the effectiveness of the data logger. The importance of data loggers in the remote labs ecosystem is well established through this paper.

Acknowledgment. The authors wish to extend their thanks to various universities and industries across India and across the world for providing opportunities to test the datalogger architecture and make findings.

References
1. Auer, M.E.: Virtual lab versus remote lab. In: 20th World Conference on Open Learning and
Distance Education (2001)
2. Ram, B.K., Kumar, S.A., Sarma, B.M., Mahesh, B., Kulkarni, C.S.: Remote software
laboratories: facilitating access to engineering softwares online. In: 2016 13th International
Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 409–413. IEEE,
February 2016
3. Pruthvi, P., Jackson, D., Hegde, S.R., Hiremath, P.S., Kumar, S.A.: A distinctive approach to
enhance the utility of laboratories in Indian academia. In: 2015 12th International
Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 238–241. IEEE,
February 2015
4. Esche, S.K., Chassapis, C., Nazalewicz, J.W., Hromin, D.J.: An architecture for multi-user
remote laboratories, dynamics (with a typical class size of 20 students), 5, 6 (2003)
5. Outram, J.D., Outram, R.G.: Adaptive data logger. U.S. Patent No. 4,910,692, 20 March
1990
6. Yunlong, F., Fang, A., Li, N.: Cortex-M0 processor: an initial survey. Microcontrollers
Embed. Syst. 6, 33 (2010)
7. D’Ausilio, A.: Arduino: a low-cost multipurpose lab equipment. Behav. Res. Methods 44(2),
305–313 (2012)
8. Gontean, A., Szabó, R., Lie, I.: LabVIEW powered remote lab. In: 2009 15th International
Symposium for Design and Technology of Electronics Packages (SIITME). IEEE (2009)
9. Auer, M., Pester, A., Ursutiu, D., Samoila, C.: Distributed virtual and remote labs in
engineering. In: 2003 IEEE International Conference on Industrial Technology, vol. 2,
pp. 1208–1213. IEEE, December 2003
10. Aloni, E., Arev, A.: System and method for notification of an event. U.S. Patent
No. 6,965,917, 15 November 2005


11. Shnayder, V., Hempstead, M., Chen, B.R., Allen, G.W., Welsh, M.: Simulating the power
consumption of large-scale sensor network applications. In: Proceedings of the 2nd
International Conference on Embedded Networked Sensor Systems, pp. 188–200. ACM,
November 2004
12. Tinga, T.: Application of physical failure models to enable usage and load based
maintenance. Reliab. Eng. Syst. Saf. 95(10), 1061–1075 (2010)
13. Vuletić, M., Pozzi, L., Ienne, P.: Seamless hardware-software integration in reconfigurable
computing systems. IEEE Des. Test Comput. 22(2), 102–113 (2005)
14. Robinson, R.: Cost-effectiveness analysis. BMJ 307(6907), 793–795 (1993)
15. Tanner, M., Eckel, R., Senevirathne, I.: Enhanced low current, voltage, and power
dissipation measurements via Arduino Uno microcontroller with modified commercially
available sensors. APS March Meeting Abstracts (2016)

Flipping the Remote Lab with Low Cost Rapid
Prototyping Technologies

J. Chacón(B) , J. Saenz, L. de la Torre, and J. Sánchez

Universidad Nacional de Eduación a Distancia (UNED), Madrid, Spain


jchacon@bec.uned.es

Abstract. This work proposes the idea of flipping the remote lab.
A flipped remote lab would consist of requesting students to build a remotely accessible experiment, so that teachers would test the lab in order to evaluate it, instead of creating it themselves. Building a remote
lab is a multidisciplinary activity that involves using different skills and
which promotes long-life learning and creativity. Also, by assigning this
task to work in groups, students would also build up abilities such as
teamwork, communication and leadership. Because creating a remote
lab is a complex task, the idea is to use the experience acquired dur-
ing many years of development and use of virtual and remote labs for
teaching engineering and physics, to simplify the process and make it
manageable for students. Given the current state of the technology, pro-
viding students with some guidelines and reference designs should be
enough to make feasible for them to develop a remote experiment.

Keywords: Remote labs · Flipped classroom · Low cost platforms

1 Introduction

Flipped classroom is an instructional strategy based on reversing the traditional


learning process. Students carry out research at home and are actively involved in knowledge construction and acquisition, but also participate in the evaluation
of their learning. On the other hand, it is widely accepted that solving today’s
major challenges requires a multidisciplinary approach. Therefore, combining
the flipped classroom teaching paradigm with online control education labs can
be an interesting and formative experience for engineering students.
The purpose of this work is to propose the idea of flipping the remote lab.
A flipped remote lab would consist of requesting students to build a remotely accessible experiment, so that teachers would test the lab in order to evaluate
it, instead of creating it themselves. Building a remote lab is a multidisciplinary
activity that involves using different skills and which promotes lifelong learning
and creativity. Also, by assigning this task to work in groups, students would
also build up abilities such as teamwork, communication and leadership. Because
creating a remote lab is a complex task, the idea is to use the experience acquired
during many years of development and use of virtual and remote labs for teaching

c Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6 23


engineering and physics, to simplify the process and make it manageable for
students. Given the current state of the technology, providing students with
some guidelines and reference designs should be enough to make feasible for
them to develop a remote experiment.
Recently, low cost single board computers such as the Raspberry Pi or Beagle-
bone Black, and 3D printing technologies, which allow for rapid prototyping of
mechanical systems, have become pervasive. These tools provide an interesting
framework that can assist in the creation of remote labs. The hardware framework
is complemented with reusable software components, a web-based architecture,
and standard communication protocols to reduce development costs and
effort. Based on this paradigm, an easily replicable remote lab architecture is
proposed, using 3D printed part designs that have been released under open source
licenses to allow free use or modification, software components that implement
the different subsystems of the lab, and elements that either can be gathered from
old electronic devices or are cheap and commonly used components.
Some examples can be found in the literature of the flipped classroom
adapted to blended learning [5,8] and to engineering subjects [7], as well as of
laboratories based on low-cost single-board platforms, both hands-on [3] and
remote [1,4,6].
Currently, a virtual and remote lab of an air flow levitation system has been
built using the proposed methodology. It consists of a small object that has to
be lifted using the air flow generated by a fan inside a cylinder. The position
of the levitating object is measured with an infrared distance sensor and is
used to control the rotation speed of the fan. The prototype will be incorporated
into a master's degree course in control engineering. Some of the benefits expected
from the experience are to provide students with a global insight into engineering
processes, increase their motivation to research different sensing technolo-
gies, and promote their creativity.

2 Approach
Since the idea is to let students build their own systems, the remote lab design
has to be well thought out and structured. There are a few requirements the lab
should meet to be realizable:

– The design should be as low-cost as possible (around or under $100). Students
will have to construct the lab, so it is likely that they will wear out the compo-
nents more rapidly. Moreover, in case there are several students or working
groups, the components (and therefore the cost) will have to be duplicated.
– The design should be easily replicable, so that students will be able to build
one.

If these requisites are met, not only will students be able to build the lab
as part of an assigned work, but even those who want to have
their own experimentation platform may be able to afford to construct one on their own.
The lab should use open-source technologies, mainly to reduce costs in order
to meet the first requirement, but also because this approach encourages students
to acquire knowledge by tinkering with the system design, proposing modifications
or enhancements, and so on. There are some other aspects that have been con-
sidered in order to keep the costs of building the remote lab low. The first one is
to use materials that are easy to obtain and have a reasonable cost. For exam-
ple, the IR sensor and single-board computer are cheap and can be bought in
virtually any electronic components shop. Also, components should be reused
whenever possible. The fans can easily be gathered from an old PC or other
electronic devices. Taking advantage of the boom in rapid prototyping technologies
is also important: 3D printing greatly reduces the cost of mechanical pro-
totyping, and it is relatively easy to have access to a 3D printer, either at the
university, at a specialized shop, or through online services that print your designs.
The design of the laboratory can be decomposed into several tasks, some of
which have to be done by the educator, and others that have to be prepared to
be assigned to students. The tasks that correspond to educators are:

1. Design of the experience to be carried out.
2. Design and construction of the plant.
3. Design and construction of the server software.
4. Creation of the GUI.

2.1 Design of the Experience to Be Carried Out

It is the responsibility of the educator to think about what concept should be learned,
what kind of system will be used for the laboratory, the physical variables that
will be measured, the elements that will interact with the environment, and so
on. Depending on the knowledge level of the students, it has to be decided which
tasks may be assigned to them.
For example, if students are good at writing software but lack knowledge
of electronics, it may be reasonable to give them the plant and let them create
only the software parts. Alternatively, the design and construction of the structural parts
can be assigned to students with mechanical engineering knowledge.

2.2 Design and Construction of the Plant

The design and construction of the plant is a thorough engineering process from
which students can benefit, acquiring a learn-by-doing understanding of how to
convert an idea or a concept into a practical solution.
In the next paragraphs, a (not exhaustive) review is provided of the hard-
ware platforms that are available to develop electronic systems. It is followed
by a review of some open source CAD tools to model structural components and
electronic circuits, and finally the architecture followed by our previous designs
is discussed, which can be used as a reference (but not the only and
definitive solution) for future labs.


Hardware. Since the release of the first Raspberry Pi model, a number of single
board computers have appeared, intended to fit developers' needs, which range
from small DIY projects such as home media centers or home automation appliances,
to high performance research computing. Most of these boards are specifically
focused on the maker community, students and educators, so they are fully open-
source hardware (Fig. 1).

Fig. 1. Screenshots of two popular open-source CAD software tools, (a) FreeCAD and
(b) OpenSCAD

An interesting feature of these single-board computers is their ability to run
a complete OS. As an example, a Raspberry Pi can run several Linux distros
(Raspbian, Ubuntu, LibreELEC, etc.), Windows 10 IoT Core, or RISC OS, imme-
diately opening a universe of possible applications: it is easy to set up a web
server, enable remote connections through SSH or even graphical sessions, or
use many different programming languages to develop a project. Furthermore,
the integrated input/output capabilities through digital IO, interconnection pro-
tocols (SPI, I2C, etc.), or AD converters make it easy (and affordable) to build
electronic systems, even without being an expert in the subject.

Software. CAD tools assist the designer to model the physical components
which will be part of the system, in our case the structural parts and the elec-
tronics circuits. It is out of the scope of this work to discuss the pros and cons of
the so many options available. However, it is worth to mention at least some of
the most popular open-source alternatives that cover the lab needs: FreeCAD,
OpenSCAD, KiCAD.
FreeCAD is an open-source 3D CAD software tool, very popular among the
3D printing community. It has many features: parametric design, multiplatform
support (it works on Linux, Windows and Mac), a fully customizable GUI, and native
support for Python scripting and extensions.
OpenSCAD is another popular tool, mostly used to design 3D printed parts.
Unlike FreeCAD, it uses a non-graphical interface with a different modelling approach.
It is based on a specific description language, so the creation process is more
similar to traditional programming. One of the advantages of this approach is
the flexibility to parameterize designs.


The electronic circuits and the PCBs have been created with the software
KiCad, a multiplatform and open-source tool that has the support of
CERN, which has made important contributions to the KiCad project as part of
the Open Hardware Initiative (OHI)1.
As in the case of 3D printing, there are many PCB manufacturers to which you
can send your circuit design to have your PCB manufactured with professional quality at a
moderate cost or, following the maker paradigm, you can build your own circuit
with a CNC PCB milling machine or a chemical etching process.
At the end of this stage, the outcomes are the structural parts and electronic
circuits needed to construct the plant.

2.3 Design and Construction of the Server Software


The software in the target computer must implement several capabilities,
including: Hardware interface, Datalogging, and Communication and Control
subsystems.

Hardware Interface. The purpose of the hardware interface is to read measurements
from the sensors and send values to the actuators. Though it is obviously very
platform dependent, it is good practice to use standard libraries and protocols.
For example, the Arduino API is widely used for its simplicity and has been
ported to other hardware, like the Beaglebone boards or the Raspberry Pi. The
functionality to be covered can usually be reduced to reading and writing digital or
analog inputs and outputs.
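As an illustration only (not the authors' implementation, which uses the Node.js bonescript library on a Beaglebone), a minimal Python sketch of such a read/write hardware interface on a Raspberry Pi could look as follows; the pin numbers are hypothetical:

# Minimal sketch of a hardware-interface layer, assuming a Raspberry Pi
# and the RPi.GPIO library; pin numbers are hypothetical examples.
import RPi.GPIO as GPIO

SENSOR_PIN = 17   # digital input (e.g. an end-stop switch)
FAN_PWM_PIN = 18  # PWM output driving the fan through a transistor stage

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN)
GPIO.setup(FAN_PWM_PIN, GPIO.OUT)

fan = GPIO.PWM(FAN_PWM_PIN, 1000)  # 1 kHz PWM carrier
fan.start(0)                       # start with 0% duty cycle

def read_sensor():
    """Read a digital input (analog inputs would need an external ADC)."""
    return GPIO.input(SENSOR_PIN)

def write_actuator(duty_cycle):
    """Send a value (0-100% duty cycle) to the fan."""
    fan.ChangeDutyCycle(max(0.0, min(100.0, duty_cycle)))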

Datalogging. Once the values have been acquired, they need to be stored
so that they can be accessed whenever required. There are many options for this
purpose, but again it is recommended to use a standard solution. There are time
series database systems (TSDB) specialized in time series management,
such as InfluxDB, Graphite, OpenTSDB or RRDtool.
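As a sketch of how such a datalogger might store samples (assuming a local InfluxDB instance and its Python client; the measurement and field names are hypothetical examples, not part of the authors' design):

# Minimal datalogging sketch, assuming a local InfluxDB 1.x instance
# and the influxdb Python client; measurement/field names are examples.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="airlevitator")

def log_sample(position, setpoint, control_signal):
    """Write one sample of the levitator state to the time series database."""
    point = {
        "measurement": "levitator",
        "fields": {
            "position": float(position),
            "setpoint": float(setpoint),
            "control": float(control_signal),
        },
    }
    client.write_points([point])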

Communication. The server software, running on the target platform (the
single-board computer), must provide an API to interact with the system. The
Remote Interoperability Protocol (RIP) has been proposed to interconnect engi-
neering systems with user interfaces. It is a simple API based on the JSON-RPC
protocol, which is human-readable and can be easily integrated with JavaScript
applications, as it uses the JavaScript Object Notation.
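For illustration, a JSON-RPC 2.0 request of the kind RIP builds on could be issued from a Python client as below; the endpoint URL, method name and parameters are hypothetical, since the exact RIP method set is defined in [2]:

# Sketch of a JSON-RPC 2.0 call such as those RIP is based on.
# The URL and method/parameter names are hypothetical examples.
import json
import urllib.request

def jsonrpc_call(url, method, params, request_id=1):
    payload = json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    }).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# e.g. read the levitator position from the lab server (hypothetical call):
# result = jsonrpc_call("http://lab-server:8080/RIP", "get", [["position"]])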

Control. The remote labs have a local controller implemented, which can be as
simple or as sophisticated as needed. In the case of a control engineering lab, it
must be a central part of the design, but even in other cases it is always necessary
to take some safety measures to ensure that the system cannot be harmed by
accident or by a malicious user.
1 https://home.cern/about/updates/2015/02/kicad-software-gets-cern-treatment.


2.4 Creation of the GUI


The interface design should be intuitive and easy to use, and should include a webcam visu-
alization of the system and a way to control and monitor the plant. The authors'
labs are created with Easy Java/Javascript Simulations (EjsS), an open source
tool which offers an easy way to create simulations and remote labs with a GUI,
even for users with no programming skills. These interfaces can be made according
to the users' needs for interactivity and visualization.

3 Use Case: Air Levitator System


The following paragraphs give an insight into all the stages of building the lab,
from the 3D modelling and printing of the structural pieces to the electronics
and software setup, describing the Air Levitator System so that it can serve as a
reference design.

3.1 Air Levitator System


The air levitator system is composed of a cylinder in which a forced air flow
is used to lift a small object, levitating at a desired position. The structure is
simple on purpose; there are only a few elements: a methacrylate tube with a
nozzle at one end, coupled with a blower fan. Both elements are supported by
an open and movable stand, which lets the air flow into the fan. The system has
been built using only the following components:
– A methacrylate tube.
– A small and light object.
– 3D printed parts.
– A single-board computer (Beaglebone).
– An infrared distance measuring sensor (PIR).
– A PC fan.
– Some discrete electronic components and a PCB.
– A webcam.

Printed Parts. Most structural elements have been printed on a Prusa Mendel
i3 3D printer, a very popular and affordable RepRap printer, available at the
authors' department. The 3D parts have been modeled with FreeCAD.

Electronics. The air levitator system is controlled by a single-board computer
running a GNU/Linux distribution. The Beaglebone provides general purpose
input/output (GPIO) pins to interconnect with external components. Since the
range of the voltage signal provided by the sensor (PIR) lies outside the range
admitted by the analog inputs of the board (0–1.8 V), it must be adapted before
being connected. Similarly, the actuators (fans) require voltages and currents
that cannot be directly handled by the board, so a signal conditioning circuit
has to be used.


Software. The hardware interface task is accomplished using the bonescript
library, which basically mimics the Arduino API for the Beaglebone and the
GPIO pins of the board. There is a real-time loop implementing the time-critical
actions: read sensors, update the controller and write outputs. Technically, it
is not actually real-time, because this is currently not supported by the Node.js
bonescript library. But for the time scale of the system, which is sampled at a
100 ms rate, it performs correctly. In case hard real-time is needed, there are
other alternatives (such as C++) supported by the Beaglebone board.
The datalogging capabilities have been separated into a low priority task that
periodically dumps measurements and control actions to a database, so the data is
stored and can be accessed to perform off-line processing of past sessions.
The communication subsystem, which makes the server functionality accessible
from outside the lab computer, implements the Remote Interoperability Pro-
tocol (RIP, [2]), which provides a standard API to control and monitor the
hardware. That basically means that any RIP-enabled application can easily
interconnect with the server to read and modify variables and plant parameters,
so it is easy to decouple the GUI design from the rest of the system.
Finally, the control subsystem implements a PID controller whose parameters
can be modified and tuned. The control subsystem is prepared to be extended with
more sophisticated controllers without much development effort.
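A minimal sketch of such a sampled control loop with a PID controller is shown below. This is illustrative Python only; the authors' implementation is written in JavaScript with bonescript, and the gains and the read_sensor()/write_actuator() helpers (sketched earlier) are hypothetical placeholders:

# Illustrative sketch of a 100 ms sampled PID loop; gains and the
# read_sensor()/write_actuator() helpers are hypothetical placeholders.
import time

KP, KI, KD = 2.0, 0.5, 0.1   # example PID gains
TS = 0.1                     # sampling period: 100 ms
setpoint = 20.0              # desired height (cm)

integral, prev_error = 0.0, 0.0

while True:
    position = read_sensor()            # height measured by the IR sensor
    error = setpoint - position
    integral += error * TS
    derivative = (error - prev_error) / TS
    control = KP * error + KI * integral + KD * derivative
    write_actuator(control)             # duty cycle sent to the fan
    prev_error = error
    time.sleep(TS)                      # soft real-time pacing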

The GUI. The interface design is clean and simple, sharing the same layout
with the virtual lab: there is a view of the system on the left, which is obtained
from the laboratory webcam (the equivalent of the 3D visualization in the vir-
tual lab), and some plots on the right showing the time evolution of the interesting
variables (the height of the lifted object measured by the IR sensor, the set-
point and the control signal sent to the fan). Finally, at the bottom there is a
control panel which allows the user to modify some system parameters, such as the controller
gains or the setpoint, and the connection buttons, which are analogous to the
simulation execution control buttons in the virtual lab. Figure 2 shows the remote
lab web interface, designed with EjsS and the RIP Model Element (an add-on)
which enhances EjsS with RIP interconnection capabilities.

Fig. 2. The remote lab.


4 Conclusions
In recent times, it is not unusual for students, even in the first courses of engi-
neering, to have at least basic knowledge of the mentioned development platforms
and a good predisposition to use them. Moreover, the popularity of 3D
printing technologies and of the do-it-yourself (DIY) and maker communities
can be an attractive way of drawing the students' attention. As an example of a
similar approach, some universities already propose robotic competitions where
students are asked to solve problems using basic construction kits. These
activities, which have been shown to benefit students' development, are not
very different in nature from the one proposed in this work. Therefore,
great benefit is expected from the flipped remote lab.

Acknowledgements. This work was supported in part by the Spanish Ministry of


Economy and Competitiveness under projects DPI-2012-31303 and DPI2014-55932-
C2-2-R.

References
1. Balula, S., Henriques, R., Fortunato, J., Pereira, T., Borges, H., Amarante-Segundo,
G., Fernandes, H.: Distributed e-lab setup based on the Raspberry Pi: the hydro-
static experiment case study. In: 2015 3rd Experiment@ International Conference
(exp.at 2015), pp. 282–285 (2015)
2. Chacón, J., Farias, G., Vargas, H., Visioli, A., Dormido, S.: Remote interoperability
protocol: a bridge between interactive interfaces and engineering systems. IFAC-
PapersOnLine 48(29), 247–252 (2015)
3. Krauss, R.: Combining Raspberry Pi and Arduino to form a low-cost, real-time
autonomous vehicle platform. In: 2016 American Control Conference (ACC), pp.
6628–6633, July 2016
4. Michels, L.B., Gruber, V., Schaeffer, L., Marcelino, R., da Silva, J.B., de Resende
Guerra, S.: Using remote experimentation for study on engineering concepts through
a didactic press. In: 2013 2nd Experiment@ International Conference (exp.at 2013),
pp. 209–211, September 2013
5. Shi, J., Yuan, S., Zou, Q.: From practice to experiment: Development and enlight-
enment of flipped classroom in China. In: 2016 International Symposium on Edu-
cational Technology (ISET), pp. 94–98, July 2016
6. Simão, J.P.S., Lima, J.P.C., Heck, C., Coelho, K., Carlos, L.M., Bilessimo, S.M.S.,
Silva, J.B.: A remote lab for teaching mechanics. In: 2016 13th International Con-
ference on Remote Engineering and Virtual Instrumentation (REV), pp. 176–182,
February 2016
7. Toner, N.L., King, G.B.: Restructuring an undergraduate mechatronic systems cur-
riculum around the flipped classroom, projects, labview, and the myrio. In: 2016
American Control Conference (ACC), pp. 7308–7314, July 2016
8. Zhang, H., Meng, L., Han, X., Yuan, L., Wang, J.: Exploration and practice of
blended learning in HVAC course based on flipped classroom. In: 2016 International
Symposium on Educational Technology (ISET), pp. 84–88, July 2016

Remote Experimentation with Massively
Scalable Online Laboratories

Lars Thorben Neustock(B) , George K. Herring, and Lambertus Hesselink

Stanford University, Stanford, CA 94305, USA


larstn@stanford.edu, bert@kaos.stanford.edu
http://kaos.stanford.edu

Abstract. In this paper we present a solution for highly scalable online
laboratories at low cost. The Massively Scalable Online Laboratories
(MSOL) platform is an online platform that enables the virtualization of real
experiments in a fashion that very closely mimics a physical experiment.
Moreover, it includes social features to enable peer-to-peer learning and
facilitates the creation of an online community. To add an experiment to
the MSOL platform, an existing setup is automatically turned into a data
set, accessible through database queries. In this way, MSOL provides an
effective and scalable solution to add an important element to current
online education systems at low cost. The MSOL platform might also
accompany scientific and engineering papers to add another channel for
disseminating qualitative and quantitative data.

Keywords: Online laboratory · Education · Experimentation · Scalability

1 Introduction

Massively Open Online Courses (MOOC) have the potential to reach vast audi-
ences both in geographic and socioeconomic scope. Currently, many universities,
including Stanford and MIT, use online coursework to augment educational pro-
grams for their students, provide professional programs for a fee, and offer video
lectures as MOOCs to the general public. These universities use online course-
work as an augmentation of physical classrooms in a flipped classroom approach,
where students study online education materials to enable increased interaction
between students and teachers. This enhances educational efficiency and depth of
learning. Moreover, some professional certificate programs, such as Udacity, focus
on online coursework to reach their students. All of these concepts rely on the abil-
ity of online coursework to be a scalable and effective means of education [1].
Although current techniques of video streaming allow users to easily view
lectures online and, in some cases, talk to advisers or teaching assistants via video
call, there is currently no means of including experiments in an online
coursework environment. However, experiments, which normally take place in a
laboratory environment with severe time and cost restrictions, are a crucial part



of education in science and technology. Experimentation provides an opportunity


for students to gain intuition about physical processes and encourages intrinsic
motivation. In the end, all major discoveries in natural science and medicine
were done via experimentation, and only through experimentation can a theory
be validated.
The MSOL concept aims at recording a fully digitized version of a physical
experiment, which can be displayed online, replicating the feeling of a physical
experiment in a laboratory. Our proposed concept is called Massively Scalable
Online Laboratories (short: MSOL) and is based on the iLabs platform devel-
oped at Stanford in 1996. This platform was, to the best of our knowledge, the
first Internet controlled Laboratory and allowed students to control and observe
a physical experiment “in a box”. It was tested at Stanford and around the
world through the “Optics for Kids” website supported by the Optical Soci-
ety of America. We could show that the remote experimentation significantly
improved a student’s ability to master new knowledge [2].
The new approach presented in this paper aims for the same goal by providing
a highly scalable version of online experiments, where the experiment is fully
digitized and thus easy to distribute over the internet.

2 Concept of Massively Scalable Online Laboratories

The proposed concept of MSOL is a highly scalable version of an online labora-


tory, where students can easily access the contents of a lab and collaborate with
other students.
In his works, the philosopher Socrates called the “Purposeful Conversation”
the best means of education. He points out that the best learning is achieved by
direct contact between people, who share the aim to educate themselves. Based
on this concept, we created a platform for a shared laboratory experience, which
can reach broad audiences connected via meetings, online coursework, or social
media on a one-to-one basis. This tool will augment online education and can
easily be embedded in an online course. The high-level idea behind our approach
is split into two steps. The first step is to turn a physical experiment into a data
set by recording all of its possible states. Then, this laboratory experiment can
be represented as a MSOL on our platform to a user who can control the vir-
tual experiment, similar to the experience of controlling a real experiment over a
remote connection. In the setting of a MSOL, which is accessed by users through
a web-page, students can collaborate to explore a diverse set of experiments and
the configurations possible in each experiment. The visualization of the exper-
iment is designed to be interactive. Students can study how the experiment
behaves according to the inputs that they provide. This allows student driven
exploration as they observe how different control settings result in diverse exper-
imental results. The student observations then serve as the basis for purposeful
conversations between peers. By operating the virtual experiment, students learn
how to operate the equipment and observe the experimental results changing as
a result of their actions. Since the experiment has to be run only one time to turn


it into a data-set, and subsequently into a MSOL, the platform provides access
to an otherwise economically-restricted, advanced laboratory experience. More-
over, since the laboratory is provided over a web-page, the barrier of entrance is
low. It can be accessed from all over the world and has very low acquisition costs
for the students and educational institutes. Each additional student requires only
enough resources to respond to their web requests. Therefore, it can be used in
resource poor areas when laboratory equipment is unavailable, or it can aug-
ment existing remote or in-class education. Additionally, users will be able to
access experiments and instruments that would otherwise require extensive prior
training, are dangerous, very expensive or not available.
Moreover, the entrance barrier to creating a laboratory by turning an existing
experiment into a data set is very low as well. It only needs to be done once and
can easily be achieved by a simple computer program. Today, most experiments
are already run by a computer, which means that only one layer of automatic
sweeping through possible permutations needs to be added. The MSOL platform
provides the required automation tools and storage facilities.
To encourage team building and peer to peer learning, interactive social fea-
tures, as described in Sect. 3, are added to the website. This design, alongside the
low entrance barriers, allows the creation of a large online community composed
of small sub-communities that encourage “Purposeful Conversation”.

3 Implementation of MSOL
3.1 Turning an Experiment into a Data-Set
The first step in turning an existing experiment into a MSOL is to record it in
all possible stages and save the corresponding information, such as values from
sensors or images of the experiment. Most modern experiments are already con-
figured to be computer controlled. The computer controls allow for repeatability
and accuracy in a research setting. This computer control also allows a program
to iterate through all possible states of all controls automatically with only very
little extra effort. If the sensor data and associated images are recorded with
each state in an automatic sweep, then this data is all that is required to cre-
ate a virtual experiment compatible with the MSOL interface. The majority of
relevant experiments can be turned into data in this fashion.
With decreasing storage costs and increased internet bandwidth, it is rea-
sonable to store more than 10^5 images per experiment, which provide the view
of the experiment at each permutation as if the observer was in the room. The
images can then be stored on a local hard drive, uploaded on a server, and simul-
taneously accessed by thousands of different users. Reviewing data just as a list
would be a very tedious task; therefore, the platform provides an interactive
interface that only shows relevant pictures and data, given the control states
that the user is interested in. This reduces the required bandwidth and increases
the scalability.
In general, before running the automation, the number of planned permuta-
tions should be considered and evaluated for feasibility. Yet, in most


educational experiments merely several thousand input combinations will yield


an interesting result and thus, this constraint does not cause any reduction in
the capability to recreate the relevant portions of an experiment.
During or after recording the experiment, experiment data can be uploaded
to a MSOL server. This upload will contain a data file, which includes the number
of different controls, binary controls and indicators, as well as information about
their dimensionality and range. Subsequently, each permutation of the state of
the experiment will be encoded by the values of the controls and indicators.
With this state information, image data will be uploaded to a database. Thus,
the whole experiment will be available on a database which allows for low latency
queries.
Alongside this lab data and in preparation of displaying it, more information
can be added to customize the laboratory experience. The names of the indica-
tors and controls can be chosen freely. In addition, the user can upload a short
summary about experimental operation, an abstract on the theory of the exper-
iment and more in-depth information, e.g. pictures, tables, exact experimental
parameters, and theory of operation.

Sample Case: For the purposes of this paper, we chose to demonstrate the
functionality of the MSOL platform with a diffraction experiment. This can
be found on the current online version of the MSOL platform at http://www.
ilabs.education/. Diffraction at a grating is a fundamental concept in optics,
by which the wave nature of light can be explored. The diffraction experi-
ment used here includes two different lasers and three different grating spacings.
A photo-detector that can be moved along the diffraction pattern is utilized as
an indicator, displaying the varying optical intensity due to the diffracted laser
light. The uploaded experiment also contains a light switch. A picture of the
setup can be seen in Fig. 1. The recording was done with the help of a simple
python script iterating over all permutations, recording 24,000 different data

Fig. 1. Experimental setup of the diffraction experiment: (a) Sketch of the setup
(b) Photo of the lab


points with pictures. This data is uploaded through an upload interface. This
diffraction experiment will function as an example through the rest of this paper.
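A sketch of such an automation script is given below (illustrative only; the control names, value ranges and the helper functions for driving the instruments and camera are hypothetical, not the authors' actual recording code):

# Illustrative sketch of sweeping an experiment through all control
# permutations and recording indicator values and images.
# set_control(), read_indicators() and capture_image() are hypothetical
# wrappers around the instrument and camera drivers.
import csv
import itertools

controls = {
    "laser": ["red", "green"],
    "grating": [1, 2, 3],
    "detector_position_mm": range(0, 100, 1),
}

with open("dataset.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(list(controls) + ["intensity", "image_file"])
    for i, state in enumerate(itertools.product(*controls.values())):
        for name, value in zip(controls, state):
            set_control(name, value)          # drive the physical experiment
        intensity = read_indicators()         # e.g. photo-detector reading
        image_file = f"frame_{i:06d}.jpg"
        capture_image(image_file)             # save the webcam view
        writer.writerow(list(state) + [intensity, image_file])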

3.2 MSOL Platform

The MSOL platform is a web application which displays uploaded experiments


in an interactive way providing tools for social cooperation. It is optimized for
laptops and desktop computers and provides all of the same functionality on
handheld devices. The visitor of the webpage, e.g. a student that wants to deepen
his/her knowledge about diffraction, will be greeted by a starting page, where
he/she can also read about the general idea of the MSOL platform and visit the
list of experiments (see Fig. 2). The visitor can then browse through the listed
experiments, trying out many different experiences in a short time. The core of
the MSOL platform is the display of the experiment. This is illustrated in Figs. 3
through 6. In Fig. 3, the general layout of the experiment display is shown. This
page gets created automatically given the data previously uploaded about the
experiment. Thus, the number of (binary) controls and indicators is experiment
dependent. The user can display the indicator/sensor values by hitting a button
which creates an overlay over the experiment images, which is the center of this
laboratory webpage. The values of the indicators and experiment images will
change depending on the users’ selection of the controls’ states. This update will
happen in real time and appear to the user quickly after the changed input.
Thus, this interface mimics the actual experiment very accurately. The user will
be able to engage with the experiment in a similar way as if he/she were in the
laboratory, especially since, in most cases, he/she would be operating a computer
here as well. The display of the actual labs contains several other features. Firstly,
as seen in Fig. 4, after starting the lab, an abstract with the most important
information will be displayed and the option of taking a tutorial which guides the
user through the interface is provided. This tutorial points to different buttons in
the interface and explains how to operate the MSOL platform. Also, additional

Fig. 2. List of experiment that a user can conduct with the ability to search for a
particular topic, title, or author.


Fig. 3. Display of the laboratory experiment in the MSOL platform. The experiment
is visible along with the controls to provide input. The indicator overlay is visible.

Fig. 4. (a) Overlay at the beginning of the lab, which gives a first overview and basic
information (b) Part of the tutorial guiding through the functionalities of the platform

information (abstract, theory, and experimental details), will be displayed right


underneath the experiment images.
Secondly, and very important for the idea behind the MSOL platform, there
are various social features available, which can be accessed through another
overlay, shown using the functions button. The most important social feature is
the ability to create a meeting with other users. The meeting feature allows a
group of people to operate an experiment together. If one user changes a control
then the controls, indicators, and associated images change in the interfaces for
all members of the meeting. In this way, people can share their experience and
try to explore the physics underlying an experiment together. In our example,
one user could change the laser color from red to green and all other members
of the meeting would also see how the diffraction pattern would change with a
different laser wavelength. To create such a meeting, as displayed in Fig. 5, the
members have to agree, in advance, on a unique meeting name and all meeting
members must type the unique name in the meeting name field of the functions
overlay in the MSOL experiment. Other social features of the MSOL experiment
include the ability to share your opinion and emotions about the experiment
via social media. There is a button for sharing the link to the current setup


Fig. 5. The MSOL platform with visible functions overlay while creating a meeting to
collaborate on the same experiment with several users.

either via a URL or other buttons for posting the experiment on Facebook or
Twitter. Additionally, the interface allows the user to comment on the lab using
his/her Facebook account; sharing their excitement, giving suggestions, or asking
questions to a broad audience. In our example, they could ask about the physics
behind diffraction, share their findings or simply express their excitement.
Thirdly, while conducting the experiment, the user is able to record the data
of the experimental stages he or she is going through in a personalized lab book,
if the record option is selected. While recording, the indicator data points are
automatically added to a text field in the lab book interface, accessible via the
functions overlay. This data can subsequently be downloaded as a .csv-file and
be used for creating plots. This is similar to how an actual experiment would be
used as well. The lab book interface is displayed in Fig. 6. Thus, for our example,

Fig. 6. Overlay with recorded data, showing indicator values for several control set-
tings.


people will be able to record and compare the sensor data of optical intensity
for different diffraction gratings, or laser wavelengths.
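For instance, a downloaded lab book file could be plotted with a few lines of Python (an illustrative sketch; the file and column names are hypothetical examples):

# Illustrative sketch: plot intensity vs. detector position from the
# downloaded lab-book CSV file; file and column names are hypothetical.
import csv
import matplotlib.pyplot as plt

positions, intensities = [], []
with open("labbook.csv") as f:
    for row in csv.DictReader(f):
        positions.append(float(row["detector_position_mm"]))
        intensities.append(float(row["intensity"]))

plt.plot(positions, intensities, ".")
plt.xlabel("Detector position (mm)")
plt.ylabel("Optical intensity (a.u.)")
plt.title("Diffraction pattern recorded with the MSOL lab book")
plt.show()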
In summary, the MSOL platform as described in these sections, is able to
accurately recreate the experience of the laboratory by providing an interactive
input and response system. In addition, social features enhance its usability in
an online learning environment.

4 Current State and Conclusion


The MSOL concept presented in this paper will behave like an actual experiment
while offering social features that enable learning via “Purposeful Con-
versations.” The social features of this approach are noteworthy, since social
engagement improves learning. This approach does not aim to replace any of
those standard approaches in science; instead, it is intended to augment the
usage of both. It is similar to an actual experiment in behavior, showing effects
that cannot be recreated in a simulation. The data points contain noise and,
through this randomness, it provides a feeling close to an actual experiment.
Additionally, the presented approach is only based on retrieving small bits of
information from a database after turning a lab into a dataset. This makes it
scalable and easy to integrate in an online learning environment. To determine
the impact MSOLs can have on education, user testing with universities, MOOC
platforms and educational programs is planned. We encourage interested partic-
ipants to contact us at http://www.ilabs.education/contact to add experiments
to the MSOL platform.
If this technology reaches its full potential, it will become the new standard
in providing experiments for online education. Crucial for this aim is a high
awareness of this new platform among students and educators, a very realistic
display of the experiment, and an easy way to upload new experiments to have
continuously updated content. MSOL can provide these features.

Acknowledgements. G.K. Herring wishes to thank the Stanford Graduate Fellow-


ship for their support. We also thank Stanford University for partial funding of this
research.

References
1. Dalgarno, B., et al.: Effectiveness of a virtual laboratory as a preparatory resource
for distance education chemistry students. Comput. Educ. 53, 853–865 (2009)
2. Hesselink, L., et al.: Stanford cyber lab: internet assisted laboratories. Int. J. Dis-
tance Educ. Technol. 1(1), 22–39 (2003). Chang, S.-K., Shih, T.K. (eds.), Idea Group
Inc.

Object Detection Resource Usage Within a Remote
Real-Time Video Stream

Mark Smith, Ananda Maiti(✉), Andrew D. Maxwell, and Alexander A. Kist

School of Mechanical and Electrical Engineering, USQ, Toowoomba, Australia

{mark.smith,andrew.maxwell}@usq.edu.au, anandamaiti@live.com,
kist@ieee.org

Abstract. The growth in remote education through technologies such as Remote
Access Laboratories has progressed to a stage where automated interpretations
of visual scenes within a video stream are necessary to provide enhanced learning
experiences. Augmented Reality tools are under development to expand the
current reach and immersion of remote laboratories. Network capabilities
between the experiment host and the client can affect the level of these
enhancements. Augmented Reality relies on sensory engagement, which is critically
linked to the synchronization between the real-time scenes and the computer-
generated enhancements. This work highlights the problems of incorporating
Augmented Reality into Remote Access Laboratories, and the methods to
improve the level of user immersion.

Keywords: Remote laboratories · Augmented reality · Computer network · Data clustering

1 Introduction

Remote Access Laboratories (RALs) provide a service whereby experimental rigs, key
hardware or software can be accessed and operated over a network remotely (Benetazzo
et al. 2000). Remote access provides the ability to deliver training and practical expe‐
rience to a larger cohort of students due to the increased availability of equipment. Most
RAL systems supply a live video stream of the equipment under control and include a
user interface to initiate tests and receive the results.
Augmented Reality (AR) shows a real-world environment with additional, computer
generated information. This allows a user of a service to experience a live action event,
but have the event enhanced through computer generated interactive sensory feedback
(Milgram and Kishino 1994). Users are generally provided sensory information of the
event which might not otherwise be directly viewable and thus extend the range of
information presented.
To date, Augmented Reality and Remote Access Laboratories have not been well
integrated. Incorporating AR into RALs has the potential to improve the practical expe‐
rience by supplying a rich sense of interactive control and immersion in the environment.
Combining these however introduces additional complexity and concern to current RAL
environments.



Remote Access Laboratories are generally affected by network induced delays (Kist
et al. 2014), where the primary source of bandwidth consumption is typically via the
live video stream of the remote experimental rig. Integration of multifaceted systems
such as AR into RAL environments can therefore exacerbate these issues resulting in
potential rapid consumption of available ICT resources.
RALs provide an important resource to schools and universities (Fisher and Jensen
1980). Schools can utilize RALs at the fraction of a cost to purchasing and up keeping
didactic resources. Universities cover a diverse range of students, from different time-
zones, and demographics, requiring access to resources at any time (Gustavsson 2003).
Applying AR processes to the RAL environment provides an extra level of interaction
with the equipment, allowing for an improved user involvement and immersion into the
test environment (Azuma 1997).
An important aspect of implementing AR features is Computer Vision (CV). This
usually requires extensive data processing and is therefore resource intensive. Improving
the realism and immersion of RALs through AR enhancements comes at the cost of these
resources.
CV models, used in security and surveillance fields to detect and track objects, tend
to need training or use off-line processing of recordings because of the limitations of the
technology (Fisher and Jensen 1980). Applying object detection and tracking methods
to the equipment within the live video stream of the experimental rig increases
complexity. It may consume resources to the point that any advantage AR is expected
to provide ends up nullified. This leads to the key question of what additional ICT
resources are needed for the inclusion of AR services into a RAL environment. This
paper investigates this question.
This work outlines typical resources utilized by common CV models. It describes
an implementation of those models to measure and ascertain the impact each model has
on ICT resources. Consumption of the host computer's memory and processor is
reported, along with the time taken to process each video frame. These results can then
be used to ascertain the minimum set of resources required for combined AR and RAL
configurations. Additionally, these figures can also be applied to other models as a
baseline of the known resource consumption underlying other processes.
This paper is structured as follows. Section 2 provides a brief overview of the current
RAL and AR works, focusing on any overlapping areas. Issues pertaining to AR
resources are addressed in Sect. 3, where special computer vision aspects are measured
and explained. Section 4 highlights methods to improve user immersion within the AR
RAL environment when bandwidth limitations exist. Section 5 concludes this paper.

2 Current Research

The field combining AR with the RAL environment is relatively new. Augmented
Reality is a component of the Virtual Continuum. This is a sliding scale representing
full reality at one extreme, and a completely virtual environment at the other extreme.
Virtual test rigs and experiments have been used in engineering education and the engi-
neering industry for more than twenty years. With the recent advances in computing
availability and network capacity, these original activities bear little resemblance to the
current RAL systems (Overstreet and Tzes 1999). Early virtual rigs provided graphical repre-
sentations of instruments which were controlled over a proprietary data bus (Fisher and
Jensen 1980). Overstreet and Tzes (1999) produced a client/server configuration
which quickly promoted a rapid uptake of web-based RAL configurations. Virtual-
ized equipment has dominated the field. More recently, infrastructure costs and capabil-
ities have caught up with expectations.
Expanding bandwidth provides the infrastructure required for live video feeds
for remote systems, and is now readily available (Stauffer and Grimson 2000). The inclusion
of video streams into RAL systems has also provided the impetus for the field to expand
into other non-science and engineering fields. Diverse schools and faculties are utilizing
RAL to enhance their pedagogical outcomes. Disciplines as diverse as nursing (Maiti
et al. 2016) and surveying have benefited from practical remote control of technical
equipment.
RAL systems currently depend on live video streams for the user to observe the
operation of the equipment; however, interaction with the equipment is limited. Famil-
iarization with technical equipment is somewhat restricted without the senses being
engaged with the functionality (Ester et al. 1996). Early mixed reality systems have
utilized fully virtualized instrumentation (Maiti et al. 2013). These mixed reality systems
were developed completely in-house, utilizing local resources. Support equipment
consisted of computer hardware and applications to simulate the environment and
provide users with virtualized objects. Virtual Reality systems have not lived up to the
hype, mostly due to the lack of ICT capability and capacity. Some users of full virtual
environments also consider the experience unsettling (Fig. 1).
In recent years, AR has undergone extensive growth in all aspects of computing.
This includes a wide variety of mobile devices. Mobile AR (Azuma 1997; Maiti et al.
2013; Fazli et al. 2009; Ester et al. 1996) systems have helped to promote the technology
through a series of convenient applications such as the addition to Google’s StreetView
called StreetLearn (Wagner and Schmalstieg 2003). Applications such as StreetLearn
demonstrate the technology, helping to further promote research and development.
The majority of AR operations are performed for the visual sense. As the field has
progressed, additional senses, such as tactile feedback systems, have been incorporated. As
such, immersion into the augmented environment has become easier to implement.
Azuma (1997) reported on some of the first see-through head-mounted devices,
capable of viewing the current environment overlaid with computer generated objects.
Augmented Reality has also expanded into education, helping users to visualize 3D
objects in real-time (Maiti et al. 2016). Using desktop or handheld devices, a magic-lens
effect is achieved where coded images within books or on cards are detected, interpreted,
and rendered into complete 3D representations of topical items.
Remote Access Laboratories and Virtual Reality originally overlapped in the engi-
neering education and engineering industry fields. Remote instrument virtualization
provided a means to operate electronic test equipment over local networks. By the
1990s, simple AR started to appear (Milgram and Kishino 1994) as a result of improved
computing resources. This form of AR, in experimentation, only supported basic sensory
data. Sensor data was displayed on virtual instrumentation while the user watched videos of
the experiment.
Current AR systems in RALs have limited abilities. Very few works combine the
two technologies. Combined systems focus on visual enhancements, with some work
on the other senses. Works cover some practical implementations such as taxonomies
of hands-on and remote experimentation (Maiti et al. 2013), and more computer/
electronic test-bed (Fazli et al. 2009) systems. Many works have demonstrated the tech-
nology through elaborate configurations. Engaging students with the technology has
produced systems such as AR racing car games (Grimson et al. 1998) and 3D
modelling systems, all promoting the technology's capabilities.
Hence visual methods, which rely on Computer Vision, are typically used with AR.
These CV models, commonly used in industry to capture objects within video scenes,
are resource intensive to the extent that significant portions of the processing may be
forced to occur off-line. Understanding the resource requirements for both AR and RAL struc-
tures is necessary to develop effective sensory feedback systems suitable for implemen-
tation. Basic estimates about RAL system resource limitations exist (Kist et al. 2014),
but little work has investigated the impact of the two technologies in
combination.

Fig. 1. Virtual continuum. Full reality on the left and a full virtual environment on the right.

3 AR Resources

The majority of current AR work focuses on the visual sense, while CV techniques for
object detection and tracking are employed to understand the scene in the live video
stream. This section explains the various CV models which can be used with AR
systems to detect and track objects in the video stream. Resource monitoring and meas-
urements are presented to demonstrate the additional ICT burden imposed by AR
processes.

3.1 Background
Any system implementing AR processes has to ensure that the users of those systems
are able to engage and interact in a timely manner. The sense of immersion within AR
applications soon fails if registration, tracking and timing errors interfere with the system
processes. Remote Access Laboratories already require a variety of hardware, computer,
software and networking resources. Without accounting for the additional resource load
imposed by the AR processes, the RAL system could become degraded. Consequently,
AR resource usage must be determined and minimized to maintain effective synchro-
nization and immersion.


Augmented Reality interprets video scenes using two data modes: remote data sets
and local data sets. The use of local data sets is demonstrated in AR systems using
fiducial markers, which render 3D models (Grimson et al. 1998) when the marker is
detected. Remote data sets are impacted by the network resources available. Desktop
and mobile AR systems have had to delegate the object detection and graphic processes
to separate systems (Wagner and Schmalstieg 2003) so as to cope with the computing
resource demands. This delegation reduces the local resources needed to render the
virtual objects that interact with the current environment.
Developing visual AR systems hinges heavily on CV models. Previous CV work
on video streams provides comprehensive object identification and tracking. Unfortu-
nately, CV models rely on off-line or post-processing of the video stream. Very few
systems provide live or real-time interpretation of the video stream because of the heavy
load on ICT resources.
Computer Vision techniques are expected to extract physical objects from
multidimensional datasets (e.g. video frames) with the same level of competency as the
human eye and brain. Within video streams, CV systems must attempt to compensate
for shadows, lighting variations, a moving background (such as trees moving in the
wind) and periodic object movements.
To help understand the variations in each video scene, the CV systems require
extensive training. Statistical analysis (Fazli et al. 2009), clustering (Ester et al. 1996)
and frame subtraction systems (Stauffer and Grimson 2000) require considerable
processing to handle data sets consisting of 20–30 frames per second, with a minimum
resolution of 76,800 pixels per frame (typical 240 × 320 frame size). This equates to
307.2 kB of data for a 32-bit RGB encoded frame. At 20 frames per second, a total of
1,536,000 pixels, or 6,144 kB, per second must be processed, which is beyond the
capabilities of all but dedicated hardware. Compounding the problem is the quality of
the network services. As the connection quality deteriorates, the number of frames
available for processing also diminishes. Good network connections provide smooth
transitions between frames, but increase the resource consumption required to process
those frames. These two strategic resources counter-balance effective immersion of the
AR experience.
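The figures above follow directly from the frame geometry, as the short calculation below illustrates (assuming the 20 fps lower bound of the quoted frame-rate range):

# Quick check of the data-rate figures quoted above, assuming a
# 240 x 320 frame, 32-bit RGB pixels and 20 frames per second.
width, height = 320, 240
bytes_per_pixel = 4          # 32-bit RGB(A)
fps = 20

pixels_per_frame = width * height                      # 76,800 pixels
bytes_per_frame = pixels_per_frame * bytes_per_pixel   # 307,200 B = 307.2 kB
pixels_per_second = pixels_per_frame * fps             # 1,536,000 pixels
kbytes_per_second = bytes_per_frame * fps / 1000       # 6,144 kB

print(pixels_per_frame, bytes_per_frame, pixels_per_second, kbytes_per_second)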

3.2 Computer Vision Model Testing


For an AR system which engages a user's visual sense, three CV models have been
tested to determine their resource usage. Testing the baseline resource needs of the three
CV models involves repeated analysis of the same 213 frames of an AVI video file
from an experimental rig under operation. The software was written using Microsoft's
C# (4.0 .NET Framework). Testing occurred on a Windows 8.1 platform with an Intel
Core i7-4790 CPU @ 3.60 GHz with 8.0 GB of RAM.

Statistical Models
Statistical analysis of a pixel relies heavily on the historical data for the pixel. The
standard


p(x_N) = Σ_{j=1}^{K} η(x_N; θ_j)    (1)

calculates the probability of the pixel being a foreground or background object through
its distribution over the preceding frames. Cataloguing a single pixel via its distribution adds
to the overall processing requirements, and this accumulates to significant levels. Testing
involved storing pixel arrays N deep (N = 20 pixels). The previous 20 pixels for a coordinate
are used as a Gaussian model to derive the status of the pixel. The current parameters
of the pixel are compared to the Gaussian parameters to ascertain whether the pixel status has
changed. Time and processing costs involve statistical calculations for every pixel in
every frame. The processing times for each frame, using a normal distribution, are
shown in Fig. 2. The formula in (1) was applied to each pixel, with no weighting of the
distribution, so as to keep the processing requirements to a minimum.
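An illustrative numpy sketch of this per-pixel test is shown below (not the authors' C# code; the 2.5-standard-deviation threshold is an assumed example value):

# Illustrative sketch of a per-pixel Gaussian background test over a
# 20-frame history; not the authors' C# implementation. The 2.5-sigma
# threshold is an assumed example value.
import numpy as np

N = 20  # history depth per pixel

def classify_frame(history, frame):
    """history: (N, H, W) grayscale frames; frame: (H, W) current frame.
    Returns a boolean foreground mask."""
    mean = history.mean(axis=0)
    std = history.std(axis=0) + 1e-6          # avoid division by zero
    foreground = np.abs(frame - mean) > 2.5 * std
    return foreground

# Rolling update of the history buffer after each frame:
# history = np.concatenate([history[1:], frame[np.newaxis]], axis=0)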
Figure 2 shows a reasonably consistent period of approximately 140 ms for each
frame, which is much larger than the 33–50 ms frame period required for standard video feeds.
This demonstrates that processing live video in the current configuration will allow only
every third or fourth frame to be processed.

Fig. 2. Statistical (GMM) frame analysis and frame subtraction frame analysis: frame processing
time

Frame Subtraction Models


Frame Subtraction involves comparing every pixel in the current frame with the corre-
sponding pixel from the reference frame (F_result = F_ref − F_i). Using the RGB color
channels as the data within the reference frame, individual pixel colors are subtracted
from each other. If the resulting pixel's delta-color does not meet a threshold, then it is
returned as a black pixel for any location that has not changed from the reference frame.
In video streams, the term "not changed" is not absolute. For each pixel, between frames
the color values may fluctuate for many reasons, such as slight ambient lighting
changes, shadows, reflective surfaces, and the camera's internal CCD variations.
For this test, simple raster-like processing measures the difference between pixels of
the same coordinate (x, y) from the current frame and the previous frame. A threshold
of 10% was used. Each pixel has a maximum value of 255 per color channel, so if the
difference between pixels was less than 25, it was set as white, otherwise it was set to
black.
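A minimal numpy sketch of this per-pixel subtraction and thresholding is given below (illustrative only, not the authors' C# test code; the mask convention here simply marks changed pixels as True):

# Illustrative numpy sketch of frame subtraction with a 10% threshold
# (25 out of 255 per color channel); not the authors' C# implementation.
import numpy as np

THRESHOLD = 25  # roughly 10% of the 255 maximum per color channel

def subtract_frames(reference, current):
    """reference, current: (H, W, 3) uint8 RGB frames.
    Returns a binary change mask (True where the pixel changed)."""
    delta = np.abs(current.astype(np.int16) - reference.astype(np.int16))
    changed = (delta > THRESHOLD).any(axis=2)   # any channel over threshold
    return changed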
Frame subtraction testing on the video file produced better results than the statistical
and DBSCAN models, as shown in Figs. 2 and 3. A consistent 110 ms processing time
occurred for each frame. These results are still far from ideal, with every second or third
frame needing to be ignored if implemented in an AR environment.

Fig. 3. DBSCAN frame analysis: frame processing time

Clustering Models
Clustering methods do not take a pixel in isolation, but must analyze all unclassified
pixels within its neighborhood. All pixels within a radius (depending on the clustering
model) are required to be verified as to their suitability to be a member of current group.
Additionally, a pixel must be directly density-reachable (Ester et al. 1996) to the core
pixels before it can be considered a member of the cluster. While the task is not tech‐
nically challenging, the iterative nature of a O(n log(n)) time complexity system,
consumes precious resource time.
A pixel, under DBSCAN, can be a member of only one cluster. For each frame, the
test involves checking each pixel's (Px) neighborhood, scanning a radius of pixels outwards.
Unclassified pixels within the region are tested and, if suitable, marked as part of Px's
cluster. Processing costs increase as the number of clusters, the radius of the neighborhood,
and the number of reachable pixels increase.
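A minimal sketch of this clustering step, assuming the foreground pixels have already been isolated (for example by frame subtraction) and using the reference DBSCAN implementation from scikit-learn rather than the authors' own code; the eps and min_samples values are illustrative only:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_foreground(mask, eps=3.0, min_samples=8):
    """Group foreground pixels of a binary mask into objects with DBSCAN.

    mask: HxW array where non-zero pixels are foreground.
    Returns one cluster label per foreground pixel; -1 marks noise,
    i.e. pixels that are not density-reachable from any core pixel.
    """
    # (row, col) coordinates of every foreground pixel
    coords = np.column_stack(np.nonzero(mask))
    if coords.size == 0:
        return np.empty(0, dtype=int)
    # eps is the neighbourhood radius in pixels, min_samples the core-point density
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
```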
Performing a DBSCAN pass on each of the frames which have undergone processing
(such as frame subtraction or statistical analysis) adds significant delays. Figure 3 shows
a relatively consistent frame processing period until frame 149. At this point, the video
scene has an increase in object motion, and DBSCAN processing increases as a result.
This delay is at a totally unacceptable level.



Fig. 4. Frame subtraction frame analysis: frame processing time

Offline video processing systems are tuned for graphical tasks, and their capabilities are beyond
those of the generic desktop systems of remote laboratory users. The summary of
results shown in Table 1 tallies the resource usage for the framework models tested.
The Table 1 results consist only of the CV model attributes, and do not include the user
interaction functions, sensor data or other RAL resource needs.

Table 1. Frame analysis performance summary


                        Average frame          Average memory    Average process usage,
                        process time (ms)      usage (kB)        single CPU (%)
Frame subtraction       110                    37333             97
Clustering (DBSCAN)     3305                   36995             100
Statistical (GMM)       136                    86629             99

The vision systems within a RAL environment can consist of single or multiple
cameras. An AR implementation must be expected to process the live video stream(s),
interpret the scene, accept sensor data, render video overlays, and retransmit data back
to the remote laboratory. Users' interaction and satisfaction with the AR RAL configuration
will wholly depend on the consumption of the ICT resources during this process,
and on how the management of these resources can ensure user immersion in the experiment
or practical session.


4 AR Improvements

The network topologies of RALs have been investigated (Maiti et al. 2013) to the
extent that they are well understood, and they provide the offset for all timing techniques and
calculations discussed in this section. Network delays must be constantly and carefully
monitored and controlled so that the accumulation of all delays remains within accept‐
able levels for AR RAL environments.
Augmented Reality resource consumption revolves around interpreting the scene
between the frames of the live video stream. Any technique employed to improve AR
responsiveness must assume a minimum network latency.
Previously, old military visual systems using cathode ray tubes would hijack the
interlacing scheme to interweave tactical information into the image. In today's environment,
image overlays are the primary method of incorporating post-generation images
into the stream. Taking a leaf from the frame-interlacing techniques of those old military systems,
image overlays can be given the opportunity to skip the current frame, providing additional
time for any intensive processing. Network latency times of 25 ms to 50 ms would
require every second frame to be skipped, as a minimum resource necessity.
Changes within the scene between frames vary at different rates, depending on the type
of experiment/exercise being performed. Fast-changing scenes may not find frame skipping
an acceptable solution, while reasonably static scenes could be updated at much
greater intervals.
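One way the frame-skipping idea could be realised is sketched below, assuming an OpenCV capture loop; the skip factor and the heavy_process callback are placeholders rather than values taken from the paper:

```python
import cv2

def run_with_frame_skipping(source, n=2, heavy_process=None):
    """Run the expensive CV/overlay stage only on every n-th frame.

    heavy_process(frame) is assumed to return an overlay image of the same
    size as the frame; skipped frames simply reuse the most recent overlay,
    freeing time for the intensive processing.
    """
    cap = cv2.VideoCapture(source)
    last_overlay = None
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_index % n == 0 and heavy_process is not None:
            # Expensive stage (detection, clustering, overlay generation)
            last_overlay = heavy_process(frame)
        if last_overlay is not None:
            # Blend the cached overlay onto the current frame
            frame = cv2.addWeighted(frame, 1.0, last_overlay, 0.7, 0)
        cv2.imshow("AR view", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
        frame_index += 1
    cap.release()
    cv2.destroyAllWindows()
```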
Rendering virtual objects is also dependent on the timing of sensor data received
from the experimental rig. Reception of the live video stream and the sensor data (also
streamed through the same link) complicates the synchronization of rendered virtual
objects. Ensuring the live action within the video stream matches sensor data informa‐
tion, adds to the processing overheads. With less active video scenes, it is possible to
pause or limit the need for continual analysis of each frame within the stream (Maiti
et al. 2016).
Real-time statistical analysis of a remote laboratory's full video stream is time-
consuming because of the number of pixels that must be processed. This bottleneck
can be reduced through the following techniques.

4.1 Windowing

Within every video scene, there are regions that are of no interest to the experiment or
have no function. Within the gear experiment shown in (Grimson et al. 1998),
separate regions are of interest at specific times. For example, monitoring only the top
half of an experiment, or the center region of the view, is probably sufficient for some
demonstrations. This limits the processing needs of augmented systems to a much
smaller subset of data.
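A sketch of the windowing idea, assuming frames are held as NumPy-style arrays; the region coordinates are placeholders that would be chosen per experiment:

```python
def apply_window(frame, top=0, left=0, height=240, width=320):
    """Return only the region of interest of a frame.

    Limiting all subsequent per-pixel processing to this sub-array reduces
    the workload roughly in proportion to the window area.
    """
    return frame[top:top + height, left:left + width]
```

Because basic slicing returns a view rather than a copy, all subsequent per-pixel work scales with the window area instead of the full frame.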

4.2 Training

CV techniques, based on statistical modelling, all benefit from training (Grimson et al.
1998) where each pixel’s color distribution is calculated from preloading, or training


from existing video data. The Gaussian mixture model defines the probability distribu‐
tion of a pixel. The number of distributions is a factor of the available memory (Maiti
et al. 2016), and processor power. As additional frames are received into the CV
processes, the distributions are updated. If the Gaussian distributions are performed on
the stream before the experiment begins, then this training will provide a baseline for
comparison during actual rig operations. For every new frame, the pixel color values
are checked against the distributions. It is common to use a standard deviation of 2.5
(Ester et al. 1996) to determine the threshold for a pixel, marking it as either a background
or foreground object. Foreground objects are the detected objects, which are
tracked. Training will reduce runtime processing costs.
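A minimal sketch of a trained Gaussian-mixture background model, here using OpenCV's MOG2 subtractor as a stand-in for the authors' own model; the variance threshold of 6.25 corresponds to the 2.5 standard deviations mentioned above (2.5² = 6.25), and the training/operation split is an assumed workflow:

```python
import cv2

# Mixture-of-Gaussians background model; varThreshold is the squared
# Mahalanobis distance, so 2.5 standard deviations -> 2.5**2 = 6.25
subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                varThreshold=6.25,
                                                detectShadows=False)

def train(frames):
    """Build the per-pixel colour distributions from pre-recorded frames."""
    for frame in frames:
        subtractor.apply(frame)          # default learning rate updates the model

def detect_foreground(frame):
    """Classify pixels against the trained model without updating it."""
    return subtractor.apply(frame, learningRate=0)
```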

4.3 Client/Server
Workstations at remote locations will vary in capabilities, which limits the base level
resources acceptable for effective AR. Placing hardware capable of performing the
intensive graphical processing at the host, ensures that all client access can receive the
full AR immersion.
Clients receive a video stream that already has the image overlays included. Data
from sensors is also processed at the host. Clients receive the full and complete video
stream, including all feedback data, plus the interface to operate the various controls and
devices. Synchronization of screen transactions and sensor readings are simpler, with
only the user interaction requiring alignment with the video scene.
User interaction within a remote laboratory experimental rig is mostly through
controlling the various equipment. User actions trigger small data packets to the server.
Smaller data requirements to the server mean smaller network loading needs. Consequently,
network delays from user interactions should be minimal and should impact
only modestly on the users' immersion level. Processing responsive user input at the server
allows the server to supply the complete rendered scene back to the user. With any
reasonable network access, user input should undergo minimal delays to the resultant
feedback images.

5 Conclusion

Augmented Reality systems capable of integrating with RALs have many hurdles to
overcome to provide services across the wide range of practical and experimental envi‐
ronments. Efficient utilization of ICT resources is paramount for comprehensive and
effective immersion within the remote environment. Augmented Reality for RALs relies
on a responsive network conduit and synchronization between the visual information
and user interactions. All network delays must be accounted for when determining AR
configurations. With reduced capabilities in any of these pathways, the immersive
effect of AR becomes a liability rather than a benefit.
Incorporating AR functionality into RALs involves consuming additional ICT
resources. A computer system undertaking the AR processes for a remote laboratory
will require additional memory, processor, and network resources. Executing CV


algorithms used within the AR processes provides a metric of the baseline resource
usage. Testing the three major computer vision methods - clustering, frame subtraction
and statistical - with actual remote experiment video streams demonstrated the current
shortcomings of the technology.
Object detection and tracking are both essential functions for AR and this paper has
identified the direction for future work to mitigate these limitations. Limiting the region
of interest within the video image can provide significant gains in terms of overall
responsiveness and reduction of resource usage. Providing some training to the vision
systems also benefits the responsiveness but requires additional management of the
experimental rig. Ensuring the users of a remotely controlled experiment have sufficient
resources can be mitigated by having the major computer vision processing done at the
host location by hardware better suited to the task.
These resource management plans will be further tested to ascertain their capabilities
and effectiveness for AR systems within a RAL environment.

References

Benetazzo, L., Bertocco, M., Ferraris, F., Ferrero, A., Offelli, C., Parvis, M., Piuri, V.: A web-
based distributed virtual educational laboratory. IEEE Trans. Instrum. Meas. 49(2), 349–356
(2000)
Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst.
77(12), 1321–1329 (1994)
Kist, A.A., Maiti, A., Maxwell, A.D., Orwin, L., Midgley, W., Noble, K., Ting, W.: Overlay
network architectures for peer-to-peer remote access laboratories. In: 2014 11th International
Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 274–280. IEEE,
February 2014
Overstreet, J.W., Tzes, A.: Internet-based client/server virtual instrument designs for real-time
remote-access control engineering laboratory. In: Proceedings of the 1999 American Control
Conference, vol. 2, pp. 1472–1476. IEEE, June 1999
Fisher, E., Jensen, C.W.: PET and the IEEE 488 Bus (GPIB). OSBORNE/McGraw-Hill, Berkeley
(1980)
Gustavsson, I.: A remote access laboratory for electrical circuit experiments. Int. J. Eng. 19, 409–
419 (2003)
Azuma, R.T.: A survey of augmented reality. Presence Teleoper. Virtual Environ. 6(4), 355–385
(1997)
Wagner, D., Schmalstieg, D.: First steps towards handheld augmented reality. In: ISWC, vol. 3,
p. 127, October 2003
Maiti, A., Kist, A.A., Maxwell, A.D.: Estimation of round trip time in distributed real time system
architectures. In: Telecommunication Networks and Applications Conference (ATNAC), 2013
Australasian, pp. 57–62. IEEE, November 2013
Fazli, S., Pour, H.M., Bouzari, H.: A novel GMM-based motion segmentation method for complex
background. In: 2009 5th IEEE GCC Conference & Exhibition, pp. 1–5. IEEE, March 2009
Ester, M., Kriegel, H.P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in
large spatial databases with noise. In: KDD, vol. 96, no. 34, pp. 226–231, August 1996
Stauffer, C., Grimson, W.E.L.: Learning patterns of activity using real-time tracking. IEEE Trans.
Pattern Anal. Mach. Intell. 22(8), 747–757 (2000)


Maiti, A., Kist, A., Smith, M.: Key aspects of integrating augmented reality tools into peer-to-
peer remote laboratory user interfaces. In: 2016 13th International Conference on Remote
Engineering and Virtual Instrumentation (REV), pp. 16–23. IEEE, February 2016
Grimson, W.E.L., Stauffer, C., Romano, R., Lee, L.: Using adaptive tracking to classify and
monitor activities in a site. In: Proceedings of 1998 IEEE Computer Society Conference on
Computer Vision and Pattern Recognition, 1998, pp. 22–29. IEEE, June 1998

Integrating a Wireless Power Transfer System
into Online Laboratory: Example
with NCSLab

Zhongcheng Lei, Wenshan Hu(&), Hong Zhou, and Weilong Zhang

Department of Automation, School of Power and Mechanical Engineering,


Wuhan University, Wuhan, China
{zhongcheng.lei,wenshan.hu,hzhouwuhee,
weilongzhang}@whu.edu.cn

Abstract. Wireless Power Transfer (WPT) technology is able to transmit


electric power from the Tx side to Rx side without any electrical connection,
realizing electrical isolation and breaking through the limitations of electric
wires. Traditionally, finding the best working point of the WPT system is dif-
ficult as there are a great number of coupled parameters to tune. Besides, the
experimenter has to be on site to carry out the experiment with limitations such
as time, location, safety issue as well as sharing issue. In this paper, a two-coil
structure WPT system is integrated into web-based online laboratory NCSLab
using a controller and a DAQ (data acquisition) card as well as a user-defined
algorithm. With the latest technologies brought in, NCSLab is completely
plug-in free for experimentation on the WPT system. The optimum frequency
can be easily obtained by setting the system in the sweep-frequency mode using
the remote control platform. The remote control platform NCSLab addresses the
safety issue and the test rig sharing issue by offering the experimenter the flexibility to carry
out WPT experiments anytime, anywhere, as long as the Internet is available. The
integration of the WPT system into NCSLab also provides teachers with a powerful
tool for classroom demonstration of state-of-the-art technology.

Keywords: Wireless Power Transfer (WPT) · Remote control · Data acquisition · State-of-the-art technology sharing

1 Introduction

Wireless Power Transfer (WPT) technology has drawn growing attention in recent
years. Because the energy is converted from the electric field to the magnetic field and
back to the electric field, the limitations of electric wires no longer apply. In [1], a far-field
technique using propagating electromagnetic waves, which transfer energy in the same way
as radios transmit signals, is presented. In contrast to the far-field technique, M. Soljacic
[2] introduced a near-field (inductive coupling) technique operating at distances less
than a wavelength of the transmitted signal. As the near-field technique requires a
relatively low frequency compared with the far-field technique, it has attracted much research
attention since it was proposed [3–6].


The WPT systems designed at Wuhan University [7–9] use the inductive coupling technique.
The WPT systems in [8, 9], which could potentially be used for high-voltage power cable
monitoring, were introduced first. All of the systems above adopt a simple 2-coil structure
that is easy to implement, rather than a 3-coil [10, 11] or even a multi-coil structure [12, 13].
Regarding conventional design, the tuning of parameters has been a problem.
Traditionally, finding the best working point of the WPT system is difficult as there are
a great number of coupled parameters to tune. What’s more, the experimenter of the
WPT system has to be on site to carry out the experiment with limitations on time and
location as well as safety issue.
State-of-the-art technology is able to keep people informed of the latest trends and
hotspots in the related field. Conventionally, however, it is not easy to share the latest
technology with students, either because the equipment is cumbersome or because the devices
need careful attention. For a WPT system, with its complicated structure and even high
voltages generated on the Tx and Rx sides while energized, classroom demonstration is difficult.
The complicated implementation of the physical system makes it impossible for every
university and institution to build its own WPT system. Thus, it is urgent to address the
sharing issue to provide open access for experimentation and research, especially for
education in state-of-the-art technology.
The tuning issue and the education issue, along with the sharing issue, have brought out the idea
of a Remote Control WPT (RCWPT) system based on networked control [14, 15], which
is itself a research hotspot. There are already a great many online laboratories which can provide
remote control of physical equipment. For example, in [16] the remote control of
electric and electronic instruments is introduced in NetLab, GOLDi-labs in [17] allows
users to remotely control a 3-axis portal, and in [18] a remote inclined plane laboratory for
displacement measurements versus time is presented.
NCSLab (Networked Control System Laboratory) is a hybrid online laboratory
which provides both physical and virtual test rigs for remote experimentation. Previ-
ously, only physical and virtual test rigs in control engineering were set up in NCSLab,
for example a fan speed control system [19], a dual tank [20], and a DC motor [21].
In all, there are 20 virtual rigs and six physical test rigs in NCSLab. However, as
one of the advantages of NCSLab, test rigs in geographically diverse locations can be
integrated into NCSLab [22]. Theoretically, all test rigs that match the interface of
NCSLab can be successfully deployed.
However, it remains to be seen whether it is possible to utilize NCSLab to explore
the WPT system, for example whether the efficiency, best working point and optimum
frequency of the WPT system can be found remotely. Given that NCSLab is a
powerful platform into whose framework new types of test rigs can easily be deployed
through the pre-customized interface, a WPT system could potentially be integrated
into NCSLab as well.
The WPT system is a physical test rig containing multiple electric and electronic
parts that all need careful attention. As various widgets such as textboxes, charts and


gauges are integrated into NCSLab, it is convenient to remotely monitor and
tune parameters in a visual mode. The WPT system in this paper uses a simple 2-coil
structure, like the other WPT systems at Wuhan University.
The rest of the paper is organized as follows. In Sect. 2, the NCSLab architecture is
presented. Two of the specific features of NCSLab are also introduced in this
part. Section 3 describes the principle of a two-coil WPT system adopted in this paper.
In Sect. 4, the integration of WPT system into NCSLab is explored, in which the
controller, USB data acquisition card and control algorithm are discussed in details.
Section 5 gives an example of a well-configured monitoring interface of the WTP
system in NCSLab. The paper is concluded in Sect. 6.

2 NCSLab Architecture

Having evolved over more than 10 years and been recently upgraded, NCSLab provides full
24/7 access at www.powersim.whu.edu.cn/ncslab with HTML5
technology fitted in. Apart from the common features of remote laboratories [23, 24],
NCSLab has its own specific features, two of which are introduced as follows.
1. Free from plug-ins
A web-based online laboratory offers convenience without any software installation.
However, the potential web crashes and updating issues caused by plug-ins remain to be
addressed. The finalization of HTML5 provides an alternative to other 3D engines
which need plug-ins for rendering. As the previous Flash 3D engine has been replaced by
HTML5 technology in NCSLab [25, 26] and more and more web browsers support
HTML5, the experimenter can conduct various experiments in NCSLab in the
web browser, free from plug-ins.
2. 3D Virtual roaming
Apart from the tree structure (laboratory - sub-laboratory - test rig) of NCSLab,
virtual roaming, which can be accessed in parallel, is also provided for the experimenter.
As in a physical laboratory, the experimenter can walk through the virtual
laboratory building with keyboard and mouse. Several sub-laboratory rooms
appear when walking into the main building. If the experimenter chooses one of the
sub-laboratories and walks in, a series of virtual experimental equipment will lie on
the virtual desks in front of the experimenter. Each virtual test rig is ready for
experimentation once it is "picked up" by the experimenter.
Figure 1 shows the current architecture of NCSLab in Wuhan University.
Researchers from all over the world can access the system to carry out experiments
with a registered username and password, as all the test rigs are open for experimentation.
Test rigs in control engineering, as well as the WPT system in electric and electronic
engineering, are integrated into NCSLab.


Fig. 1. NCSLab architecture

3 Principle of a Two-Coil WPT System

To provide an RCWPT system, the key issue is to find an appropriate parameter to
control using inductive coupling. Another problem to be addressed is to offer
observable results for monitoring. Therefore, a simple two-coil structure WPT system is
the best option.
The circuit model of the two-coil WPT system using a magnetically coupled resonator is
shown in Fig. 2, in which the Tx coil and the Rx coil share the same resonant frequency.
As can be seen in Fig. 2, an AC voltage source drives an RLC branch on the Tx
side, which creates a high-frequency magnetic field. Once the
Tx coil is energized at the resonant frequency, the Rx coil recovers the energy transmitted
through the magnetic field between the two coils and converts it back into electric power.



Fig. 2. Circuit model of two-coil WPT system

Finally, the Rx coil can drive a load bulb for observation.
Using Kirchhoff’s voltage law (KVL), the two-coil model depicted in Fig. 2 can be
analyzed as

$$I_1\left(R_1 + j\omega L_1 + \frac{1}{j\omega C_1}\right) + j\omega M I_2 = V_s \qquad (1)$$

$$I_2\left(R_2 + j\omega L_2 + \frac{1}{j\omega C_2}\right) + j\omega M I_1 = 0 \qquad (2)$$

where $R_1 = R_{p1}$, $R_2 = R_{p2}$ and $M$ is the mutual inductance between the Tx and Rx
coil. The relationship between the coupling coefficient $k$ and the mutual inductance $M$ is

$$M = k\sqrt{L_1 L_2}$$

To simplify the two circuit Eqs. (1) and (2), $Z_1$ and $Z_2$ are defined as the impedances
of the two circuit loops:

$$Z_1 = R_1 + j\omega L_1 + \frac{1}{j\omega C_1}, \qquad Z_2 = R_2 + j\omega L_2 + \frac{1}{j\omega C_2}$$

The two KVL Eqs. (1) and (2) can be solved as

$$I_1 = \frac{Z_2 V_s}{Z_1 Z_2 + \omega^2 M^2}, \qquad I_2 = -\frac{j\omega M V_s}{Z_1 Z_2 + \omega^2 M^2} \qquad (3)$$
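Although not spelled out in the text, it may help to note that when both loops share the resonant frequency and the system is driven at it, the reactive terms cancel and the loop impedances in (3) reduce to the parasitic resistances:

$$\omega_0 = \frac{1}{\sqrt{L_1 C_1}} = \frac{1}{\sqrt{L_2 C_2}} \;\Rightarrow\; Z_1 = R_1, \quad Z_2 = R_2$$

which is why sweeping the excitation frequency around $\omega_0$ (Sect. 5) locates the best working point.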


4 Implementation of Integrating a WPT System into NCSLab

A WPT system is able to transmit electric power over a reasonable distance. To
achieve wireless power transfer, a great many electronic devices are needed for the
practical implementation. Figure 3 shows the diagram of the practical implementation.
On the Tx side, an H-bridge high-frequency inverter is used to convert DC to
AC. On the Rx side, a high-speed bridge rectifier made of Schottky diodes is used to
rectify AC back to DC.


Fig. 3. Diagram of practical implementation

Figure 4 shows the RCWPT system in the physical laboratory; it can be seen that
there is no electrical connection between the Tx and Rx coils. The physical system can
certainly be used for hands-on WPT experiments on site, with the aforementioned limitations.
After integration, the RCWPT system, called Wireless Power Transfer in
NCSLab, can be accessed at http://www.powersim.whu.edu.cn/ncslab in the Complicated
System sub-laboratory for remote experimentation.
Due to the relocation of the laboratory, there is not enough space for the WPT
system, so the current WPT system is set up in a corner of the laboratory. For the
sake of legibility, Fig. 4 uses a picture taken in May 2016, which shows
exactly the same system as the current one except for the distance between the two
coils. The location of the system demonstrates the advantage of the RCWPT for saving
space.
Apart from basic electronic components, in order to integrate the WPT system into
NCSLab to build a RCWPT, a controller, a USB DAQ (Data Acquisition) card and an
algorithm are three key factors.



Fig. 4. Remote controlled WPT system (taken in May, 2016 in the old laboratory)

4.1 Windows-Based Controller


The controller for the RCWPT system is actually a Windows-based mini PC running
the communication and camera-supporting programs all the time. Figure 5 shows the
controller, which is based on a mini PC bar; its USB interface board is the part mainly used.
The camera API is running to support the 24/7 monitoring of the system. For the
RCWPT system, two cameras are connected to the controller. One camera is for the
overall system. The other is for part of the system, or more precisely, the monitoring of
the bulb, ammeter and voltmeter. The ammeter measures the output
current, and the voltmeter measures the output voltage. The experimenter is able to
watch the monitoring result in the web page, in which the brightness of the bulb shows
the output power of the system.

Fig. 5. Controller based on mini PC bar
Traditionally, for the other WPT systems at Wuhan University, a direct digital synthesizer
(DDS) module controlled by an MCU (microcontroller unit) is adopted to
generate the accurate square-wave exciting signal [9]. Using the keyboard on the MCU
controller, the output frequency can be tuned from 0.1 to 1 MHz with a step size of
10 Hz. To achieve remote control of the WPT system, the controller is connected to the
frequency generator. Parameters such as the exciting frequency, sweep frequency and
sweep amplitude can be remotely reset as long as they are exposed by the control
algorithm.

4.2 USB DAQ Card


Another functionality of the controller is the communication with the USB DAQ card.
The USB DAQ card is used for collecting signals like the current and voltage both in
the Tx and Rx side. It should be noted that the collected current and voltage are
measured from the DC side in both the Tx and Rx side, which can be seen in Fig. 3.
Using the collected current and voltage, the input power and output power can be
calculated. Thus, the transfer efficiency can be obtained.
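A sketch of the calculation implied here, using the DC-side quantities sampled by the DAQ card (the symbols are ours, not the paper's):

$$P_{in} = V_{Tx,DC}\, I_{Tx,DC}, \qquad P_{out} = V_{Rx,DC}\, I_{Rx,DC}, \qquad \eta = \frac{P_{out}}{P_{in}} \times 100\%$$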
The DAQ card also monitors the commands between the test rig and the server.
Commands such as algorithm uploading and downloading, as well as parameter tuning,
are under its surveillance.

4.3 Sweep-Frequency Algorithm


The sweep-frequency algorithm is designed in MATLAB/Simulink, and built in
Real-time Workshop (RTW). Figure 6 shows the sweep-frequency algorithm in detail.
The Setting out and Feedback blocks are two user-defined functions concerning
sweep-frequency setting and signal retrieval. After the design and compilation of the
algorithm, it is uploaded to the server through the web interface. The program running in the
controller can then communicate with the algorithm.

Fig. 6. Sweep-frequency algorithm


The parameters in the algorithm such as frequency “Hz” block and sweep fre-
quency and amplitude in “Sweep Setting” block can be found and tuned in the tree
structure of the monitoring and control interface of NCSLab, and signals such as input
current, voltage and output current and voltage could be monitored using various
widgets offered by NCSLab.

5 Monitoring and Control of the WPT System in NCSLab

A WPT system can be integrated into NCSLab using the hardware and algorithm mentioned
in Sect. 4. The remote control platform NCSLab adopts a Web-based structure, which
means experimenters do not have to install any client applications. With the latest
technologies brought in, the platform is completely plug-in free, so the experimenter
just has to register and log in to conduct the experiment on the RCWPT system.
As the WPT system is for remote control rather than power delivery, the power
transfer efficiency and transferred power are not the priority in this paper; thus, the
RCWPT system is built without precise calculation. With the use of various widgets
provided by NCSLab, the system is able to monitor signals and parameters such as
current, voltage, power and frequency. Parameters such as frequency and
sweep-frequency amplitude can be easily controlled in the user-defined interface.
Signals can be collected easily using widgets like charts and gauges. More importantly,
it helps to remotely explore the optimum transfer frequency by tuning the exciting
frequency, sweeping frequency and sweeping amplitude.
In order to analyse the power transfer efficiency and optimum frequency, data such
as the input and output power, and working frequency should be collected. In partic-
ular, to obtain the optimum frequency, the WPT system should be set in
sweep-frequency mode, which is shown in Fig. 7(a). The resonant frequency is
180.75 kHz at a distance of 13 cm, with a sweep frequency of 0.4 and a sweep amplitude
of 1000 Hz, from which the transfer efficiency can be obtained. Figure 7(b) shows the RCWPT
system working at resonant frequency, in which the output current and voltage are
1.172 A and 5.782 V, respectively. It can be calculated that the output power is
6.777 W. From Fig. 7(b), it can be seen clearly that the bulb is brighter than the
moment in Fig. 7(a), in which the output current and voltage are 0.8707 A and
2.782 V, respectively, and the output power is 2.422 W.
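The reported output powers follow directly from the measured values:

$$P_{out} = 5.782\,\text{V} \times 1.172\,\text{A} \approx 6.78\,\text{W (at resonance)}, \qquad P_{out} = 2.782\,\text{V} \times 0.8707\,\text{A} \approx 2.42\,\text{W (sweep mode)}$$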
Once the state-of-the-art WPT technology is integrated into NCSLab, it is able to
provide remote access for the teachers and students. On one hand, the teacher can
clearly explain the RCWPT system through classroom demonstration. On the other
hand, the students can carry out the WPT experiment individually, anytime and anywhere,
with a customized control and monitoring interface. The integration brings the technology
close to students at lower cost and with more convenience.


Fig. 7. RCWPT system in NCSLab (a) working in sweep-frequency mode (frequency at
180.75 kHz ± 1000 Hz, sweep frequency 0.4) (b) working at 180.75 kHz

6 Conclusion

In this paper, a WPT system is deployed into the NCSLab framework. The integration
of WPT system into NCSLab benefits from various monitoring and control widgets of
NCSLab. The optimum frequency and best working point can be easily obtained by
setting the WPT system in the sweep-frequency mode using the widgets of NCSLab,
which show the results in a visual and intuitive interface. Thus, the system can be
adapted to the best working point by resetting the frequency obtained previously,


which could make the system work at its best condition and achieve the highest
output power. The remote control platform provides the flexibility for the experimenter to
remotely perform experiments anytime and anywhere, as long as the Internet is available,
which addresses the tuning issue as well as the safety issue at the same time. Using
NCSLab, the WPT system can be integrated into an online laboratory for remote
experimentation, for both classroom demonstration and experiments by students, which
brings state-of-the-art technology close to students.

Acknowledgement. This work was supported by the National Natural Science Foundation
(NNSF) of China under Grant 61374064.

References
1. Sample, A.P., Yeager, D.J., Powledge, P.S., Mamishev, A.V., Smith, J.R.: Design of an
RFID-based battery-free programmable sensing platform. IEEE Trans. Instrum. Meas. 57
(11), 2608–2615 (2008)
2. Kurs, A., Karalis, A., Moffatt, R., Joannopoulos, J.D., Fisher, P., Soljacic, M.: Wireless
power transfer via strongly coupled magnetic resonances. Science 317(5834), 83–86 (2007)
3. Inagaki, N.: Theory of image impendence matching for inductively coupled power transfer
systems. IEEE Trans. Microw. Theory Tech. 62, 901–908 (2014)
4. Kiani, M., Jow, U.-M., Ghovanloo, M.: Design and optimization of a 3-coil inductive link
for efficient wireless power transmission. IEEE Trans. Biomed. Circuits Syst. 5(6), 579–591
(2011)
5. Sample, A.P., Meyer, D.A., Smith, J.R.: Analysis, experimental results, and range adaptation
of magnetically coupled resonators for wireless power transfer. IEEE Trans. Industr.
Electron. 58(2), 544–554 (2011)
6. Beh, T.C., Kato, M., Imura, T., Oh, S., Hori, Y.: Automated impedance matching system for
robust wireless power transfer via magnetic resonance coupling. IEEE Trans. Industr.
Electron. 60(9), 3689–3698 (2013)
7. Deng, Q., Liu, J., Czarkowski, D., Kazimierczuk, M.K., Bojarski, M., Zhou, H., Hu, W.:
Frequency-dependent resistance of litz-wire square solenoid coils and quality factor
optimization for wireless power transfer. IEEE Trans. Industr. Electron. 63(5), 2825–2837
(2016)
8. Zhou, H., Zhu, B., Hu, W., Liu, Z., Gao, X.: Modelling and practical implementation of
2-coil wireless power transfer systems. J. Electr. Comput. Eng. 27, 1–8 (2014)
9. Hu, W., Zhou, H., Deng, Q., Gao, X.: Optimization algorithm and practical implementation
for 2-coil wireless power transfer systems. Am. Control Conf. (ACC) 2014, 4330–4335
(2014)
10. Kang, S.H., Choi, J.H., Harackiewicz, F.J., Jung, C.W.: Magnetic resonant three-coil WPT
system between off/in-body for remote energy harvest. IEEE Microwave Wirel. Compon.
Lett. 26(9), 741–743 (2016)
11. Moon, S.C., Kim, B.C., Cho, S.Y., Ahn, C.H., Moon, G.W.: Analysis and design of a
wireless power transfer system with an intermediate coil for high efficiency. IEEE Trans.
Industr. Electron. 61(11), 5861–5870 (2014)
12. Yin, J., Lin, D., Lee, C.K., Hui, S.Y.R.: A systematic approach for load monitoring and
power control in wireless power transfer systems without any direct output measurement.
IEEE Trans. Power Electron. 30(3), 1657–1667 (2015)


13. RamRakhyani, A.K., Lazzi, G.: Interference-free wireless power transfer system for
biomedical implants using multi-coil approach. Electron. Lett. 50(12), 853–855 (2014)
14. Lai, J., Zhou, H., Lu, X., Yu, X., Hu, W.: Droop-based distributed cooperative control for
microgrids with time-varying delays. IEEE Trans. Smart Grid 7(4), 879–891 (2016)
15. Lu, X., Yu, X., Lai, J., Guerrero, J.M., Zhou, H.: Distributed secondary voltage and
frequency control for islanded microgrids with uncertain communication links. IEEE Trans.
Indus. Inf. doi:10.1109/TII.2016.2541693
16. Nedic, Z.: Demonstration of collaborative features of remote laboratory NetLab. In: 2012 9th
International Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 1–4
(2012)
17. Henke, K., Vietzke, T., Hutschenreuter, R., Wuttke, H.D.: The remote lab cloud
‘GOLDi-labs.net’. In: 2016 13th International Conference on Remote Engineering and
Virtual Instrumentation (REV), pp. 37–42 (2016)
18. Stefka, P., Zakova, K.: Displacement measurements versus time using a remote inclined
plane laboratory. In: 2016 13th International Conference on Remote Engineering and Virtual
Instrumentation (REV), pp. 435–439 (2016)
19. Hu, W., Liu, G.-P., Zhou, H.: Web-based 3-D control laboratory for remote real-time
experimentation. IEEE Trans. Industr. Electron. 60(10), 4673–4682 (2013)
20. Hu, W., Zhou, H., Liu, Z.-W., Zhong, L.: Web-based 3D interactive virtual control
laboratory based on NCSLab framework. Int. J. Online Eng. 10(6), 10–18 (2014)
21. Lei, Z., Hu, W., Zhou, H., Zhong, L., Gao, X.: A DC motor position control system in a 3D
real-time virtual laboratory environment based on NCSLab 3D. Int. J. Online Eng. 11(3),
49–55 (2015)
22. Hu, W., Liu, G.-P., Rees, D., Qiao, Y.: Design and implementation of web-based control
laboratory for test rigs in geographically diverse locations. IEEE Trans. Industr. Electron. 55
(6), 2343–2354 (2008)
23. Santana, I., Ferre, M., Izaguirre, E., Aracil, R., Hernández, L.: Remote laboratories for
education and research purposes in automatic control systems. IEEE Trans. Industr. Inf. 9(1),
547–556 (2013)
24. Maiti, A., Maxwell, A.D., Kist, A.A.: Features, trends and characteristics of remote access
laboratory management systems. Int. J. Online Eng. 10(2), 30–37 (2014)
25. Lei, Z., Hu, W., Zhou, H.: Deployment of a web-based control laboratory using HTML5. Int.
J. Online Eng. 12(7), 18–23 (2016)
26. Hu, W., Lei, Z., Zhou, H., Liu, G.-P., Deng, Q., Zhou, D., Liu, Z.-W.: Plug-in free web
based 3-D interactive laboratory for control engineering education. IEEE Trans. Industr.
Electron. doi:10.1109/TIE.2016.2645141

Spreading the VISIR Remote Lab Along Argentina.
The Experience in Patagonia

Unai Hernandez-Jayo1(✉), Javier Garcia-Zubia1, Alejandro Francisco Colombo2,
Susana Marchisio3, Sonia Beatriz Concari3, Federico Lerro3, María Isabel Pozzo4,
Elsa Dobboletta4, and Gustavo R. Alves5


1 University of Deusto, Avda Universidades, 24, 48007 Bilbao, Spain
unai.hernandez@deusto.es
2 Universidad Nacional de la Patagonia San Juan Bosco, Ciudad Universitaria Km 4,
9005 Comodoro Rivadavia, Chubut, Argentina
3 Universidad Nacional de Rosario, Maipu 1065, 2000 Rosario, Santa Fe, Argentina
4 Rosario Institute of Research in Educational Sciences (IRICE-CONICET-UNR),
Ocampo y Esmeralda, Rosario, Argentina
5 Polytechnic of Porto, R. Dr. Roberto Frias, 4200-465 Porto, Portugal

Abstract. The learning of technical and science disciplines requires experi‐


mental and practical training. Hands-on labs are the natural scenarios where prac‐
tical skills can be developed but, thanks to Information and Communication
Technologies (ICT), virtual and remote labs can provide a framework where
Science, Technology, Engineering and Mathematics (STEM) disciplines can also
be developed. One of these remote labs is the Virtual Instruments System in
Reality (VISIR), specially designed to practice in the area of analog electronics.
This paper aims at describing how this remote lab is being used in the Universidad
Nacional de la Patagonia San Juan Bosco (UNPSJB - Argentina), in the frame‐
work of the VISIR+ (“This project has been funded with support from the Euro‐
pean Commission. This publication reflects the views only of the authors, and the
Commission cannot be held responsible for any use which may be made of the
information contained therein”.) project funded by the Erasmus+ Program, one
institution without previous experiences with remote labs.

1 Introduction

The Virtual Instrument Systems in Reality (VISIR) is a well-known remote lab that has
been discussed many times in this conference and in many articles published in journals.
Designed and developed by Prof. Ingvar Gustavsson in Sweden almost 10 years
ago [1], this remote lab has been set up in different European institutions. University of
Deusto was the first institution that purchased and deployed the VISIR outside Sweden,
and it was followed by other universities in Spain, Austria and Portugal. After the
expansion of the remote lab platform, the VISIR Consortium created around it aimed at
sharing experiences and experiments using the VISIR as a learning tool which helps
students and teachers to achieve the learning outcomes of subjects related to analogue
electronics. With the goal of spreading the knowledge about the VISIR, the VISIR


+ project was presented to ERASMUS+ European Union call, being finally accepted
in July 2015 [2]. To fulfil that goal, European universities that have the experience of
using the VISIR will transfer it to Latin American institutions, namely Higher Education
Institutions with engineering careers/courses.
VISIR+ has two well-differentiated stages: during the first one, institutions
from Latin America must deploy the physical elements, instruments and components
of the VISIR remote lab. This stage is supported by staff from BTH (Sweden), the
developers of the remote lab. In a second stage, the other European Universities involved
in the project will help their Latin American partners to exploit the resources of the
VISIR remote lab as a learning tool, sharing with them their experiences along these
years.
This paper, rather than focusing on describing VISIR+, aims at exploring
the results of the first training action, held in Rosario (Argentina) in September
2016. During this training action, staff from the University of Deusto introduced the VISIR
remote lab to more than 25 trainers, lecturers and professors from different parts of
Argentina who were interested in discovering the possibilities offered by the VISIR. The
sessions started with an introduction to remote labs, as many of the attendees were novices
in these environments.
The goal of this paper is to show not only the experiences during this training action,
but also the first intensive use of the VISIR by lecturers and students from Universidad
Nacional de la Patagonia San Juan Bosco (UNPSJB).

2 Scope of Training Action 2

One of the expected results of the project is a set of educational modules for engineering
courses comprising the use of hands-on, simulated and remote labs, following an
enquiry-based methodology. It implies the inclusion of the VISIR remote lab in theo‐
retical and practical lessons with students, within a variety of courses related to electric
and electronic circuits. In order to fulfill that objective, the VISIR+ project includes two
training actions at the Latin American institutions that are partners of the Project.
The first training action in the framework of VISIR+ project took place at Facultad
de Ciencias Exactas, Ingeniería y Agrimensura (FCEIA) from Universidad Nacional de
Rosario (UNR) in September 2016. The training was developed during three days,
combining oral presentations, workshops and practical activities with VISIR. The
training sessions were led by two research professors of the Universidad de Deusto, who
are experts in the use of VISIR, plus three UNR teachers who usually use remote labo‐
ratory practice in their subjects of Electronic Engineering courses. Also present was one
researcher from the Instituto Rosario de Investigación en Ciencias de la Educación
(IRICE), a member of the VISIR+ project, with the aim of taking records of the
training sessions.
This training action at FCEIA targets all teachers with lecture duties in Engineering
courses related to electric and electronic circuits, plus two representatives from each of
the two UNR associated partners: Facultad Regional Rosario of the Universidad Tecno‐
lógica Nacional and Instituto Politécnico Superior of the UNR. As this training action


was also considered a key moment for dissemination at a regional level, academic
authorities, PhD students and teachers from other institutions near UNR were also
invited.
Three teachers from different Argentine Universities were also invited to participate
in this training action. They were selected by Consejo Federal de Decanos de Ingeniería
(CONFEDI). The participation of CONFEDI as an associated partner provides the
conditions for creating an additional impact at the national level in Argentina. The three
teachers attended the training sessions as regional coordinators of a project that
CONFEDI is carrying out in Argentina to encourage the subsequent dissemination of
the use of VISIR in the Engineering faculties. Belonging to this last target group is the
professor of the Universidad Nacional de la Patagonia San Juan Bosco whose experience
using VISIR is presented in this paper.
During this training action, staff from Universidad de Deusto introduced the VISIR
remote lab to 26 trainers, lecturers and professors from different parts of Argentina that
were interested in discovering the possibilities offered by the VISIR (Fig. 1).

Fig. 1. Training action at Universidad Nacional de Rosario (Argentina)

Due to some administrative delays related to the import process, UNR still lacked
the necessary equipment to support training. This inconvenience was overcome by using
the VISIR platform of Universidad de Deusto, via Internet.
The sessions started with an introduction to remote labs because many of the attendees
were novices in these environments. The training program included aspects related
to the design, implementation and evaluation of educational modules with VISIR.
In addition, it included application examples selected from those available on WebLab-
Deusto, to prove the adaptability of VISIR to different institutional cultures and its
universality in terms of experiments with electric and electronic circuits. The teachers
focused on both technical and didactic aspects, especially in order to scaffold students'
learning and foster their autonomy, namely by allowing them to conduct real experiments
over the Internet. Once the training was completed, and to encourage both the
teachers’ motivation on the use of VISIR and the immediate application of what was


learned to the classroom context, attendees were asked to plan an educational activity
using VISIR contextualizing the plan in their own subject, career and institution.

2.1 Immediate Outputs of Training Action


A Satisfaction Questionnaire (SQ) was designed by the members of VISIR+ Project in
charge of Qualitative research, from the Research Institute of Education Sciences
(IRICE-CONICET) in Argentina and from the Instituto Politécnico do Porto (IPP) from
Portugal. The SQ had a twofold objective: measuring the immediate impact of TA on
target audience and evaluating possible scenarios for VISIR implementations in HE
institutions. The SQ was given to the 19 TA participants at UNR and the questions
focused on three main aspects of the TA: (1) the workshop (objectives and time allotted)
and the lecturers (interaction with participants); (2) the use of technological equipment,
i.e. VISIR Lab, as regards the didactic implications and practical use; and (3) the partic‐
ipant’s expectations on TA2. All questions were presented in the form of statements and
a Likert scale from 1 to 5, being (1) Unsatisfactory and (5) Excellent. Table 1 below
sums up the results.

Table 1. TA impact/outcomes
                      Workshop    Technological equipment    Participants' expectations
Excellent             48.17%      6%                         47%
Highly satisfactory   43.83%      26%                        43%
Above average         8.00%       68%                        10%

Most participants scored the workshop as excellent (48.17%) and highly satisfactory
(43.83%). Only 8% found the workshop above average. The evaluation of the workshop
included the overt explanation of the TA objectives, the time allotted, the instructors’
participation and the extent to which technological equipment had enhanced the effec‐
tiveness of teaching and learning. As regards the actual use of the technological equip‐
ment, namely VISIR Remote Lab, the answers ranged from too easy to use (i.e. excel‐
lent) 6%, easy to use (i.e. highly satisfactory) 26%, and just right (i.e. above average)
68%. Finally, TA met participants’ expectations by 47% as excellent, 43% as highly
satisfactory and 10% above average.
An open question was also included in the SQ in order to provide a qualitative
perspective to the evaluation by eliciting reflection on positive and negative aspects of
the whole experience. Three main categories arose from the reading of participants'
answers: equipment potential, clear presentation, and time.
that the training action raised awareness about the potential of VISIR equipment not
only by presenting the possibilities of actual use in the classroom but also by giving
participants the chance to experiment during the sessions. Secondly, most participants
pointed out the presentation approach facilitated their understanding of VISIR technical
and pedagogical use. Finally, participants referred to the need of more time to extend
the TA experience: the schedule was constrained to some slots for actual connection to
VISIR via University of Deusto WebLab which participants considered limited.


3 Early Use of the VISIR in Patagonia

One of the TA participants, from the Universidad Nacional de la Patagonia San Juan
Bosco (UNPSJB) in Comodoro Rivadavia (Argentina), implemented the VISIR Remote
Lab in his subject Theory of Circuits. The subject Theory of Circuits is in the second year
of Electronic Engineering at the Engineering College of UNPSJB. The VISIR Remote Lab
learning tool was introduced into the subject to give students more options for real circuit
experiments. To the traditional lab activities, a practice was added to allow students to
analyze and interpret the forced temporal response of a resistive, inductive and capacitive
(RLC) circuit. In this type of practice students had to experiment on a real circuit, i.e.
select components and instruments, make the connections, set the instruments and carry
out the measurements. Before the practice, students carried out the modeling, calculation
and simulation of the target phenomenon.
The modeling developed by the students was based on circuit theory, from which a set
of physical magnitudes had to be calculated, expressions of variables obtained and results
interpreted. The behavior of the model was also simulated by means of appropriate software and
the results were compared. In the next stage, students carried out the experiments using
VISIR Lab and contrasted the results against calculus and simulation drawing conclusions
from results. To organize the tasks, a lab guideline was designed where the objective of the
practice, the activities preliminary to real circuit experiment and procedures were made
explicit. Students had access to the remote lab and all necessary information about VISIR
from the subject webpage (http://www.ing.unp.edu.ar/electronica/asignaturas/ee016/) and
links to WebLab-Deusto from University of Deusto, Spain.

3.1 Students’ Use of VISIR During the Experimental Practice

The students carried out the activities individually in the computer room of the Electronic
Department. At the beginning of the activities, a professor guided students in the use of
VISIR Lab about how to access to the remote lab by means of assigned users’ names and
passwords. Then students carried out the selection of components, the wiring, the instru‐
ment configuration and the measurement following the procedure given and the objectives
set for the practice. During this process, students shared with classmates the results of each
individual experience, their learning and conclusions, this time being the role of professor
that of a moderator.
To analyze and interpret the behavior of electric variables of RLC circuits, the guide
suggested the model shown in the figure with R1 = 100 Ω, C1 = 2.20 nF, L1 = 10 mH
(Fig. 2).


Fig. 2. Experimental practice exercise

The procedure established that the circuit should be wired on the "protoboard",
a square signal of 500 Hz frequency and 1 V peak-to-peak amplitude generated, and the
signals Vg and VL obtained from the oscilloscope, from which the attenuation and resonance
frequencies should be measured (the theoretical magnitudes are $\alpha = R_1/(2L_1)$ and
$\omega_0 = 1/\sqrt{L_1 C_1}$, respectively). To obtain the attenuation frequency, students observed on
the oscilloscope the time $\tau = 1/\alpha$ by which $V_L$ falls to 37% of its initial value. To determine
the resonance frequency, they observed the period T of the sinusoid and calculated $\omega_0 = 2\pi/T$.
The results obtained from the experience using the VISIR Remote Lab were then
compared to the previous activities. Students submitted a report with the description of
the practice carried out and the conclusions drawn to the professor (Fig. 3).
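For the suggested component values, the theoretical magnitudes that the students compared against are approximately (our calculation from the values given above):

$$\alpha = \frac{R_1}{2L_1} = \frac{100\,\Omega}{2 \times 10\,\text{mH}} = 5\times10^{3}\,\text{s}^{-1} \;\Rightarrow\; \tau = \frac{1}{\alpha} = 0.2\,\text{ms}$$

$$\omega_0 = \frac{1}{\sqrt{L_1 C_1}} = \frac{1}{\sqrt{10\,\text{mH} \times 2.20\,\text{nF}}} \approx 2.13\times10^{5}\,\text{rad/s} \;\Rightarrow\; f_0 \approx 33.9\,\text{kHz},\ T \approx 29.5\,\mu\text{s}$$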

Fig. 3. Practical implementation and results at VISIR remote lab

3.2 Impact

Adopting the new tool VISIR Remote Lab to carry out the experiments turned out to be
an appealing option for both students and professors. The tool is accessible and has an
outstanding graphic interface. During the experience, the remote lab was adopted immediately
and the tool proved intuitive, especially to the students, who most of the
time anticipated the teachers' explanations about its use. Probably because they were familiar
with similar real instruments at the UNPSJB lab, students did not need to read manuals or
additional online information about VISIR.
Many aspects from the subject Circuit Theory syllabus were strengthened using a
remote lab, namely the teaching objectives, the management, the task organization, the


accessibility and the relation and integration with other pedagogical means and
resources.

3.3 Analysis of the Experience


The VISIR instance of the University of Deusto is deployed on the WebLab-Deusto
RLMS (Remote Laboratory Management System) [3], which offers a set of adminis‐
tration tools in order to analyse the performance of the users during their remote exper‐
imentation sessions.
If this analysis is focused only on the UNPSJB target group, the following conclu‐
sions can be obtained:
• The number of students involved in the experience was 11.
• The total number of uses of the lab was 46. On average, a student accessed the lab
about 4 times.
• The total time of all the sessions was 79215.06 s, that is, 7201.37 s per user.
• The maximum number of accesses in one day was 23, with 3561 s being the maximum
time spent in a single day (Fig. 4).

Fig. 4. Analysis of uses of VISIR by students from UNPSJB

If the experience of the most active user (unpsjb_1) is studied, the following
information can be obtained:
• The user with the account unpsjb_1 accessed the lab 11 times. The total time
spent by the user on the lab was 11780.19 s.
• The day on which the user spent the most time on the lab was October 18, spending
about 1 h performing experiments on the lab. From this session, the following
information can be obtained:
– The user performed 31 experiment executions on the lab. This does not mean that the
user built 31 different experiments, but that he/she executed one or more experiments 31 times.
– This session was the one before the last, and he/she did not produce any wrong circuit.
This means that he/she did not try to build any disallowed circuit or measurement.


– During the whole session, the circuit under test was the same and it was built by the
user in the same way. He/she only changed the configuration of the instruments to
obtain a better resolution of the measurement and then a better understanding of the
circuit behaviour.

4 Conclusion

The outcomes defined for the VISIR+ project are the natural evolution of the use of the
VISIR remote lab during the last 10 years. This remote lab has been tested and used by
all the European partners involved in the project, so now it is high time it was deployed
in other regions such as Latin America. All the experiences and experiments developed
over ten years are thus going to be shared among all the institutions of the project. The Project
implemented Training Actions to bridge these experiences between European and Latin
American institutions. This paper shows how the VISIR instance deployed at the University
of Deusto is being used by the Universidad Nacional de la Patagonia San Juan Bosco
(UNPSJB) in Comodoro Rivadavia. However, this is only the first step of the VISIR
spreading in Latin American countries. According to the working plan of the project,
two VISIR platforms will be deployed in Argentina, making its use by other Argentinian
institutions easier and faster.

Acknowledgment. The authors would like to acknowledge the support given by the European
Commission to the VISIR+ project through grant 561735-EPP-1-2015-1-PT-EPPKA2-CBHE-
JP.

References

1. Gustavsson, I., Nilsson, K., Zackrisson, J., Garcia-Zubia, J., Hernández-Jayo, U., Nafalski, A.,
Nedic, Z., Gol, O., Machotka, J., Pettersson, M.I., Lago, T., Hkansson, L.: On objectives of
instructional laboratories, individual assessment, and use of collaborative remote laboratories.
IEEE Trans. Learn. Technol. 2(4), 263–274 (2009)
2. Alves, G.R., Fidalgo, A., Marques, A., Viegas, C., et. al.: Spreading remote lab usage. A
System – A Community – A Federation. In: CISPEE Conference, Vila Real, Portugal, 19–
21 October 2016
3. Orduña, P., Bailey, P.H., Delong, K., López-De-Ipiña, D., García-Zubia, J.: Towards federated
interoperable bridges for sharing educational remote laboratories. Comput. Hum. Behav. 30,
389–395 (2014)

Educational Scenarios Using Remote Laboratory VISIR
for Electrical/Electronic Experimentation

Felix Garcia-Loro1, Ruben Fernandez2, Mario Gomez2, Hector Paz2, Fernando Soria2,
María Isabel Pozzo3, Elsa Dobboletta3, André Fidalgo3,4, Gustavo Alves4,
Elio Sancristobal1, Gabriel Diaz1, and Manuel Castro1 ✉

1
UNED, Madrid, Spain
{fgarcialoro,elio,mcastro}@ieec.uned.es, gdiaz@ieee.org
2
UNSE, Santiago del Estero, Argentina
raf@unse.edu.ar, mariog76@hotmail.com, hrpazunse@yahoo.com.ar,
mfernandos80@hotmail.com
3
IRICE-CONICET, Rosario, Argentina
pozzo@irice-conicet.gov.ar, elsadobboletta@gmail.com
4
IPP, Porto, Portugal
{anf,gca}@isep.ipp.pt

Abstract. In 2015, the Electrical and Computer Engineering Department (DIEEC)
of the Spanish University for Distance Education (UNED) started, together with
the Santiago del Estero National University (UNSE, Argentina), with the support
of the Research Institute of Education Sciences of Rosario (IRICE-CONICET,
Argentina) and under the coordination of the Polytechnic Institute of Porto (IPP,
Portugal), the development and deployment of the VISIR system inside UNSE
as part of the VISIR+ Project.
The main objective of the VISIR+ Project is to extend the current VISIR
network in South America, mainly in Argentina and Brazil, with the support and
patronage of the European Union Erasmus Plus program, inside the Capacity
Building program and as part of a future excellence network integration
framework. This extension of VISIR nodes gave rise in 2016 to a new project,
PILAR, which, as part of the Erasmus Plus projects, will allow the Strategic
Partnership to develop a new federation umbrella over the existing nodes and
network.

Keywords: Remote laboratory · VISIR · Educational scenarios

1 Context

Experimentation has always been a pillar on which educational institutions rely to
narrow the gap between the academic and industrial worlds. Experimentation allows
students to interact with real components, equipment and instruments, to verify the
theoretical laws governing the behavior of electric and electronic circuits, and to
analyze non-desired effects such as noise on output signals, temperature effects on
components, the behavior of different component technologies, etc. Unfortunately,
laboratory resources are limited because of their availability, costs, etc. This limitation
induces students to address practical experiences separately from theoretical contents,
as if they were two unrelated activities.

2 Orientations on the Work

The emergence of remote laboratories has provided new horizons in the learning process
and has brought new challenges in teaching design. Remote laboratories are being used
in many different ways and with different strategies, just as in-person laboratories have
traditionally been used.
Remote laboratories are a new tool that complements in-person laboratories, simulators
and virtual laboratories. The resulting pool of options provides a wide range of
possibilities when designing a course in which experimentation plays a key role.
Virtual Instruments System In Reality (VISIR) is a remote lab for wiring electric
and electronic circuit experiments that has been used, in the Electrical and Computer
Engineering Department (DIEEC) of the Spanish University for Distance Education
(UNED), within several subjects from different engineering degrees, master subjects,
expertise courses, Small Private Online Courses (SPOCs) and Massive Open Online
Courses (MOOCs), providing satisfactory results regarding both its performance and
the skills acquired by students.
The whole system, formed by all the actors and all the strategies used in the diverse
scenarios, has been analyzed in order to define a new learning environment, with the
objective of achieving an improved system that accommodates all the teaching/learning
scenarios and solves the inconveniences found both in each scenario separately and in
the interaction between them.

3 Approach Used

The main advantage of remote laboratories over in-person laboratories lies in their
availability without temporal or geographical restrictions. The main advantage of
VISIR, compared with other electronics remote laboratories, lies in its concurrent
access: multiple users can interact with the remote laboratory simultaneously, designing
the same or different circuits and monitoring the same or different signals in real time,
as in an in-person laboratory room with replicated workbenches.
The experience gained in the integration of the remote laboratory VISIR, mainly in
distance education, and the data collected from students’ feedback, logs related to the
remote laboratory interaction, surveys, etc. have allowed identifying the needs for
improvement and/or redesign.
All the data gathered from the LMS (Learning Management System) platforms have
been obtained by analyzing the corresponding databases (PostgreSQL, MongoDB). To
evaluate VISIR behavior (accuracy of the measurements, response when managing
request overloads, etc.) and to inspect students’ interaction with the laboratory (common
mistakes typical of VISIR, number of accesses, etc.), its database (MySQL) and its logs
(over 51 million log lines) have been analyzed.
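
As a minimal illustration of this kind of log mining, the following Python sketch counts
accesses per user from a plain-text log file; the file name, the log line format and the
field it extracts are assumptions made only for illustration, not the actual layout of the
VISIR or WebLab-Deusto logs.

import re
from collections import Counter

# Hypothetical log format: each access line is assumed to contain "LOGIN user=<account>".
ACCESS_RE = re.compile(r"LOGIN user=(\S+)")

def count_accesses(log_path):
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            match = ACCESS_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    # "visir_access.log" is a placeholder name for an exported log file.
    for user, n in count_accesses("visir_access.log").most_common():
        print(user, n)
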
Within the VISIR+ and PILAR projects, as well as previously in the distance and online
learning courses at UNED and in the MOOCs delivered using the VISIR system, the
remote laboratory VISIR has been widely used, and this use has been published in [1–9].
According to Ursutiu et al. [10] and their reference to Learning by Experience from
Haynes, any learning experience involves a number of steps:
• Experiencing/doing with the instructor’s help or not;
• Sharing/what happened?
• Processing/analyzing;
• Generalizing;
• Applying.
Building on this experience, the process developed to make the start-up of new
installations [11–16] (hardware, software and educational uses inside higher-education
academic environments) of the VISIR remote laboratory and software more effective
consists of the following steps:
1. Share publications and tutorials regarding the use of VISIR inside electrical and
electronics engineering courses.
2. Share the use of VISIR remotely to allow the new teachers access to start working
with the VISIR system.
3. Start a first face-to-face experience with some of the decision-making teachers
and academic administrators regarding the feasibility and best practices of the use of
VISIR inside the target institution.
4. Start several synchronous sessions (using some collaborative environment, like
Moodle, videoconference facilities, etc.) with the new teachers and personnel
involved in the new deployment, to allow fast initial access as well as a first
touch of the system. During these preliminary sessions the expert or monitor will
show the main functions and specifications of the VISIR system, as well as some
simple starting examples and their use in the same environments and working areas as
the future implementations.
5. Develop the face-to-face delivery with all the people involved in the on-site
implementation, as well as with members of possible new target institutions in the area
of the local university, in order to build a core group of users that will sustain the
local use in the future.
6. Run a local experience on the use and educational implementation of the VISIR
remote examples with the local students, both inside the classroom and with remote
access, to extend the experience of use and to exploit the remote laboratory as a
complement.
7. Develop and extend the teaching experience from the local institution to all the core
new institutions inside the local area to reinforce the knowledge and implementation
as well as to develop new local strategies and synergies.


8. Carry out a formal evaluation and quality assurance of the whole process involved
in the acquisition, development and deployment of the new VISIR remote laboratory
installation.

4 Outcomes

The integration of remote laboratories in online learning environments, together with


good practices in designing practical experiences, can alleviate the disadvantages of
remote laboratories compared to in-person laboratories, without leaving behind their
inherent advantages. What’s more, the strategy of using diverse and complementary
options in the same course (as in-person laboratories, remote laboratories and/or simu‐
lators) provides a broad range of capabilities and an easier assimilation of the experi‐
mental advantages in the academic domain [17–19].
Students have been able to complete the different activities and tasks from different
courses and educational platforms, to interact with the remote lab, etc. So, for students,
the different systems used have accomplished their function: to provide the remote
laboratory along with the theoretical contents.
Previous experience with the UNED system implementation, communities and platform,
aLF, and the INTECCA videoconference system allowed the implementation of the
remote laboratories as well as of the support systems inside UNED Abierta [20–24].
However, for the teaching staff it has not been possible to track the students’ interaction
across the different actors, so it has not been possible to cross-reference the information
obtained from them. A new whole system, taking into account all the inconveniences and
difficulties found, has been developed and is being deployed for the coming academic
year [25–32].

5 Conclusions

The results show the versatility of the VISIR remote laboratory in different learning
scenarios. Together with VISIR, a well-designed course, contents and experimental
experiences are needed in order to obtain satisfactory results, since a remote laboratory,
not only VISIR, is a tool: it is a means, not an end in itself.
An LMS platform with the necessary tools for a deeper analysis of the students’
learning process, and that integrates both environments (course platforms and remote
laboratory), seems necessary in order to evaluate the usefulness of the supplementary
documentation (videos, documents, activities, etc.) and its relationship with learning
and disengagement.
All these findings led to a new and more inclusive structure for the whole system, in
order to better exploit the experimental resources and, mainly, to create a new learning
environment intended for the analysis of the learning process for further improvements.


Acknowledgments. The authors acknowledge the support of the eMadrid project (Investigación
y Desarrollo de Tecnologías Educativas en la Comunidad de Madrid) - S2013/ICE-2715, VISIR
+ project (Educational Modules for Electric and Electronic Circuits Theory and Practice following
an Enquiry-based Teaching and Learning Methodology supported by VISIR) Erasmus+ Capacity
Building in Higher Education 2015 nº 561735-EPP-1-2015-1-PT-EPPKA2-CBHE-JP and PILAR
project (Platform Integration of Laboratories based on the Architecture of visiR), Erasmus
+ Strategic Partnership nº 2016-1-ES01-KA203-025327.
The authors also acknowledge the Education Innovation Project (PIE) of UNED, GID2016-17-1, “Prácticas remotas
de electrónica en la UNED, Europa y Latinoamérica con Visir - PR-VISIR”, from the Academic
and Quality Vicerectorate and the IUED (Instituto Universitario de Educación a Distancia) of the
UNED.

References

1. Tawfik, M., Sancristobal, E., Martin, S., Gil, R., Diaz, G., Colmenar, A., Peire, J., Castro, M.,
Nilsson, K., Zackrisson, J., Håkansson, L., Gustavsson, I.: Virtual Instrument Systems in
Reality (VISIR) for remote wiring and measurement of electronic circuits on breadboard.
IEEE Trans. Ind. Electr. 6(1), 60–72 (2013)
2. Haertel, T., Terkowsky, C.: Creativity versus adaption: a paradox in higher engineering
education. Int. J. Creativity Probl. Solving 26(2), 105–119 (2016)
3. VISIR+ Project. Educational Modules for Electric and Electronic Circuits Theory and
Practice following an Enquiry-based Teaching and Learning Methodology supported by
VISIR - Erasmus+ Capacity Building in Higher Education 2015 nº 561735-EPP-1-2015-1-
PT-EPPKA2-CBHE-JP. http://www2.isep.ipp.pt/visir/. Accessed 15 Nov 2016
4. PILAR Project. Platform Integration of Laboratories based on the Architecture of visiR -
Erasmus+ Strategic Partnership nº 2016-1-ES01-KA203-025327. http://ec.europa.eu/
programmes/erasmus-plus/projects/eplus-project-details-page/?nodeRef=workspace://
SpacesStore/2d88ecb1-3db1-4a29-93c1-dd2802eec4f6. Accessed 15 Nov 2016
5. Naef, O.: Real laboratory, virtual laboratory or remote laboratory: what is the most effective
way? Intl. J. Online Eng. 2(3), 1–7 (2006)
6. Hanson, B., Culmer, P., Gallagher, J., Page, K., Read, E., Weightman, A., Levesley, M.:
ReLOAD: Real Laboratories Operated at a Distance. IEEE Trans. Learn. Technol. 2(4), 331–
341 (2009)
7. Nedic, Z., Machotka, J., Nafalski, A.: Remote laboratories versus virtual and real laboratories.
In: 34th ASEE/IEEE Frontiers in Education Conference, T3E-1, pp. 1–6, November 2003
8. Coble, A., Smallbone, A., Bhave, A., Watson, R., Braumann, A., Kraft, M.: Delivering
authentic experiences for engineering students and professionals through e-labs. In: IEEE
EDUCON, pp. 1085–1090 (2010)
9. Sancristobal, E., Martin, S., Gil, R., Orduna, P., Tawfik, M., Pesquera, A., Diaz, G., Colmenar,
A., Garcia-Zubia, J., Castro, M.: State of art, initiatives and new challenges for virtual and
remote labs. In: IEEE 12th International Conference on Advanced Learning Technologies,
ICALT, pp. 714–715, July 2012
10. Ursutiu, D., Samoila, C., Jinga, V.: Remote experiment and creativity. Int. J. Creativity Probl.
Solv. 26(2), 47–80 (2016)
11. Potkonjak, V., Vukobratovic, M., Jovanovic, K., Medenica, M.: Virtual mechatronic/robotic
laboratory - a step further in distance learning. Comput. Educ. 55, 465–475 (2010)

zamfira@unitbv.ro
Educational Scenarios Using Remote Laboratory VISIR 303

12. Jara, C.A., Candelas, F.A., Puente, S.T., Torres, F.: Hands-on experiences of undergraduate
students in automatics and robotics using a virtual and remote laboratory. Comput. Educ.
57, 2451–2461 (2011)
13. Rojko, A., Hercog, D., Jezernik, K.: Power engineering and motion control web laboratory:
design, implementation, and evaluation of mechatronics course. IEEE Trans. Ind. Electron.
57(10), 3343–3354 (2010)
14. Vivar, M.A., Magna, A.R.: Design, implementation and use of a remote network lab as an
aid to support teaching computer network. In: Third International Conference on Digital
Information Management, ICDIM, London (UK), 13–16 November 2008
15. Gustavsson, I., Nilsson, K., Lagö, T.L.: The visir open lab platform. In: Azad, A.K.M., Auer,
M.E., Harward, V.J. (eds.) Internet Accessible Remote Laboratories: Scalable E-Learning
Tools for Engineering and Science Disciplines Engineering Science Reference, pp. 294–317
(2012). ISBN 978-1-61350-186-3
16. Sheridan, T.B.: Descartes, Heidegger, Gibson, and God: towards an eclectic ontology of
presence. Presence Teleoperators Virtual Env. 8(5), 551–559 (1999)
17. Ma, J., Nickerson, J.V.: Hands-on, simulated, and remote laboratories: a comparative
literature review. ACM Comput. Surv. 38(3), 1–24 (2006)
18. Lang, D., Mengelkamp, C., Jager, R.S., Geoffroy, D., Billaud, M., Zimmer, T.: Pedagogical
evaluation of remote laboratories in eMerge project. Eur. J. Eng. Educ. 32(1), 57–72 (2007)
19. Lindsay, E.D., Good, M.C.: Effects of laboratory access modes upon learning outcomes. IEEE
Trans. Educ. 48(4), 619–631 (2005)
20. Intecca.uned.es: INTECCA | ¿Qué es AVIP? (2016). https://www.intecca.uned.es/
inteccainfo/plataforma-avip/que-es-avip/. Accessed 15 Nov 2016
21. VISIR SIG: Online-engineering.org. VISIR Special Interest Groups (SIG). http://
www.online-engineering.org/SIG_visir.php. Accessed 15 Nov 2016
22. UNED-DIEEC: VISIR – Electronics Remote Lab « Research on Technologies for Engineering
Education (2016). http://ohm.ieec.uned.es/portal/?page_id=76. Accessed 15 Nov 2016
23. Openlabs: OpenLabs - ElectroLab (2016). http://openlabs.bth.se/index.php?page=
ElectroLab. Accessed 26 Jan 2016
24. UNED Abierta: UNED Abierta (2015). https://unedabierta.uned.es/wp/. Accessed 15 Nov 2016
25. IMS Global Learning Consortium: Imsglobal.org, IMS Global Learning Consortium. https://
www.imsglobal.org/. Accessed 15 Nov 2016
26. IMS Global Learning Consortium: Imsglobal.org. IMS Global Learning Tools
Interoperability Basic LTI Implementation Guide. https://www.imsglobal.org/specs/ltiv1p0/
implementation-guide. Accessed 15 Nov 2016
27. Coursera Technology: Tech.coursera.org. LTI Integration. https://tech.coursera.org/app-
platform/lti/. Accessed 26 Jan 2016
28. Oauth.net: OAuth Community Site. http://oauth.net/. Accessed 15 Nov 2016
29. IMS Global Learning Consortium: Imsglobal.org, Learning Tools Interoperability®
Background. https://www.imsglobal.org/activity/learning-tools-interoperability. Accessed
15 Nov 2016
30. Weblab-Deusto: WebLab-Deusto, documentation. Authentication. https://weblabdeusto.
readthedocs.org/en/latest/authentication.html. Accessed 15 Nov 2016
31. Moodle: Moodle - Open-source learning platform. http://www.moodle.org. Accessed 15 Nov
2016
32. Swope, J.J.: A Comparison of Five Free MOOC Platforms for Educators: EdTech. (2014).
http://www.edtechmagazine.com/higher/article/2014/02/comparison-five-free-mooc-
platforms- educators. Accessed 15 Nov 2016

Use and Application of Remote and
Virtual Labs in Education

Robot Online Learning Through Digital Twin
Experiments: A Weightlifting Project

Igor Verner1 ✉ , Dan Cuperman1, Amy Fang2, Michael Reitman3,



Tal Romm1,3, and Gali Balikin1,3


1
Technion – Israel Institute of Technology, Haifa, Israel
ttrigor@technion.ac.il, dancup@inter.net.il, ty.romm@gmail.com,
galya1406@gmail.com
2
Massachusetts Institute of Technology, Boston, MA, USA
amyfang@mit.edu
3
PTC Inc., Haifa Office, Haifa, Israel
reit@ptc.com

Abstract. This paper proposes and explores an approach in which robotics


projects of novice engineering students focus on development of learning robots.
We implemented a reinforcement learning scenario in which a humanoid robot
learns to lift a weight of unknown mass through autonomous trial-and-error
search. To expedite the process, trials of the physical robot are substituted by
simulations with its virtual twin. The optimal parameters of the robot posture for
executing the weightlifting task, found by analysis of the virtual trials, are trans‐
mitted to the robot through internet communication. The approach exposes
students to the concepts and technologies of machine learning, parametric design,
digital prototyping and simulation, connectivity and internet of things. Pilot
implementation of the approach indicates its potential for teaching freshman and
HS students, and for teacher education.

Keywords: Robot learning · Weightlifting · Virtual twin · Internet of Things

1 Introduction

The view of robots as systems that repetitively perform preprogrammed behaviors under
automatic control is changing. The increasing complexity of robot hardware and
growing sophistication of robot tasks in unstructured dynamic environments make it
difficult to preprogram robots for every possible scenario, urging the development of
robots capable of adapting to changes, coordinating movements, and learning new behaviors
[1]. Robot intelligence technologies used to implement these capabilities are based on
methods of machine learning, simulation, and cloud computing.
The two machine learning approaches, mainly applied to teach new skills to robots,
are imitation learning and reinforcement learning [2, 3]. In imitation learning, the robot
records and imitates the target movement demonstrated by the instructor. In reinforce‐
ment learning (RL), the robot is not directly instructed, but through autonomous trial-
and-error search, it determines and records the action which optimizes the performance
criterion. The opportunities to teach robots new skills or adapt existing skills to new
situations by using the RL approach have been intensively investigated, while one of
the main challenges is to reduce the experimentation time and the wear and tear on the
robot. A recommended method to cope with this challenge is by means of an empirical
predictive model which is autonomously generated and used to guide robot actions [4].
Simulation in robotics is a software tool for design and testing robot behaviors on a
virtual robot before implementing them on a real robot. Some of the benefits of using
robotics simulations, listed in [5], directly relate to reinforcement learning. In particular,
performing robot trials in a virtual environment allows experimental data to be generated
faster, more easily, and in any desired quantity, thus significantly speeding up the learning
process. Modern computer aided design systems provide means for creating virtual
models which accurately resemble the geometric and mechanical characteristics of the
real robots.
Cloud robotics is a method to enhance the functionality of a robot by using remote
computing resources of memory, computational power, and connectivity [6]. In robot
learning, connection to the intended cloud platform enables the accumulation, storage,
and processing of data from robot trials and other relevant information on the web server,
and the transmission of these data back to the robot. The platform can serve as a hub of
the Internet of Things (IoT) network, through which robots can share the learned skills
among themselves and communicate with other systems.
The goal of our research project is to propose and explore an approach in which the
challenge of implementation of robot learning is used as a thread for teaching the
discussed robot intelligence technologies to high school and first-year engineering
students. In this approach, the student is assigned to implement a robot task in which the
desired behavior cannot be pre-programmed, but has to be learned by the robot. In such a
project the student teaches the robot to acquire the skill by implementing a reinforcement
learning process supported by simulation modeling and cloud communication.
Our research is an ongoing multi-case study conducted at the Technion Center for
Robotics and Digital Technology Education through collaboration with the PTC Israel
Office. Participants are 1st and 2nd year students from MIT doing summer internship
projects in our lab, high school (HS) students participating in our outreach activities,
and Technion students studying technology education. We utilize robots constructed by
students using the ROBOTIS Bioloid Premium kit (http://en.robotis.com) and software
tools by PTC, namely, the 3D modeling system Creo Parametric, and the IoT platform
ThingWorx.
Our project so far has passed three research phases. In the first phase, university
students implemented a RL scenario in which a humanoid robot, through a series of
trials, learns to adapt its body tilt angle for lifting different weights [7]. In the second
phase, a group of high school students, mentored by a faculty staff member and our
university students, constructed animal-like robots and implemented different RL
scenarios, utilizing the approach tested in the first phase. The focus of this paper is the
third phase, in which university students apply reinforcement learning, 3D modeling,
and cloud communication in order to implement a scenario in which a humanoid robot
learns to manipulate multiple joints to maintain its stability when lifting different
weights.


2 Robot Weightlifting Task

Pick-and-place manipulations by fixed-base robots are widely explored and studied in


industrial robotics. On the other hand, planning basic handling tasks such as weight‐
lifting, to be executed by humanoid robots, is yet an evolving research topic [8, 9]. In
the weightlifting task, if the mass and size of the weight are known to the robot, then its
posture can be controlled in the open loop. The control policy is to prevent the robot
from falling down by maintaining its static and dynamic stability [9]. If the weight’s
mass and size are unknown, closed-loop control is needed. Here, the control policy
can be determined analytically, based on the dynamic model of the robot and data from
force and torque sensors [8]. Rosenstein et al. [10] noted that analytic solutions for
humanoid robot weightlifting can be complex. They proposed an alternative approach
based on reinforcement learning through trial and error.
Recently, performing weightlifting tasks by humanoid robots has become a chal‐
lenge in educational robotics addressed to university and even high school students [11].
Michieletto et al. [12] used a weightlifting task as a challenge of their “Autonomous
Robotics” course for MSc students majoring in computer science. The task was imple‐
mented through robot learning from a human demonstrator with the aid of Microsoft
Kinect. Weightlifting by a humanoid robot was also posed as a benchmark assignment
of the robot competition FIRA HuroCup for university and school students [12, 13]. The
assignment was posed without reference to robot learning.
Our motivation to explore the educational challenge of humanoid robot weightlifting
came from developing a fetch-and-carry robot for the RoboWaiter contest [14] in which
we introduced the humanoid league [15]. In the contest assignment, the mass, size, and
location of the weight were predetermined. Following the contest project, we turned to
the new challenge of lifting a weight when its mass is unknown to the humanoid robot.
As mentioned in the introduction, in the first phase of our project, undergraduate students
constructed and programmed a humanoid robot that coped with the new challenge and
learned to lift an unknown weight by a series of trials and errors [7]. The robot was built
from the ROBOTIS Bioloid Premium kit and had 18 degrees of freedom, an acceler‐
ometer, a Bluetooth communication module, an IR sensor, and a sound sensor. The robot
was programmed using RoboPlus software.
The reinforcement learning scenario was as follows: The robot is given an unknown
weight while sitting down. The mass of the weight is estimated by measuring the angular
velocity of the robot shoulder joints in the way described in [7]. Then, the robot performs
weightlifting trials for different values of body tilt angles, each time attempting to stand
up from the sitting position. The robot evaluates whether it succeeded or failed the task
by determining if it remains standing or has toppled over, based on data provided by its
accelerometer [7].
Because of the memory limitation of the robot controller, the robot can store results
of a limited number of trials and only until the robot is powered off. Therefore, the
empirical data acquired from the robot trials were stored on a local computer. The
computer communicated with the robot via a Bluetooth interface supported by a Python
script. Based on these data, the computer provided the robot with the tilt angle value
suitable for successfully lifting the specific weight.
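
A minimal sketch of such a computer-side script is given below; it assumes the Bluetooth
link is exposed as a serial port handled with pySerial, and it uses a made-up line-based
message format (estimated mass in, suggested tilt angle out), since the actual protocol
used in the project is not detailed here.

import serial

# Trial results accumulated on the computer: estimated mass (g) -> best known tilt angle (deg).
trial_results = {}

def suggest_angle(mass, default=20):
    # Return the angle stored for the closest previously tested mass, or a default guess.
    if not trial_results:
        return default
    closest = min(trial_results, key=lambda m: abs(m - mass))
    return trial_results[closest]

# "/dev/rfcomm0" and the baud rate are placeholders for the actual Bluetooth serial settings.
with serial.Serial("/dev/rfcomm0", 57600, timeout=5) as link:
    mass = float(link.readline().decode().strip())        # robot reports the estimated mass
    link.write(("%d\n" % suggest_angle(mass)).encode())   # send back the suggested tilt angle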


In the following section we will discuss the way in which the robot weightlifting
task has been implemented in our current study.

3 Development of Robot Learning Mechanisms

The developed robot learning environment is presented in Fig. 1. It consists of three


components: the robot, the simulator (digital twin) and the cloud (ThingWorx). The
constructed robot is essentially the same as in the first stage of our project, but was
upgraded by adding grippers to suit barbell lifting. The digital twin is a virtual counter‐
part of the robot created to test robot functioning in the simulation mode instead of testing
the physical robot. The ThingWorx server is connected with the robot through the local
computer used as a routing point. ThingWorx also receives and analyzes data from the
simulator and sends recommendations for weightlifting posture to the robot upon
request.

Fig. 1. The implemented robot ML

3.1 Construction and Calibration of Virtual Twin


The virtual robot was created using the Creo modeling software. We took the computer
designed models of parts of the Bioloid premium kit from the ROBOTIS website and
imported them to Creo. Using these parts, we assembled the virtual robot in the same
order as the construction of the physical robot. Because the models of the robot parts on
the website do not have assigned weights, we weighed the parts and added the mass
properties to each of the parts in Creo. We also assigned to each joint of the virtual robot
the same range of motion as of the corresponding joint of the physical robot.
After assembling the virtual robot, we calibrated the model to have the same balance
characteristics as the physical robot. During this step, we modified several parts of the
model, such as its motors, to take into account their uneven distribution of mass.


Then, we compared the balance characteristics of the physical robot and its digital twin.
We tested the balance of the physical and virtual robots in the same posture shown in
Fig. 1, for different values of mass of the weight. For the virtual robot, the calculation
was made using the “center of gravity analysis” and “sensitivity analysis” features of
Creo. The maximal mass that each robot can hold in this posture without losing stability
was determined. We calculated that the discrepancy between the physical robot and its
virtual twin was less than 3%.

3.2 Simulation Analysis


The objective of the simulation analysis was to optimize the reinforcement learning of the
physical robot. Two possible approaches to such analysis were considered: real-time
online simulation and batch offline simulation. The latter approach was chosen as simpler
and more applicable to cloud-based learning. The approach implements massive multi-
parametric analysis (virtual testing) of weightlifting by the digital twin to create a
“space” of possible solutions and refine it into a “sub-space” of optimal solutions. Then
the optimal solutions are stored on the IoT platform and used in on-line communication
with the physical robot.
The problem for the simulation analysis was defined as follows: for the weight of a
given mass, test the balance of the virtual robot in its various postures over the range of
possible bending angles at the hip, knee, and ankle joints. The concept of a “virtual
sensor” was utilized – using capabilities of Creo, we have attached a “sensor” to the
center of gravity of the digital twin, to analyze the balance of the robot for different
combinations of possible angles of the joints. The angles ranged within 100° for the hip
and knee joints and within 90° for the ankle joint. We divided the ranges into intervals
of 10°. This resulted in 10 angles for the hip joint, 10 angles for the knee joint, and 9
angles for the ankle joint, which means that there were a total of 900 robot postures to
analyze for each value of mass of the weight.
The two parameters calculated for each posture of the virtual twin were: d, the
distance between the center of gravity and the center of the foot, and h, the robot’s height.
The distance d is used to evaluate the robot stability, and the height h is a parameter
which characterizes the robot’s posture when completing the task. Using “Design Study”
capability of Creo, the simulator determined various combinations of angles that allow
the robot to lift the given weight. By further analysis, the optimal combination (in which
the robot is in balance, h is maximal and d is minimal for this h) was found. We note
that the number of virtual trials can be reduced by using the Dynamic Analysis capability
of Creo.
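
The analysis itself was run inside Creo; purely as an illustration of the batch grid search
it performs, the following Python sketch enumerates the 900 postures and keeps the optimal
one. The angle grids and the evaluate_posture() function (standing in for the
center-of-gravity computation of the digital twin) are assumptions for illustration.

import itertools

HIP_ANGLES = range(10, 101, 10)    # 10 assumed values covering the 100 deg hip range
KNEE_ANGLES = range(10, 101, 10)   # 10 assumed values covering the 100 deg knee range
ANKLE_ANGLES = range(10, 91, 10)   # 9 assumed values covering the 90 deg ankle range

def find_optimal_posture(mass, evaluate_posture):
    # evaluate_posture(mass, hip, knee, ankle) -> (d, h, balanced) is a placeholder
    # for the virtual-sensor analysis performed by Creo.
    best = None
    for hip, knee, ankle in itertools.product(HIP_ANGLES, KNEE_ANGLES, ANKLE_ANGLES):
        d, h, balanced = evaluate_posture(mass, hip, knee, ankle)
        if not balanced:
            continue
        # Prefer the tallest balanced posture; break ties by the smallest distance d.
        if best is None or (h, -d) > (best[0], -best[1]):
            best = (h, d, hip, knee, ankle)
    return best  # (height, distance, hip, knee, ankle) or None if no balanced posture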

3.3 Cloud Management of Robot Learning Data

Each optimal solution, generated through simulation analysis for each mass value,
contained three parameters (hip, knee and ankle angles). A database of these optimal
solutions has been uploaded to a data table in the IoT platform ThingWorx.
We defined a function in ThingWorx which, upon getting the weight value as input,
utilizes the data table, and returns the corresponding angle values of the optimal solution.


When the physical robot has to lift a weight, it first measures the weight mass, and sends
its value to the ThingWorx server. In response, the robot receives the values of the three
angles suggested based on the simulation analysis. Then the robot executes the lifting
using those values.
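
A hedged sketch of the request made on behalf of the robot is shown below; the server
URL, application key, Thing name, service name and field names are placeholders for this
illustration, not the actual ones configured on our ThingWorx server.

import requests

THINGWORX = "https://thingworx.example.edu/Thingworx"   # placeholder server URL
HEADERS = {"appKey": "<application-key>",                # placeholder application key
           "Content-Type": "application/json",
           "Accept": "application/json"}

def get_posture_for_mass(mass_grams):
    # Invoke a (hypothetical) ThingWorx service that looks up the optimal posture
    # in the data table and returns it as an InfoTable with one row.
    url = THINGWORX + "/Things/WeightlifterRobot/Services/GetOptimalPosture"
    reply = requests.post(url, json={"mass": mass_grams}, headers=HEADERS, timeout=10)
    reply.raise_for_status()
    row = reply.json()["rows"][0]
    return row["hip"], row["knee"], row["ankle"]
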
To visualize the online communication between the robot and ThingWorx, we
created a mashup web page which serves as a dashboard for displaying parameters of
the robot weightlifting trial (the weight’s mass and the three angles). The mashup is
shown in Fig. 2.

Fig. 2. The mashup displaying the data table in ThingWorx

4 Educational Implementation

As noted in the introduction, our research project explores an approach to teaching robot
intelligence technologies to high school and first-year engineering students by engaging
them in teaching robots to learn. The instructional design in the project is implemented
by the authors of this paper. A faculty staff member develops instruction in robot
construction and programming and conducts courses for school and freshman students.
Two more instructional designers are Technion mechanical engineering graduates
working at PTC and pursuing an additional degree in science and technology education
in our faculty. In the research project, which is part of their studies, they develop
instructional units on 3D modeling and internet communication in robotics and mentor
school and freshman students in these topics. The strategic planning, project coordina‐
tion and supervision of the instructional designers are carried out by a faculty member who
also guides school and freshman students in pedagogical concepts relevant to robot
learning. Consultancy regarding the modeling and communication technologies and
software systems was given by PTC.


Three first-year students majoring in mechanical engineering at MIT have performed


robot learning projects in our lab, two students in 2015 and one in 2016. The learning
activities in the 2015 project are discussed in [7]. In the 2016 project the student
constructed and programmed the robot; created and calibrated the virtual twin;
programmed the balance analysis procedure for weightlifting; implemented virtual trials
and transmitted the results to the cloud data table; presented her work at PTC and faculty
seminars; and wrote a project report.
We implement the proposed approach by teaching intelligent technologies through
outreach courses to students of a high school in Haifa that has recently established an
engineering systems program. In the second stage of our research project, during the
2015–2016 academic year, we conducted pilot courses in 3D modeling and robotics to
11th graders. In the modeling course, the students learned to design and analyze
computer models of robots using Creo.
In the robotics course, they constructed various robots using the Bioloid kit and
implemented different scenarios of reinforcement learning. The students applied the
knowledge acquired in our courses in the project developed for participation in the
FIRST Robotics Competition. The school team participated in the 2016 international
competition in St. Louis and won the Rookie All Star Award for “implementing the
mission of FIRST to inspire students to learn more about science and technology.”

5 Conclusion

The approach proposed in our research extends the scope of educational robotics, which
traditionally focuses on practices with preprogramed robots. Results of our research
indicate that the challenge of developing learning robots can engage novice engineering
students in experiential learning of innovative concepts and technologies, such as
machine learning, parametric design, digital prototyping and simulation, connectivity
and internet of things. We found that those concepts and technologies are within the
grasp of understanding of freshman and HS students. In the next phase of the research,
we will practically implement our approach in an outreach course and we anticipate that
the evaluation of this experience will lead to the development of strategies for learning
with learning robots.

References

1. Peters, J., Lee, D.D., Kober, J., Nguyen-Tuong, D., Bagnell, D., Schaal, S.: Robot learning.
In: Siciliano, B., Khatib, O. (eds.) Springer Handbook of Robotics, pp. 357–394. Springer
(2017)
2. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge
(1998)
3. Kormushev, P., Calinon, S., Caldwell, D.G.: Reinforcement learning in robotics: applications
and real-world challenges. Robotics 2(3), 122–148 (2013)
4. Nguyen-Tuong, D., Peters, J.: Model learning for robot control: a survey. Cogn. Process.
12(4), 319–340 (2011)


5. Reckhaus, M., Hochgeschwender, N., Paulus, J., Shakhimardanov, A., Kraetzschmar, G.K.:
An overview about simulation and emulation in robotics. In: Proceedings of SIMPAR, pp.
365–374 (2010)
6. Kehoe, B., Patil, S., Abbeel, P., Goldberg, K.: A survey of research on cloud robotics and
automation. IEEE Trans. Autom. Sci. Eng. 12(2), 398–409 (2015)
7. Verner, I., Cuperman, D., Krishnamachar, A., Green S.: Learning with learning robots: a
weight-lifting project. In: Robot Intelligence Technology and Applications, vol. 4, pp. 319–
327. Springer (2017)
8. Harada, K., Kajita, S., Saito, H., Morisawa, M., Kanehiro, F., Fujiwara, K., Kaneko, K.,
Hirukawa, H.: A humanoid robot carrying a heavy object. In: Proceedings of the IEEE
International Conference on Robotics and Automation, pp. 1712–1717 (2005)
9. Arisumi, H., Miossec, S., Chardonnet, J.R., Yokoi, K.: Dynamic lifting by whole body motion
of humanoid robots. In: IEEE/RSJ International Conference on Intelligent Robots and
Systems, pp. 668–675 (2008)
10. Rosenstein, M.T., Barto, A.G., Van Emmerik, R.E.: Learning at the level of synergies for a
robot Weightlifter. Robot. Auton. Syst. 54(8), 706–717 (2006)
11. Kuo, C.H., Kuo, Y.C., Chen, T.S.: Process modeling and task execution of FIRA weight-
lifting games with a humanoid robot. In: Conference Towards Autonomous Robotic Systems,
pp. 354–365. Springer, Heidelberg (2012)
12. Michieletto, S., Tosello, E., Pagello, E., Menegatti, E.: Teaching humanoid robotics by means
of human teleoperation through RGB-D sensors. Robot. Auton. Syst. 75, 671–678 (2016)
13. Anderson, J., Baltes, J., Cheng, C.T.: Robotics competitions as benchmarks for AI research.
Knowl. Eng. Rev. 26(1), 11–17 (2011)
14. Ahlgren, D.J., Verner, I.M.: Socially responsible engineering education through assistive
robotics projects: the RoboWaiter competition. Int. J. Soc. Robot. 5(1), 127–138 (2013)
15. Verner, I.M., Cuperman, D., Cuperman, A., Ahlgren, D., Petkovsek, S., Burca, V.:
Humanoids at the assistive robot competition RoboWaiter 2012. In: Robot Intelligence
Technology and Applications 2012, pp. 763–774. Springer, Heidelberg (2013)

Interactive Platform for Embedded Software Development
Study

Galyna Tabunshchyk1 ✉ , Dirk Van Merode2, Peter Arras3, Karsten Henke4, and

Vyacheslav Okhmak1
1
Zaporizhzhya National Technical University, Zaporizhia, Ukraine
galina.tabunshchik@gmail.com, slavas490@gmail.com
2
Thomas More Mechelen-Antwerpen, Mechelen, Belgium
dirk.vanmerode@thomasmore.be
3
Faculty of Engineering Technology, KU Leuven, Leuven, Belgium
peter.arras@kuleuven.be
4
Integrated Communication Systems Group, Ilmenau University of Technology, Ilmenau,
Germany
karsten.henke@tu-ilmenau.de

Abstract. This paper describes a didactic system aimed at supporting remote
experiments in developing software for Embedded Systems. The Raspberry Pi,
which provides a variety of possibilities at low cost, is used as the basis for this
system. Demo experiments and possibilities for learning software development
for embedded systems are described.

Keywords: Remote experiments · Raspberry Pi · Reliability study

1 Introduction

The Internet of Things (IoT) is nowadays transforming from just a definition into a global
approach for the functioning and control of things in the world. Things refer to anything
which is used, controlled, measured and connectable to the internet. IoT refers to all
kinds of devices and (powerful) microcontroller systems that can read sensors, do some
preliminary digital signal processing and send output over the Internet to a variety of
users, being other machines or human users. The emergence of very powerful multi-core
microcontrollers, large working memories and a wide variety of commercial off-the-shelf
sensors enables this new, exciting and challenging market. It is projected that the
average number of microcontrollers per person is growing rapidly and will continue to
grow in the next few years [1]. In their new Internet of Things report, Businessinsider.com
projects there will be 34 billion devices connected to the internet by 2020 [2].
It should be clear that there are great job opportunities for specialists in this specific
high-skilled field of expertise. These specialists should have a profound knowledge of
both hard- and software aspects of the system, in interfacing with sensors, in using
embedded operating systems or real-time operating systems, but also on networking.


The task of higher educational institutes (HEIs) is to deliver to the labor market highly
skilled engineers and developers who have the knowledge to design, build, operate and
troubleshoot these devices. Especially in the Industrial Internet of Things, where these
systems are deployed in an industrial environment to run process-critical applications,
quality issues of the combined hardware and software become extremely important.
It is the task of HEIs to provide the most efficient and effective means for learning
the necessary skills. To acquire skills, technology education at any level requires
experimenting and practicing in labs. For embedded and digital systems, creating new
laboratories as remote labs offers the possibility to learn about these new features (IoT)
by using them. The main characteristics of a robust embedded system are reliability,
availability, fail-safety and dependability [3]. For this reason it is obviously important
to teach students techniques not only to use but also to develop reliable software for
embedded systems.
This paper describes the development of remote experiments for the study of
embedded systems and the reliability of the embedded software.

2 Embedded Systems Software Development Tasks

To come to a working and reliable embedded system (ES), a great variety of tasks has
to be solved. Besides the hardware design, there are the different software tasks which
need to be implemented in the embedded system. For this paper we only look at some
aspects of the software development.
We focus on two main tasks which an ES engineer needs to solve: the manipulation
of data elements on different levels, and the collection and processing of data.
Any lab – including remote labs - should offer enough possibilities for students to
experiment and offer measurable learning outcomes, associated with experimenting. In
other words, care should be taken that the remote lab is more than a demonstration lab,
but a real experiment – although controlled from a distance.
When developing remote experiments as teaching/learning aid, one should bear in
mind the same questions as when developing any other didactical method: namely think
carefully on the learning outcomes and teaching approach. The learning outcomes will
point out what and how students will need to learn and also point on how to evaluate [4].
These observations clearly show that remote labs not only have advantages but also
raise many challenges when considering the construction of remote experiments. The
advantage for students is clear: the 24/7 availability to experiment and repeat
experiments can motivate students to achieve deeper learning on the topics. The
challenges for the construction of the lab are to make it user-friendly, efficient in
achieving the learning outcomes, and motivating and attractive to students. Another
major challenge is the potential distant evaluation of and feedback to the students on
the mistakes/good and bad practices they used [3].
In order to improve teaching of diagnostic methods for embedded systems, a remote
lab (Interactive Lab Platform (ILP)) that examines reliability problems in real time was
constructed [4] based on the Raspberry Pi platform, which we named Informational
Systems on Reliability Tasks (ISRT). The Raspberry Pi is a basic embedded system and,
as a single-board computer, it is often used in cyber-physical systems and IoT
applications. Possible operating systems for the Raspberry Pi are Debian Linux in the
Raspbian Jessie distribution or Ubuntu Linux. The Raspberry Pi has Input/Output for
low-level control and, as it allows Linux to be installed on it, a web server and databases
can be installed as well. For these reasons and its extended range of possibilities, the
platform was chosen as the basis for the remote experiments. This also allows us to
collect data for further analysis.
In Sect. 3, the platform architecture and the different experiments are briefly described.

3 Interactive Lab Platform Description

3.1 Architecture of the Platform

The heart of the ISRT design is a Raspberry Pi Model B. The ISRT is also equipped
with an expansion board [4] developed at Thomas More Mechelen-Antwerpen (TMMA)
University College, an MI0283QT Adapter v2, Wi-Fi and BLE4 adapters for signal
variation tasks, a camera for video transmission tasks, and a display for online
compilation of the programs and results overview.
The software is designed on the principles of MVC (Model-View-Controller). For
software development the following elements were used: the NodeJS platform with an
integrated auxiliary module [6]; the Express framework; JavaScript as the programming
language; and the bcm2835 library [5].

3.2 Platform Possibilities

The outcomes of the remote labs are reached with different exercises. For a first
exploration of the system there are demo examples. Next, there are assignments on
ISRT software development and checking.

ISRT Demo Examples. For a first exploration of the hardware architecture which can
be built with a Raspberry Pi as a server, four demo modules were developed which realize
different ways of using the ISRT system:

3.2.1 Manipulation with LEDs on the Expansion Board


In this demo the Raspberry Pi was fitted with the expansion board [5]. The students’ task
is to convert a number from one numeral system into another (binary to hex, hex to
binary, oct to binary). A camera is used to show the expansion board display containing
the number in hex or oct, while the number in the binary system is displayed by the
LEDs. So students have a visual of both the task and the solution. The complexity of the
task increases progressively with each step. The next task is reachable only after a
correct answer to the conversion. The time spent by the student on solving each number
is recorded.
The main outcome of this demo experiment is to provide students with knowledge of the
bcm2835 library and its possibilities for LED manipulation.


This experiment can also be used for assessing the knowledge of first-year bachelor
students of the binary system.
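
The demo itself drives the LEDs through the bcm2835 C library; as a language-consistent
illustration of the same idea, the Python sketch below writes the binary representation
of a number to a row of LEDs using the RPi.GPIO module. The pin numbers are made up and
do not describe the actual wiring of the TMMA expansion board.

import RPi.GPIO as GPIO

LED_PINS = [5, 6, 13, 19, 26, 16, 20, 21]   # assumed BCM pin numbers, most significant bit first

def show_binary(value):
    # Light one LED per bit of the given value (8 bits in this sketch).
    GPIO.setmode(GPIO.BCM)
    for i, pin in enumerate(LED_PINS):
        GPIO.setup(pin, GPIO.OUT)
        bit = (value >> (len(LED_PINS) - 1 - i)) & 1
        GPIO.output(pin, GPIO.HIGH if bit else GPIO.LOW)

show_binary(0x5A)   # the student checks the LED pattern against the hex value on the display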

3.2.2 Manipulation with a Light Sensor


The expansion board [5] also contains light and temperature sensors. This demo
experiment allows students to change the distance between the light source and the
sensor, to measure the luminosity, and to build a chart representing the relation
between distance and luminosity.
The main outcome of this demo experiment is to provide students with knowledge of
storing and processing information from different types of sensors.
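
As an illustration of the storing and processing step, the short Python sketch below
records (distance, luminosity) pairs into a CSV file that can later be plotted;
read_luminosity() is only a placeholder for the board-specific sensor read-out, which is
not detailed here.

import csv

def read_luminosity():
    # Placeholder: replace with the expansion-board specific sensor read-out.
    return 0.0

def record(distances_mm, path="luminosity.csv"):
    with open(path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["distance_mm", "luminosity"])
        for d in distances_mm:
            input("Place the light source at %d mm and press Enter" % d)
            writer.writerow([d, read_luminosity()])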

3.2.3 Face Detection Demo


The face detection demo lets students check the time needed by the face detection
algorithm of the OpenCV Python libraries [8]. There are two possibilities: to work
either with the Raspberry Pi Professional Infrared Camera OV5647 (internal) or with a
standard web camera (external) (Fig. 1).

Fig. 1. Face detection demo

Experiments for face recognition are developed with the standard OpenCV library.
The main study outcomes are knowledge of the standard OpenCV library and of the
influence of the type and strength of light and the type of camera on the time delay in
facial recognition. Understanding delay and execution times is very important for
real-time applications in general, and for using video streaming of live pictures in
particular.
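
A minimal timing sketch along these lines is shown below; the camera index and the
standard Haar cascade shipped with the opencv-python package are assumptions about the
concrete set-up used in the lab.

import time
import cv2

# Standard frontal-face Haar cascade bundled with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)   # 0 = first attached camera (assumption)

ok, frame = camera.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    start = time.time()
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    elapsed = time.time() - start
    print("Detected %d face(s) in %.3f s" % (len(faces), elapsed))
camera.release()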

3.2.4 GSM Module Manipulation (Fig. 2)


One of the common tasks in ES is to provide 24/7 access to remotely working systems.
To ensure the robustness of such a system, it should be accessible through all possible
protocols: Wi-Fi, BLE, GSM.


Fig. 2. GSM demo

For the GSM module manipulation a SIM900 module was provided. Students can send an
SMS via a Ukrainian provider and can display the last SMS sent to the GSM module
(Fig. 2).
The main outcome is that students understand the communication pipeline they use and
how it is affected by different components of the system.
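
A hedged sketch of the underlying AT-command exchange is given below; the serial device
name, baud rate and phone number are placeholders, and error handling is omitted for
brevity.

import time
import serial

def send_sms(number, text, port="/dev/ttyAMA0", baud=9600):
    # Send one SMS through a SIM900 module using standard AT commands.
    with serial.Serial(port, baud, timeout=2) as gsm:
        gsm.write(b"AT+CMGF=1\r")                      # switch the module to text mode
        time.sleep(0.5)
        gsm.write(('AT+CMGS="%s"\r' % number).encode())
        time.sleep(0.5)
        gsm.write(text.encode() + b"\x1a")             # Ctrl-Z terminates the message body
        time.sleep(3)
        return gsm.read(gsm.in_waiting or 1)           # raw module reply

send_sms("+380000000000", "Hello from the ISRT lab")   # placeholder phone number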

ISRT Software Development. Students can practice their skills on the remote system
and have assignments on the platform for formal evaluation.
The assignment is to control different remote experiments by a self-written program.
This task is performed in the built-in editor on the Raspberry Pi. The programming mode
allows the user to create and run applications developed in C++ and Python on the
Raspberry Pi platform directly from the panel (Fig. 3).

Fig. 3. Remote software development

Access to the software development interfaces of the ISRT is allowed only to registered
users. As for all processes, a log file is provided for control and for checking whether
students participated.
After login, the user gets access to the programming page. On this page there is the
list of all the user’s stored programs. Students can thus edit previously developed
programs or create new ones from scratch. After the user chooses a file, the code
editor will be shown and the user can write/edit the required code. When selecting the
“compile” button, the program is sent to the server. If compilation is successful the user
is forwarded to the output page. If there are errors, they are displayed on the code editor
in a red frame. At the output page it is possible to execute the program, clear the output
screen, see the real time video of the experiment and return to the editing.
The ISRT lab allows access to the laboratory equipment 24/7.
All developed experiments are used in bachelor and master courses for the Embedded
System Software Development modules at Zaporizhzhya National Technical University.
Examples of the tasks are:
– to develop a program in Python for adding binary numbers and displaying the results
of the addition on the display of the TMMA expansion board;
– to create a test loop and measure the Mean Time To Failure (MTTF) of the SIM900
(a minimal calculation sketch is given below, after Fig. 4);
– to create a program for face detection and compare its response time to that of the
same algorithms from the OpenCV library (Fig. 4).

Fig. 4. Remote led manipulation example
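
For the MTTF task in the list above, a minimal calculation sketch follows; it uses the
common point estimator that divides the accumulated failure-free operating time by the
number of observed failures, while the actual logging of SIM900 failures in the test loop
is left to the student.

def mttf(run_times_s):
    # Estimate Mean Time To Failure from a list of failure-free run times (seconds),
    # each of which ended in an observed failure.
    if not run_times_s:
        return float("inf")
    return sum(run_times_s) / len(run_times_s)

# Example with made-up data: three runs of the SIM900 loop ended after these times.
print("Estimated MTTF: %.1f s" % mttf([5400.0, 7200.0, 6300.0]))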

For the master students the idea of system reuse is implemented. Master students have
the task of developing a measurement system for a certain defined purpose (e.g.
ecological measurements, climate control measurements, etc.). They can (re)use the
templates developed on the system for the development of their own personal measurement
system, providing reliability tests for the specified lab hardware.

4 Conclusion

Questions of software and hardware reliability are of great importance. For embedded
systems this is a challenge, as software and hardware reliability should be addressed
simultaneously. For the task of building reliable software a low-cost system was
developed, which allows students to remotely practice skills in embedded software
development in C++ and Python for the Raspberry Pi. The developed remote lab makes a
variety of tasks possible on embedded platforms, from the basics of embedded systems to
the calculation of reliability characteristics. Future work is to provide built-in
solutions for automated testing of the plugged-in components.


Acknowledgement. This work was done in the frame of the international project Tempus 544091-
TEMPUS-1-2013-1-BE-TEMPUS-JPCR [DesIRE] [7]. We want to thank the EmSys Group from
Thomas More Mechelen-Antwerpen University College for their support of our work with the
TMMA expansion board.

References

1. IDC (2016): Worldwide Internet of Things Forecast Update 2016–2020. Doc # US42082716.
http://www.idc.com. Accessed 17 Jan 2017
2. Camhi, J.: BI Intelligence projects 34 billion devices will be connected by 2020, Internet of
Things report, BI Intelligence (2015). www.businessinsider.com. Accessed 17 Jan 2017
3. Kozik, T., Simon, M., Arras, P., Olvecky, M., Kuna, P.: Remotely controlled experiments. In:
Noga, H., Cernansky, P., Hrmo, R. (eds.) Nitra, Slovacia: Univerzity Konstantina Filozofa v
Nitre (2016)
4. Tabunshchyk, G.: Remote experiments for reliability studies of embedded systems. In:
Tabunshchyk, G., Van Merode, D., Arras, P., Henke, K. (eds.) Proceedings of XIII
International Conference on REV2016, Madrid, Spain, 24–26 February 2016, pp. 68–71.
UNED (2016)
5. Raspberry Pi b/b+ compatible expansion board. http://emsys.pbei.be/?product=raspberry-pi-
bb-compatible-expansion-board
6. Platform NodeJS. https://nodejs.org/en/
7. Tabunshchyk, G., Van Merode, D., Petrova, O., Ochmak, V.: Multipurpose educational system
based on Raspberry Pi. In: Proceedings of the International Symposium on Embedded Systems
and Trends in Teaching Engineering, Nitra, Slovakia, 12–15 September, pp. 202–206 (2016)
8. Brahmbhatt, S.: Practical OpenCV (Technology in Action), 1st edn., 244 p. Apress, New York
(2013)
9. DesIRE Project website. http://tempus-desire.eu/. Accessed 17 Jan 2017

Integrated Complex for IoT Technologies Study

Anzhelika Parkhomenko ✉ , Artem Tulenkov, Aleksandr Sokolyanskii,



Yaroslav Zalyubovskiy, and Andriy Parkhomenko


Zaporizhzhya National Technical University, Zaporizhzhya, Ukraine
parhom@zntu.edu.ua

Abstract. As is known, the Internet today is not only an environment for communication
and information exchange between people, but also a tool and technology for
interaction between customers, “things” and devices. Therefore, industry wants
to effectively design, create and deploy modern smart connected products and needs
relevant professionals with a wide breadth of knowledge and skills in business
intelligence, hardware engineering, information security and the Internet of
Things (IoT). Integrating IoT study into the curriculum is a topical task, because
it gives real possibilities to enhance students’ competitiveness in a rapidly changing
labor market.
The purpose of this work is the realization of practice-oriented approaches and
methods in the educational process of future IT professionals, based on the REIoT
complex - a Smart House&IoT lab integrated with the remote lab RELDES. The complex
is based on the Arduino, Raspberry Pi and OpenHAB platforms. The OpenHAB REST API
has been used for the integration of the remote lab RELDES with the Smart House&IoT lab.
This allows remote access to the Smart House&IoT lab experiments and their
states, as well as status updates or the sending of commands to the experiments.
The application of the REIoT complex in the educational process gives students all
the advantages of remote experiments and the possibility of practical study of
different IoT platforms, sensors, actuators, protocols and interfaces.

Keywords: Internet of Things · Remote laboratory · Embedded Systems · Smart House · Practically-oriented teaching methods

1 Introduction

Today the IoT technologies greatly extend the possibilities of collecting, analyzing and distributing data, which humanity can transform into information and knowledge. The Internet of Things opens new perspectives and gives more opportunities to increase economic efficiency by automating processes in various fields of activity [1]. At the beginning of 2016 the main segments for IoT application were Manufacturing, Energy and Transportation [2]. The impact of the IoT on companies’ activities is increasing. Smart, connected products and the data they generate are transforming traditional business functions, sometimes significantly [3].
Of course, there are still many issues that must be solved: the need for more and more unique IP addresses, autonomous power supply for sensors, IoT device certification, security, protection of personal information, etc. [1].

© Springer International Publishing AG 2018


M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_31

But even today, thanks to IoT technologies, the world is beginning to interact with physical and virtual “things” and devices in a different way. For this reason, companies need more and more experts for the development and implementation of new technologies for effective interaction between customers and “things”. Many companies have already felt a sharp lack of such specialists.
Therefore, the task of teaching IoT technologies is relevant and is focused on the formation of students’ knowledge in the field of modern IoT software and hardware, and of practical skills in the application of existing platforms and devices.

2 Concept of REIoT Complex

Teaching IoT technologies is not a simple task. On the one hand, there are a lot of websites, webinars, documentation and materials on this topic [4–12]. On the other hand, even the interpretations of the term «IoT» in various papers sometimes differ significantly. In addition, a huge number of different devices and platforms for creating IoT systems are described, and it is sometimes problematic for students and teachers to sort out the necessary information in this variety. Therefore, the task of creating an IoT teaching-learning environment for the training of future professionals is urgent.
Like the author of [13], we have accepted the following definition as a basis:
«The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded
computing devices within the existing Internet infrastructure». In this case it is possible
to distinguish several practically-oriented educational tasks: learning of Embedded
Systems (ES) software and hardware, analysis of existing approaches to ES design,
studying of the principles of ES interaction and connection to the Internet [14, 15].
As is known, ES can be used in conjunction with sensors/actuators for collecting information and turning the collected or received information into actions. The ES can also use a range of technologies for connecting with other devices or the Internet (Wi-Fi, Bluetooth, RFID, Ethernet, GSM, CDMA and so on) [16]. More often, the concept of IoT is inseparably connected with something smart: Smart House, Smart Transport, Smart City, Smart Business and so on [17]. Technologies for creating a Smart House are interesting and useful for students, as they make our life more comfortable and safe and provide resource savings. That is why the REIoT complex for IoT technologies study and investigation includes two integrated parts - the Smart House&IoT laboratory and the laboratory RELDES (REmote Laboratory for Development of Embedded Systems) [18–20]. Integration of a remote laboratory into the educational process expands the opportunities of distance learning and gives the students all the advantages of remote experiments [21–27].
The REIoT complex architecture is shown in Fig. 1. The concept of the Smart House&IoT lab's stand passed through several stages of design (Fig. 2).

Fig. 1. REIoT complex architecture

Fig. 2. Concept of Smart House&IoT lab stand

Eventually we used the two most popular embedded platforms for IoT smart devices - Arduino and Raspberry Pi [13, 17, 27] - as well as OpenHAB (Open Home Automation Bus). OpenHAB is software for integrating different home automation systems and technologies into one general solution that allows comprehensive automation rules and offers uniform user interfaces [28].

3 Features of REIoT Complex Realization

The Smart House&IoT lab set of experiments is based on Arduino NANO V3 boards (Fig. 3). The experiments are: Solar station, Lighting control, Climate control, Access control, Safety control, Zone control, Presence control, Ventilation, and Illumination control.

Fig. 3. Smart House&IoT lab structure

A Raspberry Pi minicomputer performs the role of the lab server, with the OpenHAB platform installed. The additional Modbus binding libraries (Modbus TCP Binding) are used to connect it with the Arduino boards over the USB interface using the Modbus RTU protocol. A serial RS-232 line is used for communication between the electronic devices.
Each Arduino board contains a program which handles the input and output data. Arduino uses the open Modbus Master-Slave library. With the help of this library, holding registers of signed or unsigned type, available for writing and reading, were created in each board (subsystem). The registers contain 8 or 16 elements of 16 bits each. Thus, the structure for data exchange is created. Each Arduino board operates as a Modbus slave; the OpenHAB platform acts as the master and reads or writes register data when it interrogates the devices. Each element of a register corresponds to an individual parameter of a sensor or actuator.
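The polling itself is performed by the OpenHAB Modbus binding. Purely as an illustration of this register-based exchange, the following Python sketch uses the pymodbus library to read and write the holding registers of one subsystem; the slave ID, register addresses and register meanings are hypothetical and do not reflect the actual register map of the lab.

# Illustrative sketch only: polling one subsystem's holding registers over Modbus RTU,
# analogous to what the OpenHAB Modbus binding does. Slave ID and register layout are assumed.
from pymodbus.client import ModbusSerialClient  # pymodbus 3.x assumed; keyword names vary by version

client = ModbusSerialClient(port="/dev/ttyUSB0", baudrate=9600, timeout=1)
client.connect()

# Read 8 holding registers from a (hypothetical) "Climate control" board with slave ID 3.
result = client.read_holding_registers(address=0, count=8, slave=3)
if not result.isError():
    temperature, humidity, air_quality = result.registers[0:3]
    print(temperature, humidity, air_quality)

# Write a (hypothetical) setpoint register, e.g. a target illumination level.
client.write_register(address=4, value=300, slave=3)
client.close()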
The laboratory includes an IP camera, D-Link DCS-2121, which transmits a video stream so that users can view the current status of the experiments. This IP camera is a complete system with a built-in CPU and web server that transmits high-quality video with a resolution of 1280 × 1024 pixels at 10 frames per second. The IP camera and the computer are connected via an Ethernet cable and interact using the TCP/IP protocol. A D-Link DIR-300 router allows connecting to the laboratory via Wi-Fi and also adding devices using network cables.
For the integration of the two parts of the REIoT complex, as well as for Smart House&IoT lab administration, the OpenHAB REST API was used [29]. To access the Smart House&IoT lab experiments, the RELDES administration system sends an HTTP GET request to the OpenHAB REST API and receives the results in JSON format.

On receiving the list of experiments available in the Smart House&IoT lab, RELDES includes them in the total list of experiments (Fig. 4); after that a queue, statistics and other functions become available for them.

Fig. 4. RELDES interface

Subsequently, to carry out an experiment, RELDES calls REST methods on the Smart House&IoT lab; for example, HTTP PUT and HTTP GET requests are used to change the illumination level and to check the result, as in the sketch below.
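The following Python sketch illustrates this kind of interaction with the openHAB REST API using the requests library; the host name and the item name are hypothetical and only stand in for the real configuration of the lab.

# Minimal sketch of how a client such as RELDES could talk to the Smart House&IoT lab
# through the openHAB REST API. Host and item names are assumptions.
import requests

OPENHAB = "http://openhab-lab-server:8080"

# 1. Discover the available items (experiments) as JSON.
items = requests.get(f"{OPENHAB}/rest/items",
                     headers={"Accept": "application/json"}).json()
print([item["name"] for item in items])

# 2. Send a command to a (hypothetical) illumination item to change its level.
requests.post(f"{OPENHAB}/rest/items/Illumination_Level",
              data="75", headers={"Content-Type": "text/plain"})

# 3. Read the state back to verify the result.
state = requests.get(f"{OPENHAB}/rest/items/Illumination_Level/state").text
print("Current illumination level:", state)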
In order to start the streaming broadcast, we used the utility ffmpeg [30]. FFmpeg is a set of free, open-source libraries that allow recording, converting and transmitting digital audio and video in various formats. ffmpeg captures video from our camera at a resolution of 1280 × 1024, encodes it to MPEG format at 10 fps with a bitrate of 800 kbit/s, and then sends it over HTTP to a local server, which forwards this video stream to the end user.
In order to divide the video into blocks (to cut out and select the part of the video belonging to each experiment), the "crop" filter is used. As a result, we obtain separate video fragments for each experiment or group of experiments.
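As an illustration of this processing chain, the following Python sketch invokes ffmpeg with the frame rate, bitrate and crop filter mentioned above; the camera URL, the crop geometry and the output target are assumptions, since the actual delivery to the local HTTP server depends on the lab configuration.

# Illustrative sketch: re-encode the IP-camera stream and crop out the region of one experiment.
import subprocess

CAMERA_URL = "http://192.168.0.20/video.mjpg"   # hypothetical camera stream URL

subprocess.run([
    "ffmpeg",
    "-i", CAMERA_URL,              # capture from the IP camera
    "-r", "10",                    # 10 frames per second
    "-b:v", "800k",                # ~800 kbit/s video bitrate
    "-vf", "crop=640:512:0:0",     # cut out the fragment for one experiment (geometry assumed)
    "-f", "mpegts",                # MPEG transport stream container
    "illumination_experiment.ts",  # a file here; in the lab the stream goes to a local HTTP server
])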

4 Students’ Knowledge and Practical Skills

The experiment «Solar station» is intended for studying the basics of solar energy and
the principles of solar battery power. The main components are a solar panel (6 V, 70 mA), a Li-Ion battery and a charger for Li-Ion batteries. The experiment «Climate control» is intended to study the basics of climate control using the digital temperature and humidity sensor DHT-11 and the analog air quality sensor MQ135. The experiment «Zone control», based on an optical pair with a photodiode, allows controlling the status of the perimeter and reacts when the perimeter is disturbed. The experiment «Presence control» is intended for studying the principles of lighting systems or other systems that control the human presence in the room. The experiment is based on presence-control devices which trigger reactions using a pyroelectric sensor and a Fresnel lens (pyroelectric motion sensor). The experiment «Ventilation» allows studying the basics of recovery units and air flow control. The experiment is built using the L298E electrical load driver, and it controls the loads with pulse-width modulation. The experiment «Light control» is intended for studying the basics of remote control of load objects using a relay module and multi-channel pulse-width modulation. The main components are: an LED strip, an RGB LED strip and the L293E load driver. The experiment «Illumination control» allows the students to perform light-level control of different areas using photo-resistors (Fig. 5). The experiment «Access control» is built around the RC522 RFID reader module with cards and key fobs. The experiment is intended for studying the principles of access and authorization systems. The experiment «Safety control» allows the students to study the principles of security systems that react to exceptional situations such as motion in a controlled zone. The subsystem can be either in the zone-control state or in the sensor-indication state. The experiment is based on a pyroelectric motion sensor combined with a sound alarm. Several experiments can be performed simul‐
taneously. In this case, the students study the principles of interaction between subsys‐
tems, define the process logic, create effects, evaluate the reaction of the elements and
analyze the results.

Fig. 5. Experiment «Illumination control» web-page

The students can use standard Actions, and create original ones, within Scripts and Rules for executing OpenHAB-specific operations. For example, Actions such as Telegram, my.openHAB and others can be used for notification of or feedback on Smart House&IoT lab events. Thus, the connection to Telegram allows sending messages to Telegram clients from a bot client (for example, sending notifications to the user when the air conditioner turns on or off). With my.openHAB, students can connect to OpenHAB from any device anywhere with an Internet connection, provide access to other users, and keep all activities and events in the my.openHAB cloud.
The administration of events executed by OpenHAB can be realized with the MailControl binding. It provides the possibility of receiving commands sent via email in JSON
format. The following types of commands can be sent: decimal, HSB, increase –
decrease, on – off, open – closed, percent, stop – move, string, up – down. Therefore,
one of the practical tasks for the students can be the development of desktop or mobile
applications for OpenHAB connection and control.
Integration of the REIoT complex with Google Calendar is also possible. Students can create events and manage the system on a schedule (turn lighting or air conditioning on/off, open/close the door at a predetermined time, etc.).
So, the students acquire IoT technology knowledge and practical skills by performing remotely controlled experiments, studying the descriptions of the experiments, carrying out various scenarios and developing scripts.

5 Conclusions

The REIoT complex for students' research and training brings together several subsystems to create a true Internet of Things for a Smart House. It is used for a variety of training tasks in several modes. In the first mode the complex provides the possibility of remote experiments on each subsystem separately and on the entire system as a whole. The descriptions of the experiments and the measurement results are available to students in this case. Another mode allows students to specify the logic of the system operation, to program it and to monitor its processes.
The implementation of real practical tasks based on modern technologies gives the students valuable practical experience in IoT engineering, the motivation to do research, to work in a team, to communicate with the customer, and to present the results of their work to an audience.
Practically-oriented teaching methods based on the use of the REIoT complex will provide students with the necessary knowledge and skills for the implementation of their own projects, as well as for the successful application of IoT technologies in their future professional activities.

References

1. What is the Internet of Things, IoT (in Russian). http://www.tadviser.ru/index.php/


2. Internet of Things (IoT). http://www.cisco.com/web/solutions/trends/iot/overview.html

3. Porter, M., Heppelmann, J.: How smart, connected products are transforming companies.
Harvard Business Review Articles. http://www.ptc.com/internet-of-things/harvard-
business-review/download-article-2#sthash.L5wGKzcN.dpuf
4. Knowledge is Power. https://www.thingworx.com/resources/?topic=&type=white-papers
5. Chung, J.-M.: Internet of Things & Augmented Reality Emerging Technologies (in Russian).
https://ru.coursera.org/learn/iot-augmented-reality-technologies/
6. Purpose-built for the Internet of Things. http://www.thingworx.com/
7. Internet of Things Institute Learning Center. https://education.ioti.com/
8. Internet of Things Roadmap to a Connected World. http://web.mit.edu/professional/digital-
programs/courses/IoT/
9. IEEE Internet of Things Webinars. http://iot.ieee.org/education/webinars.html
10. Vermesan, O., Friess, P.: Internet of Things - From Research and Innovation to Market
Deployment. River Publishers, Aalborg (2014)
11. Qiu, F.: Overall framework design of an intelligent dynamic accounting information platform
based on the Internet of Things. iJOE 12(5), 14–16 (2016)
12. Cai, K., Tie, F., Huang, H., Lin, H., Chen, H.: Innovative experimental platform design and
teaching application of the Internet of Things. iJOE 11(6), 28–32 (2015)
13. Stage 1 - Introduction to the Internet of Things: What, Why and How. http://
www.codeproject.com/Articles/832492/Stage-Introduction-to-the-Internet-of-Things
14. Tabunshchyk, G., Parkhomenko, A.: New technologies in the training of embedded systems’
specialists. In: International Scientific Conference on Internet-Education-Science, pp. 248–
250. VNTU, Vinnitsa, Ukraine (2014) (in Russian)
15. Prytula, A., Tabunshchyk, G., Parkhomenko, A.: Practically oriented teaching methods in the
field of embedded systems. In: VII International Scientific Conference on Current Problems
and Achievements in the Field of Radio, Telecommunications and Information Technology,
pp. 216–218. ZNTU, Zaporizhzhya, Ukraine (2014) (in Russian)
16. Internet of Things – Overview. http://www.codeproject.com/Articles/833234/Internet-of-
things-Overview
17. IoT Path to Product: Smart Home. http://www.codeproject.com/Articles/1119436/IoT-Path-
to-Product-Smart-Home
18. Parkhomenko, A., Gladkova, O., Kurson, S., Sokolyanskii, A., Ivanov, E.: Internet-based
technologies for design of embedded systems. J. Control Sci. Eng. 13(2), 55–63 (2015)
19. Parkhomenko, A., Gladkova, O., Ivanov, E., Sokolyanskii, A., Kurson, S.: Development and
application of remote laboratory for embedded systems design. iJOE 11(3), 27–31 (2015)
20. RELDES. http://swed.zntu.edu.ua
21. Tabunshchyk, G., Van Merode, D., Arras, P., Henke, K.: Remote experiments for reliability
studies of embedded systems. In: International Conference on Remote Engineering and
Virtual Instrumentation, Madrid, Spain, pp. 68–71 (2016)
22. Poliakov, M., Larionova, T., Tabunshchyk, G., Parkhomenko, A., Henke, K.: Hybrid models
of studied objects using remote laboratories for teaching design of control systems. iJOE
12(09), 7–13 (2016)
23. Gilibert, M., Picazo, J., Auer, M., Pester, A., Cusidó, J., Ortega, J.: 80C537 microcontroller
remote lab for e-learning teaching. iJOE 2(4), 1–3 (2006)
24. Guerra, H., Cardoso, A., Sousa, V., Gomes, L.: Remote experiments as an asset for learning
programming in python. iJOE 12(4), 71–73 (2016)
25. Ozvoldová, M., Ondrusek, P.: Integration of online labs into educational systems. iJOE 11(6),
54–59 (2015)

26. Wuttke, H.-D., Ubar, R., Henke, K.: Remote and virtual laboratories in problem-based
learning scenarios. In: IEEE International Symposium on Multimedia, Taichung, Taiwan, pp.
377–382 (2010)
27. Cvjetkovic, V., Matijevic, M.: Overview of architectures with Arduino boards as building
blocks for data acquisition and control systems. iJOE 12(7), 10–17 (2016)
28. Open HAB. http://www.openhab.org/
29. REST API. https://github.com/openhab/openhab/wiki/REST-API
30. FFmpeg. https://www.ffmpeg.org/

Incorporating a Commercial Biology Cloud
Lab into Online Education

Ingmar H. Riedel-Kruse(✉)

Department of Bioengineering, Stanford University, Stanford, USA
Ingmar@stanford.edu

Abstract. Traditional biology classes include lab experiments, which are missing from online education. Key challenges include the development of
online tools to interface with laboratory resources, back-end logistics, cost, and
scale-up. The recent emergence of biology cloud lab companies offers a
promising, unexplored opportunity to integrate such labs into online education.
We partnered with a cloud lab company to develop a customized prototype
platform for graduate biology education based on bacterial growth measure-
ments under antibiotic stress. We evaluated the platform in terms of (i) relia-
bility, cost, and throughput; (ii) its ease of integration into general course
content; and (iii) the flexibility and appeal of available experiment types. We
were successful in delivering the lab; students designed and ran their own
experiments, and analyzed their own data. However, the biological variability
and reproducibility of these online experiments posed some challenges. Overall,
this approach is very promising, but not yet ready for large-scale deployment in
its present form; general advancements in relevant technologies should change
this situation soon. We also deduce general lessons for the deployment of other
(biology and non-biology) cloud labs.

1 Introduction

A new paradigm has recently emerged for providing access to biology experimentation
through a distributed online platform known as cloud biology labs (Hossain et al. 2015;
2016; Lee et al. 2014; Hayden 2014). The notion is similar to the well-established
framework of cloud computing (Fox 2011) and complements ongoing advances in
life-science technology (Kong et al. 2012; Melin and Quake 2007), which have focused
mainly on automation and parallelization but have largely ignored issues related to
remote or shared access. This cloud lab technology reduces access barriers to costly
biological lab equipment and also abates the need for maintenance and hands-on
preparation, allowing users to concentrate on experimental design and data analysis.
Further advantages include reduced training needs, improved biosafety, facilitated data
tracking, and increased standardization (Sia and Owens 2015).
Two of these biology cloud labs have been successfully deployed recently in
academic settings for educational purposes: interactive chemotaxis experiments with
the slime mold physarum over the course of a day (Hossain et al. 2015), and real-time
interactive phototaxis experiments with the protist Euglena over the course of one
minute (Hossain et al. 2016); the latter lab in particular promises to scale at low cost to

© Springer International Publishing AG 2018


M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_32

massive user numbers (millions of students per year with a cost of less than 1 cent per
experiment). Recent years have also seen the emergence of dedicated biology cloud lab
companies with initial efforts largely focused on industrial applications (Hayden 2014);
however, there is an opportunity for partnership between commercial cloud lab pro-
viders and biology educators to teach lab biology via the web, which has not yet been
achieved.
There is a growing literature regarding the utility of educational online experiments
and how such remote labs should be designed (Lowe et al. 2012; de Jong et al. 2013;
Heradio et al. 2016; Wieman et al. 2008; Bonde et al. 2014; Sauter et al. 2013). Key
design principles include that students feel a real presence (such as via a live video) and
that the user interface is intuitive enough to effectively abstract the logistics of
preparation so that students can focus on experimental designs and strategies. The
development of large-scale cloud biology platforms could be instrumental in bringing
true laboratories into online education. The above cited research also compares real
labs (remote or local) to simulations (virtual labs), with the conclusion that both have
their situation-dependent advantages, and that ideally both are used synergistically.
The goal of the presented work is to assess whether and how commercial biology
cloud labs could be utilized for education. We partnered with the start-up company
Transcriptic (https://www.transcriptic.com/) to develop a customized cloud-based
biological experimentation activity for educational use. Transcriptic has been devel-
oping the Workcell platform (Fig. 1), in which a robot shuttles biological specimens in
96-well plates between experimental instruments such as liquid-handling robots,
imaging devices, and incubators. Experiments can be fully programmed in Python.
This overall framework is under constant development; for example, some experi-
mental steps are still executed by hand, but will eventually be automated. The vision
and roadmap to full and flexible automation of cloud experiments is clear. To assess the
affordance for future scale-up in educational settings, we deployed this platform in a
graduate class user study where students designed and analyzed their own experiments
to model the effects of antibiotics on bacterial growth curves. The main categories in
which we evaluated the system were (1) logistical feasibility and cost and (2) student
responses and potential educational outcomes.

2 Methods

All experiments were handled off-site by our commercial partner (Transcriptic, Inc.,
Menlo Park, CA). Transcriptic has developed a Workcell that automatically executes
all experiments (Fig. 1A). These experiments are executed in disposable, standard
96-well plates. A liquid handling robot mixes the corresponding solutions and dis-
tributes them among the individual wells, achieving the specific antibiotic concentra-
tions requested by the remote users. The Workcell shuttles these plates between
incubators and a plate reader that executes the measurements. This platform is con-
trolled via a Python-based framework. In order to enable student access, we developed
a Python-based user interface that enabled students to enter their experimental
instructions, read-off their results in graphical form, and download their experimental
data in csv format (Fig. 2A).
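Once a run has finished, students work with the downloaded CSV file offline. A small Python sketch of such post-processing is shown below; the column names ("time_min", "well", "od600") are assumptions for illustration, not the actual export format of the platform.

# Sketch of offline post-processing of a downloaded growth-curve CSV (column names assumed).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("growth_run.csv")
for well, group in df.groupby("well"):
    plt.plot(group["time_min"] / 60.0, group["od600"], label=well)
plt.xlabel("Time (h)")
plt.ylabel("OD600")
plt.legend()
plt.show()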

Fig. 1. Commercial biology cloud labs supported by robotics allow remote and controlled
execution of cellular and molecular biology experiments. (A) Workcell (Transcriptic)
automates the execution of life-science experiments in order to increase speed, ease, accuracy,
and reproducibility. This Workcell contains several instruments and a robot that moves the
sample plates among instruments. Image adapted from https://www.transcriptic.com/. (B) Top,
electron micrograph of bacteria. Bottom, schematic of a typical bacterial growth curve in which
optical density is measured as a proxy for bacterial concentration. Generally, four phases of
growth can be distinguished: lag phase, exponential phase, stationary phase, and decay phase.
(Image: Public Domain, Credit Rocky Mountain Laboratories, NIAID, NIH; Scale: individual
bacterium ~2 µm in length.)

Fig. 2. User interface, cloud lab technology, and experimental data on bacterial growth.
(A) Website displayed to students after login. On the left, six different amounts of Kan can be
entered. On the right, the currently selected batch of data is displayed, updated every ~40 min
during the experiment to generate a new recorded data point. Current and previous batch data can
be selected for display. (B) Top: Plate reader used by the collaborating company; a robotic
platform shuttles the plate between the incubator and spectrophotometer. Bottom, transparent
96-well plates in which experiments are executed. Each student is assigned to six wells during
each run, enabling up to 16 students to execute experiments in parallel. (Images: Thermo Fisher
Scientific)

Bacterial growth experiments were performed in 96-well plates (Fig. 2B). At the
beginning of an experiment, a robotic pipetting assembly was programmed to load each
well with 150 µL of a suspension of Escherichia coli. The E. coli suspension was
prepared by diluting an E. coli stock (optical density at 600 nm (OD600) = 0.9) into
Miller’s Luria Broth (LB) at a 1:150 ratio. To generate the stock, the DH5α strain of
E. coli (Zymo Research Corp, US) was transformed with pUC19 plasmid. The robotic
pipettor then added the user-specified amount (0–50 µL) of the antibiotic kanamycin
(Kan; 40 µg/mL in LB). Each well was then filled to 200 µL with LB, yielding final Kan
concentrations of 0–10 µg/mL. A robotic setup then shuttled the plate to a spec-
trophotometer (Figs. 1 and 2) to measure and record the bacterial concentration of each
well using OD600. The plate was covered and placed in an incubator at 37 °C, with
shaking at 180 rpm. Every 47 min, the plate was shuttled back to the plate reader and
new OD measurements were recorded. This cycle was repeated continually overnight
for a total of 20 times to generate a full bacterial growth curve (Figs. 1, 2 and 3).
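The mapping from the user-specified volume of Kan stock to the final concentration in a well follows directly from the numbers above; a short worked check, using only values from the text, is given here.

# Worked check of the dosing arithmetic described above (values taken from the text).
STOCK_KAN = 40.0       # µg/mL, kanamycin stock in LB
FINAL_VOLUME = 200.0   # µL per well after topping up with LB

def final_kan_concentration(volume_added_ul):
    # Final Kan concentration (µg/mL) for a user-specified stock volume (0-50 µL).
    return volume_added_ul * STOCK_KAN / FINAL_VOLUME

for v in (0, 10, 25, 50):
    print(f"{v:>2} µL of stock -> {final_kan_concentration(v):4.1f} µg/mL")
# 50 µL of the 40 µg/mL stock in a 200 µL well gives the maximal 10 µg/mL.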

Fig. 3. Examples of experimental growth curves and of experimental errors encountered during the study. (A) Example of an experimental recording showing a linear sweep from 0 to 50 µL Kan
(the maximal amount). We see the expected growth behavior as in Fig. 1B, with increasing Kan
resulting in a slower or delayed exponential phase and lower plateau phase; decay phases are
partially visible. (B) Different types of errors occurred during some experiments: (i) systematic
shifts of individual data points (according to the company this was due to condensation on the
lids); (ii) one of the samples had a significantly shorter lag phase than the others; (iii) one sample
still grew but was much delayed, even with saturating Kan levels (likely due to a spontaneous
mutation in the sample); and (iv) same Kan concentration plateaus at different levels. (Legends
on the right indicate Kan concentrations)

These activities were deployed in a graduate-level, lecture-based class on the biophysics of multicellular systems. Students used the Transcriptic web interface from
home. Students handed in a homework report, and later were given a post-questionnaire.
Students were asked to provide written consent to have their homework and question-
naires analyzed for this study; all students gave permission to do so.

3 Results

Among the many possible experiments available on this platform, we chose bacterial
growth in the presence of different antibiotic concentrations, as it suited the content of a
college-level class (and was likely suitable for middle and high school) and had the
potential for easy adaptation on the existing Transcriptic platform. We developed a
custom web interface (Fig. 2A) that allowed students to run and evaluate experiments
to test the effects of antibiotic concentration on the growth of bacterial populations.
Each user accessed six wells; wells from up to 16 users could be combined for a
parallel run on a 96-well plate. For each of the six wells, the user specified varying amounts
of an antibiotic to add to the medium (from 0 to 10 µg/mL Kan), which affected growth,
primarily leading to delayed onset of growth, slower growth, and lower maximal OD600
(Methods). Experiments ran overnight; the next day, users logged on to the web
interface to view and download the resultant bacterial growth curves (Fig. 2A).
Prior to the user study, we performed approximately 10 test runs. Overall, these
initial test experiments were stable and satisfactory, with the typical experimental
outcome leading to Hill-type functions (Figs. 1B, 2A and 3A). The OD600 began at a
low and reasonably steady value (OD600 = 0.2; lag phase), increased over time (5–
10 h; exponential phase), and eventually plateaued (OD600 = 0.2 – 0.5; stationary
phase), yielding the familiar sigmoidal curve associated with bacterial growth. The
somewhat high initial OD600 reading of 0.2 reflects the initial bacterial starting con-
centration as well as the fact that readings were not normalized against initial readings or
blanks. Maximal OD600 is dependent on the total volume of media in the plate, the well
size, culture aeration during growth, and the bacterial strain. In the presence of Kan, the
entry into exponential phase was delayed, leading to a lower maximal OD600, con-
sistent with the literature (Faraji et al. 2006; Lin et al. 2000). At the maximum Kan
concentration (10 µg/mL), growth was completely inhibited, yielding a flat growth
curve at OD = 0.2.
The platform was deployed in an advanced undergraduate/graduate-level class on
modeling multicellular systems. Thirteen students of both genders with various back-
grounds related to engineering, physics, and biology took the class. Some of the
students did not have any hands-on biology lab experience beyond what is standard in
high schools, while others had taken extensive biology lab courses or had previously
worked in wet labs including performing equivalent bacteria sample preparation and
OD measurements on bacterial growth.
As part of a lecture module on biological growth and competition, students were
provided with the theoretical background and pointed to review papers on how to
model bacterial growth under the influence of antibiotics. The students were then
introduced to the commercial cloud platform and given an open-ended homework

assignment to model the growth of bacteria in the presence of antibiotics. Students were responsible for designing their experiment to generate data on the cloud platform.
Each student had a total of six overnight experimental rounds with six wells per round.
In addition, students were asked to develop models to analyze and fit the data collected,
and to write a short 1–2 page report detailing their conclusions on the effects of
antibiotics on bacterial growth. In their reports, students were required to explain the
rationale behind their experimental strategy and encouraged to explore multiple models
to explain their results. Throughout the process, experimental design, hypothesis
testing, and data fitting were largely left open to students, apart from the review papers.
Data were collected for the user study by assessing the homeworks and a
post-experiment questionnaire.
Students used distinct experimental strategies (Table 1). The first strategy used the
six available wells to “sweep” through the available antibiotic quantities (0–50 µL),
presumably to test for a dynamic range of response. Examples of this strategy included
linear [0 10 20 30 40 50] µL and logarithmic [0 1 3 9 27 50] µL sweeps. In total, 9/13
students used some sort of sweep strategy on their first experimental run, one even
without testing for 0 and 50 µL. The second strategy was to use the six wells to run a
series of replicates, presumably to calibrate reproducibility of the system. One example
of this strategy was performing replicates at the extremes: [0 0 0 50 50 50] µL. Overall,
3/13 students used this second strategy. Third, one student used a hybrid of the two
strategies by running a triplicate at 0 µL Kan, in addition to a coarse sweep with the
three remaining wells [0 0 0 2 10 50] µL. Fourth, one student started with all values at
20 µL. We note that experimental freedom is rather limited with this system; within a
few rounds of experimentation, students are equipped to answer basic questions about
the overall dependence of bacterial growth on antibiotic concentration, and to assess
within- and between-day variability.

Table 1. Distinct initial experimental strategies and answers from the post questionnaire.

Strategy | Description | Examples | # students during exp | # students post exp | Incl. 0 | Incl. 50 | Sweep linear | Sweep log | Replicate
1a | A linear sweep through full parameter ranges | [0 10 20 30 40 50] | 5 | 5 | X | X | X | |
1b | A logarithmic sweep through full parameter ranges | [0 1 3 9 27 50] | 3 | 1 | X | X | | X |
1c | A single logarithmic sweep (w/o 0, 50) | [2 4 8 16 24 48] | 1 | 0 | | | | X |
2a | 2 triplets at the extremes | [0 0 0 50 50 50] | 1 | 3 | X | X | | | X
2b | 2 triplets, one at 0 and one low | [0 0 0 10 10 10] | 1 | 0 | X | | | | X
3 | Mix of control and spanning the space | [0 0 0 2 10 50] | 1 | 1 | X | X | | X | X
4a | 6 times same average | [20 20 20 20 20 20] | 1 | 0 | | | | | X
4b | Full blank control | [0 0 0 0 0 0] | 0 | 3 | X | | | | X

During the experiments, we observed that the quality and reproducibility of experimental results varied and ultimately deteriorated between runs. The first run was
fully functional and returned the expected growth curves (Fig. 3A). The second run
was also fully functional for most of the students. However, in some cases (Fig. 3B),
we observed that at certain time points, all OD600 readings systematically increased, the
variability between replicates increased, and bacteria grew even under high Kan con-
centrations. There were at least two main reasons for these inconsistent results, as
clarified later with Transcriptic. (1) The robot handling logistics had some issues: the
lid collected condensation due to delayed shuttling of the plate between the incubator
and the reader, because Transcriptic’s robot was executing many other experiments in
parallel. (2) Bacterial cultures were taken from a stock generated specifically for this
study and stored at 4 °C instead of freshly prepared every time in order to reduce costs.
Thus, although batch-to-batch variations due to stock preparation were eliminated, the
time in storage likely led to the emergence of resistant strains in the starter culture.
None of these issues had arisen during our pre-tests prior to classroom use. As a result
of these issues, several students obtained variable, unexpected results from their runs.
While it would have been feasible to restart the entire set of runs with fresh bacterial
cultures at a later time in the course, we ultimately decided, in conjunction with student
feedback, to continue the experimental runs as they were. Students had the option to
either use their own data or to use pre-recorded data sets of higher quality that had been
obtained when the protocols were developed (about half of the students did, the others
did not).
Students used a variety of models for fitting their growth curves to explain the
results in their reports. Roughly half (7/13 students) applied growth models from
review papers (Buchanan 1997; Faraji et al. 2006; Lin et al. 2000; Swinnen 2004;
Zwietering et al. 1991) that we provided in the assignment handout, whereas the rest of
the class developed novel models or sought other models from the literature (Fig. 4).
Depending on the model, students needed to deduce 5–7 parameters; the quality of the
match depended on the chosen model. Some students chose global fitting strategies for
all data at once, while others (Fig. 4) systematically determined individual parameters
from subsets of their data.
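As one concrete (and purely illustrative) example of such a fit, the sketch below uses SciPy to fit a single growth curve to a four-parameter sigmoid of the kind used in Fig. 4; the data points are fabricated placeholders with the sampling interval from the Methods, not measurements from the study.

# Minimal sketch of fitting one growth curve to a four-parameter sigmoid (illustrative data).
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, od0, od_max, rate, t_mid):
    # Baseline OD, plateau OD, growth rate and mid-point time of the curve.
    return od0 + (od_max - od0) / (1.0 + np.exp(-rate * (t - t_mid)))

t = np.arange(20) * 47.0 / 60.0   # one reading every 47 min, 20 cycles, in hours
od = np.array([0.20, 0.20, 0.21, 0.22, 0.24, 0.28, 0.33, 0.39, 0.44, 0.47,
               0.49, 0.50, 0.50, 0.51, 0.51, 0.51, 0.50, 0.50, 0.50, 0.50])

params, _ = curve_fit(sigmoid, t, od, p0=[0.2, 0.5, 1.0, 6.0])
print("od0, od_max, rate, t_mid =", np.round(params, 3))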
Although these data could have been subjected to error analysis, we did not request
the students to do so, given that error analysis was not the focus of the class and given
the technical challenges experienced during some of the experiments. More than half of
the students (7/13) nevertheless presented some analysis of uncertainty in their fit
parameters, computed statistical errors, or otherwise made comments on error and noise
with regard to their results.
In the discussion, all students critiqued their own modeling and experimental
strategies, as instructed. More than half of the students (7/13) responded that they could
have improved their reports with less variable data, while 7/13 students proposed
variations on the growth model to test in the future. Notably, 5/13 students explicitly
suggested improvements to their experimental strategy. Overall, students self-reported
that they spent an average of 10 h (min/max: 5/20 h) on all activities (planning,
experiment, data analysis, and reporting). The most time was spent developing models,
analyzing data, and writing reports, with much less time spent performing the actual
experiments.

Fig. 4. Example data set and fitting approach from one student. (Images are direct
replications of student’s homework reports.) (Left) Data for six Kan amounts (dots) with fitting
curves (solid) superimposed. For the 0 µL Kan condition, multiple data repeats were collected;
for 25 µL Kan, two were recorded; all others are single runs. (Right) The student chose a
sigmoidal model with four independent parameters—note that these four parameters could have
been defined differently by using a different notation for the sigmoidal curve. The table shows the
fit parameters for each condition. A linear dependency on Kan amount is assumed (fitted) for
each of these four parameters, leading to eight parameters shown in the table in panel D. One
component is found to be zero, leaving the student with a 7-parameter fit to explain bacterial
growth under the influence of Kan, which leads to the fit curves in A.

In order to assess whether students gained experience or changed their approach to the task after using the cloud lab, the post-assignment questionnaire included a question
about how they would design their first experimental run if they were to repeat the
assignment. We again categorized the experimental strategies (Table 1) and observed
that the number of students choosing a sweep strategy diminished from 9/13 to 6/13.
Those six students used the replicate strategy, including three students who suggested a
full blank control [0 0 0 0 0 0] µL, as well as a single student who used the hybrid
strategy. All students now included at least one negative control well (0 µL Kan),
which had not been the case during the original experiment. More students now also
proposed a full parameter exploration. We cannot resolve these transitions on the
individual student level, as the questionnaires were anonymous.
In order to assess the potential learning benefit from this type of activity, we asked
the students to “State up to 3 things that you feel you have learned/gained/benefited
from this activity.” We categorized the student responses, with some answers appearing
in multiple categories (numbers in parentheses are the number of answers in that
category with one example): (i) Biological content about bacterial growth (6, “I learned
more about bacterial growth”); (ii) Biology cloud lab technology (4, “learned what a
cloud lab is”); (iii) Insights into experimentation (2, “I learned experimental design”);
(iv) Experimental noise and errors (6, “I learned more about experimental noise”);
(v) How to develop and evaluate models (8, “learned about growth curve model
types”); (vi) Data fitting (5, “Data fitting and analysis”). Two answers did not fit into
these categories (“remembered why I don’t like experimental research”, “freedom of
modeling framework as opposed to the other homework”). Hence, the students
reported learning advances on a variety of issues relating to biological content,

Table 2. Overview of the post-activity questionnaire. Questions were grouped according to topic (whether the activity was liked, its utility for this specific class, whether the experiments were too simple, the value of this being a real experiment, and the noisiness/reliability of the data); the first column indicates the order in which questions were initially presented to the students (1, "strongly disagree"; 9, "strongly agree").

Order | Question | Disagree (1-3) | Neutral (4-6) | Agree (7-9) | Mean
15 | I enjoyed this activity. | 2 | 5 | 6 | 5.9
9 | The user online interface was intuitive to use / appropriate for the task. | 0 | 2 | 11 | 7.9
2 | The activity was worth the time and effort. | 2 | 6 | 5 | 5.4
6 | This activity was a valuable addition to the class. | 2 | 6 | 5 | 5.8
1 | The activity reinforced concepts in class. | 2 | 4 | 7 | 6.0
7 | The experiments are too simple to be useful in a graduate level class. | 3 | 6 | 4 | 5.2
3 | The experiments were far too simple – this activity needs more degrees of freedom. | 0 | 5 | 8 | 7.2
13 | It was good that I could change only one experimental variable as the activity overall had much complexity. | 4 | 7 | 2 | 4.8
4 | Substituting with pre-recorded data would have had the same learning outcome for me. | 4 | 1 | 8 | 6.4
16 | Substituting with a simulation would have the same learning outcome. | 7 | 4 | 2 | 4.1
11 | The fact that these were real experiments executed in real time made it more interesting for me. | 2 | 3 | 8 | 6.6
5 | The fact that the data was so noisy made me think deeper and therefore was actually a good thing. | 4 | 4 | 5 | 5.5
17 | It was nice that there was freedom of what model to choose and to deal with the ambiguity of the data. | 0 | 3 | 10 | 7.0
10 | Prior to the first experiment I strongly trusted the reliability of the system. | 0 | 6 | 7 | 6.6
12 | The data was noisy because of instrumentation issues.* | 0 | 3 | 9 | 7.1
8 | The data was primarily noisy because of biological effects.* | 6 | 5 | 1 | 3.7
14 | The data was noisy due to a combination of instrumentation issues and biological effects.* | 1 | 4 | 7 | 6.5

* One student did not answer these questions. (Legend in the original table: bias beyond neutral; >95% significant.)

experimentation, modeling, and data fitting. Future controlled studies would evaluate
the extent to which these outcomes are met.
This post-questionnaire also solicited general student feedback via 17 questions
summarized in Table 2. These questions can be loosely clustered into five aspects. The
first three aspects targeted general student impressions of whether they thought the
activity was useful/appropriate/engaging. The last two aspects focused on whether
there was substantive value in having run real experiments with real noise.
The majority of the class was neutral or positive as to whether they liked the
assignment, and also in whether they felt it was a valuable addition to the class and
reinforced course concepts. Students clearly found the web interface to be intuitive.
Overall, students were neutral in whether this activity was appropriate for the class
level, with many believing that the activity was too simple and lacked experimental
freedom. Some students reported that the noisiness of the real data provided an added

benefit, while others did not. Students preferred the freedom in model choices and the
ambiguity of the data. Students had high trust in the system before starting the
experiment, but became more critical about the instrument throughout the assignment,
which is an important lesson for any experimentalist. Overall, students interpreted the
variability in their data as stemming primarily from issues with instrumentation rather
than biological noise. Overall, we conclude a subpopulation of students appreciated this
online lab and had a positive educational outcome.

4 Discussion

In this proof-of-concept project, we explored the feasibility and utility of using a commercial biology cloud lab in (college) education. We found that such platforms can
be integrated into classes in an enjoyable and useful manner for students. Overall, a
significant portion of the students appreciated the open-endedness of exploring a real
experimental environment without having to do the experiments manually. Some of the
challenges we experienced during this first deployment revealed that these automated
biology cloud labs are still in their early days, but these technologies are advancing
very rapidly.
We learned multiple lessons for what would make biology cloud labs useful for
education and possibly for research as well. (1) Both stimulus and response spaces
should be large enough to be interesting, but not too large to become unmanageable for
the students, depending on the student audience and educational goals. For example,
having three chemicals at ten concentrations that can be added at multiple time points
during the experiment provides a stimulus space that is difficult to test exhaustively. In
terms of response space, the individual growth curves are rather low dimensional, as
they can be described with 5–7 parameters based on the students’ projects, with
experimental variability providing additional information. (2) The instruments, as well
as the biological responses, should be within desired, pre-specified parameters
regarding variability, noise, and measurement uncertainty. It would be good practice to
run standards alongside each experiment and to provide that information to the user.
Standards would reveal the quality of execution and could be used to normalize raw
data in certain scenarios. This variability can be enriching or disruptive to the educa-
tional outcome depending on its extent and how well it is embedded into the educa-
tional context. (3) Ideally, the setup is interactive in that the stimulus can be changed
while the experiment is running, which was not the case for the present lab. Experi-
ments should be chosen where automation also enables experiments that no student
would be able to do in a lab by hand, such as adding a stimulus every 20 s over the
course of hours to days. (4) When going through the effort of running real experiments,
there should be an added benefit to the students versus running a simulation or using
pre-recorded data. Hence, students should be aided in feeling the reality (such as
providing a live webcam to see the machine operating), or be empowered to make
interactive choices throughout the run, which counters arguments for pre-recorded data,
or be able to experience biological variability.
Who is the target audience for this particular biology experiment? There was some a
priori debate among instructors whether the activity was too simple for a graduate-level

class, which was also indicated by student feedback. There was essentially only one
variable that students could choose: the amount of antibiotic. Students would have pre-
ferred more experimental freedom. We therefore believe that this set of activities would be
better suited to more introductory research-focused classes, perhaps at advanced
high-school or introductory college levels, to train students on experimental design and
data analysis while taking away the time consuming efforts for a hands-on lab.
Overall, the Transcriptic platform enables much more complex experiments for
future deployments given its professional target audience. These experiments could be
made more interactive and versatile, such as being able to choose from multiple
antibiotics or adding liquids to the sample at multiple times during the experimental
run. Many other experiments are possible, given that the ultimate vision of these cloud
lab companies is to enable (nearly) every possible experiment in the molecular and
cellular research domain. Finally, another interesting aspect is that students can access
research-grade equipment over the web.
Based on Transcriptic’s business model, the current cost of these experiments is ~$70 per 96-well plate. This cost depends on the experiment type and is likely to decrease in the future given advancements in the technology. Hence, running five successive experiments of six wells per student might be considered reasonable for this activity, which would cost ~$20 per student, a price point that can be considered
reasonable in comparison to advanced hands-on lab classes in colleges, but is poten-
tially at the upper limit for K-12 education. One of the major advantages of such a
commercial cloud lab approach is that all the costs are already factored in, with no
additional logistics for the instructor. The 96-well experiment also demonstrates how
high-throughput experiments in general can be virtually partitioned among many users.
Given the size of the educational market, with millions of students in the US alone
going through the same curriculum each year, offering cloud biological experiments
could be of interest to commercial cloud lab companies.
We asked our collaborators at Transcriptic for their evaluation of the project. They
indicated that it was an important educational experience for them as well, especially as
the Workcell had just gone operational and was still in the debugging phase. The
variability that had emerged during the experiments was unintended, but also provided
valuable insights into where the system and the protocols needed improvements. These
issues were subsequently resolved through a combination of improved hardware and
specimen-handling protocols, such as purchasing more advanced liquid handling robots
and avoiding condensation on the plates.
It is also important and insightful to compare these results to the two other biology
cloud labs that were deployed in educational settings previously (Hossain et al. 2015),
where students performed chemotaxis experiments with the slime mold physarum over
the course of one day (Hossain et al. 2015) or phototaxis experiments with Euglena
cells over the course of one minute (Hossain et al. 2016). The bacterial growth
experiment investigated here provided the students with a significantly smaller design
space for their experiments (exhaustive exploration within five experiments vs. effec-
tively having an infinite space in the other two cases). Similarly, the result space was
much more limited (discrete data curves vs. rich image data, although the biological
variability in the growth curves added interesting elements). The Workcell approach of
shuttling experiments between different types of instruments provides a tremendous

opportunity for effectively executing an unlimited number of experimental designs (these experiments could even be research grade), and therefore this approach should
ultimately have an even higher design and discovery space than the previous two labs
(Hossain et al. 2015; 2016). The experimental duration of one day was similar to the
physarum lab, but was much longer than the Euglena lab. A major advantage of any
commercial cloud lab is the sustained business model, which is in stark contrast to
many academic initiatives, which often become non-operational when research funding
ceases.
Regarding comparable virtual labs, the Labster platform (https://www.labster.com/) (Bonde et al. 2014) enables the execution of a variety of simulated experiments typical of the life sciences, including the type that we presented here as well as what Transcriptic offers in general. Due to various animations, the current Labster user interface
provides a much more realistic laboratory feel than the students experienced in the
current study. In the future, it will be important to compare and implement both
approaches side by side, as virtual and real experiments have their own standout
features and are likely best used synergistically (see references in introduction).
Another aspect to consider is the integration of this cloud lab into learning management
systems (LMS) (Heradio et al. 2016). Given the currently dynamic development for
both biology cloud labs as well as LMSs, we suggest that a tight integration should not
be the foremost goal. Rather, a more flexible, modular approach with simple data
transfer between the platforms is recommended, e.g., accessing the cloud lab via a
simple web link inside the LMS and bringing the data from the cloud lab to the LMS
via intermediate file download onto a local hard drive.
Overall, our work demonstrates that commercial biology cloud labs can be inte-
grated into education and that they offer potential for developing enjoyable and useful
online learning platforms to teach essential scientific skills such as experimental design
and data analysis. Although the technology is at the cusp of being mature enough to
deploy in educational settings, more interesting research and development should be
carried out regarding technology, user interface design, and educational framing. We
encourage other stakeholders from educators and industry to join the effort.

Acknowledgements. We would like to thank M. Hodak, the Transcriptic team, Z. Hossain, X. Jin, H. Kim, M. Head, and the students in the class.

References
Bonde, M.T., Makransky, G., Wandall, J., Larsen, M.V.: Improving biotech education through
gamified laboratory simulations. Nat. Biotechnol. 32, 694–697 (2014)
Buchanan, R.L., Whiting, R.C., Damert, W.C.: When is simple good enough: a comparison of
the Gompertz, Baranyi, and three-phase linear models for fitting bacterial growth curves.
Food Microbiol. 14(4), 313–326 (1997)
de Jong, T., Linn, M.C., Zacharia, Z.C.: Physical and virtual laboratories in science and
engineering education. Science 340(6130), 305–308 (2013)

Faraji, R., Parsa, A., Torabi, B., Withrow, T.: Effects of kanamycin on the macromolecular
composition of kanamycin sensitive Escherichia coli DH5a strain. J. Exp. Microbiol.
Immunol. (JEMI) 9, 31–38 (2006)
Fox, A.: Cloud computing–what’s in it for me as a scientist? Science 331(6016), 406–407 (2011)
Hayden, E.C.: The automated lab. Nature 516, 131–132 (2014)
Heradio, R., de la Torre, L., Galan, D., Cabrerizo, F.J., Herrera-Viedma, E., Dormido, S.: Virtual
and remote labs in education: a bibliometric analysis. Comput. Educ. 98(C), 14–38 (2016)
Hossain, Z., Chung, A.M., Riedel-Kruse, I.H.: Real-time and turn-based biology online
experimentation. In: Remote Engineering and Virtual Instrumentation (REV) (2015)
Hossain, Z., Blikstein, P., Riedel-Kruse, I.H., Jin, X., Bumbacher, E.W., Chung, A.M., et al.:
Interactive cloud experimentation for biology. Presented at the 33rd Annual ACM
Conference, pp. 3681–3690. ACM Press, New York (2015). http://doi.org/10.1145/2702123.
2702354
Hossain, Z., Chung, A.M., Baumbacher, E., Lee, S.A., Honesty, K., Walter, A., et al.: A real-time
interactive, scalable biology cloud experimentation platform. Nature Biotechnol. 34(12),
1293–1298 (2016)
Kong, F., Yuan, L., Zheng, Y.F., Chen, W.: Automatic liquid handling for life science: a critical
review of the current state of the art. J. Lab. Autom. 17(3), 169–185 (2012)
Lee, J., Kladwang, W., Lee, M., Cantu, D., Azizyan, M., Kim, H., et al.: RNA design rules from
a massive open laboratory. Proc. Natl. Acad. Sci. 111(6), 2122–2127 (2014)
Lin, J., Lee, S.-M., Lee, H.-J., Koo, Y.-M.: Modeling of typical microbial cell growth in batch
culture. Biotechnol. Bioprocess Eng. 5(5), 382–385 (2000)
Lowe, D., Newcombe, P., Stumpers, B.: Evaluation of the use of remote laboratories for
secondary school. Sci. Educ. 43(3), 1197–1219 (2012)
Melin, J., Quake, S.R.: Microfluidic large-scale integration: the evolution of design rules for
biological automation. Ann. Rev. Biophys. Biomol. Struct. 36, 213–231 (2007)
Sauter, M., Uttal, D.H., Rapp, D.N., Downing, M., Jona, K.: Getting real: the authenticity of
remote labs and simulations for science learning. Distance Educ. 34(1), 37–47 (2013)
Sia, S.K., Owens, M.P.: Share and share alike. Nat. Biotechnol. 33, 1224–1228 (2015)
Swinnen, I.: Predictive modelling of the microbial lag phase: a review. Int. J. Food Microbiol.
94(2), 137–159 (2004)
Wieman, C.E., Adams, W.K., Perkins, K.K.: PHYSICS: PhET: simulations that enhance
learning. Science 322(5902), 682–683 (2008)
Zwietering, M.H., de Koos, J.T., Hasenack, B.E., de Witt, J.C., van’t Riet, K.: Modeling of
bacterial growth as a function of temperature. Appl. Environ. Microbiol. 57(4), 1094–1101
(1991)

Learning to Program in K12 Using a Remote Controlled
Robot: RoboBlock

Javier García-Zubía (✉), Ignacio Angulo (✉), Gabriel Martínez-Pieper, Pablo Orduña,
Luis Rodríguez-Gil, and Unai Hernandez-Jayo


Department of Industrial Technologies, University of Deusto,
Avenida de las Universidades 24, 48007 Bilbao, Spain
{zubia,ignacio.angulo,gabi.martinez,pablo.orduna,
luis.rodriguezgil,unai.hernandez}@deusto.es

Abstract. Programming is part of the curricula in different subjects and countries.
To face this challenge, schools are using visual programming (e.g., Scratch,
Blockly) and/or educational robots. Some combinations of these two tools are
very popular, such as the Lego Mindstorm robots. This work presents a remote
controlled robot called RoboBlock, and its main characteristic is that it can be
programmed and controlled via Internet. RoboBlock is developed under the
WebLab-Deusto Remote Laboratory Management System.

Keywords: Remote experimentation · Educational robotics · K12 education

1 Introduction

The promotion of STEM among young people is one of the objectives of many
countries and institutions such as the EU. To increase young people's interest in science,
engineering and technology, schools are including subjects that combine programming
and robotics [1, 2]. The main effect of these initiatives is in K12.
In general, teachers in the classroom have to deal with several problems.
Firstly, they need financial resources to buy several robots. Secondly, they have to
keep them in perfect condition (even after being used by students). These two issues
reduce the time that they have to teach the students how to program. It is exciting for
the students to program and control real robots, but at the same time it is frustrating for
them to see that their robot is not running properly while the robot of another group
is running well, with the same program! At the same time the teacher has to manage
this situation, and he or she could also be frustrated or very busy reviewing the
boards, the sensors, etc. This problem is especially noticeable at the beginning of the
learning process, when the students have more problems with the equipment and with
the programming language. A typical learning curve is shown in Fig. 1.


Fig. 1. Typical learning curve

At the same time, and at least in Spain, robotics is usually taught within the Technology
subject, and the associated teacher is usually not a computer or electronics engineer.
The common situation is that a science teacher must teach programming/robotics
because he or she needs to fill the timetable. Currently it is not common to have a specific
teacher for technology and programming.
To solve this situation, or at least to help teachers and institutions to manage it,
a robot can be designed and used as a remote experiment. RoboBlock will help
the community to solve these problems by using a web tool to complete the whole teaching
process. There are several remote laboratories offering robots around the world, but none
with the same characteristics as RoboBlock [3, 4].

2 RoboBlock General Characteristics

A remote laboratory allows the user/student to experiment using the Internet as his or her eyes
and hands. Instead of using the hands to manipulate the equipment and the eyes to see
the evolution of the experiment, the user uses the Internet. To perform an experiment the
user only needs an Internet connection, so he or she can experiment anywhere and at any
time.
To deploy a remote laboratory into a K12 classroom there are some requirements [5]:
• Universality: the remote laboratory can be accessed using any OS and any web
browser.
• Device: the remote laboratory can be accessed using any kind of device, including
tablets and smartphones [6].
• Security: the remote laboratory cannot affect the IT security of the school, so it is
based only on standard ports (80, …) and causes no problems with firewalls.
• LMS: the remote laboratory can be connected to the LMS of the school.
• Queue: the remote laboratory manages the users’ queue.
If these requirements are not fulfilled by the remote laboratory, then it cannot
be implemented successfully in the classroom. RoboBlock fulfills these technical
requirements. But there are also some functional requirements.


The typical scenario with a robot in a remote laboratory is [7]:


• Phase 1. The student writes the code on his or her computer/tablet and compiles it
(and maybe simulates it at this point).
• Phase 2. The student accesses the remote lab with a username/password; maybe he or she
has to wait in a queue.
• Phase 3. The student downloads the software written in phase 1 into the robot.
• Phase 4. The robot starts to execute the downloaded program: it follows the line, it
avoids the obstacles, it fights against another robot, etc.
• Phase 5. The student watches the evolution of the robot to decide whether the program
is correct or not. If it is not, the student must go back to phase 1.
The code can be written in C, Python, Scratch, Blockly, etc. The language depends
on the teacher and on the level of the students. For K12 students it seems clear that
it is better to use graphical software like Scratch or Blockly. The first two phases
take place on the student's computer/tablet, and the last three phases take place in the
remote laboratory.
Thus, to program the robot, the student needs to have a computer with the
programming environment (IDE) installed (such as an SDK, KEIL, EV3, etc.), and because of
this, to perform a task the student needs his or her computer or a computer with the same
software installed. This is not a problem when the student is working at school,
but it may be a problem when the student is at home.
The solution proposed by RoboBlock is to integrate the robot, the programming
environment and the remote laboratory in a single web portal. Under the RoboBlock approach,
the student only needs a computer/tablet (any computer) connected to the Internet to
perform the experiment. This characteristic promotes the use of the remote experiment
anywhere and at any time. The RoboBlock portal has two parts:
• ArduBlock. In this part of the portal, the student draws/writes the program using
blocks, or he or she can even write the code directly in C. When the student builds the
program using Blockly, the corresponding C code can be seen automatically at the
same time.
• Zumoline. In this part of the portal, the student downloads the code written in
ArduBlock to see the evolution of the robot in real time (Figs. 2 and 3).


Fig. 2. ArduBlock

Fig. 3. ArduLab

3 Functional Description of RoboBlock

In the previous section, RoboBlock was described in general terms; this section
is devoted to describing what can be done with RoboBlock and how it can be done.
Universality, security, etc. are key factors of any IT learning tool in the school, but
they are not important for the teacher if the tool is not powerful enough. RoboBlock can
be explained through its programming blocks and through the programs themselves.


3.1 Programming Blocks of RoboBlock


The RoboBlock programming tool is based on Blockly (developed by Google as part of
Google for Education) and the blocks are divided into four main groups:
• Logic. It contains logic, mathematics and loops.
• Data. It contains variables and functions.
• Input/Output. It contains sensors, LEDs, motors, etc.
• Miscellaneous. It contains time functions, communications, etc. (Fig. 4)

Fig. 4. Main functions of ArduBlock

In the first group (see Fig. 5), under Loops the student can select blocks to implement
loop, while, count and break. In the Logic area the available blocks are if, numerical
comparison, logical comparison and others. In the Mathematics area we can find
arithmetical operations (basic and advanced), constants, and other functions. There is a
block for timing functions (wait, measure, etc.), and there are other blocks for text,
communications, etc.
For programming RoboBlock, it is important to have powerful blocks for sensing.
It is important to read from the sensors the current situation of the robot, and to write
to the actuators to modify that situation, everything in real time.
There are three proximity sensors placed in the center, on the right side and
on the left side of the robot; in addition, every sensor has two parts, left and right,
so the control of RoboBlock can be very accurate. The same applies to the line detector sensor:
there is one sensor in the center of RoboBlock, but it reads five signals. RoboBlock has


Fig. 5. Sensors, buttons and motors in RoboBlock

also three buttons to perform different tasks; in this case the programmer can scan the
buttons to read pulses and levels.
The student can also control RoboBlock through the actuators of the two motors, left
and right. Each motor has an associated speed from 0 to 100.
This is the current state of RoboBlock, but additional sensors or actuators can be added
to it, and in this case new programming blocks should be added. The web
programming tool, ArdulabBlockly, is designed to be easily modified in order to promote
the adaptation of RoboBlock to each scenario in different schools and events (Fig. 6).

Fig. 6. Blocks for programming of RoboBlock: loops, time, and numerical and logical functions.


3.2 Programming RoboBlock


To write the program, the student drags and drops specific blocks and combines
them into a graphical structure. This structure is the algorithm that will be downloaded to
RoboBlock. Figure 7 shows, in the center, the graphical algorithm that moves RoboBlock
for 2 s (2000 ms); on the right side, the corresponding program in C can be seen.

Fig. 7. ArduLab-Blockly programming environment.

After writing/drawing the code, the student can validate it by clicking on Validate
(Validar) to see if there is any problem with the program. After doing this, the
student opens the Zumoline environment to download the program to RoboBlock.
Zumoline knows which program (called blocks) was last validated by the user, and this
program will be downloaded to RoboBlock. The student can also access and test a
variety of running examples.
RoboBlock is designed to develop basic programming skills using a robot. The
student can create programs combining basic blocks like loop, if-then-else, assignment,
read, write, etc. It is up to the teacher to decide the complexity level of the programs
according to the curriculum of the subject (Fig. 8).


Fig. 8. Zumoline environment

For example, the program in Fig. 9 controls the robot by increasing its speed one by
one from 0 to 50 every 10 ms. In this case the student practices the for structure, using
the robot to see its effect.

Fig. 9. Example of RoboBlock program

The algorithm in Fig. 10 shows the student how to use the if structure. RoboBlock
will go faster and faster (one by one, from 0 to 50 every 10 ms) until it detects a line
on the floor using the sensor.


Fig. 10. Example of RoboBlock program

As a final project the teacher can propose to the students a competition such as "Win in a
Formula One circuit". The circuit made of plastic in Fig. 11 is a copy of the Montmelo
Circuit in Spain, and the students should write the best program to obtain the lowest
time to tour the circuit.

Fig. 11. Montmelo circuit with RoboBlock

Every time the student writes a new program, he or she can test it in the remote laboratory,
on RoboBlock. At this moment, to download the code it is necessary to open a new tab in
the web browser, the Zumoline environment. When a student accesses Zumoline he or she
has 5 min (this can be increased) to select the program, download it to RoboBlock and see
the evolution of the robot. After this, the student can perhaps modify the program to be
just a bit faster in the competition. Figure 12 shows a simple program to win in Montmelo
that the student can improve by adding better control.

Fig. 12. Montmelo circuit program

3.3 Technical Description of RoboBlock

The development of the laboratory can be divided into three stages. Firstly, a Python
module remotely performs the compilation of source files to generate the binary files to
be loaded into the program memory of the microcontroller. Secondly, a programming
environment has been developed to capture algorithms with Blockly. Finally, a system
has been developed that allows the programming of the mobile robot and the interaction
with its inputs, so that these can be accessed remotely.

3.3.1 Cloud Compiler


The main objective of this laboratory is to provide the students with a full development studio
without requiring the installation of any specific software other than an Internet browser. To this
end, the back-end provides all the tools required to compile the project source files and to store
the binary files, which will be accessed later.
The module has been developed in Python and makes use of the "InoTools"
toolkit [8]. This toolkit allows Arduino projects to be compiled from the command line,
so it can be easily integrated into any third-party development environment such as the one
presented in this project.
The developed module provides the cloud compilation service and manages compilation
queues. Using this system, only one compilation can be performed at a time.


Since the compilation of a project is performed in a matter of seconds, the "Celery"
software tool [9] has been used to add a queue system to the compiler and ensure that
the compilations are performed sequentially.
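As an illustration, a minimal sketch of such a compile task is given below (the module name, broker URL, board model and directory layout are assumptions made for illustration, not the actual implementation):

# tasks.py - hypothetical Celery task wrapping the command-line Arduino build
import subprocess
from celery import Celery

app = Celery('cloud_compiler',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/1')

@app.task
def compile_sketch(project_dir):
    """Compile the Arduino project found in project_dir and return the build log."""
    # 'ino build' compiles the sketch placed in the project's src/ directory;
    # the board model ('leonardo', ATmega32u4 based) is an assumption.
    result = subprocess.run(['ino', 'build', '-m', 'leonardo'],
                            cwd=project_dir, capture_output=True, text=True)
    return {'ok': result.returncode == 0,
            'log': result.stdout + result.stderr}

Running the worker with a concurrency of one (celery -A tasks worker --concurrency=1) keeps the queue strictly sequential, matching the behaviour described above.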

3.3.2 Development Environment


The development environment is based on the Ardublockly tool [10]. Ardublockly is a
fork of Google's Blockly project, a set of tools that allow source code to be generated
in various programming languages from chains of interlocking blocks. It natively includes
code generators for JavaScript, Python, PHP and Dart. It also includes basic blocks
like loops, conditionals, arithmetic operations, operations on strings, etc.
Blockly provides tools for developers to implement new blocks and code generators. In
this way, Ardublockly extends Blockly by adding a code generator for Arduino and
blocks for the control of digital inputs/outputs, serial communications and timers.
For the development of this remote laboratory, Ardublockly has been extended to
add the blocks needed to read the sensors and to control the actuators of the robotic
platform.
This development environment eliminates the possibility of compilation errors. No panels
have been added to show the results of the compilation; instead, the validate button
changes color when the compilation is complete. When the validation control
is pressed, the code generated by Ardublockly is downloaded and a file is generated in
the base project with the downloaded content. At the end of the compilation, the binary
is stored in a directory associated with the author of the program.
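A minimal sketch of this server-side validation flow could look as follows (the endpoint name, form fields and project layout are hypothetical; compile_sketch is the Celery task sketched in Sect. 3.3.1):

# validate_service.py - hypothetical endpoint receiving Ardublockly-generated C code
import os
from flask import Flask, request, jsonify
from tasks import compile_sketch   # Celery task from the previous sketch

app = Flask(__name__)
PROJECTS_ROOT = '/srv/roboblock/projects'   # assumed location of per-user projects

@app.route('/validate', methods=['POST'])
def validate():
    user = request.form['user']
    code = request.form['sketch']             # C code generated by Ardublockly
    project_dir = os.path.join(PROJECTS_ROOT, user)
    os.makedirs(os.path.join(project_dir, 'src'), exist_ok=True)
    # The generated code overwrites the sketch of the user's base project.
    with open(os.path.join(project_dir, 'src', 'sketch.ino'), 'w') as f:
        f.write(code)
    job = compile_sketch.delay(project_dir)    # queued, compiled sequentially
    return jsonify({'job_id': job.id})

if __name__ == '__main__':
    app.run()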

3.3.3 Robot Interaction


To implement the communications between the robot and the client, the open-source
Socket.IO tool has been used. Socket.IO is a library that allows bidirectional, low-latency
communications. It uses a protocol that can switch between WebSockets and long polling
depending on the client browser's support for these technologies [11].
When the client has a browser compatible with WebSockets, this technology is used,
so that the communications are bidirectional and in real time. In contrast, if the user
accesses with an unsupported browser, the client performs long polling, simulating
bidirectional communication. This feature provides universality to the laboratory since,
in this way, the application works on all devices without the need to install plugins or
to communicate through specific ports.
To perform this task, the official socket.io library was used in the web client and the
Flask-SocketIO library on the robot.
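The following sketch illustrates this approach on the robot side (event names, serial device path and baud rate are assumptions; the real laboratory may differ):

# robot_gateway.py - hypothetical Flask-SocketIO bridge between browser and robot
import serial
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)     # falls back to long polling automatically if needed
ser = serial.Serial('/dev/ttyACM0', 57600, timeout=0.1)   # Zumo serial port (assumed)

def serial_task():
    """Forward every line received from the robot to the connected clients."""
    while True:
        line = ser.readline()
        if line:
            socketio.emit('serial_out', line.decode(errors='replace'))
        socketio.sleep(0.05)

@socketio.on('serial_in')
def on_serial_in(message):
    """Data typed by the student in the browser is written to the robot."""
    ser.write(message.encode())

if __name__ == '__main__':
    socketio.start_background_task(serial_task)
    socketio.run(app, host='0.0.0.0', port=80)   # standard http port, as required above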

3.4 The Robotic Platform


The robotic platform consists of the Pololu Zumo 32U4 robot and a low-cost embedded
system that allows remotely programming the microcontroller and interacting with the inputs
and outputs of the Pololu platform.


The main element of the experimentation platform is the Pololu Zumo 32U4 robot. This
robot is based on the Arduino-compatible ATmega32U4 microcontroller. It includes
the following electronic and mechanical components.
Electronic components:
• 4 wide-angle infrared sensors
• 2 DC motors
• 2 encoders
• 5 line sensors matrix
• 3-axis Gyroscope
• 3-Axis Accelerometer
• Magnetometer
• Buzzer
• Bootloader
Chassis:
• 4 sprockets
• 2 tires for crawler type wheels
• Main Chassis
• Metal shovel
A Raspberry Pi embedded platform has been used to implement the controller. This device
connects to the robotic platform and performs the following functions:
• Bootloader activation
• Programming the robot
• Access to the serial communication port
• Remote activation of buttons
• Reading the status of the outputs
The Zumo 32U4 robot includes several expansion ports that allow the sensors and
actuators it has natively to be replaced by others. In this case, this expansion port
has been used to connect certain outputs of the Raspberry Pi to it. Since the connections
are made directly, it has not been necessary to develop an extra electronic circuit.
At the software level, an object has been implemented that contains the methods
necessary to perform the actions listed above. These methods have been developed in
the Python language and are invoked directly from the web application (a minimal sketch
is given after the list). The methods included in the driver are as follows:
• ledChecker (): This routine continually checks the status of the microcontroller
outputs that activate/deactivate the robot LEDs.
• startLedChecker (): Method to start the status check routine of the LEDs.
• stopLedChecker (): Method to finalize the status check routine of the LEDs.
• startSerial (): Method to initialize the serial port.
• connectSerial (): Opens the serial port and starts the serialTask () routine.
• serialTask (): This routine checks if data has been received by the serial port and
transmits it to the connected client.


• sendSerial (message): This method is invoked when the client sends data through the
serial port to the robot.
• enableBootloader (): This method is used to activate the bootloader and load a
program.
• eraseMemory (): This method erases the entire program memory of the microcontroller.
• loadBinary (path): This method is used to load a binary on the robot. It is necessary
to mark the path of the binary as a parameter.
• buttonOn (button): This method activates any physical button on the robot remotely.
• buttonOff (button): This method disables any physical button on the robot remotely.
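A minimal sketch of two of these methods is shown below (GPIO pin numbers, device paths and the avrdude invocation are assumptions made for illustration; the ATmega32U4 is flashed here through its avr109-type serial bootloader):

# zumo_controller.py - hypothetical excerpt of the Python driver on the Raspberry Pi
import subprocess
import RPi.GPIO as GPIO

BUTTON_PINS = {'A': 17, 'B': 27, 'C': 22}   # assumed wiring of robot buttons to Pi GPIOs

class ZumoController:
    def __init__(self):
        GPIO.setmode(GPIO.BCM)
        for pin in BUTTON_PINS.values():
            GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

    def buttonOn(self, button):
        """Press a physical robot button remotely by driving the wired GPIO line."""
        GPIO.output(BUTTON_PINS[button], GPIO.HIGH)

    def buttonOff(self, button):
        """Release the button."""
        GPIO.output(BUTTON_PINS[button], GPIO.LOW)

    def loadBinary(self, path):
        """Load a compiled binary into the ATmega32U4 through its serial bootloader."""
        cmd = ['avrdude', '-p', 'atmega32u4', '-c', 'avr109',
               '-P', '/dev/ttyACM0', '-b', '57600', '-D',
               '-U', 'flash:w:{}:i'.format(path)]
        return subprocess.run(cmd, capture_output=True, text=True).returncode == 0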

4 RoboBlock in WebLab-Deusto and Labsland

RoboBlock is included in the WebLab-Deusto RLMS (Remote Laboratory Management System)
to take advantage of this powerful tool:
• Groups and Access. If a school wants to use RoboBlock, WebLab-Deusto will create
a group and the students will access the robot using the same credentials that they
already use at their school, so no new username/password is needed for the students.
RoboBlock can also be accessed through Moodle, Google Classroom, etc. If the school
is interested, WebLab-Deusto can even create a personalized web portal for it (see Fig. 13,
the WebLab-Deusto portal for the Urdaneta School in Bilbao, Spain).

Fig. 13. WebLab-Deusto Urdaneta School.

• Queues. At this moment there is only one copy of RoboBlock, so if a student wants
to access RoboBlock while it is being used by another student, the first student must
wait in a queue until RoboBlock is released. If several schools are using RoboBlock,
more situations like this can happen. WebLab-Deusto manages the queue
and can assign different priorities to each school or student; for example, the teacher may
have a higher priority than the students to grant him or her access to the robot in a short
time.
• Learning analytics. WebLab-Deusto tracks all the activity of the students, so the
teacher can see, for every student, how many times and when he or she entered the platform.
Moreover, WebLab-Deusto records the files that the student downloaded during the
semester. An additional tool allows the teacher to see whether files were copied among
the students.
• Scalability and load balance. If there were several copies of RoboBlock,
WebLab-Deusto would manage the situation by sharing the load among the different
copies of the robot.
• Federation. A school can install a copy of RoboBlock in its own building, and in this
case it can be federated in WebLab-Deusto. In this situation, instead of having only
one copy of RoboBlock, there will be two copies. The federation mechanism implemented
in WebLab-Deusto allows the school to set the conditions of use of its own
RoboBlock, and the school can even earn some money from this federation.

5 Conclusions and Future Work

RoboBlock is designed and implemented to help schools and teachers to teach
programming and robotics to K12 students. This tool will help to promote STEM among
young students.
RoboBlock is a remote experiment: it can be controlled remotely in Zumoline
by the student, using the algorithm he or she wrote in ArduBlock with a graphical tool
based on Blockly. All the tasks are performed on the web, so if the student has an Internet
connection he or she can complete the practical activity using any computer, in any place,
at any time. RoboBlock is a novelty in the K12 scope.
The design of RoboBlock has recently been finished and it has been thoroughly tested;
it is now time to deploy it in several schools in a pilot. The results of the pilot will
help the designers to improve the original design of RoboBlock.

References

1. Roscoe, J.F., Fearn, S., Posey, E.: Teaching computational thinking by playing games and
building robots. In: International Conference on Interactive Technologies and Games (iTAG),
pp. 9–12 (2014)
2. Merrill, M.D.: First principles of instruction. Educ. Technol. Res. Dev. 50(3), 43–59 (2002)
3. Islamgozhayev, T.U., et al.: IICT-bot: educational robotic platform using omni-directional
wheels with open source code and architecture. In: International Siberian Conference on
Control and Communications (SIBCON), pp. 1–3 (2015)
4. Antonio, C.P., et al.: Remote experiments and 3D virtual world in education. In: 3rd
Experiment@, International Conference, exp’at 2015, pp. 65–70 (2015)
5. García-Zubía, J., Orduña, P., López-de-Ipiña, D., Alves, G.: Addressing software impact in
the design of better remote labs. IEEE Trans. Ind. Electr. 56(12), 4757–4767 (2009)


6. García-Zubía, J., López-de-Ipiña, D., Orduña, P.: Mobile devices and remote labs in
engineering education. In: Proceedings of 8th IEEE International Conference on Advanced
Learning Technologies, ICALT 2008, pp. 620–622 (2008)
7. Guimaraes, E., Cardozo, E., Moraes, D., Coelho, P.: Design and implementation issues for
modern remote laboratories. IEEE Trans. Learn. Technol. 4(2), 149–161 (2011)
8. Sarik, J., Kymissis, I.: Lab kits using the Arduino prototyping platform. In: 2010 IEEE
Frontiers in Education Conference, FIE 2010, Washington, DC, pp. T3C-1-T3C-5 (2010)
9. Woodring, I., El-Said, M.: An economical cluster based system for detecting data leakage
from BYOD. In: 2014 11th International Conference on Information Technology: New
Generations, Las Vegas, pp. 610–611 (2014)
10. Perate, C.: Ardublockly (2016). https://github.com/carlosperate/ardublockly
11. Pimentel, V., Nickerson, B.G.: Communicating and displaying real-time data with
WebSocket. IEEE Internet Comput. 16(4), 45–53 (2012)

Spatial Learning of Novice Engineering Students Through
Practice of Interaction with Robot-Manipulators

Igor Verner (✉) and Sergei Gamer

Technion – Israel Institute of Technology, Haifa, Israel
ttrigor@technion.ac.il, gamer@ie.technion.ac.il

Abstract. This paper presents a study in which learning interactions of novice
engineering students with robot manipulators focus on training spatial skills. To
support the interactions, we customized the robots’ workspaces, designed virtual
robotic cells, and developed robot manipulation tasks with oriented blocks. 20
high school students (HSS) majoring in mechanics and 248 Technion first-year
students (TS) participated. The study indicated that following the training, the
HSS improved their performance of spatial tests, and the TS gained awareness of
spatial skills required to handle industrial robot systems.

1 Introduction

The ways to increase the efficiency of learning practices in Robotics and Computer
Integrated Manufacturing (RCIM) laboratories are widely discussed [1]. When
educating unprepared students, the recommended lab practice is that which combines
training technical skills with learning the principles of robot operation and development
of generic skills required in different workplaces [2]. Among the most important of these
is the ability of spatial vision. Industrial robotics laboratories generally implement three
types of learning scenarios [3]: setting up a robot system, programming different
industrial robots, and performing advanced robot-handling tasks. The laboratories offer
learning practice in hands-on, virtual, and remote environments.
To perform robot system setup, programming and operation assignments, the student
needs immediate and detailed visual information from the robot workspace. In the hands-
on environment the student is near the robot system and so all needed information is
acquired directly through observation. In the remote control system visual feedback is
transmitted from video cameras via a computer screen, and so is incomplete and delayed.
In the virtual environment the student works with a graphic simulation of the robot
system on the computer screen under limitations of the given software. The advantages
and constraints of the hands-on, virtual, and remote learning practices have been
discussed and compared in the literature [4]. Less attention has been paid to the analysis
of difficulties that students face while performing tasks in robotic environments, and to
the impact of this practice on the development of fundamental engineering skills,
including spatial skills [5].


The current paper reports on the results of our study conducted in the RCIM Laboratory
of the Faculty of Industrial Engineering and Management (IEM) at the Technion–
Israel Institute of Technology. Over four academic years (2011–2015) we ran in the
laboratory robotics workshops for IEM first-year undergraduates and, separately,
outreach robotics courses for 10th-grade students in an underprivileged vocational high
school. Both sets of courses offered learning practice in programming and operation of
robot manipulators, while the tasks focused on training spatial skills. Details of our study
are presented in [6].

2 Spatial Learning in Robotic Environments

Engineering practice depends on visual information, and strong spatial perception,
reasoning, and visualization skills are critical to success in engineering careers [6]. This
is true for practice in design and operation of automated manufacturing systems (AMS).
Engineers responsible for the design, operation, and supervision of AMS must have
aptitude in dynamic perception and dynamic and flexible reasoning, as well as a capacity
for autonomous work and for rapid yet accurate decision making. Strong spatial skills
are crucial for all aspects of robot design and operation, whether hands-on or remote.
Lathan and Tracey [7] showed that performance in teleoperating a robot through a maze
using a single camera significantly correlated with performance in standard spatial
reasoning tests. Menchaca-Brandan et al. [8] found spatial skills, particularly perspective
taking and mental rotation, to be essential for operating robotic manipulation systems.
Spatial skills can be developed through experience and practice, and studies in spatial
cognition suggest that digital technology environments can facilitate effective training
in these skills [9]. Researchers recommend practice with both virtual and real robots.
Modern virtual robotic environments, such as RoboCell [10], enable the learner to set up
robotic cells and develop simulations of production processes. The virtual robotic cells
can be made realistic and create some sense of immersion by displaying simulated
machinery, furniture, and other objects. Although different approaches to training spatial
skills in science and engineering education have been widely discussed, very little
research has considered spatial learning through practice in robotic environments. While
studies relating spatial skills to robotics exist, most of these consider spatial ability skills
only as prerequisites and predictors of learning. In consequence, among 217 studies of
spatial training reviewed by Uttal et al. [11], only two concerned robotics courses and
our work [12] was one of them.

3 The Robotics and CIM Laboratory

The RCIM Laboratory in the Technion's IEM Faculty conventionally supports courses
and research activities for industrial engineering majors by enabling hands-on
experimentation in the design, control and operation of automated manufacturing systems. The
laboratory facilities include nine semi-industrial robots. In terms of software, the lab is
equipped with RoboCell.


3.1 Customizing the Robot Workspaces


For each robot we constructed and installed special superstructures that cover the
devices used in advanced courses (buffers, jigs, conveyor belts, etc.) and enable
performance of the manipulation tasks. Figure 1 shows a modified robot setup. We
supplied plastic plates (pushers) that the robot uses to align objects in the assembly
area. For SCARA robots that do not have gripper pitch, we constructed a LEGO
rotator that can rotate objects (blocks) around horizontal axes, thus enabling rotation
manipulations using these robots.

Fig. 1. Robot setup adapted for performing manipulations.

3.2 Extending the Robocell Virtual Environment

RoboCell is a software environment, developed by Intelitek, to set up virtual workcells
and program robot handling processes. Robot manipulations in workcells created with
RoboCell can be performed with parts having the shape of cylinders, cubes, and blocks.
To enhance spatial learning, at our request Intelitek updated RoboCell so as to
enable defining and manipulating cubes with different symbols on their faces (Fig. 2).
This enabled us to offer tasks in which the students rotate and orient the cubes by the
robot.


Fig. 2. Manipulating cubes with symbols on their faces

4 The Outreach Course

This robotics course was designed at the request of a vocational high school to help 10th
graders majoring in mechanical engineering who were having spatial difficulties
mastering technical drawing. The 16-hour course consisted of eight two-hour sessions.
The curriculum was divided into three parts, where each part focused on a certain aspect
of robot programming and operation, and on training one of the main categories of spatial
ability: spatial perception, mental rotation, and visualization.
The first three sessions focused on robot pick-and-place operations and spatial
perception tasks. In the first session the students learned about the structure of the robotic
arm and its motion in the workspace. In the second and third sessions they studied the
robot control language ACL, learned to define robot positions by coordinates, and
practiced programming simple pick-and-place manipulations with cubic parts. The next three
sessions dealt with rotation of objects by the robot. In the fourth session the students
learned about rotations around coordinate axes and how to perform them using the
robotic arm. In the fifth and sixth sessions, they learned to use the RoboCell software
and to operate a robot in the virtual environment. They completed this module by
assembling a six-cube picture puzzle from identical cubes with geometrical symbols
drawn on their sides (Fig. 2). The seventh and eighth sessions were devoted to
performing three assembly tasks with real robots. The first task was to assemble a six-
cube picture puzzle through teleoperating the robotic arm based on visual feedback from


two digital cameras. In the second task the students were required to assemble a puzzle
from six identical cubes with geometric figures drawn on their sides. The puzzle was
presented using three orthographic projections (front, top, and side views) and a sketch.
The students were asked to use the sketch to depict a three-dimensional view of the
puzzle by drawing appropriate geometric symbols on the sides. They then had to
assemble the puzzle using the robot.

5 Robotics Workshop

The 6-hour workshop was delivered to first-year students as part of the Introduction to
Industrial Engineering and Management course. The workshop included a 2-hour lecture
and two 2-hour robotics lab classes. The lecture “Principles of Robot System Operation”
introduced the students to the concepts of CIM, robot programming, and robot operation.
The lecture also presented the lab assignments. The first laboratory class was devoted
to practice in the RoboCell virtual environment. The students were assigned to program
a 5 degrees-of-freedom robot to assemble a structure from different blocks. In the second
laboratory class the students operated real robots. The task required them to operate the robot
so as to pick up an oriented cube, move it from the storage area to the buffer, rotate it to
the final orientation, and place it in the destination position at the assembly area. The
students planned and operated robot movements using predefined positions of the
mechanical arm and subroutines implementing basic pick-and-place operations (written
in the Advanced Control Language).

6 Evaluation of Learning Outcomes

The evaluation study involved the twenty high school students who participated in the course
and the 248 university students who participated in the workshop. We evaluated whether the HS
students improved their performance in spatial tasks following the laboratory practice
in operating robot manipulators. The objective of the university workshop was to expose
first-year students to industrial robotics and foster awareness about spatial challenges in
programming and operating robots. Therefore, in this case our evaluation addressed the
development of spatial awareness.
Evaluation of the outcomes of the university workshop was in line with its objective:
to expose first-year students to industrial robotics and foster their interest and awareness
about spatial challenges in programming and operating robots. Awareness is defined as
individual’s consciousness of something to the degree that it can influence her/his
behavior [13]. Raising interest in industrial engineering and fostering awareness of its
professional requirements, particularly spatial awareness, is one of the core missions in
educating novice IEM students. Therefore, in the evaluation our interest was whether
the practice in operating robot manipulators improved the students' awareness of spatial
skills in industrial robotics.


6.1 Gain in Performance of Spatial Tasks


At the beginning of the outreach course we evaluated students’ spatial skills using three
paper-and-pencil spatial tests: the spatial perception test [14, p. 18], the mental rotation
test [14, p. 290], and the visualization test [14, p. 149]. The same three tests were repeated
at the end of the course. In addition, we ran an interim spatial perception test at the end
of the first part of the course and a mental rotation test at the end of the second part. The
purpose of the interim tests was to provide feedback for lesson planning and to encourage
students’ interest in the course. The results of the spatial tests show that the students in
the course improved significantly both in relation to their initial scores, and in compar‐
ison to their classmates who did not take the course (the control group). Specifically,
scores for the experimental group rose by 19.6% in the spatial perception test, by 104.5%
in the mental rotation test, and by 30.1% in the visualization test compared with their
pre-test scores. With respect to the comparison with the control group, the students in
the experimental group achieved higher average grades in the 2013 matriculation exam
in technical drawing (88.0) compared with their classmates from the control group
(83.3). The pre-course tests showed no significant differences in spatial performance
between the experimental and control groups.

6.2 Increase in Spatial Awareness

At the end of each workshop we administered a questionnaire. Eighty of the 93
participants in the 2014–2015 workshop responded. 92% of those who responded had never
studied robotics and had no experience with robots. A few students had studied robotics
as an optional subject at school. More than 90% reported that the workshop exposed
them to industrial robotics, and 17% evaluated this contribution as strong. 65% reported
that the workshop effectively presented problems in operating and programming
industrial robots; 23% considered this contribution to be high. The workshop aroused students'
interest in studying robotics (55%), with about a quarter of the respondents reporting
strong interest.
Moderate Pearson correlations were found between the workshop contribution
scores for the presentation of spatial problems and for the exposure to industrial robotics
(r = 0.53, p < 0.0001), as well as between the contribution scores and the interest in studying
industrial engineering (r = 0.51, p < 0.0001).
The questionnaire solicited students' reflections on their spatial learning practice in
the virtual and physical robotic environments. The students' evaluations of the practice
were highly positive. Repeated reflections included:
It is hard to imagine robot operation without seeing how it is performed. I think we need to
practice it because not everyone has good spatial skills.
It is a good practice in planning manipulations in the workspace and enhances spatial vision.

Students note the advantages of the spatial practice in the virtual environment:
The virtual lab lets you perform operations with the robot without fear that something will break
or go wrong.


The virtual lab made it easier to understand considerations in planning robot operations:
calculating angles, heights and positions.

Evaluations of the spatial practice with real robots were even higher:
The physical lab was much more interesting, since it was a new work environment. The challenge
was to think how to accomplish the task in the most effective way.

The difficulties noted by the students related to the following spatial tasks: determining
the height of the robot gripper above the working surface, using the coordinates of the
robot arm and calculating them, and avoiding collisions of the arm with objects in the workspace
while performing manipulations. From students' reflections:
It was difficult to estimate distances between objects in the virtual environment.
Cube rotation tasks were complex and required spatial thinking

In response to our request to compare the contributions of the virtual and physical labs
the students did not strongly favor one over the other. Rather, their responses suggest
that both platforms serve important functions:
In the virtual lab it is easier to understand the thinking behind operating the robot, calculating
angles, heights and locations.
The physical lab better demonstrates the robot workspace and gives an idea of the production
process.

7 Conclusion

In this paper we presented our experience in adapting the Technion Robotics and
Computer Integrated Manufacturing Laboratory for introductory engineering courses.
We engaged first year IEM students in robotics activities and opened the laboratory to
high school students majoring in mechanical engineering.
We built the courses on the educational opportunities afforded by placing students
in the loop of a robotic system, focusing their practice in the RCIM lab on understanding
the principles of robot operation and fostering spatial skills and awareness of their
importance in industrial robotics. This practice is crucial for novice engineering students who
are choosing their future profession. The key features of our approach are:
• Customizing the robot workspaces to enable performance of spatial operation tasks.
• Combining practice in direct, virtual and remote robot operation.
• Extending the robotic environment to enable the manipulation of oriented blocks.
• Directing robot operation tasks to train spatial skills.
We implemented and evaluated the developed approach in our RCIM lab for
engineering novices of two categories: high school students majoring in mechanical
engineering and first-year IEM students. As found, the high school students in the course
improved significantly in the perception, mental rotation, and visualization tests. In the case
of the IEM students, the workshop provided first-hand experience in the operation of real
and virtual robots, helped them to understand the spatial problems dealt with by industrial
engineers, and helped them recognize the skills needed to cope with them.


Based on the results of our study, obtained under specific conditions, we argue for
further exploration of the proposed approach and call for using robotic environments
to train spatial skills that are in high demand in engineering education and practice.

Acknowledgment. This study is supported by the Israel Science Foundation. We appreciate the
help of the Technion and school instructors: Dr. Assaf Avrahami, Niv Krayner, Elena Baskin and Ronny
Magril.

References

1. Munro, D.: Development of an automated manufacturing course with lab for undergraduates.
In: IEEE Frontiers in Education Conference, pp. 496–501 (2013)
2. Bien, Z.Z., Lee, H.-E.: Effective learning system techniques for human–robot interaction in
service environment. Knowl.-Based Syst. 20(5), 439–456 (2007)
3. Chang, G.A., Stone, W.L.: An effective learning approach for industrial robot programming.
In: ASEE Annual Conference and Exposition, Atlanta, US (2013)
4. Ma, J., Nickerson, J.V.: Hands-on, simulated, and remote laboratories: a comparative
literature review. ACM Comput. Surv. 38(3), 1–24 (2006)
5. Goldstain, O., Ben-Gal, I., Bukchin, Y.: Evaluation of telerobotic interface components for
teaching robot operation. IEEE Trans. Learn. Technol. 4(4), 365–376 (2011)
6. Shin, D., Wysk, R.A., Rothrock, L.: A formal control-theoretic model of human-automation
interactive manufacturing system control. Int. J. Prod. Res. 44(20), 4273–4295 (2006)
7. Lathan, C.E., Tracey, M.: The effects of operator spatial perception and sensory feedback on
human-robot teleoperation performance. Presence 11(4), 368–377 (2002)
8. Menchaca-Brandan, M.A., Liu, A.M., Oman, C.M., Natapoff, A.: Influence of perspective-
taking and mental rotation abilities in space teleoperation. In: Proceedings of the 2nd ACM/
IEEE International Conference on Human-Robot Interaction, pp. 271–278. ACM (2007)
9. Stanney, K.M., Cohn, J., Milham, L., Hale, K., Darken, R., Sullivan, J.: Deriving training
strategies for spatial knowledge acquisition from behavioral, cognitive, and neural
foundations. Mil. Psychol. 25(3), 191–205 (2013)
10. Intelitek: RoboCell for Controller USB. http://www.intelitekdownloads.com/Manuals/
Robots/ER-4u/100346-G%20RoboCell-USB-v56.pdf. Accessed 22 Mar 2015
11. Uttal, D.H., Meadow, N.G., Tipton, E., Hand, L.L., Alden, A.R., Warren, C., Newcombe,
N.S.: The malleability of spatial skills: a meta-analysis of training studies. Psychol. Bull.
139(2), 352–402 (2013)
12. Verner, I.M.: Robot manipulations: a synergy of visualization, computation and action for
spatial instruction. Int. J. Comput. Math. Learn. 9(2), 213–234 (2004)
13. Bower, G.: Awareness, the unconscious, and repression: an experimental psychologist’s
perspective. In: Singer, J. (ed.) Repression and Dissociation, pp. 209–231. University of
Chicago Press, Chicago (1990)
14. Eliot, J., Smith, I.: An International Directory of Spatial Tests. NFER-Nelson, Atlantic
Highlands (1983)

Concurrent Remote Group Experiments in the Cyber
Laboratory
An FPGA-Based Remote Laboratory in the Hybrid Cloud

Nobuhiko Koike (✉)

Faculty of Computer and Information Sciences, Hosei University,
3-7-2 Kajino-cho, Koganei-shi, Tokyo 184-8584, Japan
koike@k.hosei.ac.jp

Abstract. With the advent of M2M and IoT, it becomes important for an educational
remote laboratory to realize group M2M/IoT experiment environments,
where a number of group experiments are carried out concurrently by making use
of LAN-connected FPGA devices. Docker containers are employed to realize
separate FPGA-Run service environments, one for every FPGA device. The Cyber
Laboratory can contain hundreds of FPGA evaluation boards and FPGA-Run service
containers; each of these pairs is allocated to one of twenty laboratory servers.
Docker Swarm is also adopted to realize multi-FPGA group experiments by allocating
a set of FPGA board and FPGA-Run service container pairs. Each FPGA-Run service
container consists of a Web server application, a Web-camera motion application, the
FPGA-run application and the associated individual FPGA device driver. The combination
of a container and the corresponding FPGA board realizes a separate FPGA-run service
virtual machine. A newly designed gang scheduler issues a set of Web service calls to start
a group experiment together. By making use of Docker volume plugins, FPGA-run
results and recorded videos can be sent to the common faculty database for
post-experiment analysis. The use of an inexpensive public cloud makes it possible to offload
most private-cloud-side workloads and migrate them to the public cloud, which
enables easy scale-out and shrinking. The hybrid cloud organization and
the use of many FPGA boards together with the associated containers realize an
efficient sharing of servers and FPGA devices. The use of Web services
and the Docker Swarm manager allows flexible and easy device allocation, gang
scheduling and initiation of group experiments. The paper shows the Cyber
Laboratory's applicability to M2M and IoT kinds of remote experiments.

Keywords: Remote laboratory · FPGA hardware design laboratory · Hybrid
cloud · M2M · Web services · Docker container · Docker Swarm

1 Introduction

Recent advancements in IoT and M2M technologies motivated the author to implement
concurrent remote group experiment functionalities for the existing Cyber Laboratory
[2, 3], an educational FPGA-based remote laboratory in the hybrid cloud.


The new configuration allows students to conduct IoT- or M2M-related remote experiments
independently or collaboratively by making use of the available FPGA boards.
If the number of available FPGA boards is insufficient, group experiments are kept in a
waiting queue until sufficient FPGAs become available. These experiment services are
handled in both space-division and time-division fashions.
Although the former work [4, 5] already realized online FPGA device remote experiments,
the exclusive use of an FPGA board resulted in poor utilization of the FPGAs and
laboratory platforms. It also had no support for IoT or M2M kinds of experiments, where
a number of IoT/FPGA devices as well as edge/cloud servers should work together. In
order to realize such IoT kinds of remote experiments, the remote laboratory should
provide a group remote experiment environment that contains a number of FPGA devices
and quite a few edge or cloud servers connected by the Internet.
Thanks to advances in device technologies, an affordable remote laboratory,
which contains hundreds of FPGA evaluation boards with Web cameras allocated over
twenty laboratory servers, can be constructed at a nominal cost. As the use of Docker
containers associated with each FPGA board/Web camera pair [1] contributed to realizing
lightweight virtual machines, each laboratory server can easily handle quite a few of these
containers. It is also possible to add IoT edge server containers, which mimic the
IoT or M2M server-side applications in the experiments. The use of Docker Swarm
makes it possible to construct a lightweight on-premise private cloud, where hundreds
of FPGA experiment platform and container pairs become available as remote experiment
platforms.
The Docker Swarm manager/job scheduler can allocate FPGA experiment platforms
over the twenty laboratory servers in both space-division and time-division fashions. For an
IoT experiment, edge server containers can also join the group experiment to realize a
network-connected group remote experiment environment.
The design has been completed and the prototype of the single FPGA-evaluation-board
version is operational. The job entry/dashboards and the rendezvous/gang scheduling for
students and administrators are under construction. The hybrid cloud organization and
the use of many FPGA boards together with the associated containers realize an efficient
sharing of servers and FPGA devices. The use of Docker containers contributes to
realizing separate, lightweight FPGA experiment virtual machines. Docker Swarm
enables allocating them over the twenty laboratory servers organized as the on-premise
private cloud. The use of Web services and Docker Swarm allows a flexible and easy
mechanism for device allocation, gang scheduling and initiating the experiments.
The paper shows the Cyber Laboratory's applicability to M2M and IoT
kinds of remote experiments.

2 System Considerations and the Design

The author has been engaged in the development of the former Cyber Laboratory in the
hybrid cloud [1–3], which consists of laboratory servers as a private cloud and a public
cloud. The public cloud was employed to cope with the dynamically changing
student workload by scaling out or shrinking the number of servers in the public cloud.


On the other hand, the use of the on-premise private cloud is inevitable, as the on-premise
laboratory servers must be equipped with FPGA evaluation devices. The use
of FPGA boards and the associated device-dependent tools makes it difficult to migrate them
to a public cloud. Thus, the hybrid cloud organization became an unavoidable solution.
Fortunately, most design automation tasks, except for the FPGA experiment service, can
be migrated to the public cloud. So, the laboratory servers can concentrate on FPGA
experiment services, and the twenty laboratory servers were more than enough for
handling 80 student users. Although the former Cyber Laboratory can handle a number
of single-FPGA experiments in both parallel and time-shared fashions by making
use of the available laboratory servers, it could not properly support group experiments that
make use of plural FPGAs and edge servers, such as is the case for IoT experiments. If
students were to start such remote group experiments simultaneously by making use of
plural FPGA devices and edge servers, the system would become overloaded. To
overcome such shortcomings, up to eight FPGA devices have been added to each laboratory
server. As those workloads are rather lightweight, one server can handle quite a few
FPGA evaluation boards simultaneously. The proposed system can handle a group
remote experiment that contains hundreds of FPGA devices as edge devices and quite
a few edge servers connected by the Internet. So, even for the IoT or M2M kinds of
experiments, an efficient sharing of laboratory experiment platforms can be realized by
the new Cyber Laboratory, as shown in Fig. 1.
The key technologies for implementing the new Cyber Laboratory are summarized
as follows:
– The use of a public cloud for handling most design automation tasks, to offload the
private-cloud-side workloads and realize scale-out ability
– The use of the on-premise private cloud for implementing the FPGA device-dependent
services, connected with the public cloud counterpart to organize a
hybrid cloud
– The use of Docker containers for realizing isolated, lightweight virtual environments
for FPGA experiments in the on-premise cloud
– The use of Docker Swarm for realizing a lightweight multi-host virtual machine
network organized as the on-premise private cloud
– A gang scheduler that handles a collection of FPGA containers to perform a group
experiment
– A rendezvous mechanism that realizes the scheduling of groups of group experiments
In this way, each FPGA experiment service, which is associated with one FPGA
evaluation board, can be handled in a separate operating environment. On average,
one laboratory server contains more than eight such virtual environments at the same
time. The use of Docker containers is promising, as a server can handle a number of such
virtual environments with little overhead in the form of containers. It can provide users with
separate and isolated virtual environments. On the other hand, from the operating system
point of view, each container is simply a collection of processes, and thus quite a few
containers can reside on a server and run efficiently without virtual-machine switching
overhead, which is much heavier than process switching overhead. Since each container
works like a standalone machine, communications can be realized in the form of
Web services and WebSockets via the http ports.

Fig. 1. New cyber laboratory system organization
In order to setup twenty laboratory servers to form as a private cloud, the Docker
Swarm [6] is useful. The laboratory server cluster can easily be organized as a Docker
Swarm multi-server network configuration. The Docker Swarm provides the system

zamfira@unitbv.ro
Concurrent Remote Group Experiments in the Cyber Laboratory 371

manager/scheduler with an easy and centralized means of controlling the FPGA service
containers in the private cloud. In order to guard the FPGA service containers against
outside service accesses, an additional Swarm manager server is necessary; it runs a
container for Swarm management and service scheduling. A key-value store is also
required to manage the multi-host network cluster: the Docker daemons on all laboratory
servers access the key-value store in order to join the network. After the join operations,
the Swarm manager can reach all joined laboratory servers and can deploy containers
on any of them by means of the remote Docker APIs. Users send their FPGA service
requests only to this Swarm manager/scheduler. The scheduler then finds idle FPGA
devices and assigns the corresponding FPGA service containers by issuing a set of
Docker Swarm deploy commands to the laboratory servers in the Swarm network.
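The Docker SDK for Python can be used to issue such deployments against the Swarm manager's remote API. The short sketch below is only illustrative and is not the laboratory's actual code; the endpoint, image name, container name and node constraint are assumptions.

import docker

# Point the client at the Swarm manager, which exposes the Docker Remote API
# for the whole laboratory server cluster (endpoint is an assumed example).
client = docker.DockerClient(base_url="tcp://swarm-manager.example:4000")

# Deploy one FPGA service container on a chosen laboratory server; the
# classic-Swarm placement filter is passed as an environment constraint.
container = client.containers.run(
    image="cyberlab/fpga-service:latest",          # hypothetical image name
    name="fpga-board-07",                          # hypothetical container name
    detach=True,
    ports={"8080/tcp": None},                      # publish the Web Service port
    environment=["constraint:node==labserver03"],  # target laboratory server
)
print(container.id, container.status)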
The FPGA service containers can be allocated in either spread mode or gang sched‐
uling mode. After FPGA service containers are deployed in the laboratory servers, the
scheduler forwards the FPGA service requests to the designated FPGA service
containers by making use of the Web services via the http port. In case of a group
experiment request, namely a gang scheduling, it is translated into a set of FPGA service
requests to be deployed together and put into the wait queue specified as a gang sched‐
uling mode. When sufficient number of idle FPGAs becomes available, the scheduler
deploys participating containers by making use of the Docker remote APIs and issues a
set of FPGA run commands to them all by making use of the Web services.
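The gang-scheduling behaviour described above can be summarized in a few lines of Python. The sketch below is a simplification under stated assumptions (an in-memory wait queue and a set of idle board names); it is not the scheduler's actual implementation.

from collections import deque

class GangScheduler:
    """Dispatch a group of FPGA service requests only when enough idle boards exist."""

    def __init__(self, idle_fpgas):
        self.idle = set(idle_fpgas)      # e.g. {"fpga01", "fpga02", ...}
        self.wait_queue = deque()        # gang requests waiting for boards

    def submit(self, gang):
        """gang: list of FPGA service requests belonging to one group experiment."""
        self.wait_queue.append(gang)
        self.try_dispatch()

    def try_dispatch(self):
        # Dispatch waiting gangs in FIFO order while enough idle boards remain.
        while self.wait_queue and len(self.wait_queue[0]) <= len(self.idle):
            gang = self.wait_queue.popleft()
            boards = [self.idle.pop() for _ in gang]
            for request, board in zip(gang, boards):
                self.deploy(request, board)

    def deploy(self, request, board):
        # Placeholder for the Docker remote API call and the FPGA run command.
        print(f"deploying {request} on {board}")

sched = GangScheduler(idle_fpgas=[f"fpga{i:02d}" for i in range(1, 9)])
sched.submit(["design_A"] * 3)           # a three-FPGA group experiment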
If IoT edge servers are also specified, appropriate server containers are allocated as
well; their Docker images are pulled from the repository and deployed. It is easy to add
such Docker containers, each containing an HTTP daemon and user-defined server-side
applications, to the laboratory servers. They mimic IoT edge server functionality.
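A container that mimics such an edge server needs little more than an HTTP daemon in front of a user-defined handler. The following is a minimal, illustrative Python stand-in (the port and the endpoint behaviour are assumptions), not the project's actual edge-server image.

from http.server import BaseHTTPRequestHandler, HTTPServer

class EdgeHandler(BaseHTTPRequestHandler):
    """Receives data posted by FPGA 'devices' and acknowledges it."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)   # data sent by an FPGA device
        # ... user-defined server-side application logic would go here ...
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ack")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EdgeHandler).serve_forever()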
If a request involves a number of differently designed FPGAs, it is handled as a
group-of-group-experiments request. The request is first put into the rendezvous pool to
wait for the remaining group experiment requests to arrive, because the experiment cannot
start until all participating FPGA designs are ready. When the designated number of
group experiment requests has arrived, they are sent to the scheduler and the group of
group experiments is carried out in the same way as a single group experiment.
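The rendezvous behaviour can be sketched as a small pool keyed by experiment, as below. This is an illustration of the idea rather than the actual implementation; the stub scheduler exists only to make the example self-contained.

class RendezvousPool:
    """Holds group requests until all groups of one experiment have arrived."""

    def __init__(self, scheduler):
        self.scheduler = scheduler
        self.pending = {}   # experiment_id -> {"expected": n, "groups": [...]}

    def arrive(self, experiment_id, expected_groups, group_request):
        entry = self.pending.setdefault(
            experiment_id, {"expected": expected_groups, "groups": []})
        entry["groups"].append(group_request)
        if len(entry["groups"]) == entry["expected"]:
            # All FPGA designs are ready: forward the groups together.
            for gang in self.pending.pop(experiment_id)["groups"]:
                self.scheduler.submit(gang)

class _StubScheduler:
    def submit(self, gang):
        print("dispatch", gang)

pool = RendezvousPool(_StubScheduler())
pool.arrive("iot-demo", expected_groups=2, group_request=["design_A"] * 2)
pool.arrive("iot-demo", expected_groups=2, group_request=["design_B"] * 2)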
In the Docker Swarm network, the allocation of FPGA service containers is handled
automatically by the scheduler, and users usually do not need to care about the physical
placement of the containers. Users do, however, care about their FPGA experiment results
and want to observe the actual FPGA behavior through recorded videos. Web cameras
are a convenient way to record such experiments on video, and the recordings are useful
for post-experiment inspection. Therefore, all Docker containers associated with
individual FPGA boards also have the motion application program installed; it controls
the attached Web camera, records video during the experiment, and transfers the
recordings to the faculty database.
Docker volume plugins are employed to mount the faculty database network
directory in place of a local directory, so that files can be shared and file persistence is
achieved. As the faculty database stores all student designs, result files and recorded videos, a


convenient sharing of the experiment information among students and teachers can be
realized. It also helps to minimize file transmission among the on-premise private cloud,
the public cloud and the student PCs.
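One common way to realize such a shared, persistent volume is an NFS-backed Docker volume, as sketched below with the Docker SDK for Python; the export path, host name and image are assumptions, and the actual system may rely on a different volume plugin.

import docker

client = docker.from_env()

# Create a named volume backed by the faculty database's NFS export.
volume = client.volumes.create(
    name="faculty-db",
    driver="local",
    driver_opts={
        "type": "nfs",
        "o": "addr=facultydb.example,rw",
        "device": ":/exports/experiments",
    },
)

# Mount the shared volume into an FPGA service container instead of a purely
# local directory, so designs, results and videos persist across containers.
client.containers.run(
    "cyberlab/fpga-service:latest",     # hypothetical image name
    detach=True,
    volumes={"faculty-db": {"bind": "/data", "mode": "rw"}},
)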
As all communication in the Cyber laboratory is realized in the form of Web Services
and WebSockets, messages are exchanged as XML payloads. These messages contain
only directory references into the faculty database, so unnecessary transmission of raw
design data is minimized.
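An illustrative message of this kind, carrying only directory references rather than design data, could look as follows; the element names are assumptions, not the project's actual schema.

import xml.etree.ElementTree as ET

# Build a minimal FPGA service request that points at faculty-database
# directories instead of embedding the raw design files.
req = ET.Element("fpgaServiceRequest")
ET.SubElement(req, "student").text = "s123456"
ET.SubElement(req, "bitstreamDir").text = "/faculty-db/s123456/exp07/"
ET.SubElement(req, "resultDir").text = "/faculty-db/s123456/exp07/results/"
print(ET.tostring(req, encoding="unicode"))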
The public cloud handles the remaining services other than the FPGA experiment
services. The organization of the public cloud is straightforward. Free FPGA design
automation tools, except the FPGA setup/run tools, are installed in every student virtual
machine. The FPGA setup/run parts have to be installed on the on-premise cloud servers;
therefore, newly designed programs, instead of the original tools, generate the FPGA
setup/run requests and communicate with the Swarm manager/scheduler in the on-premise
private cloud by means of Web Services and WebSockets. The allocation of these student
VMs to public-side servers is determined according to the student workload.
Aside from the student VMs, a limited number of physical servers in the public cloud
is also allocated for serving the licensed Verilog-HDL high-level logic synthesis software.
As the license limits the number of platforms, installing it in all student VMs is not
affordable. Students access these licensed Verilog-HDL synthesis services via Web
Services in order to share the tools in both space- and time-shared fashions. A VPN is
set up among the on-premise cloud, the public cloud, the faculty database and the student
PCs/laptops to guard against unauthorized access. In order to use the cyber laboratory,
a student has to join the VPN and log in to the system. After successful authentication,
one student VM is allocated to handle all of that student's experiment services in the
public cloud. The student can access the allocated VM either through the remote desktop
service or through HTTP Web applications.

3 Realizing Remote Group IoT/M2M Experiments

Figure 2 shows a general overview of the proposed remote group experiment
environment as well as of simple on-demand FPGA experiments. Users can perform such
group experiments or standalone single-FPGA experiments concurrently by making use
of idle FPGA devices. Up to one hundred and sixty FPGA experiment container/FPGA
device pairs over twenty laboratory platforms can be organized for FPGA experiment
services. In this way, several IoT remote experiments can be carried out simultaneously,
using groups of FPGAs and server containers. The cyber laboratory manager in the
Swarm manager/scheduler container performs the system management tasks, such as
receiving experiment requests, obtaining file location directories from the faculty
database, deploying containers and initiating FPGA experiments. Each student VM in
the public cloud sets up a connection with the Swarm manager/scheduler container in the
on-premise private cloud to request FPGA experiment services on demand. A dashboard
service lets students see the current workload status and select appropriate containers.


Fig. 2. Containers scheduling for standalone and group experiments

For a group experiment, a set of FPGA experiment service requests and IoT server
service requests is wrapped into a JSON file and sent to the scheduler. It is unwrapped
and put into the experiment request queue marked as gang-scheduling mode. When a
group-of-group-experiments request is received, it is unwrapped and sent to the
rendezvous pool, where it waits until all group experiment requests have arrived. These
group experiment requests are then put into the experiment request queue together, in
the same way as in the single group experiment case.
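The wrapping of a group request could, for example, take the following shape; the field names and the endpoint are assumptions made for illustration only.

import json

group_request = {
    "mode": "gang",
    "fpga_services": [
        {"bitstream_dir": "/faculty-db/s123456/iot_node/"},
        {"bitstream_dir": "/faculty-db/s234567/iot_node/"},
    ],
    "edge_servers": [
        {"image": "cyberlab/edge-server:latest"},
    ],
}
payload = json.dumps(group_request)
# e.g. requests.post("http://swarm-manager.example:8080/experiments", data=payload)
print(payload)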

4 Conclusion

The new Cyber laboratory has been presented. It demonstrates the feasibility of conducting
remote group experiments concurrently, applicable to IoT- or M2M-related experiments.
The use of Docker container and Docker Swarm technologies made it possible to realize a
network of hundreds of independent FPGA experiment containers together with a
considerable number of server containers. The network can contain various kinds of subnetworks, where group


experiments utilizing FPGA devices and edge servers can be conducted concurrently.
A newly designed Swarm manager and scheduler realize the gang scheduling and
rendezvous mechanisms for handling incoming experiment requests. A group of group
experiments, containing differently designed FPGA devices and IoT edge servers, can
be handled by means of the rendezvous pool.
Although the VPN and the authentication process assure a minimum security
level, much stronger security protection mechanisms should be introduced for
practical use.

Acknowledgments. The author would like to thank Mr. Yuichi Toyoda for his zealous efforts
in realizing the test bed for the semi-automatic experiments in the hybrid cloud.

References

1. Toyoda, Y., Koike, N., Li, Y.: An FPGA-based remote laboratory: implementing semi-
automatic experiments in the hybrid cloud. In: 14th International Conference on Remote
Engineering and Virtual Instrumentation (REV2016)
2. Koike, N.: Cyber laboratory: migration to the hybrid cloud solution for device dependent
hardware experiments. In: International Conference on Information Technology Based Higher
Education and Training (ITHET 2014), Kent, U.K., June 2014
3. Koike, N.: A cyber laboratory for device dependent hardware experiments in a hybrid cloud.
In: 6th International Conference on Computer Supported Education (CSEDU 2014),
Barcelona, Spain, April 2014
4. Morgan, F., Cawley, S., Kane, M., Coffey, A., Callaly, F.: Remote FPGA lab applications,
interactive timing diagrams and assessment. In: Irish Signals & Systems Conference 2014 and
2014 China-Ireland International Conference on Information and Communications
5. Jethra, J.S.T., Patkar, S.B., Datta, S.: Remote triggered FPGA based automated system. In:
11th International Conference on Remote Engineering and Virtual Instrumentation
(REV2014), pp. 309–314, 26–28 February 2014
6. https://docs.docker.com/swarm/networking/

The VISIR+ Project – Preliminary Results
of the Training Actions

M.C. Viegas1(&), G. Alves1, A. Marques1, N. Lima1, C. Felgueiras1,


R. Costa1, A. Fidalgo1, I. Pozzo2, E. Dobboletta2, J. Garcia-Zubia3,
U. Hernandez3, M. Castro4, F. Loro4, Danilo Garbi Zutin5,
and C. Kreiter5
1 School of Engineering, Polytechnic of Porto, Porto, Portugal
mcm@isep.ipp.pt
2 National Council of Scientific and Technical Research, Rosario, Argentina
3 University of Deusto, Deusto, Spain
4 National University of Distance Education, Madrid, Spain
5 Carinthia University of Applied Sciences, Villach, Austria

Abstract. Experimental competences allow engineering students to consolidate
knowledge and skills. Remote labs are a powerful tool to aid students in that
development. The VISIR remote lab was considered the best remote lab in the
world in 2015. The VISIR+ project's main goal is to spread VISIR usage in Brazil
and Argentina, providing technical and didactical support. This paper presents
an analysis of the actions carried out so far in this project and an assessment of
their impact in terms of conditioning factors. The overall outcomes are highly
positive: in each Latin American Higher Education Institution all training actions
were successful, and the first didactical implementations were designed and are
ongoing in the current semester. In some cases, instead of the one foreseen
implementation, there are several. The factors that most significantly conditioned
the outcomes were the pre-experience with remote labs, the pre-experience with
VISIR and the duration of the training actions. The teachers' perceptions that most
conditioned their commitment to implementing VISIR in their courses were
related to their awareness of VISIR's effectiveness for teaching and learning. The
lack of time to practice and discuss doubts, and the degree to which their
expectations were fulfilled in the training actions, also affected how comfortable
teachers felt about modifying their course curricula.

Keywords: Remote labs · VISIR · Didactical approaches

1 Introduction

Engineering students need to perform experiments in order to understand theoretical
concepts thoroughly as well as to interact with instruments and equipment efficiently
[1, 2]. These experimental competences, which traditionally could only be developed in
hands-on laboratories, allow students to consolidate knowledge and skills, preparing
them for their future jobs as engineers. The use of simulations and remote labs has been
growing exponentially over the last decades. They provide not only an alternative and/or
complementary way to develop experimental competences, but also

become a resource that potentiates students' autonomous learning activities and supports
lifelong learning [3, 4]. Furthermore, the use of Information and Communication
Technology (ICT) tools can provide a stimulus for today's generation, since they have
been immersed in a world infused with network and digital technologies [5].
Although there is still some controversy about the efficacy of web-based laboratories [6],
the main conclusion of a recent and exhaustive study [3] is that student learning outcome
achievement is equal to or higher in non-traditional simulation and remote labs than in
traditional hands-on labs. Nowadays, these resources are widely used by teachers who
are aware that the educational objectives associated with each of these resources differ,
as each allows the development of different competences [7]. Remote labs are becoming
a popular learning tool, as they allow students to obtain real experimental results, as
opposed to the computational model results obtained by simulations. A remote laboratory
is a real lab in which the user and the physical apparatus are physically apart. To perform
the experiment, the user accesses the Internet and usually a particular user interface to
operate the remote equipment [8], being able to configure and control the physical
parameters of a real experiment.
Considering the advantages of this educational resource, which include education
and research collaboration between institutions all over the world, several remote labs
have been developed and enhanced over the years, in many different areas [9]. Within
scientific disciplines, this resource is most widely used in electrical and mechanical
engineering [10]. Virtual Instrument Systems in Reality (VISIR), developed by Blekinge
Institute of Technology (BTH) in Sweden, is one of the most used labs in Engineering
Education. It deals with experiments on electrical and electronic circuits and was
considered in 2015 the best remote lab in the world by the Executive Committee of the
Global Online Laboratory Consortium [11]. Several VISIR systems already exist in
Europe as well as in Asia (India), and with the support of the VISIR+ project, a
consortium between the European countries using VISIR, Brazil and Argentina [12], it
became possible to spread its usage throughout these two Latin American countries.
Having reached its midterm point, the VISIR+ project is now assessing the preliminary
results in order to improve and tune the following tasks.
This work presents the preliminary results of the Training Actions (TA) within the
VISIR+ project. First, the VISIR remote lab and the VISIR+ project are presented in
Sect. 2, with a special focus on the description of the TAs. The problem tackled in this
work, assessing TA impact, is explained in Sect. 3. In Sect. 4 some results are presented,
which are then analyzed in Sect. 5. Finally, in Sect. 6 some preliminary conclusions
are drawn.

2 VISIR Remote Lab and the VISIR+ Project

As previously stated, the VISIR system is a widely used remote lab employed mainly in
the study of electrical and electronic circuits, with increased popularity in the last 5 years
[13], mainly due to the intrinsic advantages of being a remote lab: accessibility,
availability and safety, since the users are not exposed to any electrical signal and, in turn,
cannot damage the physical equipment, thanks to a series of protection layers that
prevent this. The only physical interaction between the user and the real equipment


is through a computer interface (or, more recently, a smartphone or tablet) which
replicates a physical breadboard, showing all available components and the instrument
front panels (Fig. 1), and which enables the user to connect the desired circuit and analyze
its behavior with several instruments [14, 15].
The feeling of immersion in this remote lab is provided by the accurate replication
of both the breadboard and the instrument front panels [8]. Dragging an available
component with the mouse and positioning it on the breadboard replicates the action of
grabbing a component with the fingers and mounting it on the breadboard in a real lab.
It is mainly this similarity to a real lab working environment that leads users to consider
VISIR a complementary and useful resource to hands-on, simulation or other resources,
such as calculus or theory. Each of these types of resources can address different skills
and develop distinct competences during the learning process. As with any remote lab,
the major unpredictable drawback is the quality and reliability of the network/Internet
connection, which can become a handicap in regions or countries with a less efficient
Internet service provider. This aspect, although not directly related to the VISIR system
itself, has been considered a cause of loss of interest and subsequent lack of motivation
among teachers and students.
Some of the published work related to VISIR [8, 13, 16–21], although also concerned
with aspects related to the use of computer-based tools or systems, has been mainly
focused on the learning process, either from the student's or the teacher's perspective.
The overall perception of the students is positive, although in many cases this was not
directly reflected in their final results within the course, in spite of teachers' perception
of VISIR as a good complementary tool for hands-on practice. In fact, globally, VISIR
is considered a need-to-have tool whenever teachers are interested in diversifying their
teaching methodologies, thereby addressing an increasing number of students with
various learning styles, as well as easily motivating the new tech-savvy student
generation. On the other hand, teachers of this new generation have to share the same
technological interests, being willing to learn new technology-based approaches.
The VISIR+ project aims to define, develop and evaluate a set of educational
modules on electrical and electronic circuits theory and practice, comprising hands-on,
virtual and remote labs (VISIR), together with calculus, applying an enquiry-based
teaching and learning methodology. The stated aims of the project are threefold: to
contribute to providing the labor market with high-skilled professionals in the area of
Electrical and Electronics Engineering; to contribute to reducing student dropout; and
to contribute to increasing the appeal of STEM careers. The main goal is to spread the
usage of VISIR in Brazil and Argentina, first among the Latin American (LA) partners
and then replicating the phenomenon among their associated partners (AP). The specific
objectives aim at helping teachers to enrich their course curricula on electric and
electronic circuits with hands-on, simulation and remote labs, and at encouraging
teachers to scaffold students' learning and foster their autonomy. They also aim at
increasing students' access to lab experiments with no restrictions on time, schedules or
availability, providing them with opportunities to improve their development of
competences (namely when comparing experimental results from different resources)
and contributing to the support of their continuous assessment and success rates. Since
VISIR has been used by several Higher Education Institutions


(HEI), mainly in Europe, for the past years, the technical and didactical experience
gained by the five European institution partners (EU partners) in the project is now
being shared with five Latin American institutions (LA partners). Their characteristics
and main roles within the Project are described in Table 1.

Table 1. VISIR+ project partners institutions description


IPP-ISEP (Porto, PT). Public higher education institution; >18,500 students (6,500 engineering students); Eng. courses: 11 BSc + 11 MSc; 10 R&D units. Role: Leader; tutor of IFSC and UFSC.
UNED (Madrid, ES). E-learning academic institution; >260,000 students; 27 grad. studies including Eng. + 43 MSc + PhD programs. Role: Tutor of UNSE.
UDEUSTO (Bilbao, ES). Private non-profit university; 11,000 students; 23 BSc + 5 deg. + 39 MSc + 12 MSc + 10 PhD programs. Role: Tutor of UNR.
BTH (Karlskrona, SE). Public higher education institution; 5,900 students. Role: Technical support to all LA partners.
CUAS (Carinthia, AT). Public university; 1,700 students in engineering, health and business; 30 BSc + MSc. Role: Tutor of PUC-Rio.
IFSC (SC, BR). Public higher education institution; 24,000 students. Role: VISIR installation + implementation.
UFSC (SC, BR). Public federal university; 34,000 students. Role: VISIR installation + implementation.
PUC-Rio (RJ, BR). Private non-profit university; 15,000 students. Role: VISIR installation + implementation.
UNSE (Santiago del Estero, AR). National public university; >12,000 students (1,200 engineering students); 19 undergraduate + 4 postgraduate courses. Role: VISIR installation + implementation.
UNR (Rosario, AR). National public university; >74,500 students; 124 grad. courses + 10 undergrad. eng. courses + 19 postgraduate eng. courses. Role: VISIR installation + implementation.
ABENGE (BR). Engineering education association; >40 years; >4,000 members. Role: Dissemination & exploitation.
CONICET – IRICE (AR). National Council of Scientific and Technical Research; >50 years; >9,000 researchers. Role: Data collection and quality monitoring.


Apart from spreading VISIR usage, the project includes the purchase of a VISIR system
by each HEI in LA, fostering their sense of ownership and contributing to enlarging the
VISIR community. BTH is the EU partner in charge of technical support during
installation and of organizing a technical training workshop for each technical team.
Overall, the project's purpose is to enlarge the VISIR user community by progressively
enlarging its coverage: firstly, through a one-to-one relation between EU and LA
partners, where the EU partner acts as tutor; and secondly, through each LA HEI partner
working closely with its AP, which also implement VISIR in their courses. The AP
involved serve different education levels (higher, secondary and professional).
In order to guarantee the success of the implementations in all LA HEI partners and AP,
three TAs were defined at different project stages. The first two have been performed
by the EU partners and the third one will be carried out by each LA HEI partner in their
AP. The objective was to replicate and enlarge the community of usage, share
experiences, present its advantages and contextualize the implementations. In order to
better understand the outcomes obtained from the different approaches and the insights
gained, an external observer was present in all TAs. TA1 took place in Europe and its
goal was to introduce VISIR and its capabilities, with each EU partner sharing their
experience. TA2 was meant to specifically address teachers' needs (in each institution),
particularly of those implementing VISIR in their classes; it took place in each target
LA HEI. TA3 was designed to be delivered by LA HEI teachers who have used VISIR
and to take place in their AP, with the objective of sharing their own contextualized
experiences and involving more teachers. The TAs are sequential and intended to support
the different implementation phases: the 1st implementation phase is meant to be
unique, one course per LA HEI, and the 2nd phase is meant to spread into several
implementations. The 3rd phase is meant to occur both in the LA HEI partners and in the
AP. In sum, the major outcomes of the VISIR+ project will necessarily be: trained local
technicians, trained local teachers, the development of educational modules, and the
enlargement of the VISIR facilitators group.

3 Assessing TA’s Impact Methodology

At the present stage of VISIR+ Project development, not all planned actions have taken
place for each LA HEI partner, namely the VISIR acquisition. In most cases, economic
and administrative constraints delayed the acquisition procedure. Still, owing to the
characteristics of remote labs, the project actions "TA2" and "1st implementation" could
be performed successfully by using the EU partners' VISIR systems. In order to better
understand the TA impact in those circumstances and to foresee steps to correct or
redirect the development process, the results of the actions were assessed.

3.1 Focus
This paper presents the preliminary results of two Training Actions (TA1 and TA2). As
in most didactical implementations, teachers' perception of different tools and their
receptivity and motivation to change their classes strongly condition the outcomes. So, the

zamfira@unitbv.ro
380 M.C. Viegas et al.

global impact of these TA is probably a good indicator of teachers’ interest and the
success of subsequent implementations. The goal of this study is to assess each LA HEI
implementation and analyze their differences in order to adjust the following phase of
implementations. The research questions are: Which factors can be considered
important in terms of conditioning the TA and the didactic implementations using
VISIR? Is there any relation between TA characteristics and the implementations
designs?

3.2 Approach
The research methodology used is a multi-case study [22], in which five cases (LA
HEI partners) are presented and assessed. Due to the diversity of contexts, backgrounds
and experience, there were natural differences between the TAs, even though a common
base had been established. In order to characterize these differences, three categories
were defined according to their timeline: pre-TA, during TA and post-TA (Table 2).

Table 2. Categorization of potential factors of impact


Pre-TA: HEI type (public/private); HEI dimension (big/medium/small); pre-experience with ICT tools (large/some/none); pre-experience with remote labs (large/some/none); pre-experience with VISIR until TA2 (large/some/none); owned VISIR in TA2 (yes/no); TA dissemination among HEI (large/focused on target teachers)
During TA: EU team approach in TA (interactive/some interaction/interaction postponed to the end); TA duration (1 day to 4 days); TA language used (native/English)
Post-TA: number of ongoing implementations; number of teachers involved; number of students involved; VISIR usage in the course (sporadic/frequent/continuous)

3.3 Collected Data


The quantitative and qualitative data collected include information about each TA,
teachers' participation as attendees and their feedback (collected through a satisfaction
questionnaire (SQ) [22]). The SQ, designed by researchers at IRICE, had 8 closed
questions and 1 open question, with all closed questions expressed as statements about the TA (Table 3).


Table 3. Satisfaction Questionnaire used in the TA


Objectives: Q1. The objectives for the session were clearly explained. (Scale for Q1-Q6: 1. Unsatisfactory; 2. Below average; 3. Average; 4. Above average; 5. Excellent)
Interaction between lecturers and participants: Q2. The instructor raised questions and posed problems for workshop participants. Q3. The lecturer was sensitive to the participants' interests, priorities, and concerns. Q4. There was a genuine effort to get participants involved in discussions about the use of VISIR.
Time allotted: Q5. The time allotted for presentation and discussions was enough.
The use of technological equipment: Q6. The technological equipment enhanced the effectiveness of teaching and learning.
Participants' expectations: Q7. Overall, the presentation about the VISIR system met my expectations. (Scale: 1. Poor; 2. Fair; 3. Satisfactory; 4. Highly satisfactory; 5. Excellent)
Practical use: Q8. How difficult do you feel about the practice for VISIR? (Scale: 1. Too difficult; 2. Difficult; 3. Just right; 4. Easy; 5. Too easy)
Open question: Q9. Please write other comments you think are relevant for future workshops.

Regarding the activities that followed, the data also included the teachers' scheduled
implementations of VISIR in their classes, the number of teachers and students involved
in each case, and the kind of VISIR usage interaction that would be asked of them.

4 Training Actions and Implementations Results

4.1 Institutional Characteristics Pre-TA


Among the five cases there were several differences in terms of the starting point of
each LA HEI. This characterization identified potential factors of impact: the status of
the VISIR acquisition; Project and TA dissemination among the HEI staff and AP; and
past experience with ICT tools, remote labs and VISIR.


Regarding VISIR’s acquisition, only PUC-Rio was able to perform the planned
sequence of actions: TA1 ! VISIR acquisition ! Technical Workshop ! TA2 ! 1st
Implementation. In all other cases, the administrative constrains within each Institution,
Governmental and European directives, forced them to resort to an alternative plan. This
plan was made possible due to the resourcefulness of remote labs: each European tutors
made available their own VISIR system in order to allow to plan TA2 and didactical
implementations. In this case, the sequence was altered to: TA1 ! TA2 ! 1st Imple-
mentation ! VISIR acquisition ! Technical Workshop.
Concerning the Project and TA2 dissemination, there were cases where the partners
assumed a general dissemination to all potential teachers and, especially in TA2, profit
from the EU partners visit to enlarge the bounds with LA institutions and associated
partners. This was the case for instance in UFSC and UNSE. Others, like PUC-Rio or
IFSC interpreted that TA2 was meant for the teachers already motivated to use VISIR
and did not centered their efforts on encouraging more teachers to attend.
Finally, in terms of past experience with ICT tools, remote labs or VISIR, some
differences are worth mentioning. UFSC uses remote labs since 1997 and were
responsible for the development of the RexLab project [23]. IFSC and UNR have
already used VISIR together with their tutors (IPP-ISEP and UDEUSTO, respectively)
in the past [24]. Another example of previous experience is PUC-Rio, that have been
using ICT tools in education for 21 years with their Maxwell platform [25]. And even
though they did not have past experience with VISIR, PUC-Rio was the only HEI who
actually performed a pre-implementation (within the VISIR+ Project scope), using
their tutors’ VISIR system, before the project-planned implementations.

4.2 Training Actions Characteristics Results


TA1 was held in Europe (Karlskrona, Sweden) during the project kick-off meeting in
February 2016. The EU partners shared their experience with VISIR, presenting the
results of their implementations and addressing VISIR's added value as well as some
constraints to be aware of. In addition to this session there was a hands-on session. TA2
took place in the LA HEIs during August and September 2016. The time allotted in the
agendas varied. During the sessions, lecturers presented the Project and developed
technical, practical and didactical aspects of the VISIR remote lab. In general, all
attendees showed interest in VISIR. The rich outcomes of every experience exceed the
present overview, which focuses on Project development and quality indicators.
• Regarding TA's participation
In TA1, even though the number of teachers from each LA HEI who could
participate locally was limited, teachers were able to attend remotely (video streaming).
The number of TA participants and SQ answers can be observed in Table 4 and
Fig. 1. SQ1 was administered twice, once per part of the TA, which took place on
different days; since the two did not correspond to exactly the same sample, the average
was considered (Table 4).


Table 4. VISIR+ Project TA’s participation


Participation IFSC UFSC PUC-Rio UNSE UNR Total
TA1 2 5 4 4 3 18
SQ1 3 7 6 7 6 29
TA2 8 50 7 31 28 124
SQ2 8 31 7 22 19 87

• Regarding the language used in TA


In all institutions, lecturers and audience spoke the same language, with some
dialectal differences (Spanish and Rioplatense Spanish, Portuguese and Brazilian
Portuguese) which did not interfere with communication. Only at PUC-Rio were the
sessions themselves held in English, with the interactions with the audience in Portuguese.
This stands in great contrast to communication during TA1 (all in English), not just
regarding listening comprehension but, more importantly, regarding the possibility of
asking questions or sharing queries.
• Regarding TA's duration
The TAs varied in terms of time allocated to the event itself (Fig. 2): from 1 to 4 days.
Some partners thought it would also be useful to use some of the time to establish
contacts and schedule visits to other institutions (mainly associated partners).

Fig. 1. Distribution of TA2 participants (HEI partner and AP) between HEI holder.
Fig. 2. Distribution of TA2 time duration (in days) in each HEI.

• Regarding TA’s presentation approach used


In every TA2 session, lecturers were senior professionals, junior professionals or
both. Beyond these differences, all lecturers evidenced a sound professional background
and presentation skills. The training methodology was mainly expository, with varied
interaction with the audience about typical problems in the academic and technical
fields. In some HEIs, interaction between lecturers and audience was postponed to
the end (more noticeably at UFSC and IFSC) while in others, questions and


comments were made from the start (more meaningfully at UNR and UNSE, where
the presentations became more interactive). This distribution is shown in Fig. 3 for
the five cases. The questions and queries from the audience facilitated the observation
of attendees' attention and interest. The practical activities, such as accessing the lab,
designing circuits, measuring and analyzing results, got attendees involved in the lab
use straightaway, and their questions and queries were readily answered by the lecturers.
• Regarding TA’s attendees’ perception (quantitative assessment)
The global feedback on both TAs was highly positive, which evidences the
satisfaction of the LA HEIs. Even though EU partners were present in the TAs and some
also answered the satisfaction questionnaire, the results shown in Fig. 4 refer only to the
LA answers. In general, the global average level of satisfaction even grows at IFSC,
UFSC and PUC-Rio. Regarding the interaction of the presentations with the audience
(Q2), the level of satisfaction is maintained or increases in TA2. On the other hand,
participants' expectations (Q7) and the difficulty of practical use (Q8) are maintained or
decrease in almost every case. The answers to this last question show a more noticeable
decrease at UFSC, UNSE and UNR; PUC-Rio was the only one with a slight increase
in this question.
• Regarding TA’s attendee’s perception (qualitative assessment)
The purpose of the open question of the TA Satisfaction Questionnaire aimed at
eliciting qualitative information about positive and negative aspects of the TA. Four
main categories about aspects of the TA1 became salient after analyzing partici-
pants’ answers: content of presentations, VISIR practice, time management and
sharing experiences. As regards content of the presentations, most answers referred
to their relevance and clarity while some pointed out the fact that the content of each
presentation was discrete and failed to reach common objectives (“[…] the training
session is not the addition of few sessions, this must be a common session, with a set
of objectives. Each of these objectives will be reached by each presenter, and so
on”; “Maybe first pedagogy and after technology”). Most participants agreed that
more practice with VISIR Lab equipment could have been introduced: “We had no
practice hands-on”; “hands-on activity is mandatory to understand better the
possibilities”; “the time allotted for practice/hands-on was null”; “I would have
liked to have real practice on the setting up of components in the lab, not just using
it” (our translation); “Maybe a training session with PC’s doing
circuits/experimenting in VISIR could be really interesting”. Timing was the aspect
of the presentation which most participants referred to, although it was considered
from multiple perspectives: time assigned for each presentation slot (“Speakers did
not fit to their time slots, this disturbed the following speakers”; “Not enough time
for all presentations and questions”; “Time allocation was uneven, so some
speakers ended up with little time to explain their results” and “The time for the
conference was not enough for all”); time lost (“The time to set up the presentations
could be avoided by using the same computer for the entire session”) and time for
more actual practice with VISIR. Finally, most participants found the presentation
of EU HEI experiences an asset in the training action (“[positive] Present
experiments and experiences at different institutions using VISIR”), although some


argued that more opportunities for open interaction could have been provided (“Everything
was clearly explained, however we should have kind of round table to discuss
more about the experience the colleagues had had”). TA1 also had virtual
streaming. Virtual attendees found the videoconference positive (“interesting”,
“excellent”) although when answering Question 9, they referred mostly to technical
problems: sound problems; questions asked by participants were not heard; only
slides were shown during the presentation.

Fig. 3. Distribution of TA2 presentation approach in each HEI.
Fig. 4. Distribution of TA1 and TA2 satisfaction questionnaires results for each case.

It is worth mentioning that the perspectives in the answers from EU and LA participants
seem to vary widely as regards expectations and VISIR experience. Even when LA HEI
attendees had experience in the use of remote labs, most of them were only becoming
acquainted with the VISIR remote lab. On the other hand, EU participants not only had
a wealth of experience with the VISIR lab in their own institutions, but had also already
shared know-how with other Project EU partners by the time TA1 took place. Unlike
TA1, the TA2 open question yielded 71 answers out of 87, which represents 81.6% of
the survey total, and provided very rich information. On the positive side there are
references to the learning environment as regards the lecturers' assets (“kindness”,
“clarity”, “feedback”) and their presentations (“I could understand information about
VISIR and how to use it”; “Visual presentations were very effective”). Some positive
comments also refer to the value of the VISIR lab as a tool (“the potential usefulness of
VISIR could be observed”). The organization of the TAs and the possibility of attending
them was also pointed out. As to the aspects to be improved, most comments refer to
the need for more time to practice the use of the remote lab, to exchange experiences
and to explore the possibilities VISIR offers. The WiFi connection was also highlighted
as a key aspect that can facilitate or hinder lab use (“There was saturation in WiFi connection


making the online use slow”). Finally, some recommendations for extension of the
experience were given: “I hope VISIR could be taken to Angola, my country” and “The
lab has to be promoted to many departments of electric engineering careers”.

4.3 Implementation Results


TA2 took place in the middle of the second semester of the LA HEIs, as their academic
year starts in March and ends in the last week of November. By that time, classes
were already ongoing and most teachers had not yet had the opportunity to become
acquainted with VISIR. Nevertheless, and according to the Project definition, one course
implementation per HEI partner should be started after TA2. Table 5 summarizes the
implementations that are ongoing, presenting the course name, the number of teachers
and students involved in each course, and the type of VISIR usage.

Table 5. Implementations per LA HEI partner


Courses LA HEI Teachers team Students VISIR’s usage
Calculus IV UFSC 3 40 Sporadic
Probability and statistics 1 50 Sporadic
Electronics II IFSC 1a 13 Frequent
Basic electronics 13 Frequent
Amplifying structures 10 Frequent
Electric circuits I 1 31 Frequent
Electric circuits I 1 40 Frequent
Electricity I 1 50 Frequent
Electric and electronic circuits PUC-Rio 1 18 Frequent
Complementary activity 1 Eng. students Continuous
Physics of devices UNR 2 17 Sporadic
Electronica 1b UNSE 4a 15–20 Frequent
Electronica 2b 15–20 Frequent
a: same teachers team. b: to be implemented soon.

As can be observed, more than one implementation per LA HEI is already ongoing,
which positively exceeds the Project's request. In fact, at IFSC there are six simultaneous
implementations occurring. These results, with the amount and variety of courses (a
total of 10 courses involving 12 teachers and 282 students), are well above our
expectations for each individual LA HEI. That teachers still need to become better
acquainted with VISIR is evident in VISIR's usage: in 3 courses, VISIR was used in
only one lab class, to cover a specific topic, although in the majority it was already used
in several. UNSE did not feel comfortable starting this semester. PUC-Rio had already
made a pilot implementation last semester, after TA1, with the help of CUAS and the
EU VISIR system. This semester, after their VISIR acquisition, PUC-Rio shows a careful


integration of VISIR in the course, using material that was already designed to
accommodate several different resources with which students can complete their tasks
(in a similar way to how the VISIR+ project stimulates teachers to use hands-on,
simulators, remote labs and calculus simultaneously). PUC-Rio also implemented a
Complementary Activity using VISIR, open to all engineering students from various
backgrounds: an online course covering basic electricity concepts. So, even though not
large in number, the quality of these implementations should not be underestimated.
Several AP are also already using VISIR, but mainly to test it before implementing it
next academic year.

5 Analysis

5.1 How Pre-TA Factors Affected TA Satisfaction Level


In relation to the identified pre-TA factors, a statistical analysis was performed to
assess the significance of each factor in the different cases. Table 6 shows the significant
correlations (using a Chi-square test at a 95% confidence level). The questions related
to "Interaction between lecturers and participants" (Q2, Q3 and Q4) show almost total
independence from the identified factors. The same is visible for Q8, about the difficulty.
Q1, Q7, Q5 and Q6 are the questions that show dependency, mainly on the
pre-experience with remote labs in general and with VISIR in particular. Curiously, the
fact of already having their own VISIR system installed, or of using the VISIR system
of the EU partner, did not influence the results.

Table 6. TA’s satisfaction questionnaire cross analysis with identified factors


Factors: HEI | Pre-exp. ICT | Pre-exp. RL | Pre-exp. VISIR | Own VISIR | TA dissemination
Q1 p = 0.043 p = 0.009 p = 0.003 p = 0.003
Q2 p = 0.032
Q3
Q4
Q5 p = 0.042 p = 0.025 p = 0.025 p = 0.028
Q6 p = 0.009 p = 0.015
Q7 p = 0.004 p < 0.001 p < 0.001 p < 0.001
Q8 p = 0.013
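The kind of dependence test reported in Table 6 can be reproduced with a standard Chi-square test of independence, as sketched below. The contingency table shown is invented purely to illustrate the procedure, since the underlying response counts are not reported here.

from scipy.stats import chi2_contingency

# Rows: pre-experience with remote labs (none / some / large)
# Columns: Q7 satisfaction grouped as (<= 3, 4, 5); counts are illustrative only.
table = [
    [6, 4, 2],
    [3, 8, 6],
    [1, 5, 9],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
# Dependence is claimed at the 95% confidence level when p < 0.05.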

5.2 How TA Related Factors Affected TA Global Satisfaction Level


This analysis was made in terms of the language used by the EU partners in the TA,
the duration of the TA in each case, and the global approach used. The only factor that
shows a correlation with attendees' answers to the satisfaction questionnaire is the
duration of the TA (using a Chi-square test at a 95% confidence level): questions
Q1, Q6 and Q7 show p-values of 0.009, 0.0034 and 0.002, respectively.


5.3 How Pre-TA and TA Factors Affected Post-TA Actions (Implementations)
Regarding the influence the identified pre-TA factors had in promoting the development
of implementations in each LA HEI, the analysis does not show any statistical
dependency with respect to the number of courses, the teachers involved, the number of
students or VISIR usage. As for the identified TA factors (which differentiate the cases),
they seem to have no significant correlation with the type of implementations being
developed. As for the factors that can be inferred from the satisfaction questionnaire,
the level of TA participants' satisfaction shows some positive correlations with aspects
of the implementations: "number of courses" with Q5 (p = 0.022); "number of
students" with Q5 (p = 0.040) and Q6 (p = 0.008); and "VISIR usage" with Q1
(p = 0.036) and Q7 (p < 0.001). The correlation test used was the Fisher transformation
test (with a 95% confidence level).
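A Fisher-transformation significance test for a sample correlation can be sketched as follows; the input values are placeholders, and only the procedure itself is meant to mirror the test mentioned above.

import numpy as np
from scipy.stats import norm

def fisher_corr_test(r, n):
    """Two-sided p-value for H0: rho = 0, via Fisher's z transformation."""
    z = np.arctanh(r)              # Fisher transform of the sample correlation
    se = 1.0 / np.sqrt(n - 3)      # standard error of z
    return 2 * (1 - norm.cdf(abs(z) / se))

# e.g. correlation between Q5 scores and the number of courses over the 5 cases
print(fisher_corr_test(r=0.92, n=5))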

6 Discussion and Conclusions

The challenge undertaken in this work was to assess to what extent factors external and
internal to the VISIR+ project were affecting the ongoing actions; in particular, to what
extent the pre-project experience of the LA HEI partners, and the particular aspects that
made the TA different in each case, were significant in affecting teachers' involvement
and the development of a harmonious integration of VISIR in their course curricula.
Since the main turning point in each case was the TA, the analysis was divided into
three chronological stages in which the cases could be differentiated: pre-TA, during TA
and post-TA. The pre-TA characterization showed some differences between the cases,
namely: PUC-Rio was the only one that managed to acquire the VISIR system on
schedule; UFSC, UNSE and UNR performed a larger dissemination of the Project and
the TA among their HEI fellows, including their AP; PUC-Rio shows a high level of
performance as ICT tool users; UFSC has vast experience using remote labs; and UFSC,
IFSC and UNR have some experience with VISIR.
The TA characterization covered attendees' participation, the language used by the
EU partners, the TA duration and the presentation approach. The TAs were assessed
through a satisfaction questionnaire, through which quantitative and qualitative data were
collected. The major results are summarized as follows: UFSC, UNSE and UNR had a
large number of participants, including their AP. A significant difference in terms of the
language used by the EU partners was possible from TA1 to TA2: due to their affinities,
the EU partners performed their presentations (or, in the case of PUC-Rio, their
discussions) in the participants' native language. The time allocated to TA2 in each case
varied from one day (at UFSC and IFSC) to four days (at UNSE). At UFSC and IFSC the
presentation approach was less interactive (questions were mostly postponed to the end).
The global feedback on both TAs was highly positive. The global participants' perception
of the level of interaction of the presentations with the audience (Q2) is maintained or
even grows in some cases from TA1 to TA2. Regarding the level of achievement of
participants' expectations (Q7) and the perceived difficulty of using VISIR (Q8), the
results show a maintenance (of the lower


level in the satisfaction questionnaire) or even a decrease, more noticeably at UFSC,
UNSE and UNR. This is in accordance with the previous result in terms of their
dissemination efforts and their higher participation levels (a more significant number of
participants who had never interacted with VISIR). The qualitative overview of the
participants' perception shows that the TA should have allowed more time for questions
and practice.
The cases were categorized and the major results showed the following. Even though
TA2 was performed in the middle of the LA semester, teachers from all HEIs managed
to embrace the Project and start planning their implementations. In all cases there are
already ongoing experiences with students (with the exception of UNSE, which planned
its implementation but has not yet started). At IFSC there are six simultaneous course
implementations, and at UFSC and PUC-Rio there are two. The level of confidence
shown by these teachers can be considered high when they extend the implementation
to a great number of students or plan to use VISIR more frequently, as is the case at IFSC.
After cross-analyzing these results, the research questions can be addressed.
Which factors can be considered important in terms of conditioning the TA and the
didactic implementations using VISIR? The factors that most significantly affected the
TA were the "pre-experience with remote labs" and the "pre-experience with VISIR" in
particular. These two factors are significantly correlated with what participants reported
about the objectives of the TA (Q1), their expectations (Q7), the time allotted to the
TA (Q5) and their acknowledgment of VISIR as a useful tool to enhance the effectiveness
of teaching and learning (Q6). In fact, when the identified TA factor "duration of the TA"
(1–4 days) is analyzed, it is found to have a significant correlation with Q1, Q6 and Q7,
but not with Q5. This might seem odd at first glance, but it probably means that
participants who had more experience with VISIR or remote labs felt more comfortable
with the period of time allocated, while overall the TA duration was not perceived
significantly differently among the participants. Again, the "ownership of VISIR" factor
appears not to have a significant influence on the obtained results.
Is there a relation between the TA characteristics and the implementation designs? No
statistical correlation was found between the identified pre-TA factors or TA factors
and the ongoing implementations. Even though not statistically significant, the qualitative
analysis suggests that in HEIs with more historical know-how with ICT tools or similar
didactical implementations, teachers were more at ease modifying their courses to
include this new tool. However, when analyzing the TA data through the satisfaction
questionnaires, the post-TA factor "number of courses" is significantly correlated with
Q5, "number of students involved" is correlated with Q5 and Q6, and the "degree of
integration: VISIR usage" is correlated with Q7.
In conclusion, the pre-experience with remote labs or with VISIR and the TA
duration were the factors that most conditioned the outcomes of the TA. The teachers'
perceptions that most conditioned their involvement in developing their implementations
were related to the lack of time to practice and discuss their doubts during the TA (as
also referenced in the qualitative analysis) and to the teachers' awareness of the
effectiveness of VISIR for teaching and learning (as discussed in the literature about any
didactical tool, this awareness is fundamental [2]); finally, the more their expectations
for the TA were fulfilled, the more comfortable teachers felt about modifying their
course curricula.


Acknowledgment. The authors would like to acknowledge the support given by the European
Commission through grant 561735-EPP-1-2015-1-PT-EPPKA2-CBHE-JP.

References
1. Jara, C., Candelas, F., Puentes, S., Torres, F.: Hands-on experiences of undergraduate
students in Automatics and Robotics. Comput. Educ. 57, 2451–2461 (2011)
2. Feisel, L., Rosa, A.: The role of the laboratory in undergraduate engineering education.
J. Eng. Educ. 94, 121–130 (2005)
3. Brinson, J.: Learning outcome achievement in non-traditional (virtual and remote) versus
traditional (hands-on) laboratories: a review of the empirical research. Comput. Educ. 87,
218–237 (2015)
4. Corter, J., Nickerson, J., Esche, S., Chassapis, C., Im, S., Ma, J.: Constructing reality: a study
of remote, hands-on and simulated laboratories. ACM Trans. Comput. Hum. Interact. 14(2),
1–27 (2007)
5. Bochicchio, M., Longo, A.: Hands-on remote labs: collaborative web laboratories as a case
study for IT engineering classes. IEEE Trans. Learn. Technol. 2(4), 320–330 (2009)
6. Corter, J., Esche, S., Chassapis, C., Ma, J., Nickeson, J.: Process and learning outcomes from
remotely-operated, simulated and hands-on student laboratories. Comput. Educ. 57, 2054–
2067 (2011)
7. Ma, J., Nickerson, J.: Hands-on, simulated and remote laboratories: a comparative literature
review. ACM Comput. Surv. 38(3), 1–24 (2006)
8. Marques, A., Viegas, C., Costa-Lobo, C., Fidalgo, A., Alves, G., Rocha, J., Gustavsson, I.:
How remote labs impact on course outcomes: various practises using VISIR. IEEE Trans.
Educ. 57(3), 151–159 (2014)
9. Gustavsson, I.: On remote electronics experiments. In: Zubía, J.G., Alves, G.R. (eds.) Using
Remote labs in Education: Two Little Ducks in Remote Experimentation, pp. 157–176.
University of Deusto, Bilbao (2011)
10. Gomes, L., Bogosyan, S.: Current Trends in Remote Laboratories. IEEE Trans. Industr.
Electron. 56(12), 4744–4756 (2009)
11. [IAOE] Winners of the GOLC Online Laboratory Award, 11 February 2015. http://lists.
online-lists.org/pipermail/iaoe-members/2015-February/000120.html. Accessed 2016
12. Alves, G., Fidalgo, A., Marques, A., Viegas, C., Felgueiras, M., Costa, R., Lima, N.,
Garcia-Zubia, J., Hernández-Jayo, U., Castro, M., Díaz-Orueta, G., Pester, A., Zutin, D.,
Kulesza, W.: Spreading remote labs usage: A System – A Community – A Federation. In:
Proceedings of the 2nd International Conference of the Portuguese Society for Engineering
Education (CISPEE2016), Vila Real, Portugal (2016)
13. Lima, N., Viegas, C., Alves, G., Garcia-Peñalvo, F.: VISIR’s usage as a learning resource: a
review of the empirical research. In: Proceedings TEEM 2016 - Fourth International
Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM 2016),
Salamanca, Spain (2016)
14. Gustavsson, I., et al.: The VISIR project - an open source software initiative for distibuted
online laboratories. In: Remote Engineering & Virtual Instrumentation (REV 2007), June
2007
15. Gustavsson, I., Zackrisson, J., Nilsson, K., Garcia-Zubia, J., Hakansson, L., Claesson, I.,
Lago, T.: A flexible electronics laboratory with local and remote workbenches in a grid. Int.
J. Online Eng. (iJOE) 4(2), 12–16 (2008)


16. Alves, G., Viegas, C., Lima, N., Gustavsson, I.: Simultaneous usage of methods for the
development of experimental competences. Int. J. Hum. Cap. Inf. Technol. Prof. 7(1), 48–63
(2016)
17. Claesson, L., Hakansson, L.: Using an online remote laboratory for electrical experiments in
upper secondary education. Int. J. Online Eng. (iJOE) 8(S2), 24–30 (2012)
18. Fidalgo, A., Alves, G., Marques, A., Viegas, C., Costa-Lobo, C., Hernadez-Jayo, U.,
Garcia-Zubia, J., Gustavsson, I.: Adapting remote labs to learning scenarios: case studies
using VISIR and RemotElectLab. IEEE Revista Iberoamericana de Tecnologias del
Aprendizage 9(1), 33–39 (2014)
19. Garcia-Zubia, J.: Using VISIR experiments, subjects and students. Int. J. Online Eng. (iJOE)
7(2), 11–14 (2011). REV2011
20. Lima, N., Alves, G., Viegas, C., Gustavsson, I.: Combined efforts to develop students
experimental competences. In: Proceedings Exp. at 2015 3rd International Experimental
Conference, Ponta Delgada, Azores (2015)
21. Viegas, C., Lima, N., Alves, G., Gustavsson, I.: Improving students experimental
competences using simultaneous methods in class and assessments. In: TEEM 2014 -
Proceedings of the Second International Conference on Technological Ecosystems for
Enhancing Multiculturality, Salamanca, Spain (2014)
22. Cohen, L., Manion, L., Morrison, K.: Research Methods in Education, 6th edn. Routledge,
Taylor & Francis Group (2007)
23. UFSC. http://rexlab.ufsc.br. Accessed Nov 2016
24. Lerro, F., Orduña, P., Marchisio, S., García-Zubía, J.: Development of a remote laboratory
management system and integration with social networks. Int. J. Recent Contrib. Eng. Sci.
IT (iJES) 2(3), 33–37 (2014)
25. PUC-RIO. http://www.maxwell.vrac.puc-rio.br. Accessed Nov 2016

Laboratory Model of Coupled Electrical Drives
for Supervision and Control via Internet

Milan Matijević1(✉), Željko V. Despotović2, Miloš Milanović1,
Nikola Jović1, and Slobodan Vukosavić3

1 Faculty of Engineering, University of Kragujevac, Kragujevac, Serbia
matijevic@kg.ac.rs
2 Institute “Mihajlo Pupin”, University of Belgrade, Belgrade, Serbia
3 Faculty of Electrical Engineering, University of Belgrade, Belgrade, Serbia

Abstract. Servo drives are used in a wide range of industrial applications
including metal cutting, packaging, textiles, web-handling, automated assembly
and printing. Servomotors in a typical industrial environment are linked to their
end effectors by transmission mechanisms having a finite stiffness. The
elastically coupled two-mass motor/load system introduces finite zeros and a
pair of conjugate complex poles in the transfer function of the system plant and
thus brings up the problem of mechanical resonance. The resonance phenomenon
may provoke weakly damped oscillations of the link. Vibration suppression and
disturbance rejection in torsional systems are important issues in high-performance
motion control. For experimental verification of these phenomena, a laboratory
model of coupled electrical drives has been developed at the Faculty of Engineering,
University of Kragujevac. The paper describes the development and potential use of
this laboratory model for engineering education and training. The experimental setup
is very expensive by Serbian standards and unique at the Faculty of Engineering. In
order to enable wider access to the laboratory model and to exemplary teaching/learning
materials concerning it, the laboratory model is integrated into a WEB laboratory.

Keywords: Electric drive control · Disturbance rejection · Remote laboratories ·
High-performance speed servo drives · Torsional resonance · Oscillation suppression

1 Introduction

Control of electrical drives offers insight into electric drives and their usage in a motion
control environment [1, 2]. It provides links among electrical machine and control theory,
practical hardware aspects, programming issues, and application-specific problems [1].
Most machining centers, industrial robots, servomechanisms and other rotating
machinery have geared reduction mechanisms between the output shafts of the motors and
the driven machine parts. Insufficient torsional stiffness of the geared reduction
mechanism often induces transient vibrations, mainly related to eigenvalues of the
mechanical parts in the lower-frequency range, when the motor starts or stops [3]. Elastic
couplings and joints within the machine system are major impediments to the

performance enhancement, since high loop gains often destabilize torsional resonance
modes associated with the transmission flexibility. The presence of torsional resonance in
motion control systems limits the maximum achievable performance and causes unde-
sirable oscillations in the control system response [3, 4]. Vibration suppression of
rotating machinery is an important engineering problem [3, 4].
This paper presents a laboratory model which should help users to understand the
key elements of motion control systems, introduce them to hands-on practice with
industrial servo drives, analyze and design discrete-time speed and position controllers,
set adjustable feedback parameters, and evaluate closed-loop performance, including
the suppression of torsional resonance phenomena [4, 5]. The laboratory model covers
a wide span of applications for problem-based learning and research [5].
Laboratories are an inherent part of engineering education. Well-designed student
experimental work bridges the gap between theoretical analysis and industrial practice
[5–11]. But many universities worldwide, especially in poorer countries, cannot afford
adequate laboratories for engineering education because of the high cost of laboratory
equipment. One solution is a blended learning approach and the development of WEB
laboratories, where laboratory resources can be shared among many users from any
place at any time [5, 6, 11].
This paper also presents the WEB pages of the designed laboratory model, integrated
within an Internet-mediated laboratory, with the purpose of supporting teaching/learning
about electrical and servo drives used in mechatronics applications.

2 Torsional Resonance and Servo System with Flexible Coupling

The problem of torsional oscillation suppression and disturbance rejection in flexible
systems originates in steel rolling mill systems, where the load is coupled to the driving
motor by a long shaft. The small elasticity of the shaft is magnified and has a vibrational
effect on the load speed. Vibrations caused by the load impact and the step input
endanger the integrity of the mechanical structure and deteriorate the product quality.
This vibration is not only undesirable but also, in some cases, the origin of instability
of the system. As the newly required speed response is very close to the first resonant
frequency, conventional controllers are no longer effective [3, 4].
Resonance is a steady-state phenomenon that occurs when the motor’s natural resonant
frequencies are excited at particular velocities. For example, if we slowly increase the
motor’s speed, we may notice “rough” spots at certain speeds. This “roughness” is
resonance (Fig. 1). Resonance is also affected by the load: some loads are resonant and can
make motor resonance worse, while other loads can damp motor resonance. Unlike
resonance, ringing is a transient phenomenon that can be caused either by accelerating or
by decelerating to a reference velocity. Namely, when controlled to accelerate quickly to a
given velocity, the motor shaft can “ring” about that velocity, oscillating back and
forth. Like resonance, ringing causes error in the motor shaft position. Ringing (or
vibration) can also cause audible noise [3].


Fig. 1. Illustration of a resonance phenomenon in servo drives (reference vs. actual velocity over time).

In order to solve these problems, system designers will sometimes attach a damping
load, such as an inertial damper, to the back of the motor. However, such a load has the
undesired effects of decreasing overall performance and increasing system cost [3]. On
the other hand, to overcome the problem, various control strategies have been proposed,
which may be divided into the following three groups [4]: (1) control strategies based on
the direct measurement of motor- and load-side variables, (2) strategies involving only
one feedback device attached to the motor and an observer that estimates the remaining
states, and (3) vibration suppression strategies based upon notch filtering and
phase-lead compensation applied in conventional control structures.
Designers of the control part of a servo system usually use the simplest motor/load
models, which carry no information about resonance modes and fast dynamics. A more
realistic model of an AC motor with load is illustrated in Figs. 2 and 3, as a two-mass
motor/load system with flexible coupling [4].

Fig. 2. Flexible coupling of the motor shaft and load (motor inertia Jm, load inertia Jl, shaft stiffness cs, friction bv; torques Mem, Ml; angles Θm, Θl; speeds ωm, ωl)

Fig. 3. Block diagram of the servo system’s plant with flexible coupling


The electromagnetic torque Mem is the control variable, and the torque on the loaded shaft
Ml represents the disturbance. The motor inertia Jm and load inertia Jl are coupled by the
shaft or the transmission system having a finite stiffness coefficient cs. The friction
coefficient bv generally assumes very low values, giving rise to weakly damped
mechanical oscillations [4]. The torsional torque Mo equals the load torque Ml only in
the steady state. During transients, the speeds of motor and load differ, and the torsional
torque Mo is given by

$$M_o = c_s\,\Delta\theta + b_v\,\Delta\omega \qquad (1)$$

Contrary to the traditional model $W_m(s) = 1/((J_l + J_m)s)$, if the shaft sensor is
mounted on the motor, the transfer function of the mechanical subsystem is defined by

$$W_m(s) = \frac{\omega_m(s)}{M_{em}(s)} = \frac{1}{(J_m + J_l)s}\,\frac{1 + \frac{2\zeta_z}{\omega_z}s + \frac{1}{\omega_z^2}s^2}{1 + \frac{2\zeta_p}{\omega_p}s + \frac{1}{\omega_p^2}s^2} \qquad (2)$$

where the undamped natural frequencies (ωp, ωz) and relative damping coefficients (ζp, ζz)
are given by

$$\omega_p = \sqrt{\frac{c_s(J_m + J_l)}{J_m J_l}},\quad \omega_z = \sqrt{\frac{c_s}{J_l}},\quad \zeta_p = \sqrt{\frac{b_v^2(J_m + J_l)}{4 c_s J_m J_l}},\quad \zeta_z = \sqrt{\frac{b_v^2}{4 c_s J_l}} \qquad (3)$$

The undamped natural frequencies ωp and ωz of the pole- and zero-pairs in (2) are
referred to as the resonance and antiresonance frequencies [4], and their quotient is known
as the resonance ratio

$$R_r = \frac{\omega_p}{\omega_z} = \sqrt{1 + \frac{J_l}{J_m}} \qquad (4)$$

In the case under consideration, a low value of the resonance ratio reduces the influence
of the torsional load on the dynamics of the speed control loop. With Jm ≫ Jl, oscillations of
the torsional torque are filtered by the large motor inertia Jm and their influence on the control
of the motor speed becomes smaller. A well-damped control of θm and ωm is favorable, but
most applications require fast and precise control of the load variables θl and ωl [3].
Also, in that case, the estimation of resonance modes from the detected signals (θm and
ωm) is not possible, and the load speed and position might exhibit weakly damped
oscillations that cannot be disclosed and compensated from the feedback signals [3].
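As a quick numerical illustration of Eqs. (2)–(4), the sketch below computes the resonance and antiresonance frequencies, the damping coefficients and the resonance ratio, using the parameter values of the illustrative example in Sect. 3.1. It is only a helper calculation added here for clarity, not part of the original laboratory toolchain.

```python
import math

# Two-mass system parameters (values of the illustrative example in Sect. 3.1)
J_m = 0.000620   # motor inertia [kgm^2]
J_l = 0.000220   # load inertia [kgm^2]
c_s = 350.0      # shaft stiffness [Nm/rad]
b_v = 0.004      # viscous friction [Nms/rad]

# Resonance and antiresonance frequencies, Eq. (3)
w_p = math.sqrt(c_s * (J_m + J_l) / (J_m * J_l))
w_z = math.sqrt(c_s / J_l)

# Relative damping coefficients, Eq. (3)
zeta_p = math.sqrt(b_v**2 * (J_m + J_l) / (4 * c_s * J_m * J_l))
zeta_z = math.sqrt(b_v**2 / (4 * c_s * J_l))

# Resonance ratio, Eq. (4); equals sqrt(1 + J_l / J_m)
R_r = w_p / w_z

print(f"w_p = {w_p:.1f} rad/s, w_z = {w_z:.1f} rad/s, R_r = {R_r:.3f}")
print(f"zeta_p = {zeta_p:.4f}, zeta_z = {zeta_z:.4f}")
```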
In the case that the sensor is mounted on the load shaft, the mechanical subsystem of
the drive has the transfer function Wl(s) given by


$$W_l(s) = \frac{\omega_l(s)}{M_{em}(s)} = \frac{1}{(J_m + J_l)s}\,\frac{1 + \frac{2\zeta_z}{\omega_z}s}{1 + \frac{2\zeta_p}{\omega_p}s + \frac{1}{\omega_p^2}s^2} \qquad (5)$$

where the undamped natural frequencies (ωp, ωz) and relative damping coefficients (ζp, ζz)
are given by (3), too.

3 Control Strategies for Compensating Torsional Resonance

Many controllers already exist in the field of motion control, but almost all of them are
designed by assuming an ideal, rigid transmission train [1–4]. However, the desired
speed-loop bandwidth in modern machining centers approaches the frequency of torsional
resonance and coincides, at the same time, with the most disturbing statistical and
deterministic noises [3].
Under these conditions, PI control laws are not suitable. A standard improvement of
conventional motion control laws and structures is based on the inclusion of an
anti-resonant compensator, as shown in Fig. 4.

Fig. 4. System with antiresonant compensator [4] (control law and anti-resonant compensator in series with the plant; reference ωref, disturbance Ml, outputs ωm, θm and ωl, θl)

The notch filter compensator

$$W_{notch}(s) = \frac{s^2 + 2\zeta_{zz}\omega_{nf}\,s + \omega_{nf}^2}{s^2 + 2\zeta_{pp}\omega_{nf}\,s + \omega_{nf}^2},\qquad \zeta_{pp} > \zeta_{zz} \qquad (6)$$

used as the antiresonance compensator (Fig. 4) is the one most frequently used in practice.
The notch filter zeros cancel the critical poles (of the torsional load), while the poles of the
filter become a new pair of conjugate complex poles with increased relative damping
(ζpp > ζzz). The digital implementation of the notch filter (ζpp = ζp, ζzz = ζz, ωnf = ωp) is
given by the discrete transfer function

$$W_{notch}(z^{-1}) = \frac{e^{-(\zeta_p-\zeta_z)\omega_p T} - 2e^{-\zeta_p\omega_p T}\cos\!\left(\omega_p T\sqrt{1-\zeta_z^2}\right)z^{-1} + e^{-(\zeta_p+\zeta_z)\omega_p T}z^{-2}}{1 - 2e^{-\zeta_p\omega_p T}\cos\!\left(\omega_p T\sqrt{1-\zeta_p^2}\right)z^{-1} + e^{-2\zeta_p\omega_p T}z^{-2}} \qquad (7)$$
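To make the discretization in (7) concrete, the following sketch computes the numerator and denominator coefficients of the digital notch filter from ζp, ζz, ωp and the sampling time T. It is a minimal illustration only; the example values in the call are placeholders (the 156 Hz resonance mentioned later in Sect. 4 is used merely as a plausible frequency).

```python
import math

def notch_coefficients(zeta_p, zeta_z, w_p, T):
    """Coefficients of the discrete notch filter in Eq. (7).

    Returns (b, a) for W_notch(z^-1) = (b0 + b1*z^-1 + b2*z^-2) / (1 + a1*z^-1 + a2*z^-2).
    """
    b0 = math.exp(-(zeta_p - zeta_z) * w_p * T)
    b1 = -2.0 * math.exp(-zeta_p * w_p * T) * math.cos(w_p * T * math.sqrt(1 - zeta_z**2))
    b2 = math.exp(-(zeta_p + zeta_z) * w_p * T)
    a1 = -2.0 * math.exp(-zeta_p * w_p * T) * math.cos(w_p * T * math.sqrt(1 - zeta_p**2))
    a2 = math.exp(-2.0 * zeta_p * w_p * T)
    return [b0, b1, b2], [1.0, a1, a2]

# Placeholder tuning values, chosen only to show the call signature
b, a = notch_coefficients(zeta_p=0.5, zeta_z=0.01, w_p=2 * math.pi * 156, T=0.001)
print("numerator:", b)
print("denominator:", a)
```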


For an exact cancellation of the resonance modes, both the resonance frequency and the
damping factor must be known when tuning the parameters of the notch filter [4]. But
the exact location of the critical poles is unknown and, thus, the cancellation is generally
imprecise. The notch filter (6) suppresses the resonant mode by the ratio ζpp/ζzz. Since
a low damping coefficient of the zeros greatly increases the sensitivity to parameter
variations, the ratio ζpp/ζzz is limited. Hence, the excitation of resonance modes can only
be reduced, but not eliminated completely, by the serial notch compensator. The notch
compensator is very sensitive to parameter variation, which presents a serious problem
in tuning and implementing the notch filter [4].

Fig. 5. Resonance ratio control [3] (PI speed controller acting on the torsional load; a disturbance observer with filters 1/(τs + 1) feeds back a fraction 1 − K of the estimated disturbance)

Fig. 6. IMPACT structure of digitally controlled speed servosystem [3] (polynomials Pr(z^-1), 1/R(z^-1) and Py(z^-1), internal plant model z^-1 Puo(z^-1) with Q°(z^-1), and predictive filter with parameter cp)

In the literature, the proposed antiresonant control strategies include model-based control
approaches, control techniques based on disturbance observers (Fig. 5), approaches
based on the IMPACT structure (Fig. 6) [3], a two-degree-of-freedom structure based on
H2 control, approaches based on antiresonant compensators (Fig. 4) [4], the RST controller
[12], and robust control and vibration suppression control in two-mass drive systems [13–16].
A good review of vibration suppression control strategies is given in [4], where a new
antiresonance compensator, in effect an FIR filter, is also proposed:

$$W_{NF}(z^{-1}) = \frac{1 + z^{-n}}{2},\qquad n = \frac{T_{osc}}{2T} \qquad (8)$$
where n stands for the ratio between the resonance mode half-period (Tosc/2) and


the sampling time (T) of the discrete-time controller. The oscillation period of the
resonance mode Tosc is given by

$$T_{osc} = \frac{2\pi}{\omega_p\sqrt{1-\zeta_p^2}} \qquad (9)$$

and it is an adjustable parameter of the FIR filter (8) that can be determined experimentally.
The idea behind the synthesis of filter (8) is elaborated in [4]. The conceived cascade
anti-resonance compensator is simpler, less sensitive to parameter changes, and requires
the setting of only one parameter, but the parameter n has to be identified precisely.
The theoretical suppression at the frequency ωosc = 1/Tosc is infinite, rather than the
finite suppression value of the notch filter.
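A minimal sketch of applying the FIR compensator (8) to a sampled reference signal is shown below. The oscillation period and sampling time are placeholder values chosen so that n from (8) is an integer, consistent with the sampling rule (11) introduced later; this is an illustration, not the laboratory implementation.

```python
from collections import deque

class FIRAntiResonance:
    """Compensator W_NF(z^-1) = (1 + z^-n) / 2 from Eq. (8)."""

    def __init__(self, T_osc, T):
        self.n = round(T_osc / (2 * T))            # delay in samples (half the resonance period)
        self.buf = deque([0.0] * self.n, maxlen=self.n)

    def step(self, u):
        delayed = self.buf[0]                      # sample u delayed by n steps
        self.buf.append(u)
        return 0.5 * (u + delayed)                 # average of current and delayed samples

# Example: T_osc = 6.4 ms, T = 0.8 ms -> n = 4 (placeholder values)
f = FIRAntiResonance(T_osc=0.0064, T=0.0008)
print([f.step(u) for u in [1.0] * 10])             # response to a unit step
```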
The resonance ratio control has been proposed as an improvement of model-based control
techniques (i.e. model following control, application of a disturbance observer, time-derivative
feedback, state feedback control). The resonance ratio is defined by relation (4) and should
be about √5 for effective vibration suppression [3]. In [3], the resonance ratio control based
on a fast disturbance observer (see Fig. 5), with the optimal resonance ratio 0.8√5, is
mentioned as a simple and practical strategy. In conventional disturbance observer
applications, 100% of the estimated disturbance is fed back. In the case of Fig. 5, a fraction
1 − K of the estimated disturbance is used. The parameter K (0 < K ≤ 1) and the time
constant τ (which defines the observer’s cutoff frequency) are adjustable parameters for
vibration suppression. But, as previously noted, this control strategy cannot efficiently
provide vibration suppression on the load side.
The synthesis of the IMPACT structure starts from the following plant model (see Fig. 2 –
the flexible coupling is neglected):

$$W_{ou}(s) = \frac{1}{Js} = \frac{1}{(J_m + J_l)s},\qquad W_{ou}(z^{-1}) = \frac{T}{J_m + J_l}\,\frac{z^{-1}}{1 - z^{-1}} \qquad (10)$$

The selection of the sampling period is coupled with the period of the torsional oscillation, so

$$T = \frac{T_{osc}}{8} = \frac{\pi}{4\omega_p\sqrt{1-\zeta_p^2}} \qquad (11)$$

Namely, in the IMPACT structure, the control plant is given by its nominal discrete
model

$$W^{o}(z^{-1}) = \frac{z^{-1-k}\,P_{ou}(z^{-1})}{Q^{o}(z^{-1})} \qquad (12)$$

which is included into the control part of the IMPACT structure as a two-input internal
plant model. According to the standard procedure of IMPACT structure synthesis, for a


minimum-phase control plant, the polynomial R(z^-1) should be taken as R(z^-1) = Pou(z^-1).
The polynomials Pr(z^-1) and Py(z^-1) in the main external loop of the controlling structure
in Fig. 6 determine the dynamic behavior of the closed-loop system, and these polynomials are
determined independently from the design of the local inner control loop of the structure.
The desired pole spectrum of the closed-loop control system may be specified by the desired
relative damping coefficient ζ and undamped natural frequency ωn of the system dominant
poles. In doing so, and taking into account the required zero steady-state error for a step
reference signal, the desired second-order discrete closed-loop system transfer function becomes

$$G_{de}(z^{-1}) = \frac{\left(1 - (z_1 + z_2) + z_1 z_2\right)z^{-2}}{1 - (z_1 + z_2)z^{-1} + z_1 z_2\,z^{-2}} = \frac{z^{-1}P_r(z^{-1})}{Q^{o}(z^{-1}) + z^{-1}P_y(z^{-1})}$$

where

$$z_{1/2} = e^{s_{1/2}T},\qquad s_{1/2} = -\zeta\omega_n \pm j\omega_n\sqrt{1-\zeta^2} \qquad (13)$$

The polynomials Pr(z^-1) and Py(z^-1) are then calculated in a straightforward manner from

$$P_r(z^{-1}) = \left(1 - (z_1 + z_2) + z_1 z_2\right)z^{-1},\qquad P_y(z^{-1}) = 1 - (z_1 + z_2) + z_1 z_2\,z^{-1} \qquad (14)$$
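As an illustration of Eqs. (11), (13) and (14), the sketch below computes the sampling period and the controller polynomials Pr(z^-1) and Py(z^-1) from a chosen dominant pole pair. The damping and natural frequency are those of the illustrative example in Sect. 3.1, while the plant resonance data are placeholder values used only to exercise the formulas.

```python
import cmath
import math

# Dominant-pole specification (values from the illustrative example, Sect. 3.1)
zeta, w_n = 0.7, 400.0             # relative damping, natural frequency [rad/s]
w_p, zeta_p = 1500.0, 0.01         # plant resonance data (placeholder values)

# Sampling period from Eq. (11)
T = math.pi / (4 * w_p * math.sqrt(1 - zeta_p**2))

# Desired dominant poles mapped to the z-domain, Eq. (13)
s1 = complex(-zeta * w_n, w_n * math.sqrt(1 - zeta**2))
z1 = cmath.exp(s1 * T)
z2 = z1.conjugate()

sum_z = (z1 + z2).real             # z1 + z2 (real number)
prod_z = (z1 * z2).real            # z1 * z2 (real number)

# Controller polynomials, Eq. (14), as coefficients of z^0 and z^-1
P_r = [0.0, 1 - sum_z + prod_z]    # Pr(z^-1) = (1 - (z1+z2) + z1*z2) * z^-1
P_y = [1 - sum_z, prod_z]          # Py(z^-1) = 1 - (z1+z2) + z1*z2 * z^-1

print(f"T = {T * 1000:.3f} ms")
print("z1 =", z1)
print("P_r coefficients:", P_r, " P_y coefficients:", P_y)
```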

The role of the RLSN (Recursive Linear Smoothed Newton) predictive filter, with the
user-friendly parameter cp < 1, in the inner loop of the IMPACT structure in Fig. 6 is
explained in [3]. Adjustment of the parameter cp simply influences the efficiency of
absorbing disturbance effects, the expansion of the area of robust stability, and the
suppression of torsional oscillations. The presented structure is simple, with a small number
of adjustable parameters that can easily be set to achieve the desired robustness, filtering,
and dynamic properties of the system.

3.1 Illustrative Example


The proposed control algorithms based on the IMPACT structure have been tested by simulation
trials, under the same conditions as in [4]. Two identical motors are interconnected by an
elastic hollow shaft; the motors are independently controlled and used as a motor and a load.
The relevant data are (see Figs. 2 and 3): Jm = 0.000620 kgm², Jl = 0.000220 kgm²,
cs = 350 Nm/rad, bv = 0.004 Nms/rad. The desired closed-loop system transfer function is
specified by the undamped natural frequency ωn = 400 rad/s and relative damping coefficient
ζ = 0.7, in accordance with the time response shown in Fig. 7.


Fig. 7. Operation of IMPACT structure for cp = 0.2, ωn = 400 rad/s, ζ = 0.7, ωr(t) = 3h(t−0.05) [rad/s], Ml(t) = 1h(t−0.1) [Nm] (time responses of ωm, ωl, Me and Mo)

4 Laboratory Model of Coupled Electrical Drives

Mechanical resonance is a common problem in servo systems, and falls into two categories:
low-frequency and high-frequency. High-frequency resonance usually causes instability at
the natural frequency of the mechanical system, typically between 500 Hz and 1200 Hz.
Low-frequency resonance occurs more often in general industrial machines, at the first
phase crossover, typically between 200 Hz and 400 Hz. Standard servo control laws are
structured for rigidly coupled loads. However, when such a control law is applied to a
flexibly coupled servo system, the system performance can be unsatisfactory. Typical
experimental set-ups for testing control laws for electrical drives with flexible coupling
consist of identical motors interconnected by an elastic shaft. For example, in a number of
papers by T. Orlowska-Kowalska et al. in this area, such as [14], the laboratory model in
Fig. 8 is used.
The laboratory model presented in Fig. 8 is composed of a DC motor, driven by a static
converter (four-quadrant chopper), and a DC loading machine. The motor is coupled to the
load machine by an elastic shaft (a steel shaft of 5 mm diameter and 600 mm length). The
moment of inertia can be varied by a flywheel, so that the inertia ratio of the motor to the
load machine varies from 0.125 to 8. Both motors had a nominal power of 500 W each. The
speed and position of both motors were measured by incremental encoders (5000 pulses per
rotation). The mechanical system has a natural frequency of approximately 9.5 Hz. The
control and estimation algorithms were implemented using a digital signal processor with
dSPACE software.
In [4], the laboratory model for experimental verification of anti-resonant control laws in
servo systems with flexible coupling is also arranged according to the block diagram shown
in Fig. 3. That laboratory model consists of two synchronous permanent magnet servomotors
connected by a flexible hollow shaft. Torsional oscillations upon rapid load steps are found
at 156 Hz, decaying to zero with a time constant of approximately 200 ms [4]. Both motors
are fed and controlled by a digital servo amplifier capable of both the torque- and the
speed-control operating modes. The motor M1 (Fig. 3) is speed controlled, while the motor
M2 is set in the torque-control mode


Fig. 8. The mechanical part of the laboratory model (a); the general view of the laboratory model (b); schematic diagram of the experimental setup in [14] (c)

and used as a programmable load. The motors are equipped with electromagnetic resolvers.
The resolver signals are decoded into the motor- and load-side positions within the
servoamplifier’s control section. The R/D converter’s bandwidth is 1 kHz and its resolution
is 12 bits. Having two shaft sensors, one at each end of the flexible coupling, made it possible
to perform experiments with motor-side feedback and load-side feedback [4].
Based on the laboratory model described in [4], the laboratory model of coupled electrical
drives shown in Fig. 9 has been developed at the Faculty of Engineering, University of
Kragujevac, from industrial automation components.
The laboratory model (Fig. 9, [5]) consists of
• A desktop computer equipped with an Internet/Ethernet connection, a College Teaching
License for LabVIEW (NI Academic Site License), Matlab/Simulink and an NI PCI 6229
interface for control, measurement and data acquisition.
• Two servo drives (i.e. servo amplifiers) Yaskawa Omron SGDH-04AE-0Y [17, 18].
• Two AC synchronous permanent magnet servomotors Yaskawa Omron SGMAH
04A1A61D-OY [17, 18].
• A shaft or flexible shaft and inertial masses on the shaft for coupling the two motors,
as in the experimental setup described in [4].


Fig. 9. Laboratory model of the servo system with flexible coupling at the Centre for Applied Automatic Control, Faculty of Engineering, University of Kragujevac

5 Laboratory Practice and Web Lab Support

The laboratory model (Fig. 9, [5]) is equipped with a web-published practicum for
students [5]. These practices are aimed at introducing the users to motion control
systems [1, 5, 17, 18]. Similar to [11], all practices have several parts that users have to
complete in a logical sequential order: a theoretical description of the equipment and its
operation mode, a scheme of the practice that has to be done, and an explanation of the
tasks that have to be developed during the laboratory practice. The practices are detailed
briefly below:
• Practice 1: Configuration of the laboratory model. Introduction to basic servo and
motion control technology concepts. Components of motion control systems and
AC servos are discussed within the context of Yaskawa components and applications:
servomotors, servo amplifiers, wiring, trial operation [5, 17, 18]. Students are
introduced to the basic principles and technologies for connecting the components
used. Safety aspects of the design of servo systems are also covered.
• Practice 2: Safe and proper procedures for commissioning a servo system.
A practical overview and operational aspects of AC servo systems and their
implementation. Introduction to the servo amplifier’s operation modes: how to tune the
different operation modes related to the reference torque or reference angular velocity.
Students in small groups, of up to 4 persons, have hands-on training in setting the
desired operation mode of the servoamplifier and running the laboratory model, without
coupling between the servomotors, at a desired angular velocity (the two decoupled
servosystems are used independently and the command/reference signal of the desired
angular velocity per servomotor is set by a potentiometer) [5, 17, 18].
• Practice 3: Procedures for servo amplifier settings [5, 17, 18]. Introduction to the list of
CN1 terminals, parameters, I/O signal names and functions. Parameter configurations
for 3.1. Function selection constants, 3.2. Servo gain and other constants, 3.3. Position
control constants, 3.4. Speed control constants, 3.5. Torque control constants, 3.6.
Sequence constants, 3.7. Reserved parameters. Servo amplifier settings and speed,
position and torque control loop performance.


• Practice 4: Integration of the NI PCI 6229 card in the laboratory system for
control/reference signal settings and data acquisition by Matlab/Simulink and/or
LabVIEW. The practice goal: measurement and acquisition of the step angular velocity
response of the servomotors. The two decoupled servosystems are used for independent
experiments and the command/reference signal is set via Matlab/Simulink or LabVIEW.
• Practice 5: Modeling and identification. The students should: (a) perform the
measurement in order to obtain transient characteristics, (b) choose the structure of a
mathematical model which provides a good approximation of the measured system,
(c) based on the step response of the speed servo system (servo amplifier and servo motor),
perform the identification of the unknown parameters of the selected model structure,
(d) validate the final form of the mathematical model by comparing its simulated
characteristics with data from the real plant.
• Practice 6: Design of speed and position servo systems via Matlab/Simulink or
LabVIEW on the real laboratory system (the plant is a servo amplifier with servomotor).
The students should choose: (a) an appropriate form of controller with integral action
(PI, PID) and a method for parameter selection and tuning for the speed servo system,
(b) a P controller for the design of the position servo system.
• Practice 7: Servo adjustment. Speed/Torque/Position control mode and servo
adjustments. Smooth operation: using the soft start function; smoothing; adjusting gain;
adjusting offset; setting the torque reference filter time; notch filter. High-speed
positioning: setting servo gain; using feed-forward control; using proportional control;
setting speed bias; using the mode switch; speed feedback compensation. Auto-tuning:
online auto-tuning; mechanical rigidity settings for online auto-tuning; saving the results
of online auto-tuning; parameters related to online auto-tuning. Servo gain adjustment:
servo gain parameters; basic rules of gain adjustment; making manual adjustments; gain
setting reference values. Analog monitor.
• Practice 8: Coupled electrical drives (see Fig. 9). The first motor (Figs. 9 and 3) is
speed controlled, while the second motor is set in the torque-control mode and used
as a programmable load (see Sects. 2 and 3).
Practicums with theoretical introductions for the listed laboratory exercises are given in
[5] and supported by web pages in the web laboratory [19].

6 Web Laboratory and Remote Practice

The software for this Web laboratory is written using the Node.JS JavaScript framework.
Real-time communication between the user and the server is ensured by the Socket.IO library.
Node.JS acts as a middleman between the user and the laboratory model: the user communicates
with the Node.JS server through the Socket.IO protocol, and Node.JS communicates with
Matlab/Simulink via TCP/IP sockets (Fig. 10).
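As a rough illustration of the TCP/IP leg of this chain (not the actual laboratory code), the sketch below sends one controller parameter as a numeric value to a TCP server such as the one exposed by a Simulink TCP/IP Receive block; the host, port and message format are assumptions chosen for illustration only.

```python
import socket
import struct

# Assumed endpoint of the Simulink TCP/IP Receive block (illustrative values)
SIMULINK_HOST = "127.0.0.1"
SIMULINK_PORT = 30000

def send_parameter(value: float) -> None:
    """Send one controller parameter as a little-endian double over TCP."""
    with socket.create_connection((SIMULINK_HOST, SIMULINK_PORT), timeout=2.0) as sock:
        sock.sendall(struct.pack("<d", value))

# Example: update a gain used by the running Simulink model
send_parameter(0.85)
```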


Fig. 10. Web laboratory structural diagram

Some of the hands-on laboratory exercises described above are available for realization
via the Internet. The servo amplifiers are tuned and the servomotors are decoupled. A user
can run an experiment via the Web Lab GUI and change some controller parameters from
Matlab/Simulink. The experiment with coupled servomotors can also be used via the Web
Lab. Because of the possible hazardous effects of this experiment, its use via the Internet is
organized only from time to time.
In the first experiment, a user can apply an arbitrary step input to the laboratory model
and see the achieved speed of the motor. With the information gathered from this experiment,
the user can identify a model from time-domain data. The Simulink model is shown in
Fig. 11.
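One simple way to carry out this identification, assuming the logged step response is available as arrays of time and speed samples, is the classical 63% rule for a first-order model. The sketch below (with synthetic placeholder data) illustrates the idea and is not part of the laboratory software.

```python
import numpy as np

def identify_first_order(t, y, step_amplitude):
    """Estimate K and tau of W(s) = K / (tau*s + 1) from a recorded step response.

    t, y            -- arrays of time [s] and measured speed [rad/s]
    step_amplitude  -- amplitude of the applied step input
    """
    y0, y_inf = y[0], np.mean(y[-10:])          # initial and steady-state values
    K = (y_inf - y0) / step_amplitude           # static gain
    y63 = y0 + 0.632 * (y_inf - y0)             # 63.2% of the total change
    tau = t[np.argmax(y >= y63)] - t[0]         # first time the response crosses y63
    return K, tau

# Example with synthetic data (placeholder, not a real measurement)
t = np.linspace(0, 1, 500)
y = 2.0 * (1 - np.exp(-t / 0.08))
print(identify_first_order(t, y, step_amplitude=1.0))   # approximately (2.0, 0.08)
```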

Fig. 11. Simulink model for the first experiment

In the second experiment, the user must tune the gain of a P controller for the position
servomechanism. There are two open TCP/IP connections from Node.JS, one for each
parameter. The Simulink model is shown in Fig. 12.


Fig. 12. Simulink model for position control of the servo system

The last online exercise is setting up the parameters of a PI speed servo controller. The
Simulink model is given in Fig. 13, and the Web laboratory GUI in Fig. 14.
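To relate the GUI parameters to the underlying control law, a minimal discrete PI speed controller of the kind tuned in this exercise is sketched below; the gains, sampling time and output limit are illustrative assumptions, not the values used in the laboratory.

```python
class DiscretePI:
    """Discrete PI controller u = Kp*e + Ki*integral(e) with simple output clamping."""

    def __init__(self, Kp, Ki, T, u_max):
        self.Kp, self.Ki, self.T, self.u_max = Kp, Ki, T, u_max
        self.integral = 0.0

    def step(self, w_ref, w_meas):
        e = w_ref - w_meas
        self.integral += e * self.T
        u = self.Kp * e + self.Ki * self.integral
        # clamp the torque reference and undo the last integration when saturated (anti-windup)
        if u > self.u_max:
            u, self.integral = self.u_max, self.integral - e * self.T
        elif u < -self.u_max:
            u, self.integral = -self.u_max, self.integral - e * self.T
        return u

pi = DiscretePI(Kp=0.05, Ki=1.5, T=0.001, u_max=1.27)   # placeholder tuning
print(pi.step(w_ref=100.0, w_meas=95.0))
```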

Fig. 13. Simulink model for PI motor speed controller

Fig. 14. Web lab GUI for speed control of the motor

7 Conclusions

In the majority of practical cases of servo systems with flexible coupling, where the
range of applicable gains is limited due to mechanical resonance, the problem reveals
itself in the form of sustained oscillations. The audible noise and excessive tracking


error reach unacceptable levels as gains are increased. This phenomenon depends upon
the drive-train wear, temperature, control-loop bandwidth, offset of stator current
controller, number of motor poles, shaft sensor characteristics, etc. Most of the
aforementioned secondary effects cannot be modeled and covered by simulations, so an
adequate laboratory model is necessary for the research and verification of servo systems
with flexible coupling.
This paper presents a laboratory model for education, training and research in servo
drive applications, including servo drives with flexible coupling. It presents several
methods for dealing with torsional resonance in servo drives, and proposes a special case
of the IMPACT structure with simple parameter adjustment. The antiresonant feature of
the structure is not based on the exact cancellation of resonance poles. Due to the
simplicity and robustness of the proposed structure, it can easily be applied to various
flexible systems with different regulator combinations. The didactic span of possibilities
of the laboratory model (with and without flexible coupling) is a special part of this paper.
In particular situations of use, the laboratory model can be used via the Internet; its
integration in the WEB laboratory and the possibilities of remote laboratory practice are
described. Students have the opportunity to use the laboratory model to conduct experiments
via the Internet concerning servo drive monitoring and control (speed, torque and position
of the motor shaft are supervised and controlled). The web pages of the laboratory model are
dedicated to both theoretical and practical aspects of problem-based learning with this
laboratory model.

Acknowledgment. Work on this paper was partly funded by the SCOPES project
IZ74Z0_160454/1 “Enabling Web-based Remote Laboratory Community and Infrastructure” of
Swiss National Science Foundation.

References
1. Vukosavic, S.N.: Digital Control of Electrical Drives, Power Electronics and Power
Systems. Springer, New York (2007)
2. Vukosavic, S.N.: Electrical Machines, Power Electronics and Power Systems. Springer,
Heidelberg (2012)
3. Matijević, M.S., Vukosavić, S.N., Schlacher, K.: Eliminating instabilities in computer
controlled motion control systems caused by torsional resonance. Electronics 10(1), 35–40
(2006)
4. Vukosavic, S.N., Stojic, M.R.: Suppression of torsional oscillations in a high-performance
speed servo drive. IEEE Trans. Ind. Electron. 45, 108–117 (1998)
5. Milanovic, M.: Development of laboratory model of coupled electrical drives for supervision
and control via Internet (in Serbian). M.Sc. thesis, Faculty of Engineering at University of
Kragujevac (2016)
6. The Go-Lab Project and the Go-Lab Portal (2016). http://www.golabz.eu
7. Tawfik, M., Salzmann, C., Gillet, D., Lowe, D., Saliah-Hassane, H., Sancristobal, E., Castro,
M.: Laboratory as a service (LaaS): a novel paradigm for developing and implementing
modular remote laboratories. iJOE 10(4) (2014)
8. Nedic, Z., Nafalski, A.: Development of online power laboratory with renewable generation.
iJOE 11(3) (2015)


9. Krein, P.T., Sauer, P.W.: An integrated laboratory for electric machines, power systems, and
power electronics. IEEE Trans. Power Syst. 7, 1060–1067 (1992)
10. Huang, T.C., El-Sharkawi, M.A., Chen, M.: Laboratory set-up for instruction and research in
electric drives control. IEEE Trans. Power Syst. 5, 331–337 (1990)
11. Domínguez, M., Fuertes, J.J., Reguera, P., Morán, A., Alonso, S., Prada, M.A.: Remote
laboratory for learning of AC drive control. In: Proceedings of the 18th IFAC World
Congress, Milano (Italy) (2011)
12. Matijević, M.S., Sredojević, R., Stojanović, V.M.: Robust RST controller design by convex
optimization. Electronics 15(1), 24–29 (2011)
13. Khan, I.U., Dhaouadi, R.: Robust control of elastic drives through immersion and invariance.
IEEE Trans. Industr. Electron. 62(3), 1572–1580 (2014)
14. Szabat, K., Orlowska-Kowalska, T.: Vibration suppression in a two-mass drive system using
PI speed controller and additional feedbacks – comparative study. IEEE Trans. Ind. Electron.
54, 1193–1206 (2007)
15. Szabat, K., Tran-Van, T., Kaminski, M.: A modified fuzzy luenberger observer for a
two-mass drive system. IEEE Trans. Ind. Inform. 11(2), 531–539 (2014)
16. Li, Q., Xu, Q., Wu, R.: Low-frequency vibration suppression control in a two-mass system
by using a torque feed-forward and disturbance torque observer. J. Power Electron. 16(1),
249–258 (2016)
17. Yaskawa America: Training in Servo Basic Concepts. YouTube Channel (2014). https://www.youtube.com/playlist?
list=PLNAENlyEDCkw0gUDF1BMu3brEJArhD8Kt. Accessed 21 Nov 2016
18. Yaskawa eLearning Curriculum (eLearning Modules and eLearning Videos), 21 Nov 2016.
https://www.yaskawa.com/pycprd/training/elearning-curriculum/tab0/link00
19. Web Laboratory Aggregator Service http://cpa.fin.kg.ac.rs/weblab/index from the SCOPES
project I37430/160454 “Enabling Web-based Remote Laboratory Community and Infras-
tructure” of Swiss National Science Foundation, at Faculty of Engineering at University of
Kragujevac (2016)

Online Course on Cyberphysical Systems
with Remote Access to Robotic Devices

Janusz Zalewski(✉) and Fernando Gonzalez

Department of Software Engineering, Florida Gulf Coast University,
Ft. Myers, FL 33965, USA
{zalewski,fgonzalez}@fgcu.edu

Abstract. The objective of this paper is to present an approach and experiences
with introducing robotic devices accessible online into a course on Cyberphysical
Systems in an undergraduate Software Engineering program. A closer look at
both technologies, online labs and cyberphysical systems education, reveals that
they are not in sync. Remote labs have embraced a wide variety of science and
engineering disciplines, but they are not popular in software engineering. On the
other hand, software engineering education, being crucial to the development of
cyberphysical systems, has not focused on such systems by any measure. This
project and paper aim at addressing this gap.

Keywords: Cyberphysical systems education · Embedded systems education ·
Robotic devices in education · Online labs

1 Introduction

Cyberphysical systems, which combine access to physical devices with connectivity to
the Internet, are critical to the nation’s wealth and security. They are all networked and
almost exclusively software controlled, thus becoming a new issue in software engineering
and technology, in addition to being a factor in national economies. Traditionally
structured educational programs in computing have not yet caught up with the respective
developments in industry; therefore, education and curriculum development should be an
integral component of the construction and use of such systems. This is especially true in
undergraduate software engineering programs, which this paper addresses.
Several activities have been launched in recent years dealing with issues of cyber-
physical systems curricula and education, among them workshops and conferences [1, 2],
summer schools running each year since 2011 [3, 4], and aggressive government funding
by technologically developed nations. Both researchers and educators began intensively
developing and offering related courses, including youtube videos [5], started publishing
their findings on curriculum development [6–9] and published related textbooks [10].
While this work is ongoing and covers multiple engineering disciplines, from aerospace

This work was supported by a grant from NASA through University of Central Florida’s NASA-
Florida Space Grant Consortium (UCF-FSGC 66016015). Partial support was provided by a WIDER
grant from the National Science Foundation, Award No. DUE-1347640.

© Springer International Publishing AG 2018


M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_38
zamfira@unitbv.ro
Online Course on Cyberphysical Systems 409

engineering to chemical engineering to mechatronics, two related issues have not been
covered sufficiently well: (a) teaching cyberphysical systems in software engineering
programs, and (b) designing online laboratories for teaching cyberphysical systems in
such programs. There are not that many reports in the literature on dealing with related
problems or proposed solutions [11].
Consequently, the objective of this paper is to present an insight into the devel-
opment of a lab and related projects within this lab, which could help establish similar
laboratories in undergraduate software engineering programs and related disciplines.
The rest of the paper is structured as follows. Section 2 discusses the Cyberphysical
Systems course itself, Sect. 3 outlines the need for and essence of development pro-
jects, and Sect. 4 presents the contents of the lab and some results from first experi-
ences. Section 5 concludes the paper with discussion of some findings.

2 Undergraduate Course on Cyberphysical Systems

The Cyberphysical Systems course discussed in this paper is a new course based on a
previously offered Embedded Systems Programming course, with added networking
component and changed focus. The learning objectives for this new course have been
formulated as follows. The students will acquire:
• an awareness of the interactions of a cyberphysical system with the environment
• the ability to address sensor and control operations in cyberphysical systems
• the ability to understand the software lifecycle for cyberphysical systems
• the ability to design, analyze and document software for cyberphysical systems
• the ability to work effectively in teams to address collectively software issues in
cyberphysical systems
• the ability to complete an integrated project in the cyberphysical systems domain
• the ability to present project related material in a variety of forms
• an awareness of non-functional requirements for cyberphysical systems, such as
safety, security and reliability.
The course is offered online and its principal structure includes two major components:
lecture modules offered via the web and software development projects. There are
twelve lecture modules offered in the following order, according to the top-down
principle of software development, from high-level design to low-level hardware issues,
which proved effective in software engineering education, with selected application
topics and coverage of non-functional system and software properties at the end:
1. Introduction to Cyberphysical Systems
2. Design of Embedded Real-Time Software
3. Designing Software Architecture
4. Programming Languages for Cyberphysical and Embedded Applications
5. Real-Time Kernels
6. Advanced Real-Time Kernels Concepts
7. Timing in Embedded and Cyberphysical systems
8. Hardware Issues in Programming


9. Data Acquisition and Control Applications


10. Internet of Things: The Principles
11. Cyberphysical Systems Security
12. Safety, Reliability and Fault Tolerance
Each lecture module is divided into six separate parts, including:
• Objectives, briefly formulating what the student will learn
• Guest Faculty, who is an invited expert to interact briefly with students online
• Introduction, outlining the contents of the module (with References, if needed)
• Student Activities, as the major component, which involves studying the slides and
reading required material (most often, a paper by an expert invited as Guest)
• Assessment, involving participation in a Discussion Forum related to this module
• Follow-up, which requires responding to feedback received from the Guest Faculty.
One interesting, uncommon and unique feature of this course is the interaction with
experts in the field, who are invited as guest faculty. This is, however, outside the scope of
the current paper, and the related results are scheduled for publication elsewhere. The software
development projects, which make use of online labs, are discussed next.

3 Software Development Projects

The second major component of the course is the software development project, with
extensive lab activities. The essential question faced by the instructor, in this regard, is:
How to organize the software engineering lab in an online course on cyberphysical
systems, to facilitate software development? Two critical and inter-related issues are to
be considered here:
• access to the lab, or more broadly, how the lab work is to be done, whether cen-
tralized inside the lab, individual at the student’s location, or remotely online, and
• the selection of project topics, whether proposed by students, solicited from
industrial partners, a pool to choose from provided by instructor, or a single topic
across the board generated by instructor.
Regarding the lab access, the ultimate goal is to provide students with such access to
use respective devices. It must be made clear, however, that it is not meant to be only a
remote control of these devices, which is common [12] in courses in disciplines, such as
control engineering, electronics or mechanical engineering, simply to test certain mea-
surement methods or control algorithms by choosing various device parameters. In other
words, it is not only the online use of remote devices, even the most sophisticated
medical, chemical or physical instruments, which has to be provided. In software
engineering, a new qualitative step is needed, which is consistent with the mission of this
profession. Namely, software engineers develop software, so their access to remote
devices must be provided for a substantially different reason: to be able to upload soft-
ware to the target device online and test the software and debug it on the remote target.
Perhaps a word of explanation is worthwhile here, since this goal may be perceived
differently by different stakeholders. The most prominent example of what is


considered is the case of NASA’s Pathfinder mission to Mars, in 1997, when the
rover control software had a glitch involving its real-time kernel, VxWorks, and had to
be analyzed back on Earth at the mission control in Jet Propulsion Laboratory. Once
the bug was fixed the software was uploaded back to the rover on Mars [13]. A more
contemporary, but also spectacular, example that can be mentioned is remote access to
the experiments at Large Hadron Collider (LHC), in Geneva, where physicists and
engineers around the world can program their data acquisition and control systems over
the Internet (Fig. 1). In today’s terms, with the widespread proliferation of the Internet
of Things, this issue is no longer so spectacular, but gradually becomes a matter of
everyday life, and it is the responsibility of educators to adequately prepare the soft-
ware engineering workforce for this task.

Fig. 1. LHC control room at Fermilab (photo by the author).

With respect to the choice of project topics, a whole spectrum of lab projects has
been pursued in previous courses on Embedded Systems Programming and is
described in separate publications [14, 15]. They involve an entire array of hardware
platforms, real-time kernels and input-output devices, including single board com-
puters, microcontrollers, game boards, FPGA boards, wireless sensor networks,
Atmel/Arduino/Raspberry Pi platforms, and simple robotic devices (such as Lego,
IntelliBot, Parallax Boe-bot and multiple others). While all were suitable for previous
courses, they are not well suited for projects in this specific course, which is focused on
online access and requires more intelligence on the part of the device, especially
network connectivity. Therefore, the decision was made to attempt the use of a single
class of devices, with multiple “incarnations”, which would exhibit a wide range of
sensing and actuating elements. This led us to focus on various kinds of robotic
devices, which additionally have networking capabilities.


4 Online Robotics Lab

Just as in the case of integrating cyberphysical systems into software engineering
education, where very few papers have been published yet [11], there are only a few
attempts to introduce robotics into software engineering curricula [16, 17]. To the best
of the authors’ knowledge, none of these publications makes any recommendations on
creating such labs, other than reporting on using Lego robots, for example. Thus, there
is not much guideline material on which to base a software engineering robotics lab.
Consequently, the lab created for this course was based on somewhat unstructured
principles and relied on focusing on the course objectives according to the following
five assumptions:
• make sure the emphasis is on one category of equipment, that is, robotics devices,
as opposed to a different equipment, such as sensor networks, FPGA, etc.
• provide a wide variety (diversity) of robotics devices to view multiple aspects of the
software development process
• realize projects with a full but simple software development cycle, to focus on
online access to remote equipment
• ensure possibilities of invasive remote labs, that is, allowing student developers to
change robotic software by uploading updates and modifications
• address a variety of software requirements, including non-functional ones, such as,
safety, security, and reliability.
These assumptions resulted in a somewhat eclectic aggregation of robotic devices,
assembled over a period of several years, with required functionality as shown in
Table 1. Around a dozen robotic units are accessible in the lab, exhibiting an array of

Table 1. List of robotic devices in the lab.


different properties. What is important to this course is that each device operates with at
least one networking protocol suitable for handling remote connections.
Software development projects assigned to use these different robotic devices varied
from using existing native network connectivity (for instance, in the case of NAO) to
developing network connectivity in cases where it was not provided by the vendor (as for
the Lego EV3). Projects strictly followed the software development lifecycle, using (for
simplicity) the waterfall model divided into four phases: requirements, design,
implementation and testing. The documentation included project reports from all four
phases.

Fig. 2. Outline of the template architecture for online robotics platforms.

Connectivity of all platforms essentially followed the architecture illustrated in
Fig. 2, with access to the robotic devices via secure servers. Technical issues, such as
the details of middleware and backend/frontend technologies, are omitted here due to
limited space. For the same reason, of the more than a dozen projects utilizing these
devices, only the two most representative ones are mentioned below.
Both of these projects made use of a Kinect sensor, one with the modern humanoid NAO
robot and the other with an obsolete RTX robotic arm, as illustrated in Fig. 3. Both
projects involve software development for the Kinect sensor as well as for the robotic
device (NAO and RTX, respectively) to follow the motion detected by the Kinect. The
only crucial difference is that the NAO has native TCP connectivity, so the developer is
capable of uploading the software and its upgrades directly to the robot, while in the case
of the RTX this involves the mediation of a server directly connected to the robot. The
comparison of the projects’ results involves a crucial lesson on using remote labs in what
the authors call the invasive mode (with software changed on the target).
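The server-mediated case just described can be pictured as a simple relay: the student’s client never opens a connection to the robot itself, but submits commands to a secure server that authenticates the request and forwards it. The sketch below is a deliberately simplified, hypothetical illustration of that pattern (made-up port numbers, token check and command format), not the middleware actually used in the course.

```python
import socketserver
import socket

ROBOT_ADDR = ("192.168.0.50", 9100)   # hypothetical robot controller endpoint
VALID_TOKENS = {"student-team-1"}     # placeholder for the real authentication scheme

class RelayHandler(socketserver.StreamRequestHandler):
    """Accepts 'token:command' lines from a client and forwards the command to the robot."""

    def handle(self):
        line = self.rfile.readline().decode().strip()
        token, _, command = line.partition(":")
        if token not in VALID_TOKENS:
            self.wfile.write(b"ERROR unauthorized\n")
            return
        with socket.create_connection(ROBOT_ADDR, timeout=2.0) as robot:
            robot.sendall(command.encode() + b"\n")   # forward the command
            reply = robot.recv(1024)                  # relay the robot's reply back
        self.wfile.write(reply)

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 8000), RelayHandler) as server:
        server.serve_forever()
```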


Fig. 3. Robotic devices in the online lab.

5 Conclusion

The main objective, to enhance software engineering education by the use of an online
robotics lab, was accomplished. Additionally, the projects were successful in shedding
light on how diversity within a specific category of equipment (robotic devices) affects
understanding of the subject matter. In this regard, various views of essentially the
same problem, as presented in project discussions, converged to a better understanding
of the necessary software mechanisms. An unexpected issue arose due to different
levels of students’ familiarity with robotic devices (a prerequisite course in robotics
was not required [18]). This was addressed by gradual introduction of complexity in
meeting the software project goals. An unanswered question remains: to what extent can
this kind of online lab be invasive, that is, allow remote access to the robot to develop
software (with uploading new versions and debugging), not only test it.
Of the three kinds of issues always facing instructors in remote labs (administrative,
technical and pedagogical), the pedagogical outcomes can be summarized as follows:
• remote interaction and software design for robotic devices enhances understanding
of a functionality of cyberphysical systems by the use of physical inputs/outputs
• since not all students had familiarity with robots, enforcing knowledge acquisition
was diversified in a sequence: demo, exercise, assignment, experiment and project
• there was insufficient time in this edition to fully address the professional knowl-
edge of non-functional requirements, such as reliability, safety, and security.

References
1. Interim Report on 21st Century Cyber-Physical Systems Education, National Research
Council, Washington, DC (2015)
2. 3rd International Workshop on Cyber-Physical Systems, IWCPS 2016, Gdansk, Poland,
11–14 September 2016. https://fedcsis.org/2016/iwcps


3. Third NSF/Georgia Tech Summer School on Cyber-Physical Systems, Atlanta, Georgia, 27–
29 June 2011. http://www.ece.gatech.edu/research/labs/esl/Activities/CPS-2011/index.html
4. International Summer School on Cyberphysical Systems, Toulouse, France, 5–9 September
2016. https://www.laas.fr/public/en/1st-summer-school-cyber-physical-systems-cps2016
5. Marwedel, P.: Course on Cyber-Physical System Fundamentals. Youtube video. Technische
Universität Dortmund (2016). http://www.youtube.com/user/cyphysystems
6. Törngren, M., et al.: Education and training challenges in the era of cyber-physical systems:
beyond traditional engineering. In: Proceedings of WESE 2015, Workshop on Embedded
and Cyber-Physical Systems Education, Amsterdam, 8 October 2015. Paper 8
7. Peter, S., Momtaz, F., Givargis, T.: From the browser to remote physical lab: programming
cyber-physical systems. In: Proceedings of Frontiers in Education Conference, FIE 2015,
El Paso, Texas, 21–24 October 2015
8. Grega, W., Kornecki, A.J.: Real-time cyber-physical systems: transatlantic engineering
curricula framework. In: Proceedings of Federated Conference on Computer Science and
Information Systems, FedCSIS 2015, Lodz, Poland, 13–16 September 2015, pp. 755–762
(2015)
9. Wade, J., et al.: Systems engineering of cyber-physical systems: an integrated education
program. In: Proceedings of 123rd Annual ASEE Conference, ASEE 2016, New Orleans,
LA, 26–29 June 2016. Paper No. 17162
10. Lee, E.A., Seshia, S.A.: Introduction to embedded systems: a cyber-physical systems
approach, 2nd edn. (2016). http://leeseshia.org/
11. Laird, L., Bowen, N.: A new software engineering undergraduate program supporting the
Internet of Things (IoT) and Cyber-Physical Systems (CPS). In: Proceedings of 123rd
Annual ASEE Conference, ASEE 2016, New Orleans, LA, 26–29 June 2016. Paper
No. 16378
12. Zubía, J.G., Alves, G.R. (eds.): Using Remote Labs in Education. University of Deusto,
Bilbao (2011)
13. Reeves, G.E.: What really happened on Mars? (1997). https://trs.jpl.nasa.gov/handle/2014/
19020
14. Zalewski, J.: Web-based labs for cyberphysical systems: a disruptive technology. In:
Proceedings of 10th IFIP World Conference on Computers in Education, Torun, Poland, 2–5
July 2013
15. Zalewski, J.: Cyberlab for cyberphysical systems: remote lab stations in software
engineering curriculum. In: Proceedings of 4th International Conference on e-Learning,
ICeL 2013, Ostrava, Czech Republic, 8–10 July 2013, pp. 1–7 (2013)
16. Göbel, S., Jubeh, R., Raesch, S.L.: A robotics environment for software engineering courses.
In: Proceedings of 25th AAAI Conference on Artificial Intelligence, San Francisco,
California, 7–11 August 2011, pp. 1874–1878 (2011)
17. Gonzalez, F., Zalewski, J.: Online robotic labs in software engineering courses, Research
Papers of the Faculty of Electrical Engineering and Automation of Gdansk Polytechnic, no.
37, pp. 15–18 (2014)
18. Gonzalez, F., Zalewski, J.: A robotic arm simulator software tool for use in introductory
robotics courses. In: Proceedings of IEEE Global Engineering Education Conference,
EDUCON 2014, Istanbul, Turkey, 3–5 April 2014

Models and Smart Adaptive Interfaces
for the Improvement of the Remote Laboratories
User Experience in Education

Luis Felipe Zapata Rivera(✉) and Maria M. Larrondo Petrie

Florida Atlantic University, Boca Raton, USA
lzapatariver2014@fau.edu

Abstract. Remote laboratories in the educational context are made possible by
the integration of the latest advances in telecommunication technology, software
architectures and educational standards support. Remote laboratories are important
in education because they provide access to equipment that some institutions
cannot afford to purchase or maintain, and reduce the need for dedicated physical
space for equipment and for personnel to staff laboratories. But beyond merely filling
the absence of a real physical laboratory, remote laboratories can improve the user
experience through the use of enhanced adaptive interfaces that, when complemented
with the use of educational standards like the Tin Can API, can provide information
important in the educational context, for example, the mastery level of the student
and the complexity of the experiment. Based on that information, the remote laboratory
can take actions related to the controls of the experiment, for example, disabling or
enabling part of the experiment controls. Using smart adaptive interfaces, the
experiments can gradually increase their complexity, taking into account variables that
are clearly identified as part of the learning process, such as: difficulty level of the
topic, students’ knowledge, and course level, among others.
This paper proposes a model and a set of diagrams that define the integration
of adaptive interfaces in remote laboratories for educational purposes.

Keywords: Agents systems · Smart adaptive interfaces · Educational technology ·
Software architecture · Remote laboratories

1 Introduction

Remote laboratories have normally been implemented as isolated systems, which means
that they are not integrated with any other systems. In the educational context it is important
to interchange information back and forth between the remote lab and the learning
management system (LMS) and the curricular manager, or at least to be able to send the
results of the student experimentation experience to the grading systems. This current lack of
integration can negatively impact the motivation for using remote laboratories as part of
the education process.



Advances in software systems, specifically in web applications, standard protocols for communication between applications, cloud-based architectures, and standards for educational content, allow the design of simple and efficient mechanisms of interaction between the remote experimentation process and the formal learning process.
Recently, many services have been released for integration into LMSs. The EduApps Center [1] is a centralized repository of software applications that make use of educational standards such as Learning Tools Interoperability (LTI) or the Tin Can API, but few applications are currently available in the area of remote laboratories. Some recent advances integrate remote laboratories with LMSs using federation protocols [2].
The interest of this research is to extend remote laboratory interoperability by building on the architecture proposed by the IEEE P1876 Networked Smart Learning Objects for Online Laboratories Working Group [3] and adding a smart component through the integration of smart adaptive interfaces.

2 Models of Use Cases for a Remote Laboratory Platform

Definitions, taxonomies, general architectures and models for the integration of remote laboratories with academic environments have been developed over the last few years [4, 5], showing the importance and the benefits of integrating this technology in the education context. The design of use cases is a well-known technique for obtaining a clear understanding of the requirements of a software system, in this case a software tool.
The following diagram describes the basic operations performed by the different roles, such as students, teachers and administrators, during their interaction with a remote laboratories platform integrated with an LMS or any other virtual learning environment. This section describes operations such as user sessions, authentication and authorization, scheduling, user and resource management, learning object integration, results management and, most importantly, the student's interaction with the remote laboratory.
The laboratory reports data to the platform based on the Tin Can API educational standard, from which students, teachers or administrators can gather information about student results and general information about the use of the laboratories (Fig. 1).
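As an illustration of this reporting step, the minimal sketch below (in Python) shows how a laboratory could emit a Tin Can (xAPI) statement when a student completes an experiment. The LRS endpoint, credentials and identifiers are assumptions made for the example and are not taken from the platform described here.

import requests

# Hypothetical LRS endpoint and credentials; placeholders only.
LRS_STATEMENTS_URL = "https://lrs.example.edu/xapi/statements"
LRS_AUTH = ("lab-reporter", "secret")

def report_experiment_result(student_email, experiment_id, experiment_name, scaled_score):
    """Send a minimal Tin Can (xAPI) statement for a completed remote experiment."""
    statement = {
        "actor": {"objectType": "Agent", "mbox": "mailto:" + student_email},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": "completed"},
        },
        "object": {
            "id": "https://labs.example.edu/experiments/" + experiment_id,
            "definition": {"name": {"en-US": experiment_name}},
        },
        "result": {"score": {"scaled": scaled_score}},  # scaled score in [0, 1]
    }
    response = requests.post(
        LRS_STATEMENTS_URL,
        json=statement,
        auth=LRS_AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    response.raise_for_status()
    return response.json()  # the LRS answers with the id(s) of the stored statement

Statements of this kind are what allow the LMS, the teacher or the adaptive interface itself to query the student's results and usage history later on.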


Fig. 1. General use case for the remote laboratories platform

The following diagrams show the details of each use case; they include operations from the point of view of students, teachers and administrators. The first one is the process of user session creation, in which the user validates his or her credentials and is authenticated and authorized according to his or her role, see Fig. 2.

Fig. 2. Use case for the creation of the user session


The teacher schedules an appointment or work session in the laboratory for a group of students or for a specific student. The process uses the student profile to obtain information about the student's level, restricts access to some content and suggests the type of view for the student. Once the profiles are validated, the system checks availability and reserves the requested remote experiment, see Fig. 3.

Fig. 3. Use case for student schedule appointment.

Additionally, the teacher can improve the student learning experience by wrapping the remote lab into a learning object that can include, for example, original content or external tools such as animations, videos or simulations. To help in the evaluation process, the teacher can also integrate rubrics for the remote laboratory evaluation. To evaluate the student results obtained during the experimentation process, the platform provides an interface to post grades and also to give feedback to the students, see Figs. 4 and 5.

Fig. 4. Use case for the learning experience based on learning objects.


Fig. 5. Use case for teacher grading and feedback tools.

The administrator role manages the user accounts and the remote experiments available in the platform; this includes actions such as adding new resources, editing the configuration of a specific experiment or even removing one of the experiments. The following figures show user and resource management, see Figs. 6 and 7.

Fig. 6. Use case for the user management.

Fig. 7. Use case for the laboratory management.


The results view process provides reports adapted to the rights of each role. These results are available from the learning environment and are integrated with the grade books of the courses, see Fig. 8.

Fig. 8. Use case for the collection of results

The interaction with the experiment takes place through an adaptive interface that adapts the view of the laboratory, giving the student more or less control of the equipment according to his or her level of knowledge, see Fig. 9.

Fig. 9. Use case for the student adaptive user interface

3 Adaptive User Interfaces for Remote Laboratories

An adaptive user interface in software is defined as the ability of a software interface to change according to input parameters defined by the users or identified automatically by the system. This type of approach has been applied in a variety of software applications, such as commercial web applications and educational and research applications, among others. Some advances have been made in the user interfaces of Virtual Laboratories [6] and Remote Laboratories [7–10] that provide alternatives for improving the user experience with the remote laboratory.
The following simplified architecture shows the remote laboratory, which is informed of the student's level and, according to the complexity of the experiment, provides one of the views of the experiment. These views can block or hide some of the controls, restricting the visibility of part of the controls or blocking the option of configuring some others (view 1, view 2 and view 3).
One of the new concepts proposed is to integrate the remote lab and its interface as a learning object into the learning environment. The students start with a very simple version of the remote laboratory, chosen on the basis of the difficulty level and their knowledge level; as their knowledge and mastery increase, the remote laboratory can turn on more controls, increasing the complexity of the experiment. Information must flow in both directions, from the student profile to the remote laboratory and vice versa, keeping both the student level and the remote laboratory interface up to date (Fig. 10).

Fig. 10. Different user visual interfaces and the integration with the academic platform
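As a minimal sketch of this behaviour, the selection of the interface view could be driven directly by the mastery level stored in the student profile. The thresholds, view names and control names below are illustrative assumptions and are not part of the proposed model.

# Illustrative mapping from interface views to the experiment controls they expose.
VIEWS = {
    "view 1": {"run_experiment"},
    "view 2": {"run_experiment", "set_frequency"},
    "view 3": {"run_experiment", "set_frequency", "set_amplitude", "configure_trigger"},
}

def select_view(mastery_level):
    """Choose a view from the mastery level (0.0-1.0) kept in the student profile."""
    if mastery_level < 0.4:
        return "view 1"
    if mastery_level < 0.8:
        return "view 2"
    return "view 3"

def control_states(mastery_level):
    """Return, for every known control, whether the chosen view enables it."""
    enabled = VIEWS[select_view(mastery_level)]
    all_controls = set().union(*VIEWS.values())
    return {control: control in enabled for control in sorted(all_controls)}

# A student profile reporting a mastery level of 0.55 would receive "view 2",
# with set_amplitude and configure_trigger disabled (hidden or blocked).
print(control_states(0.55))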

4 Conclusions

This paper describes the basic use cases and some details of the behavior of smart adaptive interfaces for remote laboratories, based on the integration of the laboratory platform with learning environments.
Smart adaptive interfaces will create a more meaningful experience for the students, guiding them through a process that changes the difficulty level by changing the interface; this creates a whole new learning experience in which teachers and administrators are also participants.
Through the implementation of this design it is expected that, in the near future, remote laboratories adapted to the requirements of the curriculum will see wider use.


The development of the IEEE P1876 Networked Smart Learning Objects for Online Laboratories Standard will generate a unified view of the meaning of a learning object that includes remote experimentation as a main component.

References

1. EduApps Center. https://www.eduappcenter.com/: Learning Tools Interoperability (LTI) is


a trademark of the IMS Global Learning Consortium, Inc. (www.imsglobal.org)
2. Orduna, P., Botero, S., Hock, N., Sancristobal, E., et al.: Generic integration of remote
laboratories in learning and content management systems through federation protocols. http://
morelab.deusto.es/media/publications/2013/conferencepaper/generic-integration-of-
remote-laboratories-in-learning-and-content-management-systems-through-federation-
protocols.pdf
3. IEEE Networked Smart Learning Objects for Online Laboratories Working Group. https://
ieee-sa.imeetcentral.com/1876public/
4. Zapata Rivera, L.F., Larrondo Petrie, M.M.: Models of remote laboratories and collaborative
roles for learning environments. In: Proceedings of the 13th International Conference on
Remote Engineering and Virtual Instrumentation (REV 2016), Madrid, Spain, 24–26
February 2016. IEEE (2016). doi:10.1109/REV.2016.7444517
5. Zapata Rivera, L.F., Larrondo Petrie, M.M.: Models of collaborative remote laboratories and
integration with learning environments. Int. J. Online Eng. 12(9), 14–21 (2016). International
Association of Online Engineering, Austria
6. Villar-Zafra, A., Zarza-Sánchez, S., Lázaro-Villa, J.A., Fernández-Cantí, R.M.:
Multiplatform virtual laboratory for engineering education. In: Proceedings of 9th
International Conference on Remote Engineering and Virtual Instrumentation (REV 2012),
Bilbao, Spain, 4–6 July 2012, pp. 1–6. IEEE. doi:10.1109/REV.2012.6293127
7. Rochadel, W., Bento da Silva, J., Shardosim Simão, J.P., da Costa Alves, G.R.: Educational
application of remote experimentation for mobile devices. In: Proceedings of 10th
International Conference on Remote Engineering and Virtual Instrumentation (REV 2013),
Sydney, NSW, 6–8 February 2013, pp. 1–6 (2013). doi:10.1109/REV.2013.6502905
8. Bull, S., Kay, J.: Open learner models. In: Nkambou, R., Bourdeau, R., Mizoguchi, J. (eds.)
Advances in Intelligent Tutoring Systems, pp. 301–322. Springer, Berlin (2010)
9. Saliah-Hassane, H., Dumont-Burnett, P., Christian Loizeau, C.: Design of a web-based virtual
laboratory instrument measurement interface. In: Proceedings of 2001 International
Conference on Engineering Education, Oslo/Bergen, Norway, 6–10 August 2001, p.
8D1.12-16 (2001)
10. Salzmann, C., Govaerts, S., Halimi, W., Gillet, W.: The smart device specification for remote
labs. In: Proceedings of 12th International Conference on Remote Engineering and Virtual
Instrumentation, (REV 2015), Bangkok, Thailand, IEEE, 25–27 February 2015, pp. 199–208
(2015). doi:10.1109/REV.2015.7087292

Empowerment of University Education
Through Internet Laboratories

Abdallah Al-Zoubi(✉)

Communication Engineering Department,


Princess Sumaya University for Technology, Amman, Jordan
zoubi@psut.edu.jo

Abstract. Curriculum reform in engineering education has become a salient landmark in the higher education landscape worldwide. Notably, the prevalence of collaborative online initiatives has dominated the renewal of teaching and learning techniques that are most appropriate for the technology-native new generations of students. In particular, developing countries in the Middle East constantly strive to develop sustainable long-term plans to confront challenges, harness opportunities and maximise the benefits of international trends in order to open up higher education and attain equity of access. Internet-based physical laboratories represent an eloquent paradigm for reform, cooperation and modernisation of higher education. A communication engineering laboratory shared by a number of universities in Jordan is such an attempt, and one that is worth presenting.

Keywords: Remote labs · Communication engineering · eLearning · Engineering education · Jordan

1 Introduction

Internet-based remote and virtual laboratories are positioned in the heart of the edu-
cational reform agenda worldwide. This emerging paradigm has already had a pro-
found impact on engineering education and in the process of advancement and
consolidation of universities and their role in socio-economic development [1]. There is
ample evidence that online laboratories, particularly remote labs, have a positive effect
on students’ learning, practical experience and engineering skills, and may offer a great
potential for cooperation and collaboration among teachers, researchers and profes-
sionals as well as higher education institutions all over the world. In addition, the use of
remote laboratories as a complementary distance-learning tool has already refuted the
misconception of any possible threat to replace conventional ones [2].
The state of utilization, advancement and exploitation of remote labs in the Middle East has been timid and prudent, to say the least. Limited attempts, mainly individual efforts, have been made to promote the implementation of remote labs as an important educational tool in universities across the region. Most of the pioneering researchers in the field are based in institutions outside the region and no serious indigenous and sustainable education infrastructure can be detected in the literature. In addition, the patterns of cooperation and collaboration among universities in the Middle East are


limited, modest and fragile. The research community is therefore urgently invited to establish collaboration links in order to strengthen its capacity to influence the adoption of online labs in the region.
In this paper, an initiative to disseminate a culture of online engineering through the utilization and integration of a communication engineering lab at Princess Sumaya University for Technology is presented. First, a brief account of a number of collaboration projects with universities in Europe and the USA that demonstrate the process of capacity building in the field is given. The design choices, methodology and associated IT infrastructure are then discussed in detail and results presented. The author aims to enable students and teachers in the Middle East to benefit greatly from using the lab without restrictions and to increase the quality of the teaching and learning process by allowing them to work remotely. This endeavour may also serve as an attempt to establish science partnerships between countries and can give researchers the opportunity needed to help boost the educational systems of their countries.

2 International Cooperation Initiatives

There are over 60,000 engineering students currently studying in 98 departments in 20 universities across Jordan. The estimated number of laboratories in the engineering disciplines exceeds 700, with tremendous investment, infrastructure, human resource, technical and management exigencies. The available laboratory equipment is insufficient to satisfy the needs of the large number of students in engineering classes. In addition, students perform laboratory experiments in groups in which not all participate efficiently in experiment setup, data recording and interpretation of results. This poses a great challenge for Jordan in rejuvenating its engineering educational system and leading efforts towards modernity.
Internet online laboratories were an obvious choice for reform, and serious efforts have been exerted since 2006 to build the capacity of the higher education system in Jordan through cooperation with experienced universities in the field worldwide. The first pursuit of collaboration started with Carinthia University of Applied Sciences, Austria, through in-house training of IT and technical staff, joint research and publications [3–7], and the organization of conferences such as the International Conference on Interactive Mobile and Computer Aided Learning (IMCL), http://imclconference.org/imcl2016/index.php, and the IEEE Global Engineering Education Conference (EDUCON), http://educonconference.org. Joint research concentrated mainly on application-specific integrated circuits in a network of remote labs, and the design and implementation of accessible mobile clients for eLearning and e-research [3–7]. Cooperation with TU-Berlin, Germany, on the other hand, focused on the application of remote technology to an electronic engineering mobile lab, digital logic design and electrical power system labs, with integration and full classroom-laboratory interaction and information management using tablets and OneNote [8–12]. The last bilateral collaboration initiative in the field was with Hertfordshire University, USA, which laid the foundation for the design and implementation of the analog and digital communications remote lab [13–15].
Multilateral cooperation has however been recently established through efforts by
Princess Sumaya University for Technology to prepare, organise and lead two


European Tempus projects, the first entitled: “Modernizing Undergraduate Renewable


Energy Education: EU Experience for Jordan”, http://muree.psut.edu.jo/Home.aspx,
and the second entitled: “Enhancing Quality of Technology-Enhanced Learning at
Jordanian Universities”, http://eqtel.psut.edu.jo/Home.aspx. One major output of the
first project was the establishment of 4 renewable energy remote lab NI trainers, hosted
by 4 universities in Jordan covering almost the whole geography of the country.
Serious efforts were made, led by the National University of Distance Education (UNED), Spain, to integrate the trainers into a Moodle platform such that the experiments were delivered online with full interactivity and flexibility, giving students the opportunity to conduct the experiments online anytime and from anywhere without even knowing where the experiment was physically located [16–18]. Integration techniques were adopted in the design, with plugins connected to the platform in order to monitor the actions of users, both students and instructors. Every remote laboratory was provided with authentication, authorization and scheduling services to ensure exclusive access through queue- or calendar-based booking, as well as user tracking services.
The communication engineering lab under consideration was treated as one of three
pilot courses in the second project in order to run quality assurance processes during its
design, delivery and adoption in the curriculum. Its design and integration into the Moodle platform was the responsibility of Princess Sumaya University for Technology, as it had already embarked on its uptake prior to the launch of the project.

3 Remote Lab System Design

The design of the remote lab system architecture has taken several criteria into con-
sideration in order to widen the use and implementation of the technology in the
country. The first criterion was ensuring participation of a large number of students
from different universities in performing experiments easily and freely. Cost was also
an important factor which was decisive in the design preference due to limited
resources. The system consequently has followed the scheme shown in Fig. 1.
The actual hardware experiments were accommodated by the NI ELVIS workstation and an Emona FOTEx board. The workstation is a comprehensive tool for teaching subjects such as circuit design, instrumentation, controls and telecommunications, as it offers the flexibility of virtual instrumentation and the ability to customize applications. It is a hands-on design and prototyping platform that integrates the most commonly used instruments, such as an oscilloscope, digital multimeter, function generator and Bode analyzer, into a compact form ideal for the lab or classroom. The Emona FOTEx board, on the other hand, has been specifically devised for teaching introductory fiber-optic telecommunications topics in electrical and computer engineering curricula. The board allows students to create communications systems by wiring together system components with plug-in fiber-optic patch cables. This is facilitated with detailed user documentation and a lab manual which contains step-by-step procedures to build, experiment with and measure communications systems. With this particular board, students could perform up to 11 experiments on the theory of fibre optic communication. Other boards can also be used to conduct experiments on analogue and digital communication circuits.


Fig. 1. Remote lab system architecture.

The workstation was connected to a virtual instruments (VI) server where all simulated instruments and equipment, such as the oscilloscope, function generator and various fibre optic communication components, were programmed and hosted. The VIs were programmed in LabVIEW, while the VI server acted as an intermediary between the ELVIS workstation and the web server hosting the learning management system (LMS), the open-source Moodle platform in this particular case, through which students could access the experimental hardware.
Moodle was preferred to proprietary platforms due to limited financial resources and its powerful features, such as ease of use and the wide support it provides to a large community of users. It was used as an entry point for students to facilitate and ease the provision of online classrooms by means of integrated features and tools such as administrative, synchronous and asynchronous communication, assessment and tracking, and multimedia sharing tools. All such services are provided by the platform, which accommodates the remote practical laboratory sessions. The platform is based on the PHP language and operated under an open-source Apache web server. In particular, Moodle 2.4.3+, PHP 5.4.7 and MySQL 5.5.27 were selected. It was supported by plugins and other software tools to manage e-content, virtual classrooms, assignments, task submissions and grading, quizzes, exams, task queues, lab booking, and the scheduling of online experiments. The VPN and server IP addresses were 193.188.67.34 and 172.31.0.53 respectively, physically hosted on Princess Sumaya University for Technology servers, which can be accessed at the link http://eqtel-vle.psut.edu.jo.
Suitable educational and teaching material was selected to suit students and professors, while care was taken to integrate the remote lab into the LMS such that experiments were delivered with full interactivity. The learning tools included synchronous modes, such as chat rooms and video conferencing for instant real-time interaction, and asynchronous modes for student and teacher communication through email and forums, assessment and evaluation, as well as cooperation and social media tools, thus offering flexibility in delivery and providing the opportunity to share social learning material amongst several institutions. In addition, four types of accounts were created: administrator, teacher, student and guest. Appropriate privileges were given to these accounts according to their roles. Students could access the content of the lab, i.e. the experiments, by visiting the main page. A description of each topic was given prior to accessing the actual content of each experiment to enable students to familiarize themselves with the scientific content. The interactive electronic content could be accessed by clicking on the "eContent" link, and students could navigate easily using previous and next buttons and a table of contents designed as a menu to easily allow access to a specific lesson.
In addition, a Microsoft SQL Server was used to host tables of data for the purposes of access and authentication, storage and retrieval of experimental data, and students' feedback. The first table contained the login student ID and name. For each experiment, a further table was created to allow students to answer the questions of that experiment by filling data into fields reserved specifically for this purpose while performing the experiment and executing the step-by-step procedures. The fields were created using available components in the Microsoft Visual Studio ASP.NET framework, and their dimensions, such as height and width, were adjusted carefully by editing layouts using a web development tool, the inspect element, that allows designers to test, debug and, mainly, edit the code for each online experiment. An additional table was reserved for the feedback questionnaire at the end of each experiment. In total, 13 tables were created in the database.
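A rough sketch of the kind of schema described above is given below. It uses SQLite from Python purely for illustration; the actual system used Microsoft SQL Server with ASP.NET and one answer table per experiment, and all table and column names here are assumptions.

import sqlite3

# Illustrative schema only; not the authors' actual database design.
conn = sqlite3.connect("remote_lab.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS students (
    student_id TEXT PRIMARY KEY,   -- login ID
    name       TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS experiment_answers (
    student_id    TEXT REFERENCES students(student_id),
    experiment_id INTEGER,
    field_name    TEXT,            -- e.g. a peak-to-peak voltage field
    value         TEXT,
    screenshot    BLOB             -- uploaded waveform screenshots
);

CREATE TABLE IF NOT EXISTS feedback (
    student_id    TEXT REFERENCES students(student_id),
    experiment_id INTEGER,
    question      INTEGER,         -- survey question number
    rating        INTEGER,         -- 1 (very low) to 5 (very high)
    suggestions   TEXT
);
""")
conn.commit()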

4 System Operation

The operation of the system is explained following the steps depicted in the flowchart
shown in Fig. 2.

Fig. 2. Operation flowchart.


A number of communication engineering experiments were posted online on the platform, including experiments on sampling and Nyquist in PCM, PCM encoding and decoding, time division multiplexing (TDM), line coding and bit-clock regeneration, fiber optic transmission, PCM-TDM 'T1' implementation, optical digital filtering, splitting and combining, fiber optic bi-directional communication, wavelength division multiplexing (WDM), and optical losses. Fully detailed procedures were prepared and presented on the platform as an introduction for students to follow in an easy, step-by-step manner prior to conducting the experiment. The platform hence acts as a gateway to the remote lab to control, manage and facilitate the operation of the experiments online. To reach the desired experiment, a student should first log in to the VLE by entering a specific username and password created for each user and stored in the users' table of the database dedicated to access authentication, as shown in Fig. 3.

Fig. 3. Access of Moodle remote lab page.

Students should then choose the desired experiment from the list of 11 experiments and read the appropriate step-by-step procedure.
A brief description of the experiment appears, including its name, documentation, the number of sessions allowed for each student, the duration available to carry out the experiment and the server time. Students have to book a specific time slot on a specific date to enter the experiments, as shown in Fig. 4, one student at a time in order to avoid conflicts in requesting and using the VIs.
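A minimal sketch of this exclusive-access rule is shown below in Python. The data structures, slot duration and identifiers are illustrative assumptions and do not reflect the actual .NET implementation; a booking service only has to refuse a slot that is already reserved and admit a student only during her/his own slot.

from datetime import datetime, timedelta

SESSION_LENGTH = timedelta(minutes=45)   # assumed slot duration, not from the paper
bookings = {}                            # (date, start time) -> student ID

def book_slot(student_id, date, start):
    """Reserve a time slot; one student per slot avoids conflicting VI requests."""
    key = (date, start)
    if key in bookings:
        return False                     # slot already reserved by someone else
    bookings[key] = student_id
    return True

def may_enter(student_id, date, start, now):
    """A student may enter the experiment only during her/his own reserved slot."""
    begin = datetime.strptime(date + " " + start, "%Y-%m-%d %H:%M")
    return bookings.get((date, start)) == student_id and begin <= now < begin + SESSION_LENGTH

print(book_slot("20150123", "2016-03-01", "10:00"))   # True, the slot is free
print(book_slot("20150999", "2016-03-01", "10:00"))   # False, already taken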
A student then waits for her/his reserved time before being able to enter the experiment. When the session starts, the experiment login page appears as an iframe inside the experiment page and the student logs in again using .NET + SQL Server authentication. An SQL connection object checks whether the entered student ID and name already exist in the database.


Fig. 4. Experiment timeslot booking procedure.

A third segment was the user interface, which included the experiment forms with ASP.NET components for input fields and HTML pages. Since there were various types of input fields, such as the submit button, the survey page and the students' answers, a solution to connect them all together as one system was sought. The 'view state' method was consequently used in the code to preserve the entered answers in case of any unpredictable error or page reload. However, this method could not be applied to the screenshot answers, since the images were the clients' property.
A sample experiment, Nyquist in PCM, is illustrated below to showcase the experimental procedure a student encounters while performing an experiment. The definition of PCM modulation was first given as a kind of source coding, namely the conversion of a signal from analog to digital. The Nyquist theorem states that a signal must be sampled at a rate greater than twice the highest frequency component of the signal in order to accurately reconstruct the waveform. In performing this experiment, the student would first be asked to install the run-time engine program, only once at the beginning of the lab, in order to operate each VI.
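To make the sampling step concrete, the short sketch below (in Python, with an assumed 2 kHz message and 10 kHz sampling rate that are not taken from the actual experiment) computes the image, or alias, frequencies that the student later identifies on the DSA.

def alias_frequencies(message_hz, sample_rate_hz, count=6):
    """First `count` image (alias) frequencies of a sampled sinusoid.

    Sampling a tone at message_hz with rate sample_rate_hz creates spectral
    images at k*fs - f and k*fs + f for k = 1, 2, 3, ...
    """
    aliases = []
    k = 1
    while len(aliases) < count:
        aliases.extend([k * sample_rate_hz - message_hz,
                        k * sample_rate_hz + message_hz])
        k += 1
    return sorted(aliases)[:count]

# Assumed values: a 2 kHz message sampled at 10 kHz satisfies the Nyquist
# criterion (10 kHz > 2 x 2 kHz); the first aliases then appear at
# 8, 12, 18, 22, 28 and 32 kHz.
print(alias_frequencies(2_000, 10_000))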


The next step would be to activate the function generator, scope and DSA; the VI page then appears as shown in Fig. 5. The function generator is then adjusted using its soft controls to produce a sinusoidal output with a specific frequency and amplitude. The oscilloscope could be adjusted for amplitude and frequency in each of its two channels, with an appropriate DC offset, using the knobs for amplitude and time/division.

Fig. 5. The VIs of the DSA, scope and function generator as they appear during the experiment.


The student would then follow a step-by-step adjustment and measurement procedure by setting the DSA with the values shown below:

Input Settings       Source Channel (0) to Channel (1); Voltage Range to ±10 V
FFT Settings         Frequency span to 45,000; Resolution to 400; Window to 7 term B-Harris
Trigger Settings     Edge
Frequency Display    Units to dB; Mode to RMS; Scale to Auto
Averaging            Mode to RMS; Weighting to exponential; # of averages to 3

The student would then be asked to run the oscilloscope on Channel (0), take a screenshot of the input signal and upload it to the system database. The peak-to-peak voltage of the signal was recorded, the DSA was run and a screenshot of the input power signal was taken and uploaded. The student would then be asked to determine the power of the input signal at the following frequencies, where the required input fields have been created for students to fill with the recorded data:


Frequency (kHz)    Power (dBVrms)
15
20
25
30
35

The student was subsequently instructed to change to the output channel (1), take and upload a screenshot of the output signal, record the peak-to-peak output voltage, take and upload a screenshot of the output power signal, and determine the output power at the given frequency values below:

Frequency (kHz)    Power (dBVrms)
15
20
25
30
35

Screenshots of both the input and the output signal from channels (0) and (1) would then be taken and uploaded, and the frequencies of the first six aliases in the sampled message would be determined and recorded in the fields below.


1.

2.

3.

4.

5.

6.

Finally, a screenshot of the first aliasing waveform was taken and uploaded and the results submitted. After the students submitted the answers on the experiment page, a survey page appeared with seven questions that had to be answered, together with a suggestions block to be filled in, to enable the design team to enhance the system and apply advantageous improvements in future modifications. The answers were eventually stored in the survey database. Full details of the operation and experimental procedure can be viewed at https://youtu.be/VAdJJ454_C8.

5 Results and Discussion

The remote lab was first operated at the university in the second semester of the academic year 2015/2016, when 30 students registered in the traditional communication engineering lab offered as part of the requirements of the bachelor degree programme in communication engineering. The students were requested to perform only 3 online experiments out of the 11 available, which could be opened at any time upon request. The remaining required experiments were physically conducted by students as part of the regular traditional lab. This approach was followed in order to gradually introduce the online lab into the curriculum with a minimum of ramifications and subsequently foster a paradigm shift and disseminate a culture of technology-enhanced learning.
The feedback survey consisted of 7 questions related to the quality of the online operational manual and the information it provided to set up and run the experiment, the ease of conducting the experiments, understanding the practical aspects of communication systems, the sense of operating the virtual equipment, the flexibility to fit the laboratory into the schedule, future preference for using remote experiments over traditional ones, and the overall rating of the performance of the remote lab. Ratings were based on a scale from 1 to 5 (1 very low to 5 very high). The survey was aimed at investigating students' comprehension, perception, views and satisfaction rather than seeking acceptance or consent, or even trying to reach a verdict on integrating remote labs into the curriculum at this early stage of development. However, the initial results, shown in Fig. 6, already indicate promising prospects: 78% of students expressed satisfaction with the manual of the remote lab providing enough information for easy setup of the experiment, 96% found it easy to conduct the experiment,


Fig. 6. Initial results on implementation of remote lab.

72% understood the practical aspects of communication systems, 76% felt as if they were operating real experiments, 87% could fit the laboratory work into their schedule, and 83% rated the overall performance of the remote lab favourably.
When asked to provide suggestions and observations to improve and enhance the process, procedures and operation of the remote lab and experiments, 18 students had no suggestions, while 5 commented positively and favourably on its flexibility, the great experience and the time and effort saved, and 12 students expressed the need for more trials and time to perform the experiment. The trial at hand may offer the university, and higher education institutions in Jordan and beyond, the opportunity to enhance their capacity to engage productively in development challenges and contribute to the indigenization of technology-enhanced education through a viable context-based approach. It is indeed an attempt to turn teaching tools inward in order to unpack the foundations of university teaching, learning and research environments.
A full online lab was subsequently offered at Princess Sumaya University for Technology in the summer semester of the academic year 2015/2016, and partially at Yarmouk University, the University of Jordan and the Hashemite University. The total number of students across all universities was 61. Evaluation results strikingly similar to those of the previous semester were observed, with 80% of students expressing satisfaction with the use, implementation, performance and preference for the remote lab. Despite some initial technical difficulties, it is reasonably safe to assume that students indeed find the implementation of the remote lab in university education practical, interesting and challenging. Subsequently, the same lab is being offered in the current fall semester of the academic year 2016/2017 in five different universities in Jordan: Princess Sumaya University for Technology, Yarmouk University, University of Jordan, Hashemite University and Al-Hussein Bin Talal University, covering a wide geographical area of the country.


6 Conclusions

A low-cost remote communication engineering laboratory containing a number of experiments that can be accessed via the Internet has been designed and operated successfully at Princess Sumaya University for Technology and 4 other universities in Jordan. Students were enabled to perform experiments online in blended learning and full eLearning modes, utilizing an integrated virtual learning environment based on the Moodle open-source learning management system. Students conducted several experiments, and the results show that the remote lab represents an appropriate environment for engineering education that can be formally accepted and officially adopted by university authorities to cope with advancements in global higher education. Further investigation, however, still needs to be carried out to explore more deeply the mode of delivery of engineering labs. Future work will mainly focus on establishing a network of labs in various engineering and scientific fields in order to share resources with other universities in Jordan and abroad.

Acknowledgment. The author would like to acknowledge the generosity of the European
Education, Audiovisual and Culture Executive Agency (EACEA) for the support provided to the
TEMPUS project entitled: “Modernizing Undergraduate Renewable Energy Education: EU
Experience for Jordan”, number 530332-TEMPUS-1-2012-1-JO-TEMPUS-JPCR and the
TEMPUS project entitled: “Enhancing Quality of Technology-Enhanced Learning at Jordanian
Universities”, number 544491-TEMPUS-1-2013-1-ES-TEMPUS-SMGR. Additionally, the author is especially grateful to Professor Manuel Castro and his team at the Electrical and Computer Engineering Department (DIEEC) of the Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain, for their encouragement, support and collaboration throughout the lifetime of the project.

References
1. Salah, R.M., Alves, G.R., Guerreiro, P.: IT based education with online labs in the MENA
region: profiling the research community. Int. J. Hum. Capital Inf. Technol. 6(4), 1–21
(2015)
2. Salah, R.M., Alves, G.R., Guerreiro, P.: A federation of online labs for assisting engineering
and science education in the MENA region. In: 3rd International Conference Technological
Ecosystems for Enhancing Multiculturality, Porto, Portugal, 7–9 October 2015
3. Auer, M.E., Zutin, D.G., Al-Zoubi, A.Y.: Online laboratories for eLearning and eResearch.
In: The 5th Congress of Scientific Research Outlook and Technology Development in the
Arab World (SRO5), Fez, Morocco, 25–30 October 2008
4. Auer, M., Al-Zoubi, A.Y., Zutin, D.G., Bakhiet, H.: Design of application-specific integrated
circuits for implementation in a network of remote labs. In: 2008 ASEE Annual Conference
and Exposition, Pittsburgh, Pennsylvania, USA, 22–25 June 2008
5. Danilo, G.Z., Auer, M.E., Al-Zoubi, A.Y.: Design and verification of application-specific
integrated circuits in a network of remote labs. Int. J. Online Eng. (iJOE) 5(3), 25–29 (2009)
6. Auer, M., Al-Zoubi, A.Y., Zutin, D.G.: Design and implementation of mobile clients for
remote labs. In: 5th International Symposium on Remote Engineering and Virtual
Instrumentation, Düsseldorf, Germany, 22–23 June 2008


7. Auer, M., Zutin, D.G., Al-Zoubi, A.Y.: Implementation of a mobile accessible remote lab.
Int. J. Interact. Mobile Technol. (iJIM) 2(3) (2008)
8. Jeschke, S., Al-Zoubi, A.Y., Pfeiffer, O., Natho, N., Nsour, J.: Integration of an online digital
logic design lab for IT education. In: The SIGITE 2008 Annual Conference, Cincinnati, OH,
USA, 16–18 October 2008
9. Al-Zoubi, A.Y., Nsour, J., Jeschke, S., Pfeiffer, O., Richter, T.: An electronic engineering
mobile remote laboratory. In: 8th World Conference on Mobile and Contextual Learning,
Orlando, Florida, USA, 26–30 October 2009
10. Al-Jufout, S., Al-Zoubi, A.Y., Jeschke, S., Nsour, J., Pfeiffer, O.: Application of remote
technology to electrical power system laboratory. In: The International Conference on
Education and New Learning Technologies, EDULEARN 2009, Barcelona, Spain, 6–8 July
2009
11. Jeschke, S., Pfeiffer, O., Al-Zoubi, A.Y., Nsour, J., Al-Jufout, S.: Web-based laboratories for
distance engineering education in Jordan. In: Sloan-C: International Symposium-Emerging
Technology Applications for Online Learning, San Francisco, USA, 17–19 June 2009
12. Jeschke, S., Al-Zoubi, A.Y., Pfeiffer, O., Natho, N., Nsour, J.: Classroom-laboratory
interaction in an electronic engineering course. In: International Conference on Innovations
in Information Technology, IIT, Al Ain, United Arab Emirates, 16–18 December 2008,
pp. 337–341 (2008). ISBN 978-1-4244-3396-4
13. Abu-Aisheh, A.A., Eppes, T.E., Otoum, O.M., AlZoubi, A.Y.: Remote laboratory
collaboration plan in communications engineering. In: IEEE Second Global Engineering
Education Conference (EDUCON), Learning Environments and Ecosystems in Engineering
Education, Amman, Jordan, pp. 837–840, 4–6 April 2011
14. Abu-Aisheh, A.A., Eppes, T., Al-Zoubi, A.Y.: Implementation of a remote analog and
digital communications laboratory for eLearning. Int. J. Online Eng. (iJOE), 6(2) (2010)
15. Alkouz, A., Al-Zoubi, A.Y., Otair, M.: J2ME-based mobile virtual laboratory for
engineering education. Int. J. Interact. Mobile Technol. (iJIM) 2(2) (2008)
16. Al-Zoubi, A., Hammad, B., Ros, S., Tobarra, L., Hernández, R., Rafael, P., Castro, M.:
Remote laboratories for renewable energy courses at Jordan Universities. In: IEEE Frontiers
in Education Conference Proceedings, Madrid, Spain, 22–25 October 2014
17. Tobarra, L., Ros, S., Hernández, R., Pastor, R., Castro, M., Al-Zoubi, A.Y., Hammad, B.,
Dmour, M., Robles-Gómez, A., Caminero, A.C.: Analysis of integration of remote
laboratories for renewable energy courses at Jordan Universities. In: IEEE Frontiers in
Education Conference Proceedings, El Paso, Texas, USA, 21–24 October 2015
18. Tobarra, L., Ros, S., Pastor, R., Hernandez, R., Castro, M., Al-Zoubi, A., Dmour, M.,
Robles-Gomez, A., Caminero, A., Cano, J.: Laboratories as a service integrated into learning
management systems. Int. J. Online Eng. (iJOE) 12(9), 32–39 (2016)

zamfira@unitbv.ro
Expert Competence in Remote Diagnostics - Industrial
Interests, Educational Goals, Flipped Classroom
& Laboratory Settings

Lena Claesson1(✉), Jenny Lundberg2, Johan Zackrisson1,
Sven Johansson1, and Lars Håkansson2

1 Blekinge Institute of Technology, Karlskrona, Sweden
{lena.claesson,johan.zackrisson,sven.johansson}@bth.se
2 Linnaeus University, Växjö, Sweden
{jenny.lundberg,lars.hakansson}@lnu.se

Abstract. The manufacturing industry is dependent on engineering expertise. Currently, the ability to supply industry with engineering graduates and staff that have up-to-date and relevant competences might be considered a challenge for society. In this paper an education approach is presented in which academia, industry and research institutes cooperate on the development and implementation of master level courses. The methods applied to reach the educational goals, concerning expert competence within remote diagnostics, have been on-site and remote lectures given by engineering, medical and metrology experts. The pedagogical approach utilized has been the flipped classroom. The main results show that academic courses developed in cooperation with industry require flexibility, time and effort from the involved partners. The evaluation interviews indicate that students are satisfied with the courses and the pedagogical approach but suggest more reconciliation meetings for course development. Labs early in the course were considered good, as was the division of labs into the system level and the component level. However, further long-term studies evaluating the impact are necessary.

Keywords: Engineering education · Flipped classroom · Smart home and health ·
Diabetes · Scientific literacy · Engineering competence · Academia - industry ·
Expert competence · Metrology · Internet-of-Things

1 Introduction

Engineering skills and competences that are desired by industry are important to integrate into the education system. This is to increase the employability of engineering graduates as well as to provide an education platform for further education that is adequate also for engineers active in industry. In the end, this is expected to result in increased competitiveness, productivity and innovation rate within the industrial system. The importance of recruiting staff with suitable competences has been examined in several studies, for example in an EU study from 2010 [1].
In engineering, there is a gap between the skills developed at universities and the
skills required by the industry [2]. Academia has an important role in bridging this gap.


The strength of academia combined with the competences of industry seems to be a given success factor for society. However, the "art" of creating academic courses at an expert level that are of relevance to industry is a challenge in itself.
According to Xia [3], there are possibilities to link research and teaching for a win-win situation. Xia mentions that combining education, work experience and knowledge generation in work-integrated learning provides students with opportunities for learning to be a professional by working and communicating with professionals.
Universities and companies are increasingly working together to define new degree and certificate programs. For example, Harris Corp. worked closely with the Florida Institute of Technology to develop a master of science program in systems engineering, providing an M.Sc. degree with a complementary enterprise architecture certificate. Other examples are the DARPA Grand/Urban Challenges and the Association for Unmanned Vehicle Systems International Autonomous Underwater Vehicle Competition, where industry, government and academia partners address real-world problems [4].
A common way to increase competences in industry is to procure courses from other educational or industrial partners. Today the Internet provides both free online courses and courses providing higher education credits, see e.g. Study, MOOC-List and FutureLearn [5–7].
The MOOC-List provides a collection of Massive Open Online Courses offered by different providers. To facilitate learning and training activities, e-learning provides a way that also meets today's demand that learning can be conducted anywhere and at any time. In [8] the phenomenon of MOOCs is described, placing them in the wider context of open education, online learning and the changes that are currently taking place in higher education at a time of globalization of education and constrained budgets. Chang presents in [9] a high-level review concerning e-learning and proposes the use of interactive learning as a recommended method for staff training in industry and academia.
In the EU initiative ICo-op [10], Industrial cooperation and creative engineering education based on remote engineering and virtual instrumentation, interesting approaches within the focus of this ongoing project, remote diagnostics, are presented. Its logical frameworks, laboratory settings and the like set an example of possible approaches to academia-industry cooperation.
The intention of our approach, to develop and implement master level courses in collaboration with industry and research institutes, is to complement the existing approaches and to build novel types of collaboration that further extend expert competences and increase innovation capacity. In this paper we introduce an education project focusing on academia - industry - research institute collaboration concerning the production of engineering courses in smart home and health applications directed towards expertise for innovation.
We describe the principles applied, the pedagogical approach, the setup, the educational goals and the laboratory settings. Within the framework of this Knowledge Foundation (KKS) funded project, "Remote Diagnosis - online engineering, at master level", BTH students and in particular employees of the companies participating in the project, but also other professionals in industry, were able to attend the newly designed distance learning courses in Measurement and Sensors [11]. KKS is the research financier


for universities with the task of strengthening Sweden’s competitiveness and ability to
create value [12].
The project is divided into three categories (a, b and c) based on the following areas:
(a) Electric power transmission, (b) Mechanical power transmission and (c) Smart home
and health applications. Furthermore, the design of three courses within in the area of
smart home and health applications will be described in Sect. 2.
Online Engineering can be described as methods to control and monitor physical equipment, but also smart homes and people, remotely over the Internet. In Sect. 3 the experimental setups in the remotely controlled laboratories and the design of the instructions for the experimental work in the remote laboratory will be described.
Furthermore, the pedagogical design, Flipped Classroom, implemented in the
courses will be discussed in Sect. 4. Flipped Classroom is a form of blended learning
which encompasses any use of Internet technology to leverage the learning in a class‐
room, providing teachers with more time to spend on interacting with students instead
of lecturing. This is commonly implemented with e.g. the aid of teacher-created videos
that students may study outside their scheduled time for lectures [13, 14].
Finally, in Sect. 5 the results of a questionnaire answered by students enrolled in the courses will be presented.
This study aims to answer two questions:
• How can academia make engineering courses of relevance to industry?
• How can we make use of resources in academia, such as laboratory resources, and the flipped classroom as a pedagogical approach to reach the educational goals?

2 Outline of the Courses in Smart Home and Health Applications

The courses are divided into two blocks, one block that covers methods for diagnosis and one block concerning technology for remote diagnosis. The courses in the block concerning methods for diagnosis build on each other, while the courses in the block covering technology can be studied independently of each other. Each course with a green frame in the block diagram shown in Fig. 1 gives 3 European Credit Transfer and Accumulation System credits (ECTS credits), except for the Ethics and Architecture lecture series. One academic year corresponds to 60 ECTS credits.
The course block that concerns methods for diagnosis is divided into three specializations: mechanical systems, electrical systems, and smart homes and health applications. Each course consists of six lecture sessions (2 × 45 min), six exercises (1 × 45 min) and one laboratory session (3 × 45 min). Each course is given during a period of eight weeks. Each course is given as a distance course and includes experimental work on remotely controlled experimental setups. Thus, the lab assignments in the courses are carried out remotely. Adjustments to the course schedule were made based on interest from the companies involved. The courses include application modules, and these can be adjusted according to the participants' interests.
Furthermore, the first time a course was given within the project, only employees from the companies participating in the project attended the course. The second time a course was given, both students from academia and participants from industry could attend the course.


Fig. 1. Block diagram of the courses developed in the project and how different courses and areas
of expertise support a remote diagnostic system. Real equipment, with control, calculations and
automation inside blue frames and courses inside green frames.

The pedagogical approach utilized was the flipped classroom: in essence, the students were provided with course material before each lecture, and the lecture focused on deeper discussions and analysis concerning the course subject. In some of the courses a questionnaire was compulsory for the students to fill in before each lecture. The questionnaire was used as a tool to identify whether the students had read and understood the lecture material.
In essence, the considered courses within smart home and health applications were as follows.

2.1 Course 1: Technical Metrology

The technical metrology course was held in cooperation with the SP Technical Research Institute of Sweden. SP works in close cooperation with industry and academia, and evaluates technologies, materials, products and processes, providing an effective link between research and commercialization [15]. Metrology is a special niche for SP, which is Sweden's national metrology institute.
Eleven students, all company representatives, participated in the course. The course was divided into three tracks corresponding to the subject division in the project: electrical, mechanical, and smart home & health applications.
The whole series of lectures in this course followed the well-known quality-assurance loop (originally formulated by Deming) [16]. Lecture 1 was a general lecture in which common concepts were introduced and specifics related to the different focus areas were presented. The remaining lectures were: Lecture 2 - Metrology with humans, Lecture 3 - Temperature, fall and


moisture, Lecture 4 - Introduction to measurement uncertainty, Lecture 5 - Decision-making and Lecture 6 - Decision-making: Human cases.
In general, the lectures were structured as follows according to the flipped classroom approach. Lecture 1, for example:
• Introduction to metrology - basic concepts.
• E-tivity: Introductory Metrology. E-tivity is a term coined by Professor Gilly Salmon
to describe a framework for facilitating active learning in an online environment [17].
An E-tivity involves learners interacting with one another and with the course tutor
in an online communication environment, e.g. Adobe Connect, in order to complete
a particular task.
• Spark: Pre-recorded lecture, video on group work, video presenting whole course.
• Learning objectives: – Why measure? – How to measure? – What decisions are based
on measurements?
• Task: – Study the material on the LMS Its learning, pre-lecture, videos – This lecture –
Complete assignment 1 and submit it by Its learning.
• Learning resources: – Videos, pre-lecture, this lecture,
• Respond to others: Group work, video presenting the whole course.

2.2 Course 2: Analysis and Modeling 1


Participants were both students and company employees, and the curriculum of the course "Analysis and modeling of smart home and health applications" described the learning goals. Thus, after completing the course the student should have acquired the following knowledge and understanding:
• demonstrate an understanding of the use of systematic methods for modeling of
technical systems in the application of smart home and health systems
• understand the principles of modeling of sensors, actuators and modeling of smart
homes and health systems
• understand the general characteristics of dynamic technical systems
• understand the use of simulation as a method to analyze the technical system char‐
acteristics
• understand the system properties that include measurements on people
The activities in the course were structured accordingly. Furthermore, as a part of the pedagogical approach, i.e. the flipped classroom, the students had access to a prerecorded metrology lecture with specialist and student comments focusing on measurements with persons, measurements on humans and humans as measurement instruments.
There are high demands on the skills and competences of engineering students. Given this, the practice of performing real-world engineering laboratory experiments, as an important means of providing engineering skills, needs a focus of its own. Today, many academic institutions offer a variety of web-based experimentation environments, so-called remote laboratories (RL), that support remotely operated physical experiments. These are new tools enabling universities to provide students with free experimentation resources without a substantial increase in cost per student.


At the end of 2006, the Department of Applied Signal Processing at BTH started a project known as Virtual Instrument Systems in Reality (VISIR) together with National Instruments in the USA and Axiom EduTech in Sweden, to disseminate the online workbench concept created at BTH using open source technologies in collaboration with other universities and organizations [18, 19]. Apart from BTH, five universities in Europe have set up VISIR online laboratories for electrical experiments: (1) University of Deusto, Bilbao, Spain, (2) The National University of Distance Education, Madrid, Spain, (3) Carinthia University of Applied Sciences, Austria, (4) FH Campus Wien, Wien, Austria, and (5) Instituto Politécnico do Porto, Portugal [20–22].

Lecture 1 Introduction
The course was introduced and the curriculum and course layout were presented. The flipped classroom as pedagogical approach was presented, as was the course structure with lectures, experimental work, literature, schedule, grading, and expectations on the students. The course leader, who holds a Ph.D. in computer science, was present in all the lectures, thus maintaining an engineering perspective throughout the course.

Lecture 2 Smart Home Systems


This lecture focused on the systems level of smart home and health systems, in order to provide an overview of the challenges of smart home systems and the specifics of an expert working with the state of the art in technology and implementation aspects. A technical manager from industry, working part time both in a small and in a large company with smart home technologies, was strategically selected as lecturer. His competence and up-to-date skills on system and application level were put into action while assembling and adding/removing sensors to a smart home system in real life during the lecture. After the lecture, discussions with a student concerning specific technical implementation details were held, giving the student answers and indications towards further competence and development.

Lecture 3 Patient Personalization


A student on a technical study program who has type 1 diabetes discussed engineering support solutions for visualization of information and measurements and for automation of entering data and information. He spoke from the needs perspective: how he functions, and his interests and requirements from a personalization perspective. Application and web engineering were presented and discussed from a requirements engineering perspective and an implementation perspective. Different functions and ways of displaying measurements were discussed. A brief focus on the social sharing of data was also part of the lecture. Interesting discussions between student and lecturer and between students were initiated based on diabetes, the course literature, and the personalized competence perspective. Several questions revolved around measurements, when to take them optimally, and which factors impact blood sugar.

Lecture 4 Sensor System (Engineer and Entrepreneur Component Level)


This lecture was given in the lab environment of a small company. The lecturer was an industry development manager with entrepreneurial experience from a large company, who now runs his own smaller company, and he taught at the component level. App and sensor


engineering were discussed in detail from an application perspective at the research front. Specific laws, rules and regulations, and street-smart perspectives, as well as attitudes and ethical acceptance models, were discussed. Furthermore, less successful engineering solutions and domain- and staff-related changes were presented. The students were engaged in the discussions, participated fully, and expressed positive words as signs of insights gained from the lecture and the discussions. Specific engineering descriptions of how the sensors were engineered were given, and user interfaces were discussed. Anonymized personal data were presented, with discussions about personalization, histograms, and individual bio-rhythms, see Fig. 2.

Fig. 2. Sensor-based bio-rhythms from anonymized individual personal data, with time on the x axis and number of events on the y axis.

Some discussions on personal bio-rhythms and personal patterns in behavior followed. Models for prediction and change of behavior were also presented from a personalization perspective; predictions are a prioritized area for innovation in the EU context. Reliability of measurements, complexity in the interpretation of data, and how to read and understand statistics and histograms were covered, corresponding to the learning goals 'general characteristics of dynamic technical systems' and 'system properties that include measurements on people'.
A clear innovation perspective was adopted, and the lectures held out at the company, in the company lab, were productive and appreciated. However, small technical obstacles had to be handled during the process.

Lecture 5 Medical Doctor and Researcher Expert on Diabetes


A medical doctor, who is a specialist and also a researcher in diabetes, gave a lecture on the basics of diabetes as well as on more advanced aspects of diabetes. A lively discussion was held concerning what is technically possible today and what functions are desired from a medical perspective.


2.3 Course 3 - Analysis and Modeling 2


This course was given only to company employees, in cooperation with industry, and was iteratively evaluated, i.e. suggestions for topics and lectures were collected, and after the lectures comments for improvements were given. The lectures contained input from a PhD thesis with focus on gesture recognition technologies, awarded the Chester Carlson Swedish engineering prize. The labs were to be applied to the participants' own industry and were based on the smart home IoT lab at BTH. Methods for diagnosis within healthcare were presented. The lectures covered the essence of smart home architectures, measurement uncertainty and metrology, system architecture, IoT, open systems, technology for measuring movements and motion (for example accelerometers), skin cancer, diabetes, and the flipped classroom approach.

3 Lab Settings and Design of Lab Exercises

3.1 Smart Home Lab Architecture and Settings

Laboratory exercises represent an ideal scenario for engineering students to comprehend fundamental concepts through their application in actual situations, and to analyze, synthesize, and make judgments based on evidence.
The smart home lab is situated at BTH in Karlskrona. The physical lab structure is similar to an ordinary furnished flat: a basic one-room apartment with entrance, hall, one living room with a bed, toilet, and kitchen, see Fig. 3. Hygiene facilities, i.e. shower or bathtub and a washing machine, are however excluded. Standard sensors available on the market are used. They were selected based on the requirements given by the functions and with the aim of keeping the number of sensors/actuators as low as possible, e.g. by using multi-sensors. They are ubiquitous and merged into the environment, i.e.

Fig. 3. The architecture of the smart home lab. X11 – sensor Fibaro: light, temperature and motion,
10 – sensor VISION: toilet door, open – close, 9 – sensor, Fibaro, living room, door, open – close, 7 –
motion detector, SP814-1


engineered to blend in. Thus a person may in principle pass through the whole apartment without noticing the sensors/actuators.
The lab setting has been engineered in cooperation with industry; it is thus an industrial platform with off-the-shelf sensors and a software solution that is in use in everyday smart home applications, more specifically in care homes or in the homes of elderly and/or disabled persons. The system includes remote control and functions enabled via a smart phone.

3.2 Platform for Lab Exercises 1 and 2


The platform in the lab was based on standard solutions on the market, but specially compiled to suit the experimental work in the course. The wireless protocol used is Z-Wave, which transmits the sensor events to the gateway, where all sensor data is collected [23]. The gateway translates the sensor events into MQTT events; MQTT is a widespread publish/subscribe protocol, commonly used for IoT applications, which makes it easier to integrate the solution with other systems. Several services use the MQTT protocol to monitor what is going on in the experiment apartment. One of them saves all events to a database, from which they can be queried through a simple web interface. In Fig. 4, the graphical representation of the smart home sensors, including the integration

Fig. 4. Smart home lab environment, sensor/actuator log.


of the smartphone fall detection application and the geographical position indicating where the fall has taken place, is visualized.
As a proof of concept for simulating an elderly person falling down in their home, a set of automation rules was written to keep track of where in the apartment the person was and to automatically notify when and where a person had fallen.
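The rule logic can be illustrated with a minimal sketch, assuming the paho-mqtt Python client (1.x API) and hypothetical topic names and payload fields; the broker address, topics, and JSON keys below are illustrative assumptions, not the actual gateway configuration.

import json
import paho.mqtt.client as mqtt

BROKER = "gateway.local"      # assumed address of the smart home gateway
last_room = "unknown"         # latest room where motion was detected

def on_connect(client, userdata, flags, rc):
    # Subscribe to all sensor events published by the gateway.
    client.subscribe("smarthome/#")

def on_message(client, userdata, msg):
    global last_room
    event = json.loads(msg.payload.decode("utf-8"))
    room = msg.topic.split("/")[1]
    if event.get("type") == "motion" and event.get("value"):
        last_room = room      # keep track of where the person currently is
    elif event.get("type") == "fall_detected":
        # Notify when and where a (simulated) fall has happened.
        alert = {"room": last_room, "time": event.get("time")}
        client.publish("smarthome/alerts/fall", json.dumps(alert))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()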
As MQTT is used by many different companies in IoT, it was interesting to see if it
could be integrated with other solutions. The basic protocol is always the same, but the
structure and naming of the events differ slightly between operators. The IBM IoT plat‐
form was one case study where it was shown that commands on their MQTT broker
could be translated and sent to and from the smart home gateway, making it possible for
solutions running in the IBM cloud to interoperate with the smart home system.
The reasons for selecting Z-Wave are both academic and industrial, and in one of the lectures in the smart home & health applications course a detailed presentation and discussion of other industrial standards and their advantages and disadvantages was given. A bridge to the IBM open system, available on GitHub, has been engineered.
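The bridging idea, translating between the local event naming and a cloud broker's naming scheme, can be sketched as follows; the topic layouts and broker addresses are illustrative assumptions and do not reflect the actual GitHub bridge or the IBM IoT platform configuration.

import paho.mqtt.client as mqtt

LOCAL_BROKER = "gateway.local"      # assumed local smart home gateway
CLOUD_BROKER = "cloud.example.org"  # assumed cloud-side MQTT broker

cloud = mqtt.Client()
cloud.connect(CLOUD_BROKER, 1883)
cloud.loop_start()                  # handle cloud traffic in the background

def on_local_message(client, userdata, msg):
    # Remap the local topic to the naming scheme expected on the cloud side
    # and forward the payload unchanged.
    cloud_topic = "iot/events/" + msg.topic.replace("smarthome/", "", 1)
    cloud.publish(cloud_topic, msg.payload)

local = mqtt.Client()
local.on_message = on_local_message
local.connect(LOCAL_BROKER, 1883)
local.subscribe("smarthome/#")
local.loop_forever()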

3.3 Design of Lab Exercises 1


The first laboratory experiment essentially concerned the interpretation of activities in the home based on real-time updates of the sensor/actuator log. The students were given the architecture of the room, the functions of the sensors, and how they were placed.
A fundamental part of a smart home environment is to manage the sensors/actuators. These can be represented using different visualizations or interfaces. In this lab, we have developed various types of sensors and representations of the sensors when they are activated. Your task in the lab is to try to identify what is happening in the smart home based on what the sensors detect.
Learning Outcomes: To be able to interpret events, understand the complexity of events from the sensor/actuator data, and try to understand how the sensor data is visualized, how often data is sent, and in which format, depending on what the sensors measure/register/identify.
Describe the events on the basis of what you think goes on in the lab.
Review and discuss which events were really carried out in the lab.
Is the visualization and logging of events reasonable, linked to what the sensors measure?
Can it be done better?
Discuss issues such as thresholds, timestamps, updating of sensor data, when a sensor is activated, visualization, interface, representation, etc. Use the course literature and other relevant literature.
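How the logged events could be inspected outside the web interface can be sketched as follows, assuming the events end up in an SQLite table named events with columns (timestamp, sensor, value); the database path and schema are illustrative assumptions, not the actual lab database.

import sqlite3

def print_timeline(db_path="smarthome_events.db", limit=20):
    # Print the most recent sensor/actuator events as a simple timeline.
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT timestamp, sensor, value FROM events "
        "ORDER BY timestamp DESC LIMIT ?", (limit,)
    ).fetchall()
    con.close()
    for ts, sensor, value in reversed(rows):
        print(f"{ts}  {sensor:<30} -> {value}")

if __name__ == "__main__":
    print_timeline()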

3.4 Design of Lab Exercises 2

Learning Outcomes: To understand and manage large amounts of data, to manage data files, and to visualize data.


There are large amounts of data that can be used in smart home and health applications [24]. Some are open, others are locked in at, for example, various companies and public bodies. To access, manage, visualize, and understand this type of data is a desirable qualification in smart home and health applications. Use the resources available within the application area, diabetes, to develop a research question and visualize data and services related to this application.
In the smartphone application shown in Fig. 5 the accelerometer data were presented. The students downloaded the app themselves, used it remotely, got access to the log, and could follow the lecture on threshold values.
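A minimal sketch of a threshold-based fall indicator on accelerometer samples is given below, assuming the app log provides (x, y, z) accelerations in m/s²; the threshold values and the sample data are illustrative assumptions only, not the thresholds discussed in the lecture.

import math

FREE_FALL_THRESHOLD = 3.0   # magnitude well below 1 g suggests free fall (m/s^2)
IMPACT_THRESHOLD = 25.0     # magnitude well above 1 g suggests an impact (m/s^2)

def detect_fall(samples):
    # Return True if a free-fall phase is followed by an impact phase.
    free_fall_seen = False
    for x, y, z in samples:
        magnitude = math.sqrt(x**2 + y**2 + z**2)
        if magnitude < FREE_FALL_THRESHOLD:
            free_fall_seen = True
        elif free_fall_seen and magnitude > IMPACT_THRESHOLD:
            return True
    return False

# Example log excerpt: standing still (~9.8 m/s^2), brief free fall, hard impact.
log = [(0.1, 0.2, 9.8), (0.0, 0.1, 9.7), (0.2, 0.3, 1.5), (5.0, 8.0, 30.0)]
print(detect_fall(log))  # -> True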

Fig. 5. Visualization of accelerometer x, y, z values in the smart phone app. This application of accelerometer data measures the accelerometer movements during a fall. It illustrates how service personnel can be empowered by functions for knowing if a person is standing up or has fallen, enabled by remote diagnostics, with the accelerometer attached to the body in a smart phone or in a smart watch with a safety alarm on the wrist.

Furthermore, we used two large companies’ standard solutions on IoT and meas‐
urements and metrology for the lab.

4 Flipped Classroom Outline

Initially, it is important to prepare the students for what the pedagogical method flipped classroom is. The teacher starts the course with an introduction and explains to the students what the flipped classroom method implies. This part is very important in order to get active students in the online lectures.


In the course Analysis and modeling of smart home and health applications, in the learning management system Its Learning, a student finds materials to read at least one week before each session. The materials contain a compulsory quiz to be answered.
The teacher can document which material the students had shown interest in before a lesson, but it was not possible to see if it had really been read by the students or just downloaded. To find out if the documents were read, not just downloaded, a quiz was sent out to the students. The quiz to be completed by the students contained questions that had to be solved by the students. The number of correct answers could then give an indication of whether the students had read the article or the material before the lesson.
Furthermore, the students can provide feedback on whether the material needs to be updated or adapted to their situation or wishes regarding content.

5 Results

After the sessions a questionnaire was passed to the students to acquire their opinion
about the courses. The questionnaire had 5 open ended questions. In total 17 students
responded to the questionnaire. The questionnaire in its full form and the answers to it
are presented below.
Questionnaire.
1. Did you find the course had relevance to the industry you're working in?
2. If, what kind of job do you have, for example, business, academia, self-employed?
3. Has this course given you the opportunity to combine work with studies, if so, how?
4. What did you think about getting articles and study questions before you entered the
lecture? (Pedagogical approach Flipped classroom)
5. What is your opinion about the lab exercises 1 and 2?
The students' answers to the open-ended questions were:
1. Yes (8), No (8), I do not work (1).
Comments from 7 students: The course has clearly been relevant! The course was
built on the “flipped classroom” and been so open has suited me very well. Person‐
ally, I appreciated the openness and formlessness (that’s the best word I am going
on right now) the issues very much, because by its nature given me a “nudge in the
right direction,” while they were big enough (not to be “lookup issues “), and made
room for me and my way of working. I definitely think that through my work with
the course got a good overview of smart home and its applications for health and
feel far better equipped to possibly one day working in the fields. IoT. I can develop
in my work, but most interested in self-interest. Not linking right now but work with
similar tools and I will hopefully get use of the education in future.
2. Automotive Industry (5), Calibration Technician (1), IT industry (3), Electrical
Engineer (3), Health Centre (1), Science Center (1) Manufacturing (1) and
Production (1).
3. Very good because assignments deadline was at the end of the course and it was up
to each individual to plan their work. Did thesis at the same time and was thus outside
and worked during daytime. I spent 1-2 h a week on the studies associated with the


course. Had no time for the course. Did not read the course at all because I did not
have time.
4. The pedagogical design Flipped Classroom suits me very well.
5. Very good! I liked that the first lab was early and with the layout. It gave me the
inspiration to think about smart home opportunities and span, present and future
situation, etc. Thought about other lab because I had to go a little deeper in a quite
specific application, and work on the sensor level. The laboratory then supplemented
each other very well (a general laboratory system-level, application-specific labo‐
ratory component level) and inspired at least me (again, thanks to data transparency
and formlessness) to think freely and to take it beyond the assignments.

6 Conclusion

Academia usually educates students; in this approach, a novel way to educate both regular students and industry staff has been implemented, furthermore raising competence towards a level of expertise for innovation for those already employed in the engineering industry. Conclusively, the answers to the two questions are as follows:
How can academia make engineering courses of relevance to industry?
• Hard to find students holding the requirements
• Hard to find students having the time being a student and performing work in parallel
• The production line of the companies needs to be taken into consideration (that is, holding a product deadline is the main priority for the companies)
• Trust
• Business awareness
• Measuring innovation - over time - how
• Education platform - how to reach students - how to measure their process
The questionnaire reveals that an equal number of persons found the course of interest to the industry they worked in as found it not of interest. Regarding the second question, of how we can make use of academic resources such as the flipped classroom as a pedagogical approach to reach the education goals, one challenge has been to encourage the students to go through the preparatory material in advance. The tools used by the teachers for measurement in the learning platform show whether a student downloaded the literature, but not whether the student has read and understood it; this has therefore been complemented with a survey holding questions that can only be answered correctly if the given literature has been read.
Meanwhile, the flipped classroom approach leads to greater effectiveness through more active learning. Some specific goals for the most active way of learning were to:
• Encourage students to prepare for each lesson in a more structured way
• Help teachers to better identify students' difficulties in good time to adjust the teaching
• Help students to develop a stronger "need to know"
• Establish an interactive environment in the 'classroom'


Further work and approaches towards evaluating education approaches of relevance to industry are of interest and much needed in several countries, as one way of bridging the gap in persons holding competence of relevance for industry.

References

1. European Commission: Employers' perception of graduate employability. Analytic report, Flash Eurobarometern (2010)
2. Jonassen, D., Strobel, J., Lee, C.B.: Everyday problem solving in engineering: lessons for
engineering educators. J. Eng. Educ. 95, 139–151 (2006)
3. Xia, J., Caulfield, C., Ferns, S.: Work-integrated learning: linking research and teaching for
a win-win situation. Stud. High. Educ. 40, 1560–1572 (2015)
4. Ferguson, N.: Achieving synergy in the industry-academia relationship. Computer 44, 90–92
(2011)
5. Study.com. http://study.com/
6. MOOC-List. https://www.mooc-list.com/
7. FutureLearn. https://www.futurelearn.com/
8. Yuan, L., Powell, S.: MOOCs and open education: implications for higher education. Report,
JISC cetis (2013)
9. Chang, V.: Review and discussion: E-learning for academia and industry. Int. J. Inf. Manag.
36, 476–485 (2016)
10. ICo-op. http://www.ico-op.eu/
11. BTH: Diagnos på distans- ‘online engineering’ på mastersnivå. bth.se/diagnospadistans
12. KK-Stiftelsen. http://www.kks.se
13. Kim, M.K., et al.: The experience of three flipped classrooms in an urban university: an
exploration of design principles. Internet High. Educ. 22, 37–50 (2014)
14. Strayer, J.F.: How learning in an inverted classroom influences cooperation, innovation and
task orientation. Learn. Environ. Res. 15, 171–193 (2012)
15. SP Technical Research Institute. https://www.sp.se
16. Pendrill, L.: Applications of statistics in measurement and testing. https://metrology.wordpress.com/measurement-process-index-svenska/
17. Salmon, G.: E-tivities: The Key to Active Online Learning. Routledge, New York (2013)
18. Gustavsson, I., Claesson, L., Nilsson, K., Zackrisson, J., Zubia, J.G., Jayo, U.H., Håkansson,
L., Bartunek, J.S., Lagö, T., Claesson, I.: The VISIR open lab platform. In: Azad, A., Auer,
M., Harward, V. (eds.) Internet Accessible Remote Laboratories: Scalable E-Learning Tools
for Engineering and Science Disciplines, pp. 294–317 (2012)
19. Gustavsson, I., et al.: The VISIR open lab platform 5.0-an architecture for a federation of
remote laboratories. In: Proceedings of the REV 2011 Conference (2011)
20. Gustavsson, I., et al.: On objectives of instructional laboratories, individual assessment, and
use of collaborative remote laboratories. IEEE Trans. Learn. Technol. 2, 263–274 (2009)
21. Gustavsson, I., Zackrisson J., Lundberg J.: VISIR work in progress. In: 2014 IEEE Global
Engineering Education Conference (EDUCON) (2014)
22. Tawfik, M., et al.: VISIR: experiences and challenges. Int. J. Online Eng. 8, 25–32 (2012)
23. Z-Wave. https://www.zwavesverige.se/
24. NDR - Nationella diabetsregistret. https://www.ndr.nu/#/knappen

Parallel Use of Remote Labs and Pocket Labs
in Engineering Education

Thomas Klinger, Danilo Garbi Zutin, and Christian Madritsch

Carinthia University of Applied Sciences, Villach, Austria


t.klinger@cuas.at, {garbi,madritsch}@fh-kaernten.at

Abstract. This paper shows how Pocket Labs, being the latest trend in engineering education, can be used together with already established Remote or Online Labs. Not only technical aspects, but also didactical methods and students' motivation have to be considered.

1 Introduction

Remote Labs have already existed for several years. In this concept, laboratory hardware is located at a place where storage and maintenance of the experiments are possible, which mostly is a university or school campus. Students
access the lab exercises via the Internet using a service broker; an example is the
iLab Shared Architecture (ISA) [1], a Web services based distributed software
framework to manage heterogeneous remote labs. The federation model of ISA
allows for an easier sharing of remote labs across different institutions assuming
the remote labs implement the ISA Web services API.
Pocket Labs became a viable option when the prices of hardware reached such a low level that it was possible to provide each student with his or her own piece of laboratory equipment. This paper discusses the combination of both principles, which was carried out for the first time in Fall 2016 at CUAS [2,3].

2 Lab Concepts
Online or Remote Labs provide a solution for students to perform laboratory
exercises at a self-chosen time and also from remote places, mostly at home.
As personal interaction with the measurement and experimentation object is a critical issue for engineering students, Remote Labs in particular contain real hardware. Access to this hardware is provided via web interfaces and very often also
via video showing the exercise itself. Nevertheless, students are not in physical
contact with the experiment.
As Pocket Labs provide actual physical contact of students with real hardware, they can be used especially for basic exercises and for students without much experience. If electric or electronic components are used, they will be limited to standard and cheap components, as any other solution would not be feasible for financial reasons.



Figure 1 shows different lab infrastructure concepts, starting with the classic lab, where students have to come to the university campus and perform their exercises at a predefined time. Remote Labs have the advantage that exclusive and expensive hardware has to be installed and maintained only once and can be shared among students. Finally, Pocket Labs bring students again together with lab hardware, combined with the advantage of a more or less freely chosen location and time.

Fig. 1. Laboratory infrastructure concepts and methods

3 Didactic Considerations

As the concept of Pocket Labs requires different teaching methods, some didactic
aspects have to be considered. Especially the difference between full-time and part-time or evening students, who have to accomplish their studies in parallel with a regular job, has to be taken into account.

3.1 Motivation

As part-time students always try to optimize, they might skip all or part of the exercises, thus accepting lower grades. To prevent this, the laboratory exercises have to be integrated into the respective courses in such a way that they are not only essential for passing the course, but also serve as an obvious tool for the understanding of the course topics.
This situation is slightly different for full-time students. Usually, they need more instructions, guidelines and "helping hands", which makes it more difficult to leave them alone with laboratory exercises. In that case, it is even more important to define the exercises as an integrated part of the lectures.


3.2 Supporting Materials

The preparation and distribution of supporting materials have to be considered


thoroughly. Especially if the exercises are part of beginner’s courses in the first
or second semester of the study program, the instructions for them have to be
well prepared and should serve as a step-by-step manual for the exercise.
Not all of the beginners are familiar, e.g., with breadboarding guidelines; they do not know which pin holes are connected, how to use power and ground strips, and other specific topics. Additionally, students should learn to read electronic circuit diagrams and also establish a connection to the real circuits on their breadboards.

3.3 Preparation of Lab Exercises

It is obvious that all laboratory exercises for Pocket Labs have to be thoroughly prepared and tested before they are handed out to students. An important issue is whether the laboratory exercises with Pocket Labs are integrated into existing courses or are stand-alone. Another possibility is to define Pocket Lab exercises as optional, so that they may provide further and deeper understanding of the topics of a corresponding lecture or in general.

4 Examples of Parallel Use

The following examples show how a parallel and yet mutually supporting use of lab concepts and infrastructure can be achieved. They are taken from different lectures, are integrated in the respective curricula at different levels, and even use different lab hard- and software, so that the general applicability of the concept is shown.

4.1 Electrical Engineering

The course Electrical Engineering is part of the curriculum for first-semester students. In this course, students learn the first concepts necessary to analyze electrical circuits, such as Ohm's law, methods of network analysis (KVL/KCL), operational amplifiers, and the basics of RC and RL switching circuits. Laboratory work plays a major role for students in learning these concepts; therefore this course was chosen as one of the pilot courses for a parallel use of Pocket and Online Labs.
With this example we aim to show how Online and Pocket Labs can be used on a complementary basis. Each one of them is more adequate for a particular situation. For example, Online Labs are very well suited for performing
measurements with circuits for which the internals should be kept hidden from
the students for didactic purposes. These circuits can be represented by black
boxes. Of course, implementing a black box for a Pocket Lab is possible, but it is
also a highly inefficient approach, since this very specific hardware setup would
need to be replicated every semester, for each student of the course. With an


online lab, since the same experiment setup is shared among all students, the
creation of the experiment and its maintenance are much simpler from the point
of view of the lecturer.
An example of a lab exercise delivered to the students of this course consists of finding the Thévenin and Norton equivalents at terminals A and B of an unknown network represented by a black box (Fig. 2), which, in our case, contains
a simple voltage divider. Since students do not know the internals of the black
box, the Thévenin and Norton equivalents can only be obtained experimentally
by measuring the open-circuit voltage on terminals A and B and the short-circuit
current that flows through A and B.
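As a purely numerical illustration of the evaluation step (the measured values below are hypothetical, since the contents of the black box are intentionally not disclosed to the students), the equivalents follow directly from the two measurements:

V_oc = 5.0     # measured open-circuit voltage across A-B in volts (hypothetical)
I_sc = 0.010   # measured short-circuit current through A-B in amperes (hypothetical)

# Thevenin equivalent: ideal voltage source V_th in series with R_th.
V_th = V_oc
R_th = V_oc / I_sc            # 500 ohms for the values above

# Norton equivalent: current source I_no in parallel with R_no = R_th.
I_no = I_sc
R_no = R_th

print(f"Thevenin: V_th = {V_th} V, R_th = {R_th} ohm")
print(f"Norton:   I_no = {I_no} A, R_no = {R_no} ohm")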

Fig. 2. Finding the Thévenin and Norton equivalent of a black box

The Electrical Engineering course material and assignments are managed


with the Moodle Learning Management System. The students sign in to the LMS
with their university credentials. The course environment of the LMS contains
the lab assignments and the lab client application embedded as an external LTI
compliant tool. At the end of the course students have to submit the lab protocol.
The Online Lab exercises were implemented with the VISIR (Virtual Instrument Systems in Reality) Lab, a flexible remote workbench platform for experiments in electronics [4].

4.2 Programming Exercises

Within the lecture of Computer Science (Bachelor degree program), students learn a programming language from the ground up. Additionally, the basic concepts of algorithms, data structures, and program flow are taught. Exercises, labs, and student homework are major components of this practice-oriented lecture. Part of the student homework is the Virtual Programming Lab (VPL), which


is implemented in Moodle. VPL allows the teacher to set up a group of programming exercises. Students receive an exercise description via Moodle. This description explains the expected outcome in detail and also provides a set of test cases or test data against which students are able to check their results (Fig. 3).
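A minimal sketch of the kind of task and self-checks involved, here for the prime number example of Fig. 3, is shown below; the exact task text and the automated test cases used in the course are assumptions for illustration.

def is_prime(n: int) -> bool:
    # Return True if n is a prime number.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Simple checks of the kind an automated test suite could run against a submission.
assert is_prime(2) and is_prime(13) and is_prime(97)
assert not is_prime(1) and not is_prime(91)   # 91 = 7 * 13
print("all test cases passed")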

Fig. 3. VPL prime number test cases

Students can edit their program source code in the browser and they can run
their programs interactively. The teacher has the ability to search for similarities
between the files and can finally grade the results.

4.3 Analog Computing

Analog computing is a technology that was widely used in the second half of the last century to perform complex calculations which digital computers of that age were not able to solve in a reasonable time and with reasonable effort. Although the idea of analog computing is itself very old (think of the Antikythera mechanism, dated to about 100 B.C.), it is nowadays outperformed by digital computers; nevertheless it is a good basis for students to understand the principles of calculation circuits [5].
It is not possible to provide every student or a number of laboratory places with even a small analog computer; the development effort as well as the costs would be too high. Therefore, the topic is ideal for a Remote Lab. Additionally, students can learn the basics of calculation circuits, such as summing and difference amplifiers, and others. With this understanding they are also able to understand the function of simple analog computers, which may be provided as a Remote Lab by the University.


4.4 Image Processing


Image Processing is embedded in different lectures in both the bachelor and master degree programs. After students have learned the basic principles, they use different platforms to implement image processing applications. One platform is the Raspberry Pi. Students use the Pi Camera to implement an edge detection algorithm in Python using OpenCV. When applying an edge filter, the intensity of the edge is highest if the edge is in focus.
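The underlying idea can be sketched in a few lines of Python with OpenCV: an edge filter (here the Laplacian) responds most strongly when the image is sharp, so its variance can serve as a focus score. The file name is a placeholder, and the students' actual implementation may differ.

import cv2

def focus_measure(image_path: str) -> float:
    # Return the variance of the Laplacian as a sharpness/focus score.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Laplacian(gray, cv2.CV_64F)   # edge filter response
    return float(edges.var())                 # higher variance = sharper edges

# Comparing the scores of frames captured at different focus positions lets an
# auto-focus loop pick the position with the highest score.
print(focus_measure("frame_at_focus_position_0.png"))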

Fig. 4. User interface of an auto-focus remote lab in image processing lectures [6]

This principle is re-used for the implementation of an auto-focus algorithm on a Visualizer Remote Lab (Fig. 4). The focus motor of the lens of the visualizer is controlled via a serial interface. Image processing methods are used to compute the edge filter and bring objects into focus.

5 Conclusion
It could be shown that the parallel use of Remote and Pocket Labs adds value to engineering education if some didactic aspects are considered. As students are more or less left alone when accomplishing the exercises, guiding mechanisms and supporting materials in particular have to be thoroughly considered.

References
1. Harward, V.J., et al.: The iLab shared architecture: a web services infrastructure
to build communities of internet accessible laboratories. Proc. IEEE 96, 931–950
(2008). doi:10.1109/JPROC.2008.921607


2. Klinger, T., Madritsch, C.: Use of virtual and pocket labs in education. In: REV
2016, Madrid, Spain (2016)
3. Klinger, T., Madritsch, C.: Collaborative learning using pocket labs. In: IMCL 2015,
Thessaloniki, Greece (2015)
4. Gustavsson, I., et al.: A flexible electronics laboratory with local and remote work-
benches in a grid. Int. J. Online Eng. (iJOE) (2008)
5. Ulmann, B.: Analog Computing. Oldenbourg, München (2013)
6. Klinger, T.: Image Processing with LabVIEW and IMAQ Vision. Prentice Hall
PTR, Upper Saddle River (2003)

The Effectiveness of Online-Laboratories
for Understanding Physics

David Boehringer and Jan Vanvinkenroye

Computer Center (ICT Services), University of Stuttgart, Stuttgart, Germany


{david.boehringer,jan.vanvinkenroye}@tik.uni-stuttgart.de

Abstract. For the class “Experimental Physics for Engineers” (1500 students
each winter term) at the University of Stuttgart online laboratories (80% “virtual
labs”/simulations and 20% remote experiments) are optional learning resources.
In a new long-term investigative set-up the learning effects of online laboratories
as well as the other learning resources are to be detected.

Keywords: Online laboratories · Physics · Long term evaluation · Learning outcome

1 Initial Situation

At the University of Stuttgart “Experimental Physics for Engineers” is a mandatory class


for almost all engineering students in their first semester. About 1500 of them participate
in each winter term. Due to the high number of participants and limited room capacity,
the same lecture used to be given twice a day, giving all students a chance to attend. The
curriculum does not include any mandatory exercises for this specific lecture, nor does the tight schedule allow an accompanying lab course; the lab course has to be taken after the exam of the theoretical part of the course, during which the students get no hands-on experience. It is only the lecturer who performs the experiments in the lecture hall.
Students experience the class as difficult, and last year two thirds of those taking the exam for the first time failed. To help the students, the lecturer offers all kinds of learning resources such as a collection of formulas, slides, an Audience Response System during the lecture, lecture recordings, exercises, the questions of past examinations, and, last but not least, online laboratories.
Online laboratories were first introduced in this class in 2009/10 [1–3]. Since then they have been a regular optional learning resource every time the class is given. The
online laboratories (80% “virtual labs”/simulations and 20% remote experiments) are
offered as SCORM modules to the students within the university’s Learning Manage‐
ment System along with the other learning resources. Each experiment is embedded in
an online test (self assessment) that consists of three phases:
1. The orientation phase: This first phase allows students to familiarize themselves with
the online-experiment. To this end, an abstract on the experiment is presented
including a short description of the experiment and the task to perform. Learning



goals are described in this phase and a small pre-test evaluates the knowledge of the
students before they run the exercises with online-experiments. Since 2010/11 the
orientation phase includes a short interesting and entertaining movie the aim of
which is to prepare students for the upcoming exercise and to pique their curiosity.
2. The execution phase: This is the main phase of the exercise. Here, the given task
should be mastered by the students using the online-experiment. In 2010/11 some‐
times more than one experiment per phase was included to offer more variety and
make the exercise more interesting.
3. The review phase: In this phase the progress of the students will be checked. This
phase is also implemented as a small test. From 2010/11 onward the questions of
the review phase are similar to questions of the exam and one of the questions of
one of the online-experiment’s review phase is actually included in the exam.
The preparation of the learning resources takes quite some time and effort. Hence
the lecturer asked himself “is it worth the effort?”, and contacted the eLearning depart‐
ment of the computer center not only for technical support (as in the beginning), but also
for the evaluation of the effectiveness of the respective learning resources for under‐
standing physics.

2 Evaluations of 2009/10 and 2010/11

Online experiments were introduced in this class in the course of the LiLa (Library of
Labs) project in 2009 and closely accompanied and analyzed by the project team [4].
In 2009/10 26.7% of the students who took the exam performed at least one of the
three online-experiments (the fourth experiment was an exception for being a rather
difficult open ended question posed as a competition). In 2010/11 47.3% of the students
who took the exam participated. The total numbers of students (not only those who took
the exam) are shown in Tables 1 and 2.

Table 1. Number of participants in the exercises in the winter term 2009/2010


Exercise Pre-test Execution Post-test
1 325 245 153
2 203 173 115
3 133 50 15
4 21 1 No post-test in this exercise

In both years it could be observed that the more online-experiments the students
performed the more likely it was they would pass the exam (see Figs. 1 and 2). Most
students used the online-experiments in the course of their exam preparation during the
weeks before the exam and not as regular learning resource during the lecture period.


Table 2. Number of participants in the exercises in the winter term 2010/2011


Exercise Pre-test Execution Post-test
1 801 692 583
2 631 549 463
3 515 414 314
4 478 362 291
5 418 307 238
6 426 281 206


Fig. 1. Percentages of students who passed the exam in 2009/10 respective the number of online-
experiments they performed; overall 68.5% of the students passed the exam.

In 2010/11 a much lower percentage of students passed the exam compared to the year
before (47.8% compared to 68.5%). The main reason for this is that the kind of questions
asked and the structure of the exam had changed profoundly. Nevertheless, when asked
about the value of the online-experiments as preparation for the exam, 70.3% considered
them as helpful in 2010/11 compared to only 31.8% in 2009/10. Apparently the changes
made in each of the three phases of an online-experiment were successful.
Figures 1 and 2 cannot tell us about the effectiveness of online laboratories for the
learning outcome. The data seem to suggest that the more interested the students are,
the more they learn and the better their exam performance is.
As additional indicators students of the 2009/10 class were asked about their perceived
learning success because of online experiments (Fig. 4) and the effect online experiments
have on their motivation to deal with the topics of the lecture more intensively (Fig. 3).



Fig. 2. Percentages of students who passed the exam in 2010/11 respective the number of online-
experiments they performed; overall 47.8% of the students passed the exam.

(Bar chart: "The online experiments increased my motivation to deal with the topics of the lecture more intensively" - distribution of answers from "agree" to "don't agree".)

Fig. 3. Percentages of student answers on a bipolar scale with six units concerning the effect of
online experiments on the students’ motivation


Online experiments seem to have a positive effect on the students’ motivation. And
according to most students they also have a positive effect on the learning success.

(Bar chart: "The online experiments led to a greater learning success" - distribution of answers from "agree" to "don't agree".)

Fig. 4. Percentages of student answers on a bipolar scale with six units concerning the perceived
learning success because of online experiments

We made no further investigations about this topic after 2011. It was only in late
summer 2016 that we resumed our research. This time we want to install a generic set-
up for our investigations that is supposed to be in place for several years. It will be
discussed in the next section.

3 Current and Future Research

3.1 Common Investigative Set-ups

Studies that examine the effects of media-usage [for online experiments most important 6;
also see 5] for learning and student performance typically apply one of the following
investigative set-ups:
• Study of one group only: studies of the usage of media, their acceptance and the
influence they have on the learners’ motivation often concentrate on one large group
of students. Sometimes this large group is separated into smaller groups of students with different media usage preferences, and the exam performance of the respective groups is compared.
• Study of two and more groups: if effects of certain media are to be detected, especially
concerning student performance in exams, often one group for which these media
are offered is compared with a control group for which they aren’t. In some cases


different media are offered to different groups or the same media are offered and it is
examined whether the effects are the same. In rare cases these studies are performed
in some combination in consecutive years.
More sophisticated studies take into account other factors such as gender, GPA, class
attendance etc. and try to extract the actual effects of the different media. These studies
are rather rare since the necessary data are difficult or even impossible to collect [e.g. 7].

3.2 The Long Term Investigative Set-up at the University of Stuttgart


The situation for the course “Experimental Physics for Engineers” is exceptional
regarding a couple of circumstances:
1. the number of participating students is very high (around 1,500),
2. the online experiments are provided by the computer center in a sustainable way,
3. the investigative set-up is not only supposed to deliver a snapshot, but is intended to serve for a long-term study over several years.
The key component of the investigative set-up is an item analysis including all ques‐
tions and results of the exams of the past ten years as well as all online experiments with
their questions. All these questions will be connected to their respective topic areas of
physics. Besides the questions all kinds of learning resources will be connected to the
topic areas of physics as well.
In the course of the next years we want to change the media and learning resources
that are provided to the students for the different topics. In some cases it will be possible
to exchange the learning resource within the execution phase of an online experiment
without changing most of the questions in all three phases. It will be possible to use a
simulation or another simulation or a remote experiment or even a lecture recording
showing the experiment and to observe what effect this has on the learning results.
A formative mid-term test with questions of the same kind as the exam questions, but covering only the topics that have already been lectured on until mid-December (the winter term starts in mid-October and lasts until mid-February), will be added to the curriculum (starting December 2016). This mid-term test enlarges the number of opportunities to evaluate the learning resources with all the students. We hope to get less biased data than in 2009/10 and 2010/11. The data will be analyzed in a factor analysis focusing on the topics of physics on one hand and on the kinds of learning resources on the other.
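A minimal sketch of such an item analysis is given below, assuming pandas is available and that the exam results can be arranged as a student-by-item matrix of 0/1 scores; the file name and column layout are illustrative assumptions about the data, not the actual exam records.

import pandas as pd

# Rows: students, columns: exam items (1 = correct, 0 = wrong).
scores = pd.read_csv("exam_items.csv")

difficulty = scores.mean()                  # fraction of correct answers per item
total = scores.sum(axis=1)                  # each student's total score
discrimination = scores.apply(              # item-total correlation per item,
    lambda item: item.corr(total - item)    # excluding the item from the total
)

report = pd.DataFrame({"difficulty": difficulty,
                       "discrimination": discrimination})
print(report.sort_values("discrimination"))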
In addition to the formative mid-term test we try to add several variables to control
for confounding effects: we are asking the students about their opinions of and experi‐
ences with each kind of online resource (online experiment, slides, lecture recording
etc.) and also some “offline” resources (e.g. study groups, personal lecture notes). The
questions include duration of usage, motivational effect, perceived learning success,
difficulty and suitability for test preparation. The survey with these questions will be
made right after the mid-term test and again after the final examination.
In comparing the usage of the learning resources as it can be detected from log-files
of the Learning Management System, the students’ perceptions expressed in the survey,
and the learning success as it can be seen in the item analysis, we hope to be able to


paint a more colorful picture of the students’ learning process, their decisions which
learning resources to use, and their respective learning outcome. First results will be
presented at the conference.

References

1. Tetour, Y., Boehringer, D., Richter, T.: Integration of virtual and remote experiments into
undergraduate engineering courses. In: 2011 First Global Online Laboratory Consortium
Remote Laboratories Workshop (GOLC), Rapid City, SD, October 2011, pp. 1–6. IEEE (2011)
2. Richter, T., Tetour, Y., Boehringer, D.: Simulations in undergraduate electrodynamics: virtual
laboratory experiments on the wave equation and their deployment. In: 2010 IEEE, Education
Engineering (EDUCON), Madrid, Spain, April 2010. IEEE, pp. 1091–1097 (2010)
3. Richter, T., Tetour, Y., Boehringer, D.: Library of labs. A European project on the
dissemination of remote experiments and virtual laboratories. In: SEFI Annual Conference
2011, Lisbon, Portugal, 27–30 September 2011
4. Richter, T., Tetour, Y., Boehringer, D.: Library of labs: a European project on the dissemination
of remote experiments and virtual laboratories. In: Werner, B. (ed.) International Symposium
on Multimedia (ISM 2011), Dana Point, California, USA, December 2011, pp. 543–548. IEEE
(2011)
5. Lindsay, E.D., Good, M.C.: Effects of laboratory access modes upon learning outcomes. IEEE
Trans. Educ. 48(4), 619–631 (2005)
6. Lindsay, E.D.: The impact of remote and virtual access to hardware upon the learning outcomes
of undergraduate engineering laboratory classes. Ph.D. University of Melbourne (2005)
7. Traphagan, T., Kucsera, J.V., Kishi, K.: Impact of class lecture webcasting on attendance and
learning. Educ. Tech. Res. Dev. 58, 19–37 (2010). doi:10.1007/s11423-009-9128-7

Remote Control and Measurement
Technologies

On the Fully Automation of the Vibrating
String Experiment

Javier Tajuelo¹, Jacobo Sáenz², Jaime Arturo de la Torre¹, Luis de la Torre², Ignacio Zúñiga¹, and José Sánchez²
¹ Dept. Física Fundamental, UNED, Senda del Rey 9, 28040 Madrid, Spain
jatorre@fisfun.uned.es
² Department of Computer Sciences and Automatic Control, UNED, Juan del Rosal 16, 28040 Madrid, Spain

Abstract. This work explains how to develop a fully functional virtual


and remote laboratory (VRL) for a vibrating string of length L with
both ends fixed. This laboratory is common in undergraduate studies of
vibrations and waves. We propose the construction of a virtual laboratory
built with Easy Java/Javascript Simulations. This virtual lab allows the user to explore the dependence between the frequency of the vibrating string and the physical parameters of the experiment. This work also explains how to build a remote laboratory using LEGO Mindstorms™, Arduino, and specific LabVIEW software to control all the components. The remote laboratory exhibits the same behavior as a classical hands-on lab, allowing the user to measure different physical quantities and their dependence on the fundamental frequency of the vibration. Both the virtual and the remote labs are accessible through UNILabs, a Content Management System created to host VRLs in the cloud.

Keywords: Virtual lab · Remote lab · Physics

1 Introduction

Traditional experimental laboratory sessions and face-to-face lectures can be


complemented with new online experimental tools. While there already are lots
of Internet resources (many of them accessible for free) to fulfill many theoretical
aspects on education, engineering and scientific studies also need more specific
Internet based tools to cover the practical part of their teaching, as many works
have brought to light [4,20]. In this sense, online labs make it possible to illustrate scientific phenomena that require costly or difficult-to-assemble equipment, and can be divided into two different and complementary approaches:

– Virtual Labs provide computer based simulations which offer similar views
and ways of work to their traditional counterparts. Nowadays, simulations
have evolved into interactive graphical user interfaces where students can
manipulate the experiment parameters and explore its evolution.



– Remote Labs use real plants and physical devices which are teleoperated in
real time. Remote experimentation through the Internet has been available
for more than a decade and its interest and use has been growing over the
years [10,16,18].

Past studies have shown that online and hands-on labs are equally effec-
tive in terms of learning outcomes [2]. Moreover, online labs provide additional
advantages [15], such as that lab sessions can be watched by many people and recorded, or that online labs can be used 24/7 from anywhere and can be accessed by handicapped people.
Given the complementary uses of the previous experimentation approaches,
it is probably best if an experiment itself is offered in several ways. Here, we
present a lab implementation of a vibrating string system that consists of both forms: the virtual or simulated one and the real, remote one. For those readers
that might be interested in these resources, the virtual and remote laboratories
can be found in UNILabs, a network of interactive online laboratories. For those
readers interested in replicating the system or learning how to build a similar
one, the main instructions and tips to do so are given in this paper.
This work uses a free and open source software called Easy java/javascript
Simulations (EjsS) that eases the creation of Javascript applications to build
online lab interfaces. Since its appearance, more than a decade ago, EjsS has been
growing and nowadays it can also be used to easily create remote laboratories. It
has been massively used to create physics simulations: there are more than three
hundred at the ComPADRE-OSP digital library [7], as well as many virtual and
remote labs in the automatic control field (for example, those at the UNILabs
network [5,8]). While all these applications were based on Java and deployed as
Java applets, EjsS now offers the possibility to build Javascript simulations. In
this regard, there are now plenty of Javascript simulations created with EjsS:
again, ComPADRE-OSP offers a couple of hundred of them. However, to the
best of our knowledge, EjsS has only been used for building a Javascript remote
experiment in the present work and in [1].
All applications created with EjsS can be embedded into Moodle, the most
widely used free and open source Learning Management System (LMS), with
just a few clicks. For this, a plugin called EJSApp [9] is used, which allows the
one-click deployment of VRLs into Moodle. Once installed in a Moodle server,
EJSApp allows teacher users to add a new kind of activity called “EJSApp”
which, in turn, allows uploading .jar and .zip files previously generated with EjsS.
When the file selected to create the activity is a .jar file, then a VRL deployed as
a Java applet is added to Moodle. When the file selected is a .zip file generated
by EjsS, the VRL deployed into Moodle is in Javascript format. With EJSApp
not only the applications get embedded in the LMS but they also gain some
additional features automatically, such as: connection with a booking system that
may be used for controlling the access to the remote experiment, multilanguage
support, saving data and image files from the virtual or remote experiment
application to the users’ files repository in the LMS, grading, monitoring the
time spent by users working with the experiment and backup and restore options.


The virtual and remote labs developed in this work have been integrated into UNILabs, a portal based on Moodle, using this solution.

2 Physical Description of the Experiment

Consider a string of length L, volumetric density ρ and mass M , that oscillates


in the Y Z plane under a constant tension T . Figure 1 shows a forces diagram on
an infinitesimal length dy of the string. This infinitesimal portion of the string
has a mass dm = μdy, with μ = M/L the linear density of mass. Net tensions
produced on the string are

Fy = T cos(α + dα) − T cos α, (1)


Fz = T sin(α + dα) − T sin α. (2)

Fig. 1. In an infinitesimal portion of the string dy appear two tensions, one at each
end of the portion, so that under a small displacement assumption the horizontal net
tension is null.

Under the assumption of a small displacement in the vertical direction, a first


order Taylor expansion of the forces gives

Fy = 0, (3)
Fz = T dα. (4)

Newton’s Second Law gives, then

T dα = dma (5)
2
∂ z
= (μdy) . (6)
∂t2


By relating the angle α with its Y Z components, taking derivatives and


approximating in Taylor’s first order we obtain an equation for the infinitesimal
angle

d\alpha = \frac{\partial^2 z}{\partial y^2}\,dy, \qquad (7)

so that the equation that describes the wave motion is

\frac{\partial^2 z}{\partial y^2} = \frac{\mu}{T}\,\frac{\partial^2 z}{\partial t^2}, \qquad (8)

which is the well-known wave equation [13]. This equation describes the temporal evolution of a transversal wave propagating at a speed v = \sqrt{T/\mu}.
For a fixed-fixed string both ends are fixed, so that the displacement at these
nodal points is zero. The temporal part of the solution to the wave equation can
be written as a linear combination of normal modes

z(y, t) = \sum_{n=1}^{\infty} A_n \sin\!\left(\frac{n\pi y}{L}\right) \cos(\omega_n t)\, e^{-n\gamma t}, \qquad (9)

where

\omega_n = \frac{n\pi}{L}\sqrt{\frac{T}{\mu}}. \qquad (10)

Here, γ is a damping coefficient. Figure 2 shows the vertical oscillation of the string as a function of the position y, for the first four normal modes. Each mode n has (n + 1) fixed nodes (positions where there is no displacement) and n anti-nodes (positions with maximum displacement).
Equation 9 shows that the higher the normal mode, the higher the damping
factor. Eventually, all the n > 1 modes vanish and the only surviving term gives
a “stationary” wave

z1(y, t) = A1 sin(πy/L) cos((π/L) √(T/μ) t) e^(−γt).   (11)

Note that the amplitude of the perturbation eventually goes to zero as a
consequence of its own damping coefficient γ.
At a fixed position y = L/2 we have

z1 (t) = A1 cos (2πf1 t) , (12)

where the fundamental frequency f1 is given by

f1 = (1/(2Lr)) √(T/(πρ)),   (13)



Fig. 2. First four normal modes of vibration for a fixed-fixed string. Each n mode has
(n + 1) points where the oscillation is zero. The wave length of mode n is λn = 2L/n.

where r is the radius of the string and ρ its volumetric density. If we select a
control parameter (such as the tension, for example) and measure the
frequency f of the string as a function of this parameter, we may establish a
relationship of the kind

f = αT^β,   (14)

so that, for different tensions, we may perform a least-squares fit to obtain the
constants α and β and, therefore, find the density of the string just knowing its
length and radius, i.e.,

ρ = 1/(4π(Lrα)²).   (15)

The key concept of this experimental set-up is that there exist dependences
between many parameters: a linear dependence between the period and the length
of the string, an inverse dependence between the frequency and the radius, a
square-root dependence between the tension and the frequency, and so on. With
these many different cases, a student may explore many parameters, using
linear-linear fits, linear-log fits, etc., so as to obtain physical quantities by
measuring how the frequency depends on them.
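As an illustration of this fitting procedure, the sketch below estimates α and β in Eq. 14 by a least-squares fit in log-log space and then recovers ρ through Eq. 15. It is only a minimal sketch: the tensions, frequencies, length and radius used are made-up values, not data from the actual experiment.

```python
# Minimal sketch of the fit described above (Eqs. 14 and 15).
# All numerical values are hypothetical, for illustration only.
import numpy as np

T = np.array([2.0, 4.0, 6.0, 8.0, 10.0])          # tensions (N), hypothetical
f = np.array([71.0, 100.0, 123.0, 142.0, 159.0])  # measured f1 (Hz), hypothetical

# log f = log(alpha) + beta * log(T)  ->  ordinary linear least squares
beta, log_alpha = np.polyfit(np.log(T), np.log(f), 1)
alpha = np.exp(log_alpha)

L = 0.50        # vibrating length (m), hypothetical
r = 0.2e-3      # string radius (m), hypothetical
rho = 1.0 / (4.0 * np.pi * (L * r * alpha) ** 2)  # Eq. 15

print(f"alpha = {alpha:.2f}, beta = {beta:.2f}, rho = {rho:.0f} kg/m^3")
```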

3 Experimental Device
A schematic of the device is shown in Fig. 3. There are five different strings
(2) made of different materials (copper, kanthal, constantan, and nickel) and
with diameters ranging from 0.3 mm to 0.5 mm. One of the ends of the strings
(with the exception of the central string) is fixed on the aluminum structure of


Fig. 3. Fully developed experimental device, consisting of the following elements:


(1) DC LED light source, (2) strings, (3) stepper motor, (4) LEGO gear connected
to LEGO servomotor, (5) light sensor, (6) linear stage, (7) LEGO carrier, (8) mobile
aluminum rod, (9) rule and length indicator, and (10) dynamometers. The close-view
figure shows a frontal view of the string plucking element consisting of (11) the rotation
axis of the LEGO gear, (12) trajectory described by the LEGO gear perimeter, and
(13) string under study.

the device, while the other end of the strings is connected to a dynamometer
(10) that measures the tension along the string. In the case of the central string,
the fixed end is connected to the axis of a stepper motor (3), so that the tension
can be controlled. A LEGO carrier (7) is used to displace an aluminum rod in
close contact with the strings (8) along the y axis, in such a way that the length
of the vibrating part of the strings can be changed from 380 mm to 550 mm. This
length is measured by means of a rule and an indicator attached on the mobile
rod (9). A DC-LED (Galaxy 1000) light source (1) illuminates the system from
above, and a linear stage (RS 340-3749) (6) is set up below the strings along the x
axis. Two elements are attached on the top of this linear stage: (i) a light sensor
(Phywe 08734-00) covered by an opaque cap with a 0.3 mm slit oriented along
the y axis (5), and (ii) a LEGO gear connected to a LEGO servo motor (4).
As can be seen in the close view of Fig. 3, the rotation axis of the LEGO gear
(11) does not coincide with its center, so that the perimeter of the gear roughly
describes an ellipse when the LEGO servo motor rotates (12). The position of
the opaque cap and the gear along the vertical direction has been fine-tuned
in such a way that, first, the cap of the light sensor is placed less than two


millimeters below the horizontal plane formed by the strings, and second, the
apex of the gear perimeter trajectory coincides with the horizontal plane formed
by the strings (13). The stepper motor and the linear stage are controlled by
two identical drivers (EasyDriver), and an Arduino I/O board is used to
send the appropriate digital signals. A power supply (Lendher 3003D) provides
the current required by both drivers, and a second identical power supply is used
for the DC-LED light source. An oscilloscope (PicoScope 2203) is used to read
the measurement from the light sensor. A LabVIEW code has been developed
to control all of the above mentioned elements.

4 The Graphical User Interfaces


The graphical user interface (GUI) for both the virtual and the remote lab is built
using Easy Java/Javascript Simulations (EjsS). The GUI is written using the
Javascript version and includes plots, webcam visualizations, numerical fields,
sliders and so on, in an HTML view that allows user interaction. It gives the
student control of the experimental environment (tension, length) as in a
hands-on laboratory. To connect both sides (LabVIEW on the server side and
EjsS on the client side), the lab architecture includes a JIL server [3]. JIL uses the
XML-RPC protocol to encode the messages and allow data exchange between
both sides, as shown in Fig. 4.

Fig. 4. VRL communications architecture
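As a rough illustration of this communication scheme (not the actual JIL interface), the snippet below shows what an XML-RPC exchange with the LabVIEW side could look like from a client; the endpoint URL, method names and variable names are hypothetical placeholders.

```python
# Illustrative XML-RPC exchange with the LabVIEW-side server.
# URL, method names and variable names are hypothetical placeholders;
# the real JIL API may differ.
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://lab.example.org:2055")

server.setVariable("tension_steps", 10)             # hypothetical remote call
intensity = server.getVariable("light_intensity")   # hypothetical remote call
print("Light intensity reading:", intensity)
```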

4.1 The EjsS Tool


EjsS is a tool that offers an easy way to create simulations and remote laboratories
with a GUI for developers with no programming skills. These experimental
applications can be made according to the user's needs for interactivity and
visualization. Other VRL applications have been developed using EjsS, and the
related papers define it as a tool that facilitates the development of applications
by researchers, teachers and students who want to focus on the simulation or
system theory and not on the technical programming aspects [3,12].
EjsS also allows the user to run a finished application directly from the editor.
If the aim is to publish the VRL as an online application, the developer can


package it in order to run it in standalone mode (in the case of Java) or inside
a web page to be run in a web browser (Java and Javascript) [1,6,8,11,14,17,19].
Figure 5 shows the Javascript editor. The top part of the editor contains
description, model and view tabs. The editor allows one to build a simulation or
remote laboratory by adding the mathematical behavior and a graphical interface.
The main application is thus divided into two parts:

Fig. 5. Main view of the EjsS editor.

– The model. Using this tab in the editor, a developer can define differential
equations, write some custom code and/or make connections to other software
or hardware. The complexity of the simulation and model depends only on
the implemented system, the requirements and the knowledge about it.
– The view provides the users with a GUI that determines the interaction and
visualization capabilities of the application. This view can be built in the
editor by adding single view elements from the right panel of the EjsS editor
(right side of Fig. 5).


4.2 The Basic GUI

The vibrating string experiment described in this work can be simulated using
EjsS, by introducing Eq. 12 to generate the data and by creating an interactive GUI.
This GUI consists of buttons, sliders, numerical fields, check boxes, graphs, and
two- and three-dimensional graphical elements that allow one to change and
visualize parameters of the lab.
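As a rough illustration of the simulation side (the actual EjsS model is written in Javascript), the sketch below generates the signal described by Eq. 12, using Eq. 13 for the fundamental frequency and the damping factor of Eq. 11; all parameter values are assumed for the example.

```python
# Sketch of the data generation behind the virtual lab (Eqs. 11-13).
# Parameter values are illustrative; the real model runs in EjsS/Javascript.
import numpy as np

L, r, rho, T = 0.50, 0.2e-3, 8900.0, 5.0  # length (m), radius (m), density (kg/m^3), tension (N)
A1, gamma = 1.0e-3, 0.8                   # amplitude (m) and damping coefficient (1/s), assumed

f1 = (1.0 / (2.0 * L * r)) * np.sqrt(T / (np.pi * rho))      # Eq. 13
t = np.linspace(0.0, 2.0, 2000)
z1 = A1 * np.cos(2.0 * np.pi * f1 * t) * np.exp(-gamma * t)  # Eq. 12 plus the damping of Eq. 11

print(f"Fundamental frequency f1 = {f1:.1f} Hz")
```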
Figure 6 shows the basic structure of the virtual laboratory when the .xhtml
is served to the client. As said in previous sections, the virtual laboratory
is based on a simulation of the system behavior and on the GUI. The virtual lab also
allows students to get familiar with the available interaction and the protocol, in
order to be prepared for the remote version of the lab. The application window is
divided into three sections: a 2D and 3D graphical visual representation of the
system, the controls panel and the plots/graph panel.

Fig. 6. The virtual vibrating string system laboratory.

– Visual representation (2D/3D): The top-left side of Fig. 6 shows a 3D model


of the system in which the user can observe the basic parts of the real structure
of the vibrating string laboratory. In the remote version, this panel contains a
web-cam that allows to see the tension and length of each string, or a general
view of the lab, as in Fig. 7.
– Controls panel: Using the controls, buttons and sliders of this panel shown
in top-right side of Fig. 6, the user is helped to go through the experimental
protocol, highlighting each step and giving tool-tips to make it easier.


Fig. 7. The remote vibrating string system laboratory.

– Plots/Graphs panel: This part of the interface, at the bottom of Fig. 6,
allows the user to see data in different plots and graphs. The vibrating string
laboratory plots the light intensity versus, first, the position of the linear
stage (see Sect. 5 for details on this calibration procedure) and, second, time
(in order to obtain the frequency of the fundamental normal mode).

5 Experimental Protocol
Three CCD cameras allow the student to visualize a general view
of the experimental setup as well as a close view of the measurement elements
(dynamometers and length indicator). Once the student connects to the remote
controller, an automated initialization procedure is executed by the device: the
DC-LED is turned on, and the linear stage and the LEGO carrier are displaced
to their initial positions, determined by means of two LEGO limit switches. After
this initialization, the student has to proceed as follows:
1. Location of the strings’ positions: A complete sweep is performed by
the linear stage along its whole range of displacement, while the light sensor
attached on its top continuously measures the light intensity. Therefore,
the student can plot the light intensity as a function of the position along the x
axis. As can be seen in Fig. 8a, the light intensity is roughly symmetric with
a local maximum at the center. This is because we use a single light source
placed at the center of the device in order to avoid the multiple shadows produced
by multiple light sources. Thus, the student can determine the position of each
string from the five local minima in the light intensity.


Fig. 8. (a) Light intensity versus position of the light sensor along the x axis. (b) Light
intensity versus time after the gear hits one of the strings. The inset graph represents
a close view of the results from t = 3.5 s to t = 3.55 s.

2. Selection of the string and the length to explore: After this calibration,
the linear stage is displaced to the string selected by the student, and the
LEGO carrier moves backward or forward to reach the desired length. Then,
the linear stage performs an automated fine-tuning to ensure that the thin
slit of the light sensor is placed exactly below the string shadow.
3. Selection of the tension (if needed): If the string selected by the student
is the central one, the tension can be varied by the stepper motor. For that
purpose, by clicking an increase-tension (or decrease-tension) control, the stepper
motor rotates a fixed number of steps in the clockwise (or counterclockwise)
direction, so that the tension is increased (or decreased) in steps of approximately
0.05 N. The student observes the measurement of the dynamometer
by means of one of the CCD cameras, and is able to change the tension as
long as it is kept below 10 N to avoid breakage of the string.
4. Execution of the experiment and data acquisition: When the previous
steps have been completed, the device is ready to execute the experiment in
the conditions selected by the student. Then, by clicking the corresponding
control, the light sensor starts to measure the light intensity, and the LEGO
gear attached on the top of the linear stage performs a 360◦ rotation, plucking
the string when it reaches its highest position. Figure 8b shows the results
of an actual experiment as an example. The instant in which the gear hits
the string and the relaxation dynamics are clearly observed. The student has
to analyze these data, calculating the frequency of the fundamental normal
mode, f1 (a minimal analysis sketch is given after this protocol).
5. Analysis of the results and comparison with theory: Once the student
has completed experiments under different physical conditions, the depen-
dence relation between f1 and the physical parameters of the string (T , L, ρ)
can be established. Then, the student should be able to discuss the experi-
mental errors and the validity of the theoretical model.
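The sketch below illustrates, under stated assumptions, the two analysis tasks in steps 1 and 4: locating the strings from the minima of the intensity profile and estimating f1 from the intensity time series. The arrays stand for the data exported from the lab and are synthesized here only so the example is self-contained.

```python
# Illustrative analysis of the exported data (steps 1 and 4 above).
# 'position', 'profile', 'time' and 'signal' stand for data downloaded from
# the lab; here they are synthesized for the example.
import numpy as np
from scipy.signal import find_peaks

# Step 1: string positions = local minima of the intensity vs. position sweep.
position = np.linspace(0.0, 200.0, 2000)                    # mm, hypothetical sweep range
profile = 1.0 - 0.1 * np.abs(position - 100.0) / 100.0      # broad maximum at the center
for x0 in (40, 70, 100, 130, 160):                          # five string shadows (hypothetical)
    profile -= 0.4 * np.exp(-((position - x0) / 1.0) ** 2)
minima, _ = find_peaks(-profile, prominence=0.2)
print("String positions (mm):", position[minima])

# Step 4: fundamental frequency from the intensity time series (FFT peak).
fs = 2000.0                                                      # sampling rate (Hz), assumed
time = np.arange(0.0, 2.0, 1.0 / fs)
signal = np.cos(2 * np.pi * 66.9 * time) * np.exp(-0.8 * time)   # synthetic pluck response
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
print(f"Estimated f1 = {freqs[np.argmax(spectrum)]:.1f} Hz")
```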


6 Conclusion
Traditional hands-on laboratories are useful to achieve experimental skills such as:
– Knowing the principles, techniques and measurement instruments used to study
physical phenomena.
– Evaluating the limitations of the measurement process.
– Explaining the effects of interference in measurements, their consequences and how
to minimize the associated errors.
– Being able to calibrate measurement instruments, collect useful data sets and
perform a statistical analysis.
– Reporting correctly the measurements taken, and obtaining relationships between
physical variables.
In the context of distance learning education, the implementation of new techniques
and methodologies that adapt these skills to the student is of the utmost importance.
These techniques should allow both the comprehension of the physical
experiment and the evaluation of the expected abilities. Therefore, alternatives
to hands-on laboratories should be studied in detail.
In this work we propose an effective alternative that has been demonstrated
to be useful [2]. On the one hand, we develop a virtual laboratory that allows the
students to practice, to become familiar with measurement techniques, and to be
introduced to the concept of data acquisition. The virtual laboratory should
be built in such a way that it is closely related to a real laboratory. In spite of
being in a virtual mode, it is important for the user to pay attention to how to
effectively measure physical quantities and how he or she has to collect correct
data. As a virtual laboratory, it lacks proper physical (realistic) conditions,
and therefore we need to offer a proper alternative to the hands-on lab.
For this purpose, we also explain in this work how to build a remote laboratory
in an easily affordable manner. This remote lab ensures that the data acquisition
is exactly the same as a scientist would perform if he or she were actually in a lab.
Through camera views, the experimentalist can control all the physical parameters
and can command the measurement process. Raw data, extracted directly from
the experiment done in real time, will be used to analyze results and to obtain
conclusions. We focus in this part on how to adapt a classical experiment to
be controlled with a computer. We use LEGO MindStorms™ construction kits,
stepper motors and an Arduino controller that are all connected through LabVIEW.
The control of LabVIEW is delegated to a JIL Server, which uses the XML-RPC
protocol to connect with a Javascript code written with Easy Java/Javascript
Simulations. This code is deployed in UNILabs, the web server that allows the
user to connect to the lab with a web browser.
We choose the vibrating string as a prototypical example in the study of
vibrations and waves, which is one of the subjects that appear in the first
courses of a Physics degree. The construction of an accurate hands-on laboratory
can be delicate and, as such, we provide an alternative solution for those learning
centers that cannot afford the physical lab. The selection of the vibrating string
was made with two ideas in mind. The first one is the physical representation


of the experiment. This experiment allows the student to learn the dependence
between many parameters. We know that the fundamental frequency of the
standing wave depends on the density of the string, its length, and the tension.
The relationship with these parameters goes, respectively, as the inverse of the
square root, the inverse, and the square root. This fact allows the student to learn
about different representations (linear, inverse, logarithmic) in order to fit a data
collection to a curve. The second idea is to show how cost-effective devices can
be used to build complex laboratories. With LEGO™ kits it is remarkably easy
to introduce experimentalists to robotic designs. The development of Arduino
boards also allows one to control stepper motors, light diodes, and so on, in an
easy manner. These tools can be widely used to construct future laboratories,
compatible with the requirements of a given experimental setup and allowing the
students to measure, analyze and extract conclusions with laboratories developed
in the cloud.

Acknowledgments. Financial support from the Vice Chancellor for Academic Affairs
and Quality at UNED under grants GID2016-9-1 and GID2016-25-1 is acknowledged.

References
1. Bermudez-Ortega, J., Besada-Portas, E., Lopez-Orozco, J.A., Bonache-Seco, J.A.,
de la Cruz, J.M.: Remote web-based control laboratory for mobile devices based on
EJsS, Raspberry Pi and Node.js. In: 3rd IFAC Workshop on Internet Based Control
Education, Brescia, Italy, vol. 48, pp. 158–163. IFAC-PapersOnLine, November
2015
2. Brinson, J.R.: Learning outcome achievement in non-traditional (virtual and
remote) versus traditional (hands-on) laboratories: a review of the empirical
research. Comput. Educ. 87, 218–237 (2015)
3. Chacón, J., Vargas, H., Farias, G., Sánchez, J., Dormido, S.: EJS, JIL Server, and
LabVIEW: an architecture for rapid development of remote labs. IEEE Trans.
Learn. Technol. 8(4), 393–401 (2015). doi:10.1109/TLT.2015.2389245
4. Chang, G.-W., Yeh, Z.-M., Chang, H.-M., Pan, S.-Y.: Teaching photonics labo-
ratory using remote-control web technologies. IEEE Trans. Educ. 48(4), 642–651
(2005)
5. Chaos, D., Chacon, J., Lopez-Orozco, J.A., Dormido, S.: Virtual and remote
robotic laboratory using EJS, MATLAB and LabVIEW. Sensors 13(2), 2595–2612
(2013)
6. Christian, W., Esquembre, F.: Modeling physics with easy java simulations. Phys.
Teach. 45, 475–480 (2007)
7. Christian, W., Esquembre, F., Barbato, L.: Open source physics. Science
334(6059), 1077–1078 (2011)
8. de la Torre, L., Guinaldo, M., Heradio, R., Dormido, S.: The ball and beam system:
a case study of virtual and remote lab enhancement with Moodle. IEEE Trans.
Ind. Inform. 11(4), 934–945 (2015)
9. de la Torre, L., Heradio, R., Jara, C., Sanchez Moreno, J., Dormido, S., Torres, F.,
Candelas, F.: Providing collaborative support to virtual and remote laboratories.
IEEE Trans. Learn. Technol. 6, 312–323 (2013)


10. de la Torre, L., Sanchez, J.P., Dormido, S.: What remote labs can do for you. Phys.
Today 69, 48–53 (2016)
11. de la Torre, L., Sanchez, J.P., Heradio, R., Carreras, C., Yuste, M., Sanchez, J.,
Dormido, S.: UNEDLabs - an example of EJS labs integration into Moodle. In:
World Conference on Physics Education (2012)
12. Farias, G., Keyser, R.D., Dormido, S., Esquembre, F.: Developing networked control
labs: a MATLAB and easy java simulations approach. IEEE Trans. Industr.
Electron. 57, 3266–3275 (2010)
13. French, A.P.: Vibrations and Waves. CRC Press, Boca Raton (1971)
14. Galan, D., Heradio, R., de la Torre, L., Dormido, S., Esquembre, F.: Automated
experiments on EjsS laboratories. In: International Conference on Remote Engi-
neering and Virtual Instrumentation, Madrid, Spain, pp. 78–85, February 2016
15. Gravier, C., Fayolle, J., Bayard, B., Ates, M., Lardon, J.: State of the art about
remote laboratories paradigms - foundations of ongoing mutations. Int. J. Online
Eng. 4(1), 19–25 (2008)
16. Heradio, R., de la Torre, L., Galan, D., Cabrerizo, F.J., Herrera-Viedma, E.,
Dormido, S.: Virtual and remote labs in education: a bibliometric analysis. Com-
put. Educ. 98, 14–38 (2016)
17. Heradio, R., de la Torre, L., Sanchez, J., Dormido, S.: Making EJS applications
at the OSP digital library available from Moodle. In: International Conference on
Remote Engineering and Virtual Instrumentation, Porto, Portugal, pp. 112–116,
February 2014
18. Heradio, R., de la Torre, L., Dormido, S.: Virtual and remote labs in control edu-
cation: a survey. Annu. Rev. Control 42, 1–10 (2016)
19. Pastor, R., Sanchez, J., Dormido, S.: Web-based virtual lab and remote experi-
mentation using easy java simulations. In: Proceedings of the 16th IFAC World
Congress (2005)
20. Wannous, M., Nakano, H.: NVLab, a networking virtual web-based laboratory that
implements virtualization and virtual network computing tech. IEEE Trans. Learn.
Technol. 3(2), 129–138 (2010)

Identifying Partial Subroutines for Instrument
Control Based on Regular Expressions

Ananda Maiti(✉), Alexander A. Kist, and Andrew D. Maxwell

School of Mechanical and Electrical Engineering,
University of Southern Queensland, Toowoomba, Australia
anandamaiti@live.com, kist@ieee.org, andrew.maxwell@usq.edu.au

Abstract. With increasing reliance on smart devices to communicate with each


other to deliver critical services, it is important that the devices become intelligent
and reliable. Such devices are widely used in Internet of Things applications that
operate on the Internet. These devices often communicate with new nodes and
face new situations while interacting with them. This paper focuses on providing
a generalized description of the communication between a particular pair of
devices. This description is based on regular expressions from automata theory.
The regular expressions enable the devices to determine the properties of future
interactions with other similar devices. This can help the nodes to validate
incoming commands, evaluate the interactions and maintain a reasonable quality
of service. A particular IoT application, a Remote Access Laboratory system, is
shown as an example where the regular expressions can be used. This application
aims to use the regular-expression-based generalized descriptions to identify
potential subroutines from previously stored interaction data.

Keywords: Automaton · Remote laboratories · Algorithmic information theory ·
Programming · E-learning · Internet of Things

1 Introduction

Networked Control Systems (NCS) are systems that are operated over a network (Jinhui
et al. 2013). Increasing use of smart objects in Internet of Things (IoT) (Whitmore et al.
2015) applications and their complex architectures have necessitated semi-autonomous
and autonomous capabilities in these applications. IoT applications are based on the
Internet, which requires exchanging commands in discrete packets or frames. As such,
these nodes operate with a finite set of basic commands. There can be
many complex commands based on these basic commands that may not be explicitly
described in the system, but which are specific to a given application. Such an IoT system
contains two types of nodes: masters and slaves. Master nodes control slave nodes. They
also have the responsibility of collecting data and have higher decision-making
capabilities. There can be many such master-slave combinations in the IoT system. A
supervisory system is required to monitor and validate the progress of the IoT system and
steer the overall IoT system.



Remote Access Laboratories (RALs) can be seen as IoT systems which allow
students to access and control scientific experimental setups for educational purposes
(Harward et al. 2008; Mejías et al. 2017). The experiment sites are the slaves, containing
the experimental rig, while the student's site contains the Controller Interface (CI), the
master, which sends commands to the experiment and collects data from it. RAL is
somewhat different from regular IoT as it is more dependent on human inputs, but it still
implements all the characteristics of an IoT system. A Peer-to-Peer (P2P) RAL system
extends the operational aspects of traditional laboratories to allow individuals to create
and share experiments from their homes and schools as part of their curriculum. This
means that the creators or the users of the experiments may only have a very basic, fixed
set of commands. More complex commands composed of these basic commands
can be used in the experiment to improve performance, but the experiment creators may
not be able to identify or construct/implement them. A P2P RAL is shown in Fig. 1;
if the human users are removed, it becomes a general IoT system with multiple
sets of master and slave nodes.

Fig. 1. Remote laboratory architecture

This paper presents a method to create a list of regular expressions from the interaction
between the master and slave nodes. Such regular expressions (based on automata
theory) represent the way the experiments are used and can be employed to create subroutines
and save them automatically without anyone specifically creating them. The regular
expressions can be used for other applications as well. Although the focus here is on
RAL, the proposed methodology can be adapted to any IoT system that operates with a
known, finite set of commands. It provides a computational solution to a control problem.
The remainder of the paper is organized as follows: Sect. 2 discusses the corresponding
related work in algorithmic information theory, automata theory and RAL.
Sections 3 and 4 present the problem addressed here and the method to obtain the regular
expressions. Sections 5 and 6 present the method to convert the regular expressions to
subroutines (or algorithms/programs) and an example.

2 Related Works

This section discusses the related work in remote laboratory and algorithmic information
theory fields.


2.1 Algorithmic Information Theory and Control Theory


Algorithmic information theory is the study of information content from a computational
point of view (Markowsky 1997; Grunwald 2008). It deals with complexity measures of strings (or other data
structures). One of the common application areas of algorithmic information theory is
to find programs/subroutines that generate a particular sequence. For example, for the sequence
0101010101 the corresponding program would be a for loop with 5 repetitions of ‘01’.
On the other hand, 0110001010 cannot be so easily defined in terms of a program;
in the worst case, the program might require 10 output statements to generate the desired
string. While the shortest program for a given string cannot be conclusively determined
(Li 2008), a relatively efficient program can be found that generates the sequence.
This paper concentrates on an approach to create regular expressions from a given string
that can generate at least parts of the sequences.
In terms of NCS, each command that is passed to the slave devices from the master
nodes can be viewed as a discrete command with variable input parameters. Each symbol
in a sequence can be described as a command that has been executed at a given time.
The difference is that such a sequence of control commands has time information
attached to it that must be respected.
Regular expressions are mathematical notations to describe regular languages, i.e. the
languages accepted by Deterministic Finite state Automata (DFA). The
common operators of a regular language are * (Kleene star), which represents
multiple repetitions of the corresponding elements, and |, which represents the union of
symbols from two expressions. A part of a regular expression is also a
regular expression if it is properly enclosed in parentheses ‘(’ and ‘)’.

2.2 Remote Access Laboratories

RAL are a class of NCS that is used to control equipment over the Internet. The particular
type of RAL concerned in the current context is a Peer-to-Peer setup (Maiti et al. 2015a,
b). Such a system has a Controller Interface (CI), the master node, which takes inputs
from the human operators, processes them according to a program, sends corresponding
commands to the controller unit of the experimental setup at the remote location, and receives
any feedback. The Controller Unit (CU) comprises the slave nodes or devices in the RAL system,
and it can be seen as a DFA as it changes states depending upon the command executed on
it. Each state is discrete and based on a particular event. The language between CI and CU
is the communication protocol for the instrument control in the P2P RAL. This language
consists of the very basic (or atomic) components of instrumentation:
• read (r) - reading the value of a port (sensors and actuators)
• write (w) - writing a value to a port (for actuators)
• wait (a) - pausing the CU to maintain synchronization
There can be other composite commands based on these. The experiment creators
in these RAL systems are not capable of creating such advanced programs for their rigs.
They are capable of creating very basic commands, and the learning experience of creating
higher-level programs is part of the educational aim of the RAL system.


The aim of this paper is to establish a method based on control theory to aid the
makers of the experiments in creating complex programs. The main contributions
include a method to create regular expressions that can describe the language between
a particular CI-CU pair. These regular expressions are then used to create complex
programs.
In terms of a general IoT, this problem is applicable where a slave device interacts
with a master device for the first time. It may have interacted with other master
devices previously and is thus capable of “knowing” what to expect from the new interaction.
These master nodes may work without human interventions and generate random
commands themselves. Thus, the slave node can keep track of what the master node is
requesting and determine whether it is suitable for the overall goal of the IoT and
for the node itself. This can be done if the commands exchanged previously are used to
determine a set of regular expressions that can define the language between the nodes.
The new interaction can be based on this set of predefined regular expressions. This
approach of automatically collecting information from the regular use of the devices is part
of reinforcement learning approaches to implementing intelligent devices. With time,
devices that are capable of learning by analyzing past data (interactions or commands
in this case) can gain high reliability in their operations (Garcia and Fernande 2015;
Schmidhuber 2015).
This RAL application is similar to the problem of simultaneous localization and
mapping (SLAM) (Jaulin 2011) used in autonomous cars in that a large number of
sensors and actuators are involved. However, the experiments are not the same as
unmanned vehicles or robots: they are run by humans and depend on human
inputs throughout the entire operation.

3 The RAL Experiment Setup

This section describes the RAL control strategies in terms of algorithmic information
theory.

3.1 RAL Experiment Rig State Space

The states in the CU are changed according to the commands from the CI. Each experiment
has a finite set of actuators and sensors connected to the CU ports (R). At any given
time, the rig can have a discrete value on each of the ports. Thus, any command (C) executed
results in a change of the state space of the rig Y, i.e. changes the rig from one discrete
state to another:

Ri+1 (Y) = ARi (Y) + BCi (1)

where A and B are constant matrices for an experiment rig. In terms of simple control,
only the actuators have any impact on state transitions, as these are the only components
that can directly change the configuration of the experimental rig. From a decision and
control point of view, the state space contains the values of all active ports on the
MCU for both sensors and actuators. Technically, the state space of the rig may be

infinite if each actuator is allowed to attain values between −∞ and ∞. However, in
practical applications, it can only attain a finite set of states, and hence a finite number
of transitions between them.
There can be different types of states. Valid states are all possible rig states that are
stable and thus permissible. Error states are all possible rig states that will break the rig
and make it inactive, and are thus not permissible. While valid states are deterministic, error
states cannot be identified in advance as they are never recorded; anything other than
a desired valid state is an error state. An error state usually occurs when a transition
between two valid states fails.

3.2 Describing Control Output as a Sequence


The change in the states can be represented in the form of a sequence of input symbols,
i.e. a sequence

c_1^(t_1) c_2^(t_2) c_3^(t_3) … c_q^(t_r)   for q, r > 0   (2)

where c_i represents a command issued at time t_j and

c_i ∈ {𝕣(p), 𝕨(p, v), 𝕒(v)} ∪ N   (3)

where N is a set of composite commands which may already be known for the experiment,
i.e. N = {(𝕣 | 𝕨 | 𝕒)∗} and |N| ≥ 0.
The aim is to create a ‘program’ or set of instructions that can generate this sequence
of symbols. If only repeating sequences of commands are used, then the corresponding
programs can be represented as a static sequence of statements generating the
commands/symbols. While it is easy to create a program that only generates static repetitions
of symbols, a more complex technique is required to create functions or subroutines
that involve variable inputs, i.e. the values of v in Eqs. 2 and 3.
The method to obtain the algorithms is depicted in Fig. 2. In this work it is based on a
Deterministic Finite state Automaton (DFA) and its corresponding regular language/regular
expression, which is able to identify conditional statements and iterations.
More powerful automata may be used to implement more complex functions. A partitioning
algorithm is used to obtain clusters of commands that are issued within definite
time periods from training data sets. These clusters are then converted to their minimal
regular expressions. Once the DFA is created, it is minimized to remove any redundancy.
Then the DFA is converted to a ‘program’ or ‘algorithm’ which is capable of processing
input parameters to generate variable output sequences, but according to the constraints
defined within itself.

Fig. 2. The steps to find potential subroutines


4 Clustering to Obtain the Automaton

This section presents the method to create the regular expression list and their potential
applications.

4.1 Modified Regular Expressions

A symbol in the regular expressions is considered as a command along with its port(s).
For example, 𝕨p1 represents a write command on port p1. For parallel executions, multiple
ports can be specified, e.g. 𝕨p1p2 represents a write command on p1 and p2. The symbols
can also be parameter restrictive (𝕨p^v), where a symbol is unique for a port (p) and value (v).
For the sake of simplicity, the following sections mention port restrictions only,
but the procedures can be applied with parameter restrictions as well.
For finding closely related command sequences, the parameters that are passed in
the write commands, or whether a command is a read or a write, are not relevant. The only
thing that matters is the repeating sequence of commands. A subsequence of commands
can be regarded as an element of a regular language that is accepted by the CU automaton
(Maiti 2015a, b), ensuring that the rig (or CU) is always in a stable state. For every such
subsequence, there is a regular expression obtained from it. This can be done by
using the Hopcroft algorithm for minimization (Garcia 2013) and Kleene's algorithm
(Gross and Yellen 2004) for creating a regular expression from a DFA. The DFA may be
constructed (as shown in Fig. 3) with respect to a particular subsequence (s) for a particular
CU (e) as

Y_s^e = {Q, Σ, δ, q1, F}

where,
• Q contains (r + 2) states. There are r non-final states (q1 … qr) corresponding to every
read/write command in the sequence in order, a single non-final fail state qf, and a
single final state that is appended at the end (qr+1).
• Σ = the set of unique commands from the input sequence.
• q1 is the first state, corresponding to the first command from Eq. 2.
• δ = {δ(qi, c) → qi+1, δ(qi, c′) → qf} for i ≥ 0, where c is the command that is executed
between qi and qi+1 and c′ represents any command symbol except c.
• F contains only one state, which is qr+1.


Fig. 3. An example of the DFA of a sequence

As an example, if a sequence of commands is

𝕣p1 𝕨p2 𝕣p2 𝕨p3 𝕨p3 𝕨p3 𝕨p3 𝕨p3,

then the corresponding minimal regular expression can be obtained as

𝕣p1 𝕨p2 𝕣p2 (𝕨p3)∗

For creating regular expressions from command sequences, the notation of the
regular expressions must be extended to include the time information. The time
information is neglected while creating the minimal regular expression for a sequence. Once
a regular expression is obtained, the time information is embedded. To achieve this, a
time value ρi is associated with every command in the command sequence as discussed
in Eq. 2, where ρi = ti+1 − ti. Time gaps can be decisive, when they are known to be of constant
value and the regular expression contains the constant value in place, or they can be
indecisive, when they are represented only as a variable. Thus, a regular expression

𝕣p1 7 𝕨p2 5 𝕣p3 ρ3 𝕨p3 ρ4 (𝕨p4 3 𝕨p5 ρ5)+

means that the time gap between 𝕣p1 and 𝕨p2 is represented as the constant 7, the gap between
𝕨p2 and 𝕣p3 is always a constant 5, and similarly, whenever 𝕨p4 and 𝕨p5 are repeated, the
time gap between them is always 3. However, the time gap found between 𝕨p3 and 𝕨p4 is variable
and thus marked with the symbol ρ4. For any practical application, there can be a tolerance
value within which the time between 𝕨p4 and 𝕨p5 can vary but still be regarded as the constant
3 of the original subsequence.
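A much-simplified sketch of this idea is given below: consecutive repetitions of the same command are collapsed into a starred group, which reproduces the simple example above. This is only an assumption-laden shortcut; the paper's actual pipeline uses Hopcroft minimization and Kleene's algorithm, and the time annotations are omitted here.

```python
# Simplified run-length collapse of a command sequence into a regular
# expression; Hopcroft/Kleene processing and time annotations are omitted.
from itertools import groupby

def collapse(sequence):
    parts = []
    for cmd, run in groupby(sequence):
        count = sum(1 for _ in run)
        parts.append(f"({cmd})*" if count > 1 else cmd)
    return " ".join(parts)

seq = ["r_p1", "w_p2", "r_p2", "w_p3", "w_p3", "w_p3", "w_p3", "w_p3"]
print(collapse(seq))   # -> r_p1 w_p2 r_p2 (w_p3)*
```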

4.2 Subsequences of User Commands

To identify repeating subsequences, the procedure shown in Algorithm 1 is used. The
input to the algorithm is the command sequence with respect to time, as shown in Eq. 2,
which can also be regarded as an input sequence. It is a one-dimensional set of data
points (D) that represents the commands, and the distance between two points is a single
integer value representing the time difference between them. The algorithm works in two phases.
In the first phase, all the subsequences of the input sequence are generated to make a list
(LG), which is analyzed to determine which of them are repeated multiple times. Every
element in the list is of the form


<Subsequence><regular expression>

where the regular expression is obtained as discussed previously. In the second phase,
a new list LX is created as a copy of LG along with a new attribute, the number of appearances,
that represents the number of times the regular expression has appeared in LG. An
element in LX is of the form

<regular expression><number of appearances = 1>

where initially the number of appearances for each regular expression is set to 1.
There can be two broad types of similarity that could be found within LG:
1. Static time-bound: Any two regular expressions can be completely identical. These
are basic composite commands that maintain time differences between command
sequences that never change, i.e. for two sequences s1 and s2

ρ_i^(s1) = ρ_i^(s2)   for i > 0   (4)

Ideally, these types of commands will be found at a very low level, containing only
a few atomic commands. They do not involve iterations or conditional checking. These
commands are easy to identify and can be automatically stored without further
processing or human interventions.
2. Dynamic time-bound: The regular expressions are identical in this case if the
commands are the same and in the same order, but the time information is different,
or the time gaps are perfect divisors or multiples of those of the corresponding
subsequence. The time gaps can be regarded as variable wait (𝕒) commands being executed
between the subsequences. This repetition represents a variable that is collected as
an input to the corresponding command. The relation between the corresponding
time gaps in the sequences can be linear or non-linear. This type of subsequence
requires further processing, which can be solved with further analysis of the data or
involve human interventions depending upon the context of usage. These types of
commands contain conditional checks and iterations. Such statements can be

constructed based on the operators in the regular expression, as described in the next
section.
If any two elements (e1, e2) ∈ LX are found to be identical with a static time-bound,
then e2 is removed from the list and the number of appearances for e1 is increased by 1.
Since e1 and e2 are exactly the same in this case, there is no need to update the time
information in e1's regular expression.
If any two elements (e1, e2) ∈ LX are found to be identical with a dynamic time-bound,
i.e. they differ only with regard to time information, then e2 is also removed and e1's
number of appearances is increased by 1. But in this case the time information needs to
be updated, and any previous constant time information needs to be replaced with a
variable symbol. For example, if there are two elements with regular expressions

𝕣p1 5 𝕨p2 6 𝕣p3 7 (𝕨p4 9 𝕨p5)∗  and
𝕣p1 5 𝕨p2 6 𝕣p3 7 (𝕨p4 15 𝕨p5)∗

with a number of appearances of 1 each, then they are identical, except that the time
information between 𝕨p4 and 𝕨p5 is different. Thus, the new regular expression representing
both would be

𝕣p1 5 𝕨p2 6 𝕣p3 7 (𝕨p4 ρ1 𝕨p5)∗


The list is traversed multiple times until it cannot be reduced any further. At the end
of the procedure, the list has a set of regular expressions that can be used to describe all
of the previous interactions between the devices. This list contains some elements where
the regular expression has appeared only once, and some others may have been repeated
multiple times. The regular expressions that have been repeated multiple times are of
the most importance as they signify repeated use of the same set of commands.
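The sketch below illustrates, under simplifying assumptions, the core of this two-phase procedure: commands are taken as (name, time) pairs, every contiguous subsequence is enumerated, and subsequences with the same command pattern (time information ignored, as in the dynamic time-bound case) are merged while counting their appearances. Regular-expression collapsing and time-variable substitution are left out.

```python
# Rough sketch of building LX: enumerate all contiguous subsequences and
# count how often each command pattern appears, ignoring time information.
from collections import Counter

def build_LX(sequence):
    n = len(sequence)
    appearances = Counter()
    for i in range(n):
        for j in range(i + 1, n + 1):
            pattern = tuple(cmd for cmd, _t in sequence[i:j])  # drop times
            appearances[pattern] += 1
    return appearances.most_common()  # most repeated patterns first

session = [("r_p1", 0), ("w_p2", 5), ("r_p3", 11), ("w_p4", 18),
           ("w_p5", 27), ("w_p4", 30), ("w_p5", 39)]
for pattern, count in build_LX(session)[:5]:
    print(count, " ".join(pattern))
```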
For other applications, several other attributes may be added to the list: for example,
the failure probability of an expression, i.e. the likelihood of the associated command
structure failing or creating an unstable state in the device, or whether any special
authorisations are required to execute a particular regular expression with respect to the
devices concerned.

4.3 Application of Regular Expressions

This method to create the regular expressions can be used in many applications in an
IoT system such as:
• Validations: With increasing reliance on devices to deliver critical services, it is very
important to ensure safety and integrity of intelligent devices in an IoT system. This
can be done through validation of what is executed on the device. Validation aims
to verify whether executing a command will lead to an unstable state. This can be


done by strictly enforcing the regular expressions and not allowing anything that does
not match known regular expressions.
• Evaluation: In some applications such as in RAL, these regular expressions can be
used to evaluate the control of the devices. This can be done by following a relaxed
matching of any two sequences where, the difference between them can determine
how much the student has deviated from an ideal sequence of control.
• Variable control interface: The control interface and the available controls may be
altered in real time depending upon the current network conditions. Such a scheme
would choose different command sizes to ensure that a desired level of interactivity
is maintained while ensuring that proper time difference between successive
commands is maintained.
Apart from these, there can be other applications, such as finding patterns in the
control and suggesting the next set of commands. The application addressed in this paper
is automatically identifying potential functions, at least partially, that can
be stored corresponding to a particular experiment in a RAL system.
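For the validation use case described above, one simple (and admittedly simplified) realization is to map each known command to a single character and let an ordinary regular-expression engine check incoming sequences against the stored expressions, as sketched below; the command set and the stored expression are hypothetical examples.

```python
# Illustrative validation: encode commands as characters and match the
# encoded sequence against a stored regular expression.
import re

alphabet = {"r_p1": "a", "w_p2": "b", "r_p3": "c", "w_p4": "d"}  # command -> char
stored = re.compile(r"abc(d)*")   # stands for  r_p1 w_p2 r_p3 (w_p4)*

def is_valid(commands):
    try:
        encoded = "".join(alphabet[c] for c in commands)
    except KeyError:
        return False               # unknown command: reject
    return stored.fullmatch(encoded) is not None

print(is_valid(["r_p1", "w_p2", "r_p3", "w_p4", "w_p4"]))  # True
print(is_valid(["r_p1", "w_p4"]))                          # False
```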

5 Example Application and Implementations

This section describes the method to convert the regular expressions into a subroutine
of a program. It requires analyzing some advanced aspects of the regular expressions.

5.1 Iterations

An ‘iteration’, or loop, in computing is a sequence of repeating statements that are
generated from a given parameter. This is usually done in the form of a for loop, while
loop, etc. In the current context, iterations are repeating read/write command sequences
that are executed on the same port.
Thus the * operator represents loops or iterations. Any set of commands within a
regular expression that is notated with a * is a repeating sequence. However, the condition
of the iteration is difficult to determine. A * can only indicate that the command set is
repeated multiple times, but it cannot determine what the determining parameter may be,
i.e. whether the loop is based on a fixed number of iterations, e.g. 0 to n (for loop), or based on
a condition of a sensor (while loop).
Since a for loop can also be constructed as a while loop with an index variable in
it, the while-loop structure is used as the general form. Whenever an (expr)* is detected,
a statement ‘while(ui < vi)’ is added, where ui represents the index variable
and vi represents the input variable to the loop. Before the loop ends, the statement
‘ui = ui + 1’ is added to increase the index variable. The experiment maker can
alter the while condition depending upon what is required in the context of the experiment.
The expr is then processed further to generate the statements inside the while loop.
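A minimal sketch of this loop-generation rule is given below: whenever a starred group is found, a while skeleton with an index variable is emitted and the body of the group is expanded inside it. The index/input variable naming (u1, v1) follows the convention in the text; the emitted statements are pseudocode strings, not any particular device language.

```python
# Emit a 'while' skeleton for a starred group (expr)*, as described above.
def emit_loop(body_commands, index=1):
    lines = [f"while (u{index} < v{index}) {{"]
    for cmd in body_commands:
        lines.append(f"    execute({cmd});")          # placeholder for the real call
    lines.append(f"    u{index} = u{index} + 1;")     # advance the index variable
    lines.append("}")
    return "\n".join(lines)

print(emit_loop(["w_p4", "w_p5"]))
```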


5.2 Conditional Checks


A conditional check in programming is a statement that bifurcates the subsequent set of
instructions into at least two groups: one that is followed if the condition is satisfied and
another set if not. This is done with ‘if (conditions) {} else {}’ statements
in conventional programming languages. In the current context, a conditional statement
is executed when a set of commands is executed in place of another, i.e. multiple large
regular expressions are identical except for some sections that are not identical. For example,

𝕣p1 5 𝕨p2 6 𝕣p3 7 (𝕨p4 9 𝕨p5)∗  and  𝕣p1 5 𝕨p2 6 𝕣p3 7 (𝕣p4 9 𝕨p5)∗

These can be re-written as

𝕣p1 5 𝕨p2 6 𝕣p3 7 (𝕨p4 9 𝕨p5 | 𝕣p4 9 𝕨p5)∗

In this example, 𝕣p1 5 𝕨p2 6 𝕣p3 is followed by either 𝕨p4 9 𝕨p5 or 𝕣p4 9 𝕨p5. This indicates
that a condition check must be done to decide which command is to be executed. Thus
the | operator in regular expressions can be converted to if .. else statements.
However, creating such regular expressions will require further processing on LX.
Moreover, as the if statements are complicated and require a large amount of data to
determine the conditions, the method to create potential subroutines in this paper ignores
if statements and any | operator. This is because the data set from the makers' interaction may
not be sufficiently large to establish a conclusive condition to check. Also, unlike loops,
there is no default condition that can be placed either.
The method to find the real while-loop conditions and if-statement conditions can be
implemented with better computational intelligence tools that establish the relationships of the
changing sensor values and estimate which sensor values are relevant to the conditions.
These methods have been designed here for the specific purpose of allowing the makers to
construct the subroutines. But these descriptions may be used for various other
applications as well. For example, from a general IoT system perspective, having a | operator
indicates multiple options in the interaction between the devices. Also, having a *
operator indicates multiple occurrences of commands, but the number of such occurrences
may need to be optimized based on the proper conditions.

5.3 Conversion Algorithm

In a RAL environment, once a maker creates an experimental rig, they run the rig locally
over a LAN to record the interactions. These interactions are then used to create the list
LX of regular expressions. The procedure to convert the LX list of regular expressions
to subroutines is as follows:
Step 1. Remove any element from LX that contains the | operator.
Step 2. Sort the list LX in descending order of the number of appearances. This
will put the regular expressions with the highest frequency of occurrence at the front of
the list.


Step 3. If there is any regular expression expr in LX that is part of another regular
expression expr' and both have an equal number of appearances, then expr can be
removed from LX. This is because expr is always a part of the larger expression.
Step 4. For every regular expression expr in LX, process it with Algorithm Convert()
as described earlier. Once the output program is ready, the name of the program is
generated randomly and all the variables used in it are put in as parameters to the
subroutine/program.
Finally, when all the potential algorithms are ready, the human maker of the experiment
can choose to save any algorithm as a subroutine or reject it. They may
alter some of the algorithms by placing the conditions for the while and for loops and the if
segments.
This procedure can generate a rough sketch of the algorithm as a subroutine/program
that can be used to reproduce what the maker wants the users of the experiment to do
with it. The maker need not be presented with a potential program if the number of
appearances of the corresponding regular expression is very low; only the most
frequently used regular expressions are of interest. The maker may have control of how
many they want to see and save. The actual programming statements will depend upon
the programming language used for the devices.
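Steps 1-3 can be sketched as simple list processing, assuming LX is a list of (regular expression string, number of appearances) pairs; Step 4 corresponds to loop emission such as the sketch in Sect. 5.1 and is not repeated here.

```python
# Sketch of Steps 1-3 over LX = [(expression_string, appearances), ...].
def prepare_LX(LX):
    # Step 1: drop expressions containing the union operator '|'.
    LX = [(expr, n) for expr, n in LX if "|" not in expr]
    # Step 2: sort by number of appearances, most frequent first.
    LX.sort(key=lambda item: item[1], reverse=True)
    # Step 3: drop an expression contained in a larger one with equal count.
    kept = []
    for expr, n in LX:
        contained = any(expr != e2 and expr in e2 and n == n2 for e2, n2 in LX)
        if not contained:
            kept.append((expr, n))
    return kept

LX = [("(w_AB)*", 12), ("r_D r_E", 3), ("(r_D r_E)*", 3), ("w_p4 | r_p4", 2)]
print(prepare_LX(LX))   # -> [('(w_AB)*', 12), ('(r_D r_E)*', 3)]
```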

The subroutines/programs generated by this method may be partial if they contain while
loops, as the loop conditions cannot be conclusively determined. However, the method can identify
complete subroutines as well if there are no while loops and the regular expressions are static
time-bound.

5.4 Test Setup


For testing the proposed process, a LEGO-based robotic vehicle with two sensors (see
Fig. 4) was used. The two wheel actuators were connected to motors on ports
A and B; they worked in a differential manner to make the robot turn and moved in

unison for moving forwards and backwards. Two sensors (D and E) were mounted atop
another actuator (C). The sensors (D and E) did not stream any value; the user had
to request the value through a CI when desired. A, B and C were also controlled through
a CI. In total there were 3 actuators and 2 sensors. This example was parameter restrictive,
i.e. a write symbol is represented by 𝕨p^v. Also, in this example the ports A
and B operate in parallel, thus they may be considered as a single port AB.

Fig. 4. The robotic car

The maker of this experimental setup was a novice, and thus the only commands the
maker could create were 𝕨AB, 𝕨C, 𝕣D and 𝕣E. There were no explicit wait commands used.
The inputs were passed from the CI to the LEGO Mindstorms brick. The CI consists of 7
buttons: four associated with the ways 𝕨AB may be executed (forward v1, backwards v2,
left v3 and right v4), plus 𝕨C, 𝕣D and 𝕣E respectively. The experiment was designed to move the
robot around and collect data with the sensors at certain positions. A session of 145 s
was recorded and used as a training data set. The network latency was considered
negligible for training.

5.5 Results
The histogram (partial) of the regular expressions is shown in Fig. 5. The regular
expression (𝕨AB^v1)*, representing a move-forward command, has the highest number of
appearances, as this command is executed many times. Similarly, (𝕨AB^v2)*, (𝕨AB^v3)* and (𝕨AB^v4)*
have appeared multiple times as well, as these are basic commands of the interface. But
there is no need to save these commands in while loops. The next set of commands that
appears most is (𝕨AB^v1 ρ1 𝕨AB^v2 ρ2 𝕨AB^v3 ρ3 𝕨AB^v4 ρ4)∗. This indicates that the car was moved
multiple times with the set values v1, v2, v3 and v4. The corresponding program can be
saved as a particular function. The maker can replace the ρ1, ρ2, ρ3 and ρ4 wait variables
with constant values. This application allows the experiment maker to identify the
correct composite functions with multiple read/write commands. This way the maker
can learn about programming devices in new ways, which is an educational goal of the
RAL system. Also, after saving this composite function, it allows the maker to create a
better interface with specific commands, reducing the effects of Internet latency on
interactivity.


Fig. 5. The histogram of LX, where s1 = (𝕨AB^v1)*, s2 = (𝕨AB^v2)*, s3 = (𝕨AB^v3)*, s4 = (𝕨AB^v4)*,
s5 = (𝕨C^v5)*, s6 = (𝕣D)*, s7 = (𝕣E)*, s8 = (𝕣D 𝕣E)*, s9 = (𝕨AB^v1 ρ1 𝕨AB^v2 ρ2 𝕨AB^v3 ρ3 𝕨AB^v4 ρ4)∗.

The histogram may be used for evaluation as well. This particular experiment (or
device) learns the sequence from the training data, which is then uniquely attached to it.
Thus, this sequence can be expected to occur in any future interaction with other
nodes. For example, if other users are given the same interactive CI and they do not perform
in an accurate manner, e.g. skip some commands, then the corresponding histogram
will be different. The deviation in the histogram can be measured to determine how the
user has performed compared to an ideal histogram specific to this experiment.
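One simple way to quantify this deviation, assuming the histograms of appearances are available as count vectors over the same set of expressions, is a normalized L1 distance between the two distributions; the counts below are hypothetical.

```python
# Illustrative histogram comparison between a training ("ideal") session
# and a user session; counts are hypothetical.
import numpy as np

ideal = np.array([20, 8, 8, 8, 5, 6, 6, 4, 3], dtype=float)  # s1..s9 from training
user  = np.array([15, 8, 2, 8, 5, 0, 6, 4, 1], dtype=float)  # a student's session

deviation = 0.5 * np.abs(ideal / ideal.sum() - user / user.sum()).sum()
print(f"Histogram deviation: {deviation:.2f}  (0 = identical, 1 = disjoint)")
```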
The method to convert the expressions to the output programs is very basic and
heuristic in nature. However, it illustrates the usefulness of a regular-expressions list.
Also, it is perfectly applicable in RAL, as the system is expected to support the makers
as much as possible but not necessarily present accurate results.

5.6 Advantages and Disadvantages

The proposed method to create regular expressions from the interactions between
devices in an IoT system is robust. It can be used to define various interactions that
would otherwise be unknown at any given time. This creates a generalized model of the
interaction between devices autonomously. There can be several applications for these
properties, as described earlier.
However, the process is very time-consuming. For any given sequence with n
components, the number of subsequences initially is

|LX| = n(n + 1)/2.

The algorithm has a time complexity of O(n²), and n could be very large. Thus, it
may be difficult to use this method in real-time applications. However, it is well suited
for applications where the requirement is for post- and pre-processing of the interactions.
Also, the regular expressions can be created and stored offline, and once the interaction
starts, any incoming sequence can be matched with the list of regular expressions in real
time, thus allowing for real-time validation and for adaptive controller interfaces as
mentioned in Sect. 4.3.


Another limitation is that the procedure described in this paper addresses only
command and port combinations, e.g. 𝕨p4 and 𝕣p3. Only the parameters passed with the
write commands are considered as variables. This is useful in the context of RAL, where
commands are in general associated with a specific port. However, further work can
examine ways to create subroutines that can account for variable ports and parameters.
Further work also needs to formulate and optimize the conditions that can be checked
in the if statements and within conditional checks.
The main concern of this approach is that it is limited to applications like RAL which
do not demand a very accurate reduction and detection of the subroutines. These can
rely on humans to manually rectify any possible errors. If the number of sensors or
actuators increases, or there are very large variations of inputs, then it becomes more
difficult to determine the regular expressions accurately.

6 Conclusions

A method to define a set of regular expressions that covers the language between two nodes
in an NCS was discussed. This allows the nodes to maintain the validity of commands
and the integrity of node operation. It can also allow interactions to be altered as required
according to the network latency conditions, which is especially important in the context
of the Internet. Regular expressions can also be used to determine the deviation of
the usage of a node from a perceived ideal. These regular expressions have been applied
in the case of a P2P RAL, where they are converted to corresponding programs that can be
stored for a particular device. The regular expressions can enable the nodes to determine
the cause of a fault and trace it back to a particular set of commands that may be part
of a regular expression but may not have been executed before. This can support a
reinforcement learning approach for the devices.

Internet of Things Applied to Precision
Agriculture

Roderval Marcelino1(B), Luan C. Casagrande2, Renan Cunha2, Yuri Crotti1, and Vilson Gruber2

1 Laboratório de Pesquisa Aplicada, Universidade Federal de Santa Catarina, Araranguá, Brazil
roderval.marcelino@ufsc.br
2 Laboratório de Telecomunicações, Universidade Federal de Santa Catarina, Araranguá, Brazil

Abstract. Nowadays, the number of small family farms has grown considerably,
and they represent the main type of agricultural enterprise in the world. Family
activity in agriculture is considered significant in terms of production of strategic
food for the population, mainly in developing countries. Small family farmers, in
general, are strongly exposed to the weather and, as a consequence, usually do not
maximize the harvest, which reduces their income. Analyzing the current context of
small family farming, this paper proposes the development of a low-cost solution
for the control, monitoring and automation of agricultural greenhouses. The proposed
solution was designed using prototyping platforms such as the Raspberry Pi (RPi)
and Arduino in conjunction with sensors (temperature, humidity, and light, among
others) and a few actuators (drip system, fans and incandescent lamps). For interaction
between the farmer and the system, a web-based human machine interface was
developed. Currently there is a prototype of the proposed system running in the campus
facilities of Universidade Federal de Santa Catarina (UFSC) – Araranguá. Performance
and stability tests were carried out on the system in order to validate the effectiveness
of the proposed architecture. From this study, developed using sensors and actuators in
a controlled environment prototype, it is possible to conclude that low-cost solutions
for family farms are both necessary and feasible.

1 Context
Nowadays, industrial farms contribute positively to the trade balance of several
countries in the world, such as Brazil. As a result, the real importance of family
farms is overshadowed; thus a segment within agriculture that is essential to these
countries for several reasons (economic, socio-cultural, environmental, and food
security, among others) is no longer prioritized.
According to [5], there is no universal definition of family farms, although
some are more widely accepted. However, as reported by [9], across 36 definitions of
family farms, nearly all specify that, to be considered in this category, a member of
the household must own, operate and/or manage the farm either in part or fully.


Considering these definitions, it is possible to understand the importance of these
farms. As stated by [9], there are more than 570 million farms in the world and
more than 500 million of these are owned by families. Family farms represent the
vast majority of farms in the world, but a smaller share of the world's farmland,
which means that they are, on average, smaller than nonfamily farms.
Considering Brazil as an example, it is known that family farms represent
84.4% of the total number of agricultural establishments, while occupying only
24.3% of the country's agricultural land (80.25 million hectares) [5]. In addition,
according to [5], an important factor to highlight is that family farming
employs seven out of every ten people working in this field; that is, although
it occupies an area considered small relative to the number of farms, family farming
contributes by employing a large number of people.
In spite of this context, where it is clear that this segment of agriculture is
essential for several countries, family farmers are at a distinct disadvantage compared
to large farmers. This fact can be observed mainly in the level of technology
applied in the field, because large-scale producers can easily access the main
technological resources due to their purchasing power. This technology can guarantee
a gain in productivity and a reduction of production risk. Most family farmers
cannot acquire these technologies, mainly because of their high cost.
Considering Brazil as an example again, according to [4], the productive
performance of family farmers with respect to their participation in the
country's agricultural production reflects manual labor more than a process
of technological intensification. As a consequence, in most cases family farmers
cannot maintain the same level of productivity per area as large-scale farmers.
An example within this context of technology use on farms is the acquisition
of monitoring and control systems for greenhouses, thus ensuring that
the production of a specific crop will be maximized and will not suffer losses
through the influence of the weather. These systems, which generally have a
high cost, can guarantee the continuous production of a high-value crop,
because through data acquisition and classical control techniques they
can maintain a controlled environment inside the greenhouse, with
low error, independently of the outside conditions.
Technology is an essential means to ensure that these family farmers are
competitive in the market, thus guaranteeing a real profit from their production and,
as a consequence, ensuring the income security of their families and their
collaborators. Considering this fact, the development of low-cost technologies for
this class of farmer is essential, because at the same time that it ensures a higher
income, it reduces the risk of a high investment in a specific crop.
Generally, greenhouses are high-level technological facilities for the production
of any type of crop. Control and monitoring systems for greenhouses have
been widely studied, as in [3,11,12]. These authors proposed systems that follow
the same line as the systems currently commercialized, differing basically in the
methods used to implement the model, such as the communication system, the sensors
used to monitor the system, and the actuators, among others.


Control algorithms are also evolving, as can be observed in [7,16]. On the
other hand, new factors have created opportunities for the development of new low-cost
solutions, as shown in [1,10,13,15]. Although some studies have been developed
aiming at low-cost control and monitoring systems for greenhouses,
these systems were not developed specifically for family farmers. Therefore, this
work describes a low-cost system that was developed with family farmers in mind.

2 Purpose or Goal
Considering the importance of family farmers and the need to provide them with the
opportunity to use low-cost technology that could contribute to their income, this
work proposes the development of an efficient and innovative low-cost solution to
control, monitor and automate small greenhouses.
The proposed idea is based on Internet of Things (IoT) concepts, where
the system was developed focusing on the interconnection of all agents involved.
The projected system should be capable of acquiring data from various sensors
(temperature, humidity, solar irradiation, among others) in real time and displaying
them in a human machine interface (HMI). In addition, these data need to be
stored in a database in such a way that they can be consulted at any time, either for
simple visualization or for data processing based on the acquisition history of the
sensors.
Besides data acquisition, the system should also have a user-friendly
interface, where the user has control over all the actuators used in the
automated structure. Considering that this activation is done remotely
using the Internet as the means of communication, this feature brings
convenience to the farmer.

3 Approach
The embedded system proposed was focused on speed of development and
results. Processing solutions such as the Arduino and Raspberry Pi platforms were
chosen because they enable fast and efficient projects.
The proposed system can be divided into four large sections, according to
Fig. 1:
1. Sensors and Actuators;
2. Acquisition/Processing Central;
3. Web application;
4. Human machine interface.
The Acquisition/Processing Central has two data processing systems, namely
the Arduino and the Raspberry Pi. The Arduino is responsible for performing low-level
and real-time data acquisition. After this process, the data are transferred from the
Arduino to the Raspberry Pi via UART serial communication. On the other side,
the Raspberry Pi is responsible for higher-level data processing and for
communicating with the web server, in order to store the data and to provide
information to the user through a friendly interface.


Fig. 1. System architecture

3.1 Sensors and Actuators

The monitoring and control system developed for data acquisition and actuation
in the crop is composed of several types of sensors and actuators. According to
Wendling (2010), a sensor is a device sensitive to some form of energy in the
environment, providing information about a physical quantity that needs to be measured.
There are two main types of sensors: analog and digital. Analog sensors can measure
any value over time within their operating range. Digital sensors, on the other hand,
can assume only two values in their output signal, zero or one. Actuators are devices
that act in some way on the system variables.
As shown in Fig. 1, block (1) represents the sensors and actuators used in
the system. Among the sensors used, the DHT11 is responsible for capturing
temperature and ambient humidity. This sensor can measure temperatures between
0 and 50 ◦C and humidity between 10% and 95%. It uses an NTC thermistor for
temperature and an HR202 element for humidity. The internal circuit reads the sensors
and communicates with the Arduino via the One Wire communication protocol.
Another sensor used in the system is the DS18B20, which is responsible for
measuring soil temperature. It works in the range of −55 to 125 ◦C with an error of
±0.5 ◦C. This sensor has an A/D converter circuit and a memory in which the
data are saved. After the conversion, the Arduino reads the data through One
Wire digital communication.
Aiming to measure the soil moisture, a hygrometer sensor was used. This
is an analog sensor composed of a component that, through two probes,
measures the soil moisture by measuring the current between the probes, and of
a trimmer potentiometer circuit in which the sensitivity is adjusted.
Among the actuators used, fans provide air circulation and, as a consequence,
change the temperature of the environment. As the proposed system
is a prototype, two small 12 V/1 A fans were used. These fans can be started from
the HMI through digital ports of the Arduino.
Another actuator used is an incandescent bulb. Its function is to warm the
environment according to the pre-defined values. The user can activate this
actuator in the HMI; the Raspberry Pi receives this requisition and sends
it to the Arduino, which starts the system by sending a digital signal to a relay.
Finally, an irrigation system was proposed to adjust the soil moisture and nutrition
of the crops. This actuator consists of an electromechanical valve that,
when triggered via a relay, allows the flow of water. Again, the user can control
this system through the HMI.

3.2 Acquisition/Processing Central

As shown in Fig. 1, block (2) represents the Arduino and the Raspberry Pi
microcomputer. In general, the Arduino is the component responsible for reading the data
from the sensors and for triggering the actuation systems proposed for this system.
Another important responsibility is to send the acquired data to the Raspberry Pi.
The communication is bidirectional, so the Arduino also receives commands from
the Raspberry Pi to control the actuators.
On the other side, the Raspberry Pi microcomputer is responsible for the high-level
data processing and for the connection between the prototype and the web
application. Its roles are to receive the sensor data captured by the Arduino,
process the received data, send them to the user application, receive commands
to trigger the actuators via the web application and send these commands to
the Arduino. A serial connection was used to connect the Arduino and the Raspberry
Pi, while the connection between the Raspberry Pi and the web application
was established through a wireless network (WiFi IEEE 802.11).
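As a rough sketch of this bridging role on the Raspberry Pi side (the serial port name, endpoint URL and message format below are assumptions for illustration, not details given by the authors):

import json
import requests   # HTTP client for the REST call to the web server
import serial     # pySerial for the UART link to the Arduino

# Assumed serial port and server endpoint (placeholders).
uart = serial.Serial("/dev/ttyACM0", 9600, timeout=2)
ENDPOINT = "http://example-greenhouse-server/api/readings"

while True:
    line = uart.readline().decode("utf-8").strip()
    if not line:
        continue
    # Assumed Arduino message format: "air_temp,air_hum,soil_temp,soil_moist"
    air_temp, air_hum, soil_temp, soil_moist = map(float, line.split(","))
    payload = {"air_temp": air_temp, "air_hum": air_hum,
               "soil_temp": soil_temp, "soil_moist": soil_moist}
    # Forward the reading to the web application for storage in the cloud.
    requests.post(ENDPOINT, json=payload, timeout=5)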

3.3 Web Application

The description of the web application development can be divided into two
subsections: the first detailing the subsystem responsible for receiving data
from the physical system, and the second describing the human machine
interface (HMI) created to interact with the users.

3.3.1 Data Management System


The data management system is part of the web application, which was developed
using Node.js. Node.js is a development platform for server-side applications
that uses JavaScript and Google's V8 JavaScript engine.
To illustrate the popularity of this language in IoT, it is interesting to quote
an excerpt from an interview with Michael McCool, Intel's chief engineer, where
he affirms that "IoT is a huge mashup of Web services, browser technology, and
embedded technology. JavaScript is pretty useful in all those places" [6].
MongoDB was chosen to store the data acquired from the sensors and the data
generated by the HMI. This is a high-performance non-relational database that is
ideal for storing large volumes of information. This technology is sufficiently
mature for applications of this size and type. Another reason to use MongoDB in
this project is the fact that this database can manage data of any structure, and it
is possible to add new functionalities without redesigning the entire database [14].
Basically, the web server developed exposes a REST API to clients, through which,
via HTTP calls, the Raspberry Pi microcomputer sends the data collected
by the Arduino to be stored in the cloud. For real-time communication
between the web application and the Raspberry Pi, web sockets were used. This
technology creates a persistent communication channel between remote objects
and transfers the data through the network asynchronously. In the other direction of
the communication channel, where the user can send commands to the control
system, it was decided to use the same online communication
channel (web sockets).
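A minimal sketch of what the Raspberry Pi side of this real-time channel could look like, shown here with the python-socketio client for illustration (the paper's server is written in Node.js, and the event names and URL below are assumptions):

import socketio   # python-socketio client

sio = socketio.Client()

@sio.on("actuator_command")          # assumed event name for HMI commands
def on_actuator_command(data):
    # Relay an HMI command (e.g. {"fan": "on"}) to the Arduino over UART.
    print("Command received from web application:", data)

sio.connect("http://example-greenhouse-server")   # placeholder URL
sio.emit("sensor_update", {"air_temp": 24.5})     # push a reading in real time
sio.wait()                                        # keep the channel open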

3.3.2 Human Machine Interface (HMI)


As described by [2], the main objective of HMI is to produce a usable, stable, safe,
and functional system, as well as to develop or improve the safety, usefulness,
effectiveness, and usability of computer systems through the establishment of
physical communication between man and machine. In the developed system,
the authors chose to adopt web technology as the means of interaction between
users and the monitoring/control system. This choice is justified by the fact
Fig. 2. Homepage of the web application


that one of the main objectives was to provide the possibility of monitoring and
controlling the system from anywhere, at any time, through a computer or even a
mobile device.
The web application described in the previous item, besides being responsible
for the acquisition, processing and storage of the data, also provides the user the
opportunity to interact with the system, as described below:

– Access control to the system through an authentication mechanism using user/password;
– Online data monitoring;
– Queries in the database by dates;
– Visualization of the information in tables and graphs;
– Video streaming from a webcam located in the automated greenhouse;
– Control panel for the actuators.

Figure 2 shows the home page of the web application for interaction
with the users. In it, it is possible to see the sensors inside the automated
greenhouse arranged in an intuitive way. The values on this page are updated
automatically as soon as the sensors change their state.
Figure 3 shows another system module, where the user can visualize the data
history of a specific sensor through graphs or tables. This query can be done
using time period filters that can be defined by the user.
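For illustration, the same kind of date-filtered query can be expressed against MongoDB as follows (shown here with pymongo rather than the Node.js driver used by the authors; database, collection and field names are assumptions):

from datetime import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
readings = client["greenhouse"]["readings"]         # assumed DB/collection names

# All soil-moisture readings recorded during one week, oldest first.
cursor = readings.find(
    {"sensor": "soil_moisture",
     "timestamp": {"$gte": datetime(2017, 1, 1), "$lt": datetime(2017, 1, 8)}}
).sort("timestamp", 1)

for doc in cursor:
    print(doc["timestamp"], doc["value"])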

Fig. 3. Query data and visualize with charts

Figure 4 shows the part of the web system where the user can send commands
directly to the automated system. Actuators that are physically arranged in the
greenhouse structure are connected to the acquisition/processing central that


Fig. 4. Actuators control panel

receives the commands from the cloud through web socket. This part of the
whole system allows control of, for example, the irrigation valve, the fans, and
the bulb.

4 Actual or Anticipate Outcomes

The main outcome of the developed prototype was to prove that the utilization
of new and more economical embedded systems can effectively control
family-farm greenhouses, being one of the IoT solutions applied to precision
agriculture.
The system showed its viability by working for about a month without any
problem. The signals were collected and processed, decisions were taken, and the
data were stored for future analyses.
All of the sensors used are low cost and highly available as well. The
application did not require high precision; therefore, devices like the DHT11 and
the DS18B20 met the application requirements without any signal conditioning.
For the Acquisition/Processing Central, the use of two data processing systems
proved beneficial: the Arduino was responsible for the low level and the
Raspberry Pi was responsible for the integration with the web. Both systems
responded adequately in terms of speed, sending and receiving data in good
time.
In general, the proposed system can be considered an appropriate solution
for low-income family farms.
The web application used the non-relational database MongoDB, and it
worked very well considering the stability, data security, and fast answering
of the queries. After a month using the prototype, the database was using just


100 MB, considering that the system was storing data every 10 min. The web
server, developed in Node, worked stably, giving the impression that it was the
correct solution for this problem.
The website layout, developed to support responsiveness, is accessible by
mobile devices. The HTML webpage showed responsiveness overall with the
smartphones tested.
In addition to the HTTP calls that the RPi made to the server, there was also
persistent communication using websockets. This communication did not
present any problem; sending messages between the site and the RPi worked
as expected. An important part of achieving this result was the use of
Socket.IO, which is an abstraction over websockets.
It is also important to point out that the use of JavaScript, mainly
Node.js, is facilitated because it has a large set of libraries covering many necessities and it is
supported by international players like Google, Facebook and Intel, among others.
These IoT technologies are tools integrated into the proposed acquisition and
monitoring model, whose goal is to be the best system for receiving and sharing
information about the monitored system.

5 Conclusion/Recommendations/Summary

The Internet of Things is being used in many fields today, and agriculture
is just another field of application. The cost reduction of sensors and electronic
devices, the miniaturization of these devices, and the use of global positioning
systems (GPS) and geographic information systems (GIS) are all elements that
boost this perspective [17].
With a stronger use of sensing in the crops (soil analysis, humidity, temperature,
weather, among others), it will be possible to build rich crop data to make the best
decision at the right time and in the precise way, improving agricultural
production [8].
In this study, the focus was centered on the computer technologies applied in
greenhouses. Current controlled environments do not have enough computer
technology applied, and this study proved the viability of applying this technology
on family farms.
Family farmers employ many people around the world but do not
have enough investment capacity; nevertheless, they build a strong network of local
agricultural production and regional consumption. This paper showed a real
possibility of applying new and economical computational technologies to support
family crops. With the application of modern technologies, family farmers
will not lose ground to the big producers, whose capacity for automation is
larger.
The developed system responded as expected, acting upon request and storing
the data for future analyses. The system answered in milliseconds, even though
the problem did not require fast answers.
Aiming to guarantee the operation of the system, the installation of a backup
power system is suggested for moments when there are outages.


The use of systems such as the Arduino and Raspberry Pi enabled high
development speed at low cost. There was no need to develop a dedicated
system, and family farmers do not have enough money to support
a specific and dedicated one.
For the next stages, we hope to apply the system in a real greenhouse to evaluate
the results in terms of productivity and quality of the crops. At this moment, only
a prototype of a real greenhouse was used; this work focused only
on the possibility of using this system and its application. Now the goal is to
apply it in the real field.
The use of two processing systems is justified by the lack of analog inputs
in the Raspberry Pi, leaving to the Arduino the processing of sensor
readings and the activation of the actuators, and by the speed of
development of web applications that the Raspberry Pi enables, since it has an
embedded operating system.
It can also be stated that economic feasibility studies still need to
become a main concern of small-scale agricultural technicians,
engineers and managers. It will be important to relate the costs and benefits of
implementing these systems, as well as the rates of return on investment. It is
necessary to translate into money what the cost reduction and the increases
in productivity mean, as well as the gains in food/flower production in these greenhouses.
The integration of applied IoT in precision agriculture will make the agricul-
tural sector more productive. Planting at the right time, right place and with
the right resources will give birth to a new way of planting, focused on product
quality and high productivity.

References
1. Ai, W., Chen, C.: Green house environment monitor technology implementation
based on android mobile platform. In: 2011 2nd International Conference on Artifi-
cial Intelligence, Management Science and Electronic Commerce (AIMSEC). Insti-
tute of Electrical and Electronics Engineers (IEEE), August 2011. http://dx.doi.
org/10.1109/AIMSEC.2011.6010025
2. Bittencourt, A., Müller, R.S.W.: Avaliação dos Princı́pios de Usabilidade. N/A,
São Paulo (2014)
3. Eredics, P.: Measurement for intelligent control in greenhouses. In: Proceedings of
the 7th International Conference on Measurement, pp. 440–447 (2009)
4. Guanziroli: Agricultura familiar e reforma agrária no século XXI. Editora Garamond, June
2001
5. Heberlê, A.L.O.: A agricultura familiar brasileira no contexto mundial,
January 2014. https://www.embrapa.br/busca-de-noticias/-/noticia/1871776/
artigo-a-agricultura-familiar-brasileira-no-contexto-mundial
6. Hunter, L.: The smartest way to program smart things: Node.js: The reasons to
use node.js for hardware are simple: it’s standardized, event driven, and has very
high productivity, February 2015
7. Moreno, J.C., Berenguel, M., Rodrı̀guez, F., Baños, A.: Robust control of green-
house climate exploiting measurable disturbances. In: 15th Triennial World
Congress of the International Federation of Automatic Control (2002)


8. Lee, W.S., Ehsani, R.: Sensing system for precision agriculture in Florida. Comput.
Electron. Agric. 112, 2–9 (2015)
9. Lowder, S.K., Skoet, J., Singh, S.: What do we really know about the number and
distribution of farms and family farms worldwide? Background paper for the state
of food and agriculture (2014)
10. Moga, D., Petreus, D., Stroia, N.: A low cost architecture for remote control and
monitoring of greenhouse fields. In: 2012 7th IEEE Conference on Industrial Elec-
tronics and Applications (ICIEA), pp. 1940–1944, July 2012
11. Rangan, K., Vigneswaran, T.: An embedded systems approach to monitor green
house. In: Recent Advances in Space Technology Services and Climate Change
2010 (RSTS and CC-2010), November 2010. Institute of Electrical and Electronics
Engineers (IEEE). http://dx.doi.org/10.1109/RSTSCC.2010.5712800
12. Shin, C.S., Lee, Y.W., Lee, M.H., Park, J.W., Yoe, H.: Design of ubiquitous glass
green houses. In: 2009 Software Technologies for Future Dependable Distributed
Systems, March 2009. Institute of Electrical and Electronics Engineers (IEEE).
http://dx.doi.org/10.1109/STFSSD.2009.48
13. Rong-Gao, S., Zhong, W., De-Chao, S.: Greenhouse temperature and humidity
intelligent control system. In: Proceedings of the 3rd WSEAS International Con-
ference on Circuits, Systems, Signal and Telecommunications, pp. 120–125 (2009)
14. MongoDB: Internet of Things (2016). https://www.mongodb.com/use-cases/
internet-of-things
15. Ahonen, T., Virrankoski, R., Elmusrati, M.: Greenhouse monitoring with wireless sensor
network. In: Proceedings of the IEEE/ASME International Conference on Mecha-
tronic and Embedded Systems and Applications, pp. 403–408 (2008)
16. Thiemo Krink, R.K.U., Filipic, B.: Evolutionary algorithms in control optimiza-
tion: the greenhouse problem. In: Proceedings of the Genetic and Evolutionary
Computation Conference, pp. 440–447 (2001)
17. Zhang, N., Wang, M., Wang, N.: Precision agriculture-a worldwide overview. Com-
put. Electron. Agric. 36, 113–132 (2002)

Computer Vision Application
for Environmentally Conscious
Smart Painting Truck

Ahmed ElSayed1(&), Gazi Murat Duman2, Ozden Tozanli2, and Elif Kongar3

1 Department of Computer Science and Engineering, School of Engineering, University of Bridgeport, Bridgeport, CT 06604, USA
aelsayed@my.bridgeport.edu
2 Department of Technology Management, School of Engineering, University of Bridgeport, Bridgeport, CT 06604, USA
3 Departments of Mechanical Engineering and Technology Management, School of Engineering, University of Bridgeport, Bridgeport, CT 06604, USA

Abstract. The transportation industry is heavily regulated in the United States to


ensure passenger safety and low environmental impact. Highways, being the
primary mode of transportation, offer an exciting area of research to achieve
both objectives. For instance, road markings play an important role in main-
taining traffic safety by providing proper guidance to the vehicle drivers. The
marking activities require strict control technologies due to the properties of the
liquids used that may be hazardous to the driver and to the environment. With
this motivation, this paper proposes an environmentally friendly system for paint
consumption monitoring and inventory control that keeps track of the utilization
rate of the marking liquid (i.e., paint) while the paint trucks are on service.
Through this system the paint crew and the decision makers would receive
information regarding the amount of paint that is consumed so that refilling
would be possible without significant time loss. The system, using a shortest
path routing algorithm, directs the truck driver to the nearest refill station and
then, if needed, to the nearest customer. The real time information sharing
allows accurate billing and real time inventory control. A computer vision
system is used to monitor the amount of paint consumption where the collected
data are then sent to a multi-network system for real-time communication.

Keywords: Cloud computing · Computer vision · Environmentally conscious systems · Green road marking · Green transportation · Inventory control · Internet of things · Smart road marking

1 Introduction

Growing population and increasing number of megacities result in high traffic density
contributing to the need for new rules and regulations to address environmental health
and safety concerns. These concerns necessitate several regulations to maintain and
upgrade road infrastructure for passenger safety and to achieve carbon emission targets


for environmental well-being. Road markings play an important role in accomplishing


both goals. Major road marking types include object markers, pavement, highway and
curb markings (Charbonnier et al. 1997). This research focuses on the markings on
highways used to separate the traffic flow. Road marking operations require several
materials including the paint, reflective glass beads, epoxy, and catalyst which contain
heavy chemicals that are substantially harmful to the environment. In general, motor
vehicles equipped with tanks are used to carry the road marking liquid. Currently, level
of liquid remaining in a tank is measured manually using dip sticks, which is a
time-consuming process in addition to being potentially hazardous to human health.
Manual measurement also prohibits the vehicle from being routed to the nearest refill
station in a timely manner resulting in excessive gas consumption and inefficient labor
utilization. With these motivations, this study combines the vehicle tracking and tank
level monitoring systems and proposes an innovative approach to provide an accurate,
transparent and traceable (real-time) inventory management and vehicle tracking pro-
cess. The approach utilizes several image processing, wireless communication, and
vehicle routing technologies and algorithms. In this regard, an automated vision system
is used to estimate the liquid consumption rate by measuring the area of the markings
on the road surface through visual inspection of the painted area. A laser sensor is used
to measure the depth of the painted area for accurate paint consumption calculation.
Following this, an online reverse calculation is conducted to obtain the amount of
remaining liquid. Using the information provided, a routing algorithm is employed to
determine the nearest refill station. The proposed system provides an efficient platform
for real time inventory control and offers an environmentally benign solution by
(i) preventing excessive paint usage and thereby reducing the harmful effects of heavy
chemicals and, (ii) decreasing the overall vehicle miles travelled, thereby reducing the
carbon dioxide (CO2) emission rate.

2 Literature Review

Previous applications of image processing for road marking recognition and detection
for environmental benefit are quite limited in the literature. Related studies usually
focus on advanced driver assistance applications and intelligent transportation systems
(Charbonnier et al. 1997; Cruz et al. 2016; Kheyrollahi and Breckon 2012; Lin et al.
2016; Mathibela et al. 2015). One of the relevant studies includes three-level road
marking algorithm for automatic extraction of the repainting process parameters
Mathibela et al. (2015). In addition, Woo et al. (2008) presented a novel robotic system
for damaged lane mark detection, while Kheyrollahi and Breckon (2012) proposed a
multi-step processing pipeline for an automatic real-time road markings and text
recognition under a variety of driving, lighting and road conditions. Lin et al. (2016)
developed a road marking quality assessment mechanism where the proposed system
receives digital images of the road marking surface, processes images, and analyzes
them to capture the geometric characteristics of the marking. The geometric charac-
teristics are then assessed to determine the quality level of the finished marking.
Cruz et al. (2016) studied the environmental impacts of road markings considering the
entire life-cycle of each material used in the process. To the best of our knowledge,


there is no environmentally focused research that combines simultaneous tank-level


monitoring with vehicle tracking systems.

3 Methodology

In this paper, a real-time tank level monitoring system has been designed where the painting
vehicles are directed to the nearest refill station when the inventory level in the tanks is
low. The system allows data collection and analysis for online real time quality and
performance measurement. Resulting information is shared with multiple parties to
improve the decision making at the operational and managerial levels. The block
diagram of the proposed system is depicted in Fig. 1.
The proposed system is based on a number of assumptions explained below.
• A homogeneous fleet of single tank painting vehicles is used for road markings.
• A single material, namely paint, is used in the road marking process.
• A single refill station serves each vehicle.
• All model parameters such as the distances, tank capacity, transportation cost per
unit of traveled distance between each region are known and deterministic.

Fig. 1. Proposed multi-network system for real-time tank level monitoring.

3.1 Mathematical Model


The problem is formulated as a mixed integer linear programming (MILP) model
which aims at minimizing the routing cost when the vehicles visit the refill stations by
considering the shortest possible path. In this model, the vehicle load is disregarded
given that the transportation costs are distance-dependent.


The sets, parameters and variables of the model are explained below.
Sets
i: index for regions, i ∈ I = {1, 2, 3, …, n},
v: index for vehicles, v ∈ V = {1, 2, 3, …, m},
where
n: is the number of regions the vehicles can pass through,
m: is the number of working vehicles.
Parameters
T: transportation cost per unit distance,
arc(i, j): arc length from region i to region j (j ∈ I),
C_v^min: minimum paint level of vehicle v.
Variables
x_v: amount of paint in vehicle v,
y_ijv: 1 if arc(i, j) is visited by truck v, 0 otherwise.

Objective Function
∀v ∈ V, if x_v ≤ C_v^min in region s, find the optimum value of z to move car v from s to
the refill station t, where:

min z = min ( Σ_{v=1}^{m} Σ_{j=1}^{n} Σ_{i=1}^{n} arc(i, j) · y_ijv · T )    (1)

The objective is to minimize the total transportation cost (Eq. 1).


Constraints
Σ_j y_ijv − Σ_j y_jiv = 1 if i = s; −1 if i = t; 0 otherwise, ∀(i, s), (i, t) ∈ I and ∀v ∈ V    (2)

Σ_j y_ijv ≤ 1, ∀i ∈ I and ∀v ∈ V    (3)

y_ijv ∈ {0, 1}, ∀(i, j) ∈ I and ∀v ∈ V    (4)

Equations (2) and (3) represent the shortest path flow constraints. Equation (4)
represents the binary variable.
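One possible way to express this shortest-path MILP is sketched below with the open-source PuLP library (the paper solves the model with GAMS 24.7.4; the region set, arc lengths and source/sink regions here are made-up toy values):

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Toy data: 5 regions, arc lengths in miles, one vehicle below its minimum
# paint level in region s = 1, nearest refill station at t = 4.
regions = [1, 2, 3, 4, 5]
arc = {(1, 2): 10, (2, 3): 5, (1, 3): 20, (3, 4): 5, (2, 4): 25, (4, 5): 8}
arc.update({(j, i): d for (i, j), d in list(arc.items())})  # make arcs bidirectional
T, s, t = 0.40, 1, 4                                        # $/mile, source, sink

prob = LpProblem("refill_routing", LpMinimize)
y = {(i, j): LpVariable(f"y_{i}_{j}", cat=LpBinary) for (i, j) in arc}

# Objective (Eq. 1): total distance-dependent transportation cost.
prob += lpSum(arc[i, j] * y[i, j] * T for (i, j) in arc)

# Flow conservation (Eq. 2): leave s once, reach t once, balance elsewhere.
for i in regions:
    outflow = lpSum(y[i, j] for (a, j) in arc if a == i)
    inflow = lpSum(y[j, i] for (j, b) in arc if b == i)
    prob += outflow - inflow == (1 if i == s else -1 if i == t else 0)

# Eq. 3: each region is left at most once.
for i in regions:
    prob += lpSum(y[i, j] for (a, j) in arc if a == i) <= 1

prob.solve()
print("cost:", value(prob.objective))
print("route:", [(i, j) for (i, j) in arc if y[i, j].value() == 1])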


3.2 Computer Vision System


The proposed system consists of a tank module that will be used to capture image
frames from the two attached cameras (Fig. 2). The cameras are located on both sides
of the spray valve allowing bi-directional painting as shown in Figs. 2 and 3.

Fig. 2. Experimental cart design (showing the painting tank, camera, laser range finder and spray valve)

Fig. 3. Proposed system connected to highway painting vehicle

After the recording of the captured input stream, each captured frame is processed
using the following steps:
Step 1. Adjust the frame size for proper processing.
Step 2. Apply de-noising filter to reduce the noise in the captured image.
Step 3. Separate the captured road marking from the background and produce a
binary image using thresholding.
Step 4. Use morphological operations to fix the problems in the thresholding
process.
Step 5. Calculate the painted area based on camera calibration information.
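A minimal sketch of these five steps for a single frame, assuming OpenCV and Python as stated in Sect. 4 (the threshold strategy, kernel size and calibration factor are illustrative assumptions):

import cv2
import numpy as np

CM2_PER_PIXEL = 0.0025   # assumed camera calibration factor (cm^2 per pixel)

def painted_area_cm2(frame):
    # Step 1: adjust the frame size for processing.
    frame = cv2.resize(frame, (640, 480))
    # Step 2: de-noise the captured image.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    # Step 3: separate the bright marking from the road surface (binary image).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 4: morphological opening to fix thresholding artifacts.
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Step 5: convert the white pixel count to a physical area.
    return cv2.countNonZero(binary) * CM2_PER_PIXEL

cap = cv2.VideoCapture(0)   # live camera feed
ok, frame = cap.read()
if ok:
    print("painted area in frame: %.1f cm^2" % painted_area_cm2(frame))
cap.release()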
The system allows two laser range finders to be attached to the tank module for
accurate calculation of the thickness of the painted area, which can then be used to
calculate the paint volume. Using this information, the remaining amount of paint and
the time to refill the tank can also be calculated. Furthermore, a scale-invariant feature
transform (SIFT) (Lowe 1999) tracking technique and data from the accelerometer are
utilized to detect whether the vehicle is moving. This technique temporarily pauses the
calculations while the car is stationary, until the car starts moving or the task is
completed.


An additional advantage of using the computer vision monitoring technique is that,
based on the detected painting pattern, automatic troubleshooting can be conducted
when there is a problem in the spray valve or when the painting pipes are obstructed.

4 Numerical Experiments

The MILP model aims at minimizing the transportation cost. In the numerical
experiment, the unit transportation cost is assumed to be $0.40/mile, the maximum
paint level for each vehicle is 100 gallons, and the minimum paint level is 20 gallons.
There are a total of three painting vehicles in service and five regions. The shortest path
algorithm is solved using GAMS 24.7.4. According to the results, the total
transportation cost for the three vehicles is found to be $18.
The proposed computer vision technique has been implemented using the OpenCV
library with the Python language. A live feed from a 5 MP camera that captures 30
frames per second has been used. Figure 4 shows an example of the frame processing
steps. For this example, the consumption volume of paint for marking an area is
calculated by the computer vision system. The painted area calculated from the
captured videos is 16,179.5 cm², the thickness of the paint is 1 mm, and the amount
of paint used for this area is 0.43 gallon.
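This figure can be reproduced directly from the reported area and thickness, since one US gallon corresponds to roughly 3785.41 cm³:

area_cm2 = 16179.5        # painted area measured from the captured video
thickness_cm = 0.1        # 1 mm paint layer
CM3_PER_GALLON = 3785.41  # US liquid gallon

volume_gal = area_cm2 * thickness_cm / CM3_PER_GALLON
print(round(volume_gal, 2))   # 0.43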


Fig. 4. Road marking detection steps from the recorded video

5 Conclusion

This research introduced a novel technique which estimates the paint consumption used
for highway road marking systems and determines the minimum transportation cost.
For this purpose, a shortest path problem is defined and formulated as a mixed integer
linear program. Moreover, a computer vision-based system equipped with multiple


cameras and laser sensors is proposed in order to calculate the paint consumption
during the road marking process. Using this data the remaining time to reach the
minimum paint level in the tank is determined. The proposed system can be used for
more advanced road marking signs such as words and/or directional arrows, which are
currently handled manually.
As a further study, the smart painting system can be altered to accommodate multiple
road marking materials. Additionally, a global positioning system can be included to
determine the exact position of the painting vehicle(s) for a better functioning
shortest path algorithm.

References
Charbonnier, P., Diebolt, F., Guillard, Y., Peyret, F.: Road markings recognition using image
processing. In: IEEE Conference on Intelligent Transportation System, ITSC 1997, pp. 912–
917, 9–12 November 1997. doi:10.1109/ITSC.1997.660595
Cruz, M., Klein, A., Steiner, V.: Sustainability assessment of road marking systems. Transp. Res.
Procedia 14, 869–875 (2016). doi:10.1016/j.trpro.2016.05.035
Kheyrollahi, A., Breckon, T.P.: Automatic real-time road marking recognition using a feature
driven approach. Mach. Vis. Appl. 23, 123–133 (2012). doi:10.1007/s00138-010-0289-5
Lin, K.-L., Wu, T.-C., Wang, Y.-R.: An innovative road marking quality assessment mechanism
using computer vision. Adv. Mech. Eng. 8(6) (2016). doi:10.1177/1687814016654043
Lowe, D.G.: Object recognition from local scale-invariant features. In: Proceedings of the
Seventh IEEE International Conference on Computer Vision, vol. 1152, pp. 1150–1157
(1999). doi:10.1109/ICCV.1999.790410
Mathibela, B., Newman, P., Posner, I.: Reading the road: road marking classification and
interpretation. IEEE Trans. Intell. Transp. Syst. 16, 2072–2081 (2015). doi:10.1109/TITS.
2015.2393715
Veit, T., Tarel, J.P., Nicolle, P., Charbonnier, P.: Evaluation of road marking feature extraction.
In: 2008 11th International IEEE Conference on Intelligent Transportation Systems, pp. 174–
181, 12–15 October 2008. doi:10.1109/ITSC.2008.4732564
Woo, S., Hong, D., Lee, W.-C., Chung, J.-H., Kim, T.-H.: A robotic system for road lane
painting. Autom. Constr. 17, 122–129 (2008). doi:10.1016/j.autcon.2006.12.003

Remote Monitoring and Detection of Rail Track
Obstructions

Mohammed Misbah Uddin, Abul K.M. Azad(✉), and Veysel Demir

College of Engineering and Engineering Technology, Northern Illinois University, DeKalb, IL, USA
aazad@niu.edu

Abstract. Railway snow and sand monitoring has become an important safety
issue, as they pose a serious threat to lives, property, and security. Snow disasters
every year bring immeasurable losses to society, and sand obstructions can cause
the shutdown of railway lines for weeks. This paper presents a detailed design
and implementation of a non-contact railway monitoring system using a camera
mounted on a mobile platform. The captured image is processed to identify the
level of obstruction on a rail track. The goal is to transmit the obstruction data to
the cloud in real time and enable monitoring using a web based graphical user
interface. Once the obstruction crosses a predetermined threshold, the system will
alert the officials.

1 Introduction

Over 140,000 miles of railroad track comprise the United States (US) railway network.
About a thousand railroad incidents have been reported by the Federal Railroad
Administration in the last decade [1, 2]. Weather conditions have a major influence on
rail tracks and affect the operating efficiency, physical infrastructure, and the safe
passage of freight and people.
Despite the availability and sophistication of advanced weather information systems,
adverse snow and sand deposits continue to cause problems for railway operators. Snow
and sand accumulations can occur on rail switches, brake riggings, track flange ways,
and grade crossings, thereby reducing control over the train and increasing the risk of
derailments and other types of incidents. The wheels of locomotives and rail cars are at
risk for slipping and sliding when tracks are coated with snow and sand. Excessive sand
deposits cause track blockages, ballast ingress, and jamming of switches and gear boxes.
Figure 1 shows rail track obstructions in Cape Town and Morocco [3, 4].
A rail track consists of rails and ballast. The rails are higher than the ballast. A
train can run without much danger even if the ballast is submerged by an obstruction;
however, if snow or sand covers the rails, it poses a danger to the train's
operation [2].
This paper introduces a non-contact remote monitoring system for rail tracks that
detects the amount of obstruction on the rail track utilizing a camera on a mobile


Fig. 1. Sand/snow covered tracks: (a) Cape Town, South Africa [3]; (b) Oriental Express, Morocco [4]

platform. The images collected by the camera will be continuously monitored and the
amount of obstruction is quantified using the proposed image processing algorithm. The
obstruction data obtained from the image processing unit are sent to the cloud server for
remote monitoring. Any time the obstruction level crosses a threshold, the system will
provide an alarm for remote observers.

2 Existing Systems

The traditional method of measuring snow or sand depth involves using a scaled rod,
which is inserted into the deposited snow or sand on the ballast of the rail tracks. This
is a labor-intensive and time-consuming operation. With the development of new
technologies, automated systems have been introduced for rail track monitoring and
obstacle detection. Currently available obstruction depth measuring systems use
ultrasonic waves and lasers.

2.1 Ultrasonic Measurement System (UMS)

An Ultrasonic Measurement System (UMS) relies on sending ultrasonic pulses to the
snow- or sand-covered ballast and measuring the reflection from the ballast (Fig. 2). It
uses the difference in acoustic impedance of the reflected pulses in the presence of
obstructions (sand/snow), which causes the reflected waves to travel a different path in
order to reflect back [5]. It measures the travel time of ultrasound waves from
transmission to reception. The system is inexpensive and is a fast, non-contact method of
snow/sand depth measurement. However, the system is not sufficiently accurate and
shows an error of ±2 cm. Also, it can measure only from a stationary platform and hence
provides only a localized means of measurement.
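As a rough illustration of the time-of-flight principle described above (the speed of sound and the echo times below are assumed values, not measurements from [5]):

SPEED_OF_SOUND = 34300   # cm/s in air at roughly 20 degrees C

def echo_distance_cm(round_trip_s):
    # Distance from the sensor to the reflecting surface (half the round trip).
    return SPEED_OF_SOUND * round_trip_s / 2.0

# Depth of deposited snow/sand = distance to the bare ballast measured during
# calibration minus the distance measured with a deposit present.
baseline_cm = echo_distance_cm(0.00700)   # bare ballast (calibration echo)
current_cm = echo_distance_cm(0.00650)    # echo with a deposit on top
print("obstruction depth: %.1f cm" % (baseline_cm - current_cm))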


Fig. 2. Ultrasonic depth measurement system [5].

2.2 Laser Measurement System

This method involves sending a short burst of laser pulses to the ground and measuring
the light reflected from the snow/sand surface. It utilizes pulse and phase measurement
of the laser signals. Figure 3 demonstrates the principle of operation of a typical laser
measurement system. L1 − L2 is the difference in the distance traveled by the pulse
(before and after snow/sand is present on the tracks). By using the angle a (from Fig. 3),
the snow depth can be obtained:

hs = (L1 − L2) cos a

where hs is the snow depth. This system has a long measurement range and better
accuracy, and it is not affected by weather conditions. However, the cost of such a
system is very high. Like the previous system, it also has the drawback of a
stationary/fixed installation, thereby limiting its use to only a small area. Additionally,
it cannot monitor the whole length of the rail track.


Fig. 3. Laser measurement system [5].

3 Remote Monitoring System

The system proposed in this paper utilizes a video camera to monitor and detect
obstructions on rail tracks. A camera system is mounted on a moving platform (train or
quadcopter) pointing toward the track. The camera feed is captured periodically and is
processed using the proposed image processing algorithm to identify the rail track and, in
turn, to determine whether the track has any obstruction that could pose a danger to a
passing train. The system reads an image from the video camera and separates the rail
tracks from the background, which is in turn processed to measure the amount of rail
track coverage by snow/sand or any other object. The coverage on the ballast and rail
areas is independently processed and computed. As the rails are higher than the ballast,
the rails are used to set threshold levels to compare the amount of snow/sand present on
the tracks. Even if the ballast is completely submerged by snow/sand, it still does
not pose a serious threat to rail track operation. However, if the rails are submerged,
it is identified as an immediate danger.
Figure 4 illustrates the processes involved in the proposed monitoring system and
can be divided into three major components: image processing, obstruction monitoring,
and graphical user interface (GUI).


Fig. 4. Overall system diagram.

3.1 Image Processing

The system uses a video camera focused on the rail tracks. The images from the camera
are processed and filtered to separate the rail track from the background. The steps in
the image processing part are shown in Fig. 5.

Fig. 5. Image processing steps.

3.1.1 Image Acquisition


A webcam is used in the developed prototype system to capture the image frame at a
resolution of 640 × 480 pixels at 30 frames per second. The camera is mounted on a
mobile platform aimed toward the rail track. During the developmental phase of this
project, a 2D image of a rail track is used instead of a real life track to test the algorithm.


Also the camera is mounted on a fixed pole directed toward the rail track. The camera
is focused to acquire the views of the rail and ballast, which are the key components in
the inspection. Figure 6 shows an image of a captured frame. In real-life operation, a
camera with higher resolution will be mounted on a mobile platform such as a train, rail
car, or quad-copter.

Fig. 6. Raw image from the camera.

3.1.2 Rail Track Detection


Rail track consists of two major components, the rail and the ballast, and both are critical
for obstruction detection. The technique proposed here analyzes and detects the amount
of obstruction on both the ballast and rail regions. The rail track detection is a three step
process: Morphological Filtering for Image Enhancement, Canny Edge Detection, and
Hough Transform.

Morphological Filtering for Image Enhancement:


The image is sent through a morphological filter that serves as the initial image
enhancement step [6]. This improves the visibility of various regions into which an
image can be partitioned as well as the detectability of the features inside. The goals of
this step are to clear the image of possible noise, amplify the contrast between adjacent
areas, even out the desired features, and eliminate unwanted features. As morphological
filtering works best on binary images, the rail track image is passed through a threshold
filter to convert it into a black and white image. Figure 7 shows the output of the
thresholding process.

Fig. 7. Threshold image.


The binary image is filtered through structural elements of sizes varying from 3 × 3
to 8 × 8. The erosion of an image f(x, y) by a structural element s is denoted by f ⊖ s [6],
where ⊖ denotes the erosion between f and s. The output image g(x, y) is processed based
on the following rule, repeated for all x and y:

g(x, y) = 1 if s fits f, 0 otherwise

Thus, erosion creates a new image that marks all of the locations of a structuring
element’s origin at which it fits the input image. Figure 8 shows the eroded images
(structural element s of size 3 × 3). The operation has stripped away a layer of pixels
from the object, shrinking it in the process. Pixels are eroded from both the inner and
outer boundaries of regions, so erosion enlarges the holes enclosed by a single region
as well as makes the gap between different regions larger. Erosion also eliminates small
extrusions on a region’s boundaries. For better results the erosion process is applied
twice, both with structural element s of size (3 × 3). Much of the granular noise is
removed in the process. There is still some left that will be removed in the later steps.

Fig. 8. Eroded images with a structural element of size 3 × 3: (a) first erosion; (b) second erosion

Erosion removes small-scale details from the binary image and simultaneously also
reduces the size of regions of interest. By subtracting the eroded image of the rail track
from the original image, the boundaries of each region are found as b = f − g where, b
is an image of the region boundaries. Another process called dilation is used to smooth
out the eroded image. The dilation of the eroded image g by a structuring element ś
(denoted as g ⊕ ś ) produces a new binary image h = g ⊕ ś with ones in all locations
(x, y) of a structuring element ś origin at which that structuring element ś fits the input
image g, repeating for all x and y, i.e.
h(x, y) = 1 if ś fits g, 0 otherwise

Dilation basically has the opposite effect of erosion [6]: it adds a layer of pixels to
both the inner and outer boundaries of regions, smoothing out the regions of interest
from the rails (as illustrated in Fig. 9). Erosion removes the unwanted noise, and the
dilation process enhances and amplifies the remaining regions for better processing. For
better results, three iterations of the dilation process are applied with s (4 × 4).
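A brief sketch of the erosion and dilation steps described above, written with OpenCV for illustration (the paper does not name the library used for this part, and the input file name is a placeholder):

import cv2
import numpy as np

binary = cv2.imread("rail_track_threshold.png", cv2.IMREAD_GRAYSCALE)

# Two erosions with a 3x3 structuring element strip away granular noise.
s_small = np.ones((3, 3), np.uint8)
eroded = cv2.erode(binary, s_small, iterations=2)

# Region boundaries: b = f - g (original minus eroded image).
boundaries = cv2.subtract(binary, eroded)

# Three dilations with a 4x4 structuring element smooth the surviving regions.
s_large = np.ones((4, 4), np.uint8)
smoothed = cv2.dilate(eroded, s_large, iterations=3)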

Fig. 9. Dilation process applied with s (4 × 4): (a) first dilation; (b) second dilation; (c) third dilation

Canny Edge Detection


The original image f is sent to a canny edge filter to extract the edges (Fig. 10). The
process involves three steps:

Fig. 10. Image after Canny edge detection.

A Gaussian low pass filter of suitable standard deviation (𝜎 = 4.0) is used for
smoothing out the image. Since edge detection is susceptible to noise in the image, this
step is essential to remove the noise and also smooth out the weak edges.
The smoothed image is then filtered with a Sobel kernel in the horizontal and vertical directions to obtain the first derivatives Gx and Gy. These two derivative images give the edge gradient and direction for each pixel as follows [6]:

Edge_Gradient(G) = √(Gx² + Gy²)

Angle(θ) = tan⁻¹(Gy∕Gx)

where Gx is the derivative in the horizontal direction and Gy the derivative in the vertical direction.


Gradient direction is always perpendicular to edges. It is rounded to one of four angles representing the vertical, horizontal and two diagonal directions. After finding the gradient magnitude and direction, a full scan of the image is performed to remove any unwanted pixels that may not constitute an edge: every pixel is checked to determine whether it is a local maximum in its neighborhood in the direction of the gradient. Edge detection is then finalized by hysteresis thresholding, which suppresses all remaining weak edges that are not connected to strong edges.
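A minimal sketch of this edge-detection step with OpenCV 2.4.x is given below; only σ = 4.0 is taken from the text, while the two hysteresis thresholds are illustrative assumptions.

#include <opencv2/opencv.hpp>

// Returns the edge image of Fig. 10 (thresholds 50/150 are assumed values).
cv::Mat detectEdges(const cv::Mat& gray) {
    // Gaussian low pass filter with sigma = 4.0; kernel size derived from sigma.
    cv::Mat smoothed;
    cv::GaussianBlur(gray, smoothed, cv::Size(0, 0), 4.0);

    // Canny applies the Sobel derivatives, gradient magnitude/direction,
    // non-maximum suppression and hysteresis thresholding internally.
    cv::Mat edges;
    cv::Canny(smoothed, edges, 50, 150);
    return edges;
}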

Hough Transform
Hough transform is used for feature extraction to isolate the edges that correspond
to the rail tracks from the others [10]. Due to imperfections in either the image data or
the morphological filter, there may be missing points or pixels on the desired curves as
well as spatial deviations between the ideal line/circle/ellipse and the noisy edge points
as they are obtained from the edge detector. Figure 11 shows how the four points can
be interpreted by the edge detector.

Fig. 11. Possible combinations of lines for a 4-point space.

The image obtained from the morphological filter is passed through a low pass filter
to locate regions that are either 1 or 0. This yields a hard filtered image with edges (not
yet useful for rail detection). The basic concept is to accumulate all the points in the low-pass filtered image and, for each point, determine its parameter space and how many lines intersect it. A point (x, y) will have a line

y = ax + b,

passing in a space defined by a and b. Similarly there will be another line:

y′ = a′ x′ + b′ ,

present in space a′ , b′.


Hough transform first accumulates all the spaces using the points and these are
superimposed on the edge detected image. This shows the enhanced features of the rail
track. Figure 12 shows the output of Hough transform performed on the morphological
image. This is used only to extract the ballast region. Only the edges corresponding to horizontal line segments are used; the others are rejected.
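The following sketch shows one possible way to keep only near-horizontal segments using OpenCV's probabilistic Hough transform; the accumulator, length and angular-tolerance parameters are illustrative assumptions, not values from the paper.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Keep only near-horizontal line segments (used here to extract the ballast region).
std::vector<cv::Vec4i> horizontalSegments(const cv::Mat& edges) {
    std::vector<cv::Vec4i> lines, horizontal;
    // rho = 1 px, theta = 1 degree; threshold and length limits are assumptions.
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 30, 10);
    for (size_t i = 0; i < lines.size(); ++i) {
        const cv::Vec4i& l = lines[i];
        double angle = std::atan2(double(l[3] - l[1]), double(l[2] - l[0])) * 180.0 / CV_PI;
        if (std::fabs(angle) < 10.0)   // within +/- 10 degrees of horizontal
            horizontal.push_back(l);
    }
    return horizontal;
}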


Fig. 12. Hough transform of image h(x, y)

Figure 13 shows the final image output from the processing unit. The edges from the
canny edge detector and Hough transform are superimposed in this final image. This
shows the rail area and ballast area separately. Different colors are used to highlight
different regions of the rail tracks. As snow/sand starts to accumulate on the tracks, the
amount of rail track area visible in the image starts to decrease. This information is
extracted in the form of percentage of area covered by the obstruction.

Fig. 13. Processed image.

3.1.3 Area Estimation


Area estimation involves extracting the individual components of the rail track and applying the obstruction detection technique. Individual components are detected by segmenting
them with respect to the edges detected, as mentioned in the Canny Edge Detection
section. Once the longest edges are detected and the ballast is identified, area segmen‐
tation is applied. Area segmentation extracts the individual rail track regions for better
processing. The areas of the ballast and the locations of the rails are extracted individually by setting Regions of Interest (ROIs) [6–8]. This is essential in determining the level of
obstruction present on each portion of the track. Obstruction detection is performed


using blob detection [9–12]. Blobs are considered homogeneous areas that have similar properties (such as color and shape). Blob detection identifies and extracts the sand/snow/rock
particles present on the ROIs extracted in the previous step.
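A possible implementation of this step with the SimpleBlobDetector of OpenCV 2.4.x is sketched below; the ROI rectangle and the blob-size and color limits are illustrative assumptions, since in the real system the ROIs come from the previously detected edges.

#include <opencv2/opencv.hpp>
#include <vector>

// Detect sand/snow/rock blobs inside one ROI of the thresholded image.
std::vector<cv::KeyPoint> detectObstructionBlobs(const cv::Mat& binary,
                                                 const cv::Rect& roi) {
    cv::Mat region = binary(roi);                 // extract the rail/ballast ROI

    cv::SimpleBlobDetector::Params params;
    params.filterByArea = true;                   // homogeneous areas of similar size
    params.minArea = 20;
    params.maxArea = 5000;
    params.filterByColor = true;                  // bright particles on dark background
    params.blobColor = 255;

    cv::SimpleBlobDetector detector(params);      // OpenCV 2.4.x constructor
    std::vector<cv::KeyPoint> blobs;
    detector.detect(region, blobs);
    return blobs;
}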

3.1.4 Obstruction Estimation


The final step of the image processing unit is estimating the amount of obstruction
present on the rail track. This is computed using a combination of contour detection and
pixel estimation [8, 9]. Contour detection is applied to identify the relatively large objects such as stones, rocks and pebbles, while pixel estimation is used to identify both small and large objects; small objects can be sand or snow. The number of pixels present in the threshold image provides an estimate of the amount of obstruction present on top of the rails. The count of horizontal edges of the rails previously detected by the
covers the rail tracks, the edges that correspond to the ballast region cannot be detected
anymore.
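The estimate could be computed along the following lines; the minimum contour area used to separate the "bigger" objects is an illustrative assumption.

#include <opencv2/opencv.hpp>
#include <vector>

// Percentage of the ROI covered by obstruction, plus a count of larger objects.
double obstructionPercentage(const cv::Mat& thresholdedRoi, int& largeObjects) {
    // Contour detection identifies relatively large objects (stones, rocks, pebbles).
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat work = thresholdedRoi.clone();        // findContours modifies its input
    cv::findContours(work, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    largeObjects = 0;
    for (size_t i = 0; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > 100.0) // assumed minimum area in pixels
            ++largeObjects;

    // Pixel estimation covers both small (sand/snow) and large objects.
    double covered = static_cast<double>(cv::countNonZero(thresholdedRoi));
    return 100.0 * covered / (thresholdedRoi.rows * thresholdedRoi.cols);
}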

3.1.5 Image Processing Application Development


OpenCV (version# 2.4.13) is used to implement all the image processing steps, which
include morphological filtering, Canny edge detection, Hough transform, area estima‐
tion, obstruction detection and quantification. The whole process is summarized in
Fig. 14. Visual Studio 2013 is employed to develop the OpenCV based obstruction
detection code using C++ programming language. The obstruction levels are recorded
into a .CSV (comma separated values) file.

Fig. 14. Image processing application development steps.

We added sand, pebbles, and stones to the 2D rail track for testing purposes. The
amount of obstruction added was recorded, and the image processing unit (IPU) detec‐
tion and estimation of these obstructions were performed. Figure 15 compares the obstruction levels estimated by the IPU with those obtained by manual calculation. The percentage of area covered by the obstruction seems to closely match the values from the manual estimation.


Fig. 15. Manual and IPU approach for estimating obstruction levels.

3.2 Obstruction Monitoring

The system is designed for remote monitoring of obstructions present on rail tracks. The
IPU processes and detects the amount of obstruction present on the tracks, which is then
sent to an online server in real time. Simultaneously, GPS coordinates from an on-board chip provide the location of the obstruction. Both the obstruction values from the IPU and the GPS location, along with a time stamp, are sent to the server in the form of a .CSV file. Threshold levels can be adjusted by a user depending upon an esti‐
mated critical value for a train line. The critical value is set based on the amount of
allowable coverage of the tracks. If the rails are found to be covered by the IPU, then a
flag is raised and sent to the server via the data file. The server reads the flag field and
raises an alert on the graph with the corresponding GPS location.
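A minimal sketch of how such a record could be appended to the .CSV data file is shown below; the exact column order is an assumption, since the paper only names the fields (time stamp, GPS location, obstruction level and alert flag).

#include <fstream>
#include <string>

// Append one monitoring record: time stamp, GPS position, obstruction level, alert flag.
void appendRecord(const std::string& csvPath, const std::string& timestamp,
                  double latitude, double longitude,
                  double obstructionPercent, double criticalPercent) {
    std::ofstream csv(csvPath.c_str(), std::ios::app);
    int flag = (obstructionPercent > criticalPercent) ? 1 : 0;   // raise the alert flag
    csv << timestamp << ',' << latitude << ',' << longitude << ','
        << obstructionPercent << ',' << flag << '\n';
}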

3.3 Graphical User Interface


An ASP.NET application is designed that allows the user to remotely track and view
the obstruction levels in a graph format along with a real time camera feed within a
graphical user interface (GUI) (Fig. 16). The graph includes corresponding GPS infor‐
mation along with the time stamp. Figure 17 shows the information flow for the devel‐
oped system.


Fig. 16. ASP.NET web page.

Fig. 17. Information sources for the ASP.NET application.

Visual Studio 2013 is used to develop the ASP.NET application. Camera feed of the
rail track is projected on the page using JavaScript code. The graphical layout is
displayed using CanvasJS charts, which allows us to make interactive graphs with
animated projections (Fig. 18).


Fig. 18. ASP.NET application block diagram.

Figure 18 shows how the different JavaScript codes are used to project the raw image
from the camera and the obstruction levels with GPS locations on the ASP.NET GUI.

4 Conclusion

The paper describes the development of a remote railroad obstruction detection system
utilizing image processing techniques. A real time camera output is processed and passed
to the cloud and presented to the remote users via a GUI. The project involves detecting the individual components of a rail track, detecting and calculating obstruction levels, and providing a cloud-based remote monitoring user interface. The system has been designed,
developed, and tested with a 2D rail track image. The system can be extended for a
physical 3D track using a mobile camera mounted on a drone. Alerts can be set to trigger
emails or text messages to concerned officials.

References

1. Phillips, D.A.: RWDI Consulting Engineers & Scientists, Analysis of Potential Sand Dune
Impacts on the Railway Tracks and Methods of Mitigation. www.iktissadevents.com/files/
events/gtrc/1/presentations/d2-s8-duncan-phillips.pdf


2. Rossetti, M.A.: A Session 4A, Advances and Applications in Transportation Weather,


Analysis of Weather Events on U.S Railroads (2007)
3. Bamford, H.: Western Cape, Train Track Disappears Under Sand, 16 September 2015
4. Sand on the tracks stalls Spain-Saudi desert rail project. http://www.koratravelgroup.com/en/package/desert-train/, http://www.thelocal.es/20160309/row-over-sandy-tracks-hits-spain-saudi-rail-project (2016)
5. Xiong, J., Zhu, L., Qin, Y., Cheng, X., Kou, L., Zhang, Z.: Railway snow measuring methods
and system design, chapter 79. In: 2015 International Conference on Electrical and
Information Technologies for Rail Transportation (2016)
6. Efford, N.: Pearson Education Limited, Digital Image Processing – A Practical Introduction
Using Java (2000)
7. Kapsalas, P., Rapantzikos, K., Sofou, A., Avrithis, Y.: Regions of interest for accurate object
detection. In: International Workshop on Content-Based Multimedia Indexing (2008)
8. Young, I.T., Gerbrands, J.J., Vliet, L.J.: Fundamentals of Image Processing. PH Publications,
Delft (1998)
9. Yogamangalam, R., Karthikeyan, B.: Segmentation techniques comparison in image
processing. In: International Journal of Engineering and Technology (2013)
10. Szeliski, R.: Computer Vision: Algorithms and Applications. Springer, New York (2010)
11. Kaspers, A.: Blob Detection, Biomedical Image, Sciences, Image Sciences Institute, UMC
Utrecht (2011)
12. Hinz, S.: Fast and subpixel precise blob detection and attribution. In: Proceedings ICIP 2005, Genoa (2005)

Improving Communication Between Unmanned Aerial
Vehicles and Ground Control Station Using Antenna
Tracking Systems

Sebastian Pop, Marius Cristian Luculescu, Luciana Cristea,


Constantin Sorin Zamfira (✉), and Attila Laszlo Boer

Transilvania University of Brașov, Eroilor 29, 500036 Brașov, Romania


{pop.sebastian,lucmar,lcristea,zamfira,boera}@unitbv.ro

Abstract. Unmanned Aerial Vehicles (UAVs) are used for inspections of critical infrastructure such as oil, gas and water pipelines, power networks and dams, for monitoring crop vegetation status in precision farming, and in forest-fire and rescue operations. The paper presents a method to continuously maintain the Line Of Sight (LOS) between a Ground Control Station (GCS) and an UAV in order to improve the geo-reference accuracy of the acquired data. This need arose within a research project for monitoring crop vegetation status, where the UAV communicates with the GCS during the data acquisition process. To solve the problem, two pan-tilt Antenna Tracking System (ATS) modules were attached to the UAV and the GCS. They are controlled by a microcontroller development board in order to keep the LOS between UAV and GCS.

Keywords: Unmanned Aerial Vehicle · Ground Control Station · Antenna


Tracking System · Spectrometer

1 Introduction

Information transfer between two systems can be achieved by using communication


interfaces and protocols. Each system has its own interface used for sending and
receiving data. One of the two systems can be an Unmanned Aerial Vehicle (UAV) so
that wireless communication has to be used. There are a lot of opportunities and chal‐
lenges in doing this [1]. The other system can be a Ground Control Station (GCS) that
has to receive information from the UAV and send commands to it. Data transmission
must be accurate and continuous [2]. The systems can communicate in Line-Of-Sight (LOS) or Beyond-Line-Of-Sight (BLOS); in the latter case communication is of critical importance [3].
Basically there are two methods to establish the communication between UAV and GCS. The UAV is equipped with an omnidirectional antenna system, and the GCS can be equipped with either an omnidirectional antenna or a directional one. In existing systems,
with a transmitter power of 500 mW between two omnidirectional antennas (on UAV
and GCS), broadcast quality video (without interference) can be achieved over a distance
of 500–600 m depending on external conditions.

© Springer International Publishing AG 2018


M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_49


With an omnidirectional antenna system mounted on the UAV and a directional antenna on the GCS, the transmission distance with the same type of transmitter can be increased up to 1000 m (LOS distance). In most cases, an omnidirectional antenna is placed on board of the UAV and a pan-tilt directional antenna module is placed on the GCS to track the UAV. Some military drones carry specially designed directional antennas.
UAV. Some military drones have mounted specially designed directional antennas.

2 Problem Formulation

It is necessary to identify a method and develop a system to continuously maintain the LOS between a GCS and an UAV in order to improve the geo-reference accuracy of the acquired data [4]. The system should have the possibility to be mounted not only on an UAV, but also on different types of mobile vehicles (tractors and so on), in order to ensure the monitoring of crop vegetation status.

3 Problem Solution

3.1 Data Acquisition System

For monitoring the crop vegetation status [5] a specially designed data acquisition system with different types of sensors was used. The most important sensor is composed of four STS VIS-NIR Ocean Optics spectrometers [6]. They are mounted on an UAV capable of carrying the necessary equipment. Hyperspectral data obtained from the spectrometers are recorded through software developed by our research team. A Raspberry Pi 3 (RPi3) development board was used to benefit from its computing power, graphics facilities and built-in access point capability. The access point is a wireless communication (WiFi) component, which helps transfer information between the GCS and the data acquisition system placed on the UAV. Real-time data access is limited by the wireless communication distance between the GCS and the UAV.
The GCS controls a standard Antenna Tracking System (ATS) that follows the position of the UAV. Being a commercial and open-source solution currently used for First Person View (FPV) flight, it is not described in detail here. A similar ATS is also mounted on the UAV.

3.2 GCS ATS Operation


The GCS, equipped with a pan-tilt ATS module, receives GPS (Global Positioning System) data packets from the UAV in the GPS communication protocol format. From these data, the position and flight height are required.
The GCS control module, through specific software, transforms the data received from the UAV GPS into azimuth and elevation commands to drive the actuators of the GCS pan-tilt module. Beforehand, the Home position is set according to the UAV GPS data. The developed software takes into account different cases that may generate communication errors between GCS and UAV.


In Fig. 1 the GCS antenna tracking system is presented. It consists of a control device connected with a microcontroller development board, an Inertial Measurement Unit (IMU) sensor, a GPS module, and two servos used as actuators, one for pitch correction and the other for azimuth correction.

Fig. 1. GCS antenna tracking system

3.3 UAV ATS Operation

Figure 2 presents the structure of the UAV antenna tracking system together with the data acquisition system used for crop vegetation status monitoring. The UAV is equipped with the same type of pan-tilt ATS module as the GCS.

Fig. 2. Structure of the UAV antenna tracking system and data acquisition system

Using a control device (a Companion) connected with a microcontroller devel‐


opment board (Arduino, Raspberry Pi), the software identifies the position and the
parallelism between the two antennas, placed one on the GCS and the other on the


UAV. The magnetometer and barometer sensors provide data that are used for keeping the UAV in the LOS.

3.4 ATS Implementation


For the tracking task, the starting idea was to collect the data necessary to command the servos from the autopilot Inertial Measurement Unit (IMU) sensor. In this case two autopilots are necessary, one on the UAV and one on the GCS. Because the data acquisition system should have the possibility to be mounted not only on an UAV, but also on different types of mobile platforms, we decided to build our own guidance system based on an inertial measurement unit, acting as an independent device.
Inspired by military systems, we implemented an antenna tracker on the UAV to permanently follow the standard antenna tracker controlled by the GCS, in order to maintain directivity in the flight direction. To reduce or avoid communication delays and errors, parallelism between the antennas has to be obtained.
In order to realize the ATS on the UAV we used an IMU–10DOF Breakout from
Adafruit (Fig. 3) [7] and a NEO M8N GPS module [8] (Fig. 4). These modules were
connected to an Arduino microcontroller board to obtain data about the current system
(GCS or UAV) antenna position, namely distance, azimuth and height from ground. As
actuators we used two servos, one for pitch correction and the other for azimuth correction, so that the two antennas are always in the LOS.

Fig. 3. 10 DOF IMU module
Fig. 4. NEO M8N GPS module

This antenna tracking system can be mounted on different vehicles (rover or UAV) and, most importantly, it does not depend on a flight controller.
The IMU–10DOF Breakout sensors have the following functions:
• The LSM303 module has a 3D digital linear acceleration sensor and a 3D digital magnetic sensor. The LSM303 provides the GCS compass heading from the three axes of the magnetometer. Before starting it is necessary to calibrate the sensor in order to know the GCS azimuth. Data obtained from the three-axis accelerometer were not used in our application;
• The L3GD20 module is a three-axis angular rate sensor, used to obtain roll, pitch and yaw data. The pitch angle of the UAV antenna tracker helps keep constant steering towards the GCS;


• BMP180 module is a high precision digital atmospheric pressure and temperature


sensor. As pressure changes with altitude, based on pressure differences between the
point of departure and the point where the UAV is in flight, we calculate the altitude
above ground level (AGL – altitude);
• The GPS NEO M8N module is used to determine the spatial position of the UAV antenna tracker in the WGS84 (World Geodetic System 1984) coordinate system.
The actuators consist of a continuous rotation servo (CRS) for azimuthal orientation, allowing 360° spinning, and a standard servo (SS) for orientation in the pitch angle.

4 Results

4.1 Data Acquisition Software

In Fig. 5 the main data acquisition window is presented. It includes values related to the
acquisition process: spectrometers identifiers, number of spectrum reading iterations,
integration time, reference and zero point levels, data acquisition control (start and stop),
and reflection coefficients charts.
Hyperspectral data acquired from the spectrometers are stored on an SD card by the software developed for the RPi3 or can be sent in real time to the GCS. The collected data are used to compute the Normalized Difference Vegetation Index (NDVI) to estimate crop yields. The accurate starting point of the measurement has to be known so that the information can be correlated with the map configuration.
Communication delays can occur in transmission between the UAV and the GCS, preventing real-time data from being obtained from the spectrometers.
Each flight mission is programmed in the GCS. An example of a flight mission prepared in the Mission Planner software [9] is presented in Fig. 6.

Fig. 5. The main data acquisition window


Fig. 6. Example of a flight mission prepared in the GCS software
Fig. 7. Relative position of the UAV ATS and GCS antenna

4.2 The Calculation of UAV Antenna Position Related to the Pitch Angle

In Fig. 7 we present the elements necessary to determine the relative position of the UAV antenna tracker and the GCS antenna. The UAV ATS controller records the takeoff position and continuously reads the data from the GPS.
It permanently calculates the distance between the projection of the UAV GPS position on the ground and the fixed GCS position. Using the GPS data, the UAV ATS controller computes the distance d between the two antennas.
To calculate the distance d between the two antennas we had to choose between two methods used in navigation: following a rhumb line (or loxodrome) or applying the haversine formula.
A rhumb line, or loxodrome [10] is an arc crossing all meridians of longitude at the
same angle, i.e. a path with constant bearing as measured relative to true or magnetic
north. This method is recommended to calculate the distance d over long range
(>1000 km).
The haversine formula is an equation used to calculate distances between two points on a sphere from their longitude and latitude co-ordinates. We implemented it in our software, taking advantage of the GPS data (longitudes and latitudes) obtained from the UAV ATS controller. Usually, GPS co-ordinates, longitude and latitude, are given in deg-min-sec format suffixed with N/S/E/W (e.g. 45°37′60″N, 25°34′60″E). To apply the haversine formula, these co-ordinates must be converted to signed decimal degrees without compass direction, where negative indicates west/south (e.g. 45.6333, 25.45 or 40.798302, −73.985006).
The haversine formula is:

a = sin²(Δϕ∕2) + cos ϕ1 ∗ cos ϕ2 ∗ sin²(Δλ∕2)   (1)

c = 2 ∗ atan2(√a, √(1 − a))   (2)

d = R ∗ c   (3)


where Δϕ = lat2 − lat1 is the difference of latitude, Δλ = lon2 − lon1 is the difference of longitude, R = 6371 km is the radius of the Earth, and d is the computed distance between the Ground Control Station (GCS) and the UAV.
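A direct implementation of Eqs. (1)–(3), as it could run on the ATS controller, is sketched below (C++; coordinates in signed decimal degrees, result in kilometres).

#include <cmath>

// Haversine distance (Eqs. 1-3); inputs in signed decimal degrees, result in km.
double haversineDistanceKm(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371.0;                        // Earth radius in km
    const double toRad = 3.14159265358979323846 / 180.0;
    double dPhi    = (lat2 - lat1) * toRad;         // difference of latitude
    double dLambda = (lon2 - lon1) * toRad;         // difference of longitude
    double a = std::sin(dPhi / 2) * std::sin(dPhi / 2) +
               std::cos(lat1 * toRad) * std::cos(lat2 * toRad) *
               std::sin(dLambda / 2) * std::sin(dLambda / 2);      // Eq. (1)
    double c = 2.0 * std::atan2(std::sqrt(a), std::sqrt(1.0 - a)); // Eq. (2)
    return R * c;                                                  // Eq. (3)
}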
The BMP180 module [11] provides atmospheric pressure data, used to compute the altitude H at which the UAV antenna is positioned during the flight. The basic formula for absolute altitude is:

H altitude = 44330 ∗ (1 − (p∕p0)^(1∕5.255))   (4)

where p is the pressure measured by the BMP180 sensor and p0 is the pressure at sea level, e.g. 1013.25 hPa.
Based on the d and H values, the controller determines the pitch angle required to keep the LOS between UAV and GCS. The pitch angle is converted into a Pulse-Width Modulation (PWM) signal to command the standard servo. Together with the pitch angle command, values from the L3GD20 module are used to compute the correction of the angular position of the UAV ATS, also taking the aircraft pitch into account.
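The following sketch combines Eq. (4) with the pitch-angle computation described above; the use of the takeoff pressure as reference (to obtain the AGL altitude) and the PWM mapping range are assumptions made for illustration only.

#include <cmath>

// Altitude above the reference pressure level, Eq. (4); pressures in hPa.
double altitudeAGL(double p, double p0AtTakeoff) {
    // Using the pressure at the point of departure as p0 yields altitude above ground.
    return 44330.0 * (1.0 - std::pow(p / p0AtTakeoff, 1.0 / 5.255));
}

// Pitch angle (degrees) needed to point the antenna towards the other station.
double pitchAngleDeg(double distanceM, double altitudeM) {
    return std::atan2(altitudeM, distanceM) * 180.0 / 3.14159265358979323846;
}

// Map 0..90 degrees onto a typical 1000..2000 us servo pulse (assumed servo range).
int pitchToPwmMicroseconds(double pitchDeg) {
    return 1000 + static_cast<int>(pitchDeg / 90.0 * 1000.0);
}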

4.3 The Calculation of the Position of the Antenna Azimuth Angle Versus UAV
ATS

To compute the azimuthal orientation toward magnetic north, between the GCS and
UAV ATS, it is necessary to know the azimuth angle which is given by the LSM303
module and is recorded by the UAV ATS controller. An initial calibration has to be
made in order to read the takeoff azimuth angle. The LSM303 module must be level to
the earth’s surface. Tilt compensation circuits and techniques can be used to normalize
the magnetometer reading to correct all the influences especially magnetic interferences
and the effect of the earth’s field [12].
The x and y components of the earth’s magnetic field, that is, the directions planar with the earth’s surface, are used for the computation of α, the magnetic compass heading angle. The magnetometer’s x and y readings are used in the following set of equations:

y > 0: α = 90 − (arctan(x∕y)) ∗ 180∕π; (5)

y < 0: α = 270 − (arctan(x∕y)) ∗ 180∕π; (6)

y = 0, x < 0: α = 180; (7)

y = 0, x > 0: α = 0. (8)
To determine true north heading, we must add or subtract the appropriate declination
angle. The declination angle for Brașov county is +5°35′.
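A compact sketch of Eqs. (5)–(8) with the declination correction is given below; the handling of the boundary cases follows the standard compass-heading computation and is an assumption where the text is ambiguous.

#include <cmath>

// Magnetic compass heading from the planar magnetometer readings (Eqs. 5-8),
// corrected with the local declination to obtain the true-north heading.
double compassHeadingDeg(double x, double y) {
    const double pi = 3.14159265358979323846;
    const double declinationDeg = 5.0 + 35.0 / 60.0;     // +5 deg 35 min (Brasov county)
    double alpha;
    if (y > 0)
        alpha = 90.0 - std::atan(x / y) * 180.0 / pi;    // Eq. (5)
    else if (y < 0)
        alpha = 270.0 - std::atan(x / y) * 180.0 / pi;   // Eq. (6)
    else
        alpha = (x < 0) ? 180.0 : 0.0;                   // Eqs. (7) and (8)
    alpha += declinationDeg;                             // correct towards true north
    if (alpha >= 360.0) alpha -= 360.0;
    if (alpha < 0.0)    alpha += 360.0;
    return alpha;
}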
During the flight the controller azimuth values are recorded. The azimuth angle of the takeoff position is added to or subtracted from the recorded azimuth angle values. The GPS pitch angle and the antenna azimuth angle enter an averaging subroutine in order to minimize tracking errors. The resulting averaged values are converted into PWM signals to command the CRS.

5 Conclusions

The paper presents a new approach to improve the communication between the pan-tilt UAV ATS module and the pan-tilt ATS module on the GCS, using only commercial antennas, in order to improve the quality and the distance over which they can safely transmit the information.
The ATS is controlled by a Companion-microcontroller development board, which is responsible for maintaining the LOS between GCS and UAV.
In the near future, different types of communication interfaces and antennas will be tested.

Acknowledgment. This paper was realized within the Partnerships Programme in priority
domains-PN-II, which runs with the financial support of MEN-UEFISCDI, Project no. PN-II-PT-
PCCA-2013-4-1629 and it was also supported by a grant of the Romanian National Authority for
Scientific Research and Innovation, CNCS/CCCDI – UEFISCDI, project number PN-III-P2-2.1-
BG-2016-0132, within PNCDI III.

References

1. Zeng, Y., Zhang, R., Lim, T.J.: Wireless communications with unmanned aerial vehicles:
opportunities and challenges. In: IEEE Communications Magazine, vol. 54, Issue no. 5, pp.
36–42 (2016)
2. Çuhadar, I., Dursun, M.: Unmanned air vehicle system’s data links. J. Autom. Control Eng.
4(3), 189–193 (2016)
3. Li, B., Jiang, Y., Sun, J., Cai, L., Wen, C.-Y.: Development and testing of a two-UAV
communication relay system. Sensors 16(10), 1696 (2016). doi:10.3390/s16101696
4. Barton, J.D.: Fundamentals of small unmanned aircraft flight. Johns Hopkins Apl. Tech.
Digest 31(2), 132–149 (2012)
5. Thenkabail, P.S., Lyon, J.G., Huete, A.: Hyperspectral Remote Sensing of Vegetation. CRC
Press, New York (2016). ISBN 1439845387, 9781439845387
6. Ocean Optics Spectrometers, STS Series. http://oceanoptics.com/product-category/sts-
series/. Accessed 20 September 2016
7. https://www.adafruit.com/product/1604. Accessed 4 Apr 2016
8. https://www.u-blox.com/en/product/neo-m8-series. Accessed 9 Apr 2016
9. http://ardupilot.org/planner/docs/mission-planner-overview.html. Accessed 4 May 2015
10. Alexander, J.: A rhumb way to go. Math. Mag. 77(5), 349–356 (2004)
11. https://www.bosch-sensortec.com/bst/products/all_products/bmp180. BST-BMP180-
DS000-09. Pdf. Accessed 4 May 2016
12. Bingaman, A.N.: Tilt-Compensated Magnetic Field Sensor, Master of Science in Electrical
Engineering Thesis, Blacksburg, Virginia, 13 May 2010

Remote RF Testing Using Software
Defined Radio

Stephen Miller(B) and Brent Horine

Manhattan College, Bronx, USA


{smiller02,brent.horine}@manhattan.edu

Abstract. A virtual radio frequency test environment is described using


an FPGA with dual core ARM processors to implement transmitter and
receiver chains in the digital domain and a single chip RF agile trans-
ceiver. These components are commercially available on development
boards with software and firmware to provide the interface. A driver
amplifier and a power amplifier are evaluated in the test set for lin-
earization experiments.

Keywords: RF power amplifier · Software defined radio · Virtual test ·


Linearization

1 Introduction

The wireless revolution has paralleled the Internet revolution and continues to
act as an economic driver. One of the economic characteristics of the Internet
economy is the scalability of software start ups. Venture capital has gravitated
towards start up organizations that have low cost of entry and fast time to
market. In contrast, hardware start ups face big capital investments and long
time to market horizons. This inhibits the flow of capital to these start-ups, and ultimately places obstacles in the way of innovation. While there are many costs associated
with bringing hardware to market, a significant one is test and evaluation. Radio
frequency (RF) test equipment is expensive and special purpose. Software defined
radio (SDR) [8] moves the implementation of radio functions from the analog,
component element domain to the software domain running on general purpose
processors. This same technology is being used in test equipment and has the
potential to virtualize RF testing.
The University of Utah runs Emulab, a large scale testbed including low cost
PCs, WiFi access points, and importantly, USRPs, the Universal Software Radio
Peripheral [7]. Similarly, Rutgers University runs the ORBIT test bed using
software defined radios for evaluating wireless networks [9]. These systems allow
remote users to configure and run tests on a large number of network nodes. The
focus is on the network protocol stack and examining different routing and media

The authors would like to thank the Manhattan College Summer Research Scholars
program for supporting this work.

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_50


access control (MAC) layers. Because they are based on software defined radio,
there is some possibility to run experiments at the physical (PHY) layer. The
modulation format can be implemented in GNU Radio running on commodity
PCs [1].
The intention of this research is to push the virtualization line down to the
transmitter and receiver line up. The line up generally refers to digital up con-
version, crest factor reduction, digital predistortion, digital to analog conversion,
analog up conversion, and amplification for the transmitter. On the receiver side,
the line up starts with a preselector filter, low noise amplifier, analog down con-
version, analog to digital conversion, and digital down conversion. Receivers also
include channel estimation and equalization and the demodulation and decision
process. RF testing is expensive due to the specialized equipment and compo-
nents required. Organizations from start ups to university laboratories may not
have the capital budget to afford this equipment. An RF laboratory as a service
model could change the economic model of the hardware startup.
Ultimately, it is important to evaluate wireless systems in realistic channel
conditions. Propagation emulators are also very expensive and only provide an
estimate of real conditions. Being able to host a wireless prototype system in an
actual field scenario could be very effective. One particular relevant use case is
the Industrial Internet of Things (IIoT). This field is characterized by a severe multipath distortion and interference environment, which has a significant impact on the feasibility of any given solution. Indoor wireless applications suffer from the
same issues as IIoT and are difficult to model accurately. Labor intensive walk
tests are required to accurately map coverage patterns. These use a backpack
full of consumer phones in a logging mode. This only produces high level data.
Gathering more fundamental coverage data today requires bulky test equipment.
They often are conducted by an engineer that maps the signal strength versus
location and interferers. Other environments, especially urban developments,
require a three dimensional coverage map. Remotely accessed surveying equip-
ment could provide this useful feature, perhaps even a drone mounted system.
All of these applications can be satisfied by a software defined radio platform.
In particular, we propose a system on chip that mates to an integrated chip
with data conversion and RF upconversion. An example is the Xilinx Zynq
processor with dual-core Cortex A9 ARM processors and Virtex 7 FPGA fabric
as hosted on the ZEDBoard and the Analog Devices AD9361 Integrated RF Agile
Transceiver. This system allows for broad frequency coverage, diverse filtering
options, FPGA options, real-time SDR implementation on one core, and test
executive software running in Linux on the other core. This paper outlines the
construction of the software defined radio signal chain, software control program,
and depicts a sample of its potential usage. Furthermore, this paper discusses
the future potential usage of this system to address pressing issues regarding
high frequency communications.


2 Software Defined Radio Chain


The SDR board used throughout this paper is an Analog Devices FMCOMMS2
with an AD9361 transceiver for dual-channel signal transmission and reception.
This single chip performs digital up and down conversion, digital filtering, data
conversion between analog and digital domains, frequency conversion, and gain
control. The parameters are highly configurable through a Linux driver or direct
I/O. The board supports an operating frequency up to 6 GHz and a bandwidth
up to 56 MHz. The FMCOMMS2 is designed to be used as a peripheral to
the Xilinx ZedBoard. The ZedBoard's Zynq processor encapsulates all logic and control of the system. Furthermore, a modified form of Linux running on one of the ARM cores provides a user interface. This interface serves as the primary source of interaction between the user and signal transmission. The high level signal chain within the SDR board can be seen in Fig. 1. The software controller was designed as a fundamental means of waveform transmission and reception. The program's shell was based on Michael Feilen's design of a streaming method for the FMCOMMS2 board based upon IIO [2]. The reconstructed program's primary functions included setting the SDR board's parameters, transmitting sampled waveforms, and saving received signals.

Fig. 1. High level signal chain
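As an illustration, a few of the SDR board's parameters can be set through the Linux IIO driver along the lines of the ad9361-iiostream example [2]; the context URI and the numeric values below are assumptions, not the settings used in the experiments.

#include <iio.h>

int main() {
    // Connect to the board over the network; the URI is an assumed placeholder.
    struct iio_context *ctx = iio_create_context_from_uri("ip:192.168.2.1");
    if (!ctx) return 1;

    struct iio_device *phy = iio_context_find_device(ctx, "ad9361-phy");

    // RX local oscillator ("altvoltage0"): carrier frequency.
    struct iio_channel *lo = iio_device_find_channel(phy, "altvoltage0", true);
    iio_channel_attr_write_longlong(lo, "frequency", 2000000000LL);        // 2 GHz

    // RX baseband channel ("voltage0"): analog bandwidth and sample rate.
    struct iio_channel *rx = iio_device_find_channel(phy, "voltage0", false);
    iio_channel_attr_write_longlong(rx, "rf_bandwidth", 20000000LL);       // 20 MHz
    iio_channel_attr_write_longlong(rx, "sampling_frequency", 30720000LL); // 30.72 MS/s

    iio_context_destroy(ctx);
    return 0;
}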

3 Test Environment Configuration


While we envision this system to fulfill a number of RF tests, our immediate moti-
vation is to characterize transmitter line-ups including the digital upconverter,
crest factor reduction, digital predistortion (DPD), DPD training, up conversion,


filtering and the power amplifier chain. In particular, we are investigating the lin-
earization of RF power amplifiers. We implement the digital portions of this line-
up in the ZedBoard. To demonstrate the feasibility and explore the performance
limits, we tested a Mini-Circuits GALI-19+ power amplifier [3] (7 GHz–10 mW)
on a TB-409-19+ evaluation board [4] and a ZHL-5W-2G-S+ [5] (800–2000 MHz, 5 W). The entire test setup is shown in Fig. 2. The signal that was fed back into the SDR's receiver was also tapped off to a spectrum analyzer to provide a real-time reference signal power measurement. The GALI-19+ is used as a
driver amplifier. Its characteristics are illustrated in Fig. 3.

Fig. 2. Test environment configuration

Fig. 3. SDR linearity measurements


Fig. 4. ZHL-5W-2G-S+ IO characteristics
Fig. 5. ZHL-5W-2G-S+ gain characteristics

More output power is required to drive PAs into saturation. This is the role
of the high gain ZHL-5W-2G-S+ amplifier. The IO characteristics of this driver
amplifier can be seen in Fig. 4. Furthermore, the gain characteristics can be seen
in Fig. 5.

4 Results
The FMCOMMS2 is considered to be linear from 800 MHz to 6 GHz according to its datasheet [6]. A 20 MHz wide LTE waveform is sent through the system without an amplifier in the loop (Fig. 6) and with the GALI 2 GHz PA driven into saturation (Fig. 7). The loopback mode shows a clean rolloff from the signal to the out of
band region. The saturated amplifier version exhibits a shoulder indicative of third
order intermodulation of the various subcarriers in the LTE waveform. While the
in-band distortion cannot be so easily observed, a receiver algorithm can be incor-
porated into the ZedBoard processor and the constellation error easily calculated.

Fig. 6. 20 MHz LTE in loopback mode (spectrum plot, Response (dB) vs. Frequency (MHz), −20 to 20 MHz; panel title "Unfiltered 20 MHz LTE Signal")
Fig. 7. 20 MHz LTE saturated PA (spectrum plot, Response (dB) vs. Frequency (MHz), −20 to 20 MHz; panel title "Amplified and Attenuated 20 MHz LTE Signal")


5 Future Work and Conclusion


The experiments demonstrate the ability of the system to perform the required
turn-key measurements. The next steps include further developing the hypervisor
and test executive system that is responsible for configuring and launching the
tests and reporting the results. Additional enhancements can include bundling
higher power RF components suitable for power amplifier testing such as direc-
tional couplers, attenuators, and driver amplifiers into a programmable switch
matrix.

References
1. http://gnuradio.org/
2. https://github.com/analogdevicesinc/libiio/blob/master/examples/ad9361-
iiostream.c
3. http://www.minicircuits.com/pdfs/GALI-19+.pdf
4. http://www.minicircuits.com/pcb/WTB-409-19+ P02.pdf
5. http://www.minicircuits.com/pdfs/ZHL-5W-2G+.pdf
6. http://www.analog.com/en/design-center/evaluation-hardware-and-software/
evaluation-boards-kits/eval-ad-fmcomms2.html#eb-overview
7. Hibler, M., Ricci, R., Stroller, L., Duerig, J., Guruprasad, S., Stack, T., Webb, K.,
Lepreau, J.: Large-scale virtualization in the Emulab network testbed. In: Proceed-
ings of the 2008 USENIX Annual Technical Conference, pp. 113–128 (2008)
8. Mitola, J.: The software radio architecture. IEEE Commun. Mag. 33(5), 26–38
(1995)
9. Ott, M., Seskar, I., Siraccusa, R., Singh, M.: Orbit test software architecture: sup-
porting experiments as a service. In: IEEE Tridentcom (2005)

Remote Control of Large Manufacturing Plants Using
Core Elements of Industry 4.0

Hasan Smajic1 (✉) and Niels Wessel2

1
University of Technology, Arts and Sciences, Cologne, Germany
hasan.smajic@th-koeln.de
2
Schneider Electric GmbH, Ratingen, Germany

Abstract. Most big manufacturing plants, such as large transfer lines, large packaging machines, steel production lines or plants for food and beverages, are built for long-term deployment and usually run energy-intensive processes. A redesign of the mechanical parts is not required over a long usage term (20 to 30 years). Upgrading the automation parts to higher performance, however, is a continuously needed process. An efficient implementation of modern automation and IT technologies with current engineering tools is not yet state of the art. At the beginning of this paper a model concept is introduced, showing how large plants can be partitioned into small collaborating parts with less complexity. The next model describes how the hardware and software components are distributed and allocated to the determined plant parts. The developed distributed control system is based on smart Ethernet nodes, which allows remote control and maintenance according to the elements of Industry 4.0.

Keywords: Remote control · Online engineering · Industry 4.0 · Automation


systems · Distributed control · Industrial software

1 Introduction

The very fast advances in digital hardware and communication technology have led to
the development of a new generation of microcontroller-based control systems, known
as distributed control systems [1]. These are characterized by a digital fieldbus commu‐
nication, small decentralized control systems instead of one centralized system and a
microcontroller-based distributed intelligence, which uses more and more onboard
computing power in smart sensors and actuators.
The concepts of different bus-layers for fieldbus, as they are realized at the current
development status of complex production systems, have reached their limit of capability
and efficient usability in regard to engineering, maintainability and extensibility [2]. By
using existing and implementing new functions and interfaces in intelligent field devices,
a control system for production systems is to be developed. Unlike most previous
approaches, it must be based on continuous communication and functions for remote programming and remote management of plug&play-able field devices.
The first goal of this project is to find out, what impact standard implementation
methods for automation devices (interconnectible, interoperable, interchangeable and

© Springer International Publishing AG 2018


M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_51


Plug&Play) have on the overall automation structure of a plant. The second objective is to show methods, models and tools needed for the development of industrial engineering software, which allows efficient additional enhancement of automation technologies in current control systems. Finally, the main focus is to find out how the core elements of Industry 4.0 can be used for remote engineering and service. This control concept should bypass the problems of standards and the standardization of heterogeneous field devices by using widely known standards like Ethernet, TCP/IP, WSDL, SOAP, XML and Java.

2 Modelling of Large Plants

The building of an automation architecture is a complex issue. For four different technology levels (mechanical parts of the plant, hardware, communication system and software) the physical and logical models have been described with object oriented methods according to UML notation (Fig. 1). Based on the developed models, the distributed control system considers decentralization in three dimensions. Not only the hardware and data processing are distributed, but also the PLC control algorithm and the database. Each smart device on the lowest level has its own PLC task and its own part of the database, connected via industrial Ethernet.

Fig. 1. Overall model for flexible distributed automation architecture

The first step in plant modeling is the identification of mechanical parts and their classification into function groups and function units. After the hierarchical decomposition of plant objects into sublevels follows the assignment of functionality and the description of relationships and interfaces.


The physical architecture describes the mapping of the hardware elements to esti‐
mated plant objects. That means the distribution of the technical objects to the Control-
Nodes, which are connected with the industrial Ethernet. These technical objects, the
sensors and actuators as well as markers as virtual objects, are in an object oriented
viewpoint instances of the appropriate classes Sensor, Actuator and Marker. Also they
are assigned as embedded objects to an instance of a generalized class “Bus-Node”.
The logical architecture of the distributed automation system will be displayed in a
class-structure diagram. The model shows the static structures between classes and
instances as well as the appropriate relationships and serves as the basis for the gener‐
ation of node-specific software-modules. The classes Sensor, Actuator and Marker are
derived by inheritance from the two abstract classes Producer and Consumer. As derived
classes they are a specialization of this basis. A sensor takes over the role of a message
producer in this control concept with horizontal communication in the lowest automation
layer. The class Sensor is a specialization of Producer, that as basis class contains the
attributes “identifier”, “priority” and “blocking time” of the accordant communication
object and a method for message creation in the case of an event. On the other side an
actuator depends on information from specific sensors and Markers in the system.
For the control concept a comprehensive model according to UML notation (Unified Modeling Language) has been created, which describes the communication, the distributed architecture and the PLC application that is to be distributed. The described three models are the basis for the later software development process and the integration of the smart devices.

3 Architecture for Remote Automation Engineering

The new architecture for remote automation engineering has been developed using the intelligence of smart devices with the aim of substituting the control tasks of a centralized PLC (Fig. 2). Each smart device on the lowest level has its own PLC task and its own part of the process database. The process communication between devices runs over industrial Ethernet (Modbus TCP/IP); a minimal polling sketch is given after the device list below. The following types of devices are used:
• Sensors, detectors, encoders and RFID switches for data detection
• Programmable logic control (PLC) for data processing
• Human Machine Interface
• Velocity speed driver and Motion driver
• Smart meter for energy efficiency
• Ethernet Network with Modbus TCP/IP
• Programming and SCADA software
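As an illustration of the process communication, a smart node's registers could be polled over Modbus TCP roughly as follows. This sketch uses the open-source libmodbus library, which is not part of the described system; the IP address and register addresses are assumptions.

#include <modbus.h>
#include <stdint.h>
#include <stdio.h>

int main() {
    // Connect to one smart Ethernet node (PLC / smart meter); address is assumed.
    modbus_t *ctx = modbus_new_tcp("192.168.1.10", 502);
    if (ctx == NULL || modbus_connect(ctx) == -1) {
        fprintf(stderr, "Modbus TCP connection failed\n");
        return 1;
    }

    // Read 10 holding registers starting at an assumed address, e.g. energy data.
    uint16_t regs[10];
    int rc = modbus_read_registers(ctx, 0, 10, regs);
    if (rc != -1)
        printf("First register value: %u\n", regs[0]);

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}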
The gateway server provides all central services including administration tasks, routing to the appropriate automation devices and security mechanisms. The server also runs an SQL server with modules for authentication and logging. There are features for global data access, data for energy efficiency and maintenance of the whole process. For a
high scalability the Hyper-V services, which are integrated into Windows 2008 Data‐
center, are used on the gateway machine. The root machine consists of two VLAN


Fig. 2. Remote access to the plant

switches for network data transfer between the virtual clients. Each VLAN is connected
with one physical network adapter. The first VLAN is connected with the internet and
is used by the Threat Management Gateway server, which links to routing and RAS (including VPN connections). Another role of this virtual server machine is to establish a web server and secured internet access from inside the laboratory. The second
VLAN is connected with the virtual machine pool. In order that the virtual machines
can connect to the PLCs and the HMIs, the VLAN is connected to the physical network
adapter, which is connected to the internal laboratory network.
For an integrated authentication service, the active directory services are used. In
this directory we integrated a container for external users. Only administrators and users
in this group are allowed to connect via VPN. External standard users have some more
restrictions, which are enforced through adapted Group Policy Objects.
For remote access with a web client the user needs just a standard web browser. The connection to the devices is established via the external URL of the gateway server and the name of the virtual machine, after which username and password must be given. The connection to the virtual working desktop can be realized in a simple way. In the first step a VPN connection has to be established. For establishing the VPN connection the user only needs a VPN client supporting SSTP, L2TP, PPTP or IPsec. The easiest way is to use the already integrated client of the most frequently used operating systems, such as Microsoft Windows, Mac OS X, iOS and many Linux distributions. After connecting, the user is able to start a remote access app which uses the RDP protocol. On current Microsoft


Windows machines (XP or later) the tool is already integrated. OS X users can download
a small application from the Microsoft web site. If the application is running, users only
need to enter username, password and the name of the virtual machine.
After establishing the connection to the virtual machine the user can open applications for writing, downloading and testing applications for the PLC, HMI and drives.
For the design and implementation of the developed models for the distributed control system (Sect. 2) the application software UAG from Schneider Electric is used. It allows the design of complete large plants including mechanical parts, control objects, communication and logic.
The plant design for mechanical objects is compliant with the VDI 3260 standard. An object oriented model is used for the design of technological objects like motors, valves and pumps. These objects are implemented into the application via the FDT international standard as cyber physical systems (CPS). UAG first provides the single entry based on the Smart Control object Devices (ScoD) and then generates the applications, providing synchronized, consistent databases. In other words, the tools reflect the image of a single database (Fig. 3).

Fig. 3. Assignment of physical real object (ScoD) to virtual object (CPS)

The ScoDs as CPS are applied within the process context by parameterization. It
propagates the information incrementally, changing only the affected/modified parts.
Incremental generation means that UAG adds incrementally information to the PLC and
HMI applications. Manual modifications within the PLC/HMI are not changed by an
incremental generation.
When an upgrade of automation parts occurs, the software manages, for the generated system, the global resource mapping, the PLC application code with its configuration and variables, and, for the HMI application, the variables, symbols, archive and access information and more.


In this way, for all object changes the software can maintain the actual physical and logical model of the plant. This feature can significantly reduce new project cycle times and save life cycle costs.

4 Conclusion and Future Work

The described automation architecture has already been tested in the model factory of the university in Cologne. The remote access services run very stably and with high reliability on an application for a high-rack warehouse. The remote control system is available from outside primarily for our project partners. The first experience has shown that the developed architecture with internet and web technologies can be used for automation engineering in the following areas: remote programming of PLC, HMI, drives, motion and networks, and remote monitoring.
It has been proven that the implemented Industry 4.0 elements give great advantages for monitoring the efficiency of overall production resources. With just a standard web browser, large amounts of energy data can be shown via PLC access. Typically those data can be used for monitoring of energy consumption, process part performance, benchmark performance and reporting. We are currently developing a data interface to MES and ERP systems (SAP).

References

1. Meier, H., Smajic, H.: Distributed Control System Based on Smart Sensors and Smart Actuator,
Mechatronics & Robotics 2004, IEEE Industrial Electronics Society, APS - European Centre
for Mechatronics Aachen, Germany, pp. 103–105, 15 September 2004. ISBN 3-938153-30-X
2. Kühnle, H., Lorenz, K., Klostermeyer, A.: Neue Wege in der Fabrikautomatisierung. In:
Werkstatttechnik, Heft 3, pp. 138–141 (2010)
3. Meier, H., Smajic, H., Faller, C.: Webbased automation for complex manufacturing systems,
machine tools and factories of the knowledge. Mach. Eng. 4(1–2), 52–59. ISSN 1642-656
(2004)
4. Falkman, P., Helander, E., Andersson, M.: Automatic generation: a way of ensuring PLC and
HMI standards. In: IEEE 16th Conference on Emerging Technologies and Factory Automation,
ETFA 2011, Toulouse, 5–9 September 2011
5. Smajic, H., Faller, C.: Remote laboratory for education in automation engineering. In: IEEE,
CTI global Engineering Education Conference, Istanbul, Turkey (2014)
6. Unity Application Generator (UAG) 3.3 SP4, Extended User Manual (2016). http://www.schneider-
electric.com/en/download/document/33003669K01000/
7. Umsetzungsempfehlungen für das Zukunftsprojekt Industrie 4.0, Berlin, 2. Oktober 2012. https://
www.bmbf.de/files/Umsetzungsempfehlungen_Industrie4_0.pdf

Games Engineering

Dinner Talk: A Language Learning Game
Designed for the Interactive Table

Jacqueline Schuldt1(&), Stefan Sachse1, and Lilianne Buckens2


1
Fraunhofer Institute for Digital Media Technology, Ilmenau, Germany
jacqueline.schuldt@idmt.fraunhofer.de
2
Ricoh Europe (Netherlands) B.V., Amsterdam, Netherlands

Abstract. The Interactive Table as a part of the Interactive Classroom or


Learning Lab, shows how teaching and learning spaces can be redesigned to
support changing teaching styles and methods. Therefore, it is necessary to deal
with content and ideas to enable essential experiences for the learner. Dinner
Talk is the label of an infinite family of digital games, particularly designed for
the Interactive Table. Dinner Talk is based on a novel software technology
named Cubbles Technology. This software technology is investigated on the
basis of the concept of Webbles and developed for the use of current web
technologies (HTML5). Webbles is a component-oriented approach for devel-
oping Web-based applications. It is particularly suitable for problems where data
from existing sources are integrated ad hoc, to be flexibly aggregated and
analyzed. This includes the possibility to develop Webbles evolutionary further
and combine them with one another. Dinner Talk is an experimental
Game-Based Learning Scenario developed with Cubbles Technology.

Keywords: Cubbles technology  Cubbles  Game-Based Learning 


Interactive Table  Placement games  Webble technology  Webbles

1 Introduction

Technologically, the game Dinner Talk is based on meme media technology [1]. Now,
taking the perspective of games science, Dinner Talk is a dynamic placement game.
But practically, Dinner Talk is a Game-Based Learning approach to support learning of
a foreign language using the Interactive Table.
When people play (digital) games, they have an experience. It is this unique
experience that is in focus. But, as familiar as we are with experiences, they are very
hard to describe. You cannot really see them, touch them, or hold them - you cannot
even really share them. No two people can have identical experiences of the same thing
- each person’s experience of something is completely unique. The (digital) game itself
is not the experience, it is the enabler for the experience. What we can do, is to create
interesting artefacts (e.g. rule sets, computer programs) that are likely to create certain
kinds of experiences when a player interacts with them [10].
This is what we do while designing games for the Interactive Table. The digital
game Dinner Talk running on the Interactive Table, is simply a tool to generate unique
experiences in learning languages.

© Springer International Publishing AG 2018


M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_52

This approach is aiming at a change in teaching practice. The teacher becomes a


coach who is guiding the students to gain experiences which lead to learning while
enhancing motivation through the game aspect. Based on this game, several innovative
scenarios are possible (e.g. Dilemma Talk), which can advance the learning experience.

2 The Interactive Table

From a hardware perspective an Interactive Table is a large multi-touch screen lying in


a horizontal position on a pedestal or lift, controlled by a computer. By its design it
offers several affordances that are beneficial for learning. An affordance can be defined
as the user’s perception of the possibilities of action that the properties of a device are
offering [4].
The table-format invites people to stand around it and even to lean on it. To be able
to see what is happening on the screen, everyone needs to stand front-row. This
physical position invites interactivity with the device and also between participants. As
Buisine et al. [11] mention for instance, this spatial configuration appears to increase
collaborative behavior and decrease social loafing. Also Rogers and Lindley [12] found
that the horizontal orientation of the screen makes it easier for participants to collab-
orate than a vertical orientation. This provides an opportunity for small-group col-
laboration and active participation. The multi-touch property enhances this affordance
because of its intuitive nature (the engaging experience of direct manipulation [13])
combined with the fact that multiple people can interact with it simultaneously. The
High Definition quality of the screen invites the use of imagery, opening the way to multimodal learning.
Of course, this device also has several constraints. For instance, it is not practical to read or input large pieces of text. These affordances (and constraints) make the Interactive Table especially suitable for board-like, multiplayer Game-Based Learning in which collaboration is useful or even necessary to reach a conclusion.

3 Dinner Talk: A Placement Game Approach Based on Cubbles

The potential of learning with digital games is far from being exhausted, and certainly
not at school and in further education. In a digital game, real people act in a virtual
world. Actions with real content, e.g. doing math, managing resources, solving com-
binatorial problems, understanding speech and acting accordingly form the basis for
real learning and training. Dinner Talk is the name of a de facto infinite family of
digital games that support learning German as a foreign language. Dinner Talk is a
so-called placement game [1], one of the easiest subcategories of combinatorial games. It consists of a simple game board and playing pieces, which are
labelled with German text. If the pieces are placed in such a way that texts on
neighboring pieces match, the players will receive gratification in the form of points on
a High Score List.


In the dynamic variants of these games, pieces that have already been placed may be re-arranged, which allows for exploration and, in particular, for competitive and explorative learning. Dinner Talk provokes a confrontation with the content of phrasings in the German language and is almost infinitely scalable with respect to linguistic complexity. It has a
large number of parameters. Every instantiation of parameters leads to a particular
game of the Dinner Talk family [1].

3.1 Cubbles Technology


Dinner Talk uses a new web technology called cubbles [5]. Cubbles technology was inspired by webble technology [6], which is the latest implementation of meme media technology [7]. Meme media technology has an unusual philosophical background: in his book "The Selfish Gene", Richard Dawkins speculated about a non-biological evolution [8]. Tanaka developed the idea to transform this knowledge evolution into software [7].
One part of the cubbles technology is a web framework for developers. This framework allows developers to create web components, called cubbles. These cubbles have an MVC structure and are implemented in HTML5, CSS and JavaScript. The data inside a cubble is stored in slots, which act as externally available property parameters. Different cubbles can be connected via these slots. After a connection is established, the cubbles can share data with each other.
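To illustrate the slot concept described above, the following sketch models two components that share data over connected slots. It is only a language-neutral illustration written in Python; the class and method names are hypothetical and do not correspond to the actual cubbles framework API, which is implemented in HTML5, CSS and JavaScript.

# Illustrative sketch only: slot-based components that share data once connected.
# Class and method names are hypothetical; real cubbles are HTML5/CSS/JavaScript
# web components, not Python objects.

class Component:
    def __init__(self, name, slots):
        self.name = name
        self.slots = {slot: None for slot in slots}   # externally visible properties
        self.connections = []                          # (own_slot, other, other_slot)

    def connect(self, own_slot, other, other_slot):
        """Establish a connection between two slots; data then flows between them."""
        self.connections.append((own_slot, other, other_slot))

    def set_slot(self, slot, value):
        """Write a slot value and propagate it over all established connections."""
        self.slots[slot] = value
        for own_slot, other, other_slot in self.connections:
            if own_slot == slot:
                other.slots[other_slot] = value

# Two components exchange data over connected slots.
board = Component("gameBoard", ["currentScenario"])
pieces = Component("pieceSet", ["scenario"])
board.connect("currentScenario", pieces, "scenario")
board.set_slot("currentScenario", "dinner-party-1")
print(pieces.slots["scenario"])   # -> "dinner-party-1"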

3.2 New Technology Features


An update of the web framework on which Dinner Talk is based has recently been finished. The latest version of this framework contains some new features, two of which are auto-connection and direct execution.
The features auto-connection (cubbles latch when dragged and dropped one over the other) and direct execution (completed cubbles run immediately, an effect essential for learner feedback) are relevant for exploratory playful learning. These features have been introduced in previous papers [1, 9].
The first version of Dinner Talk ran on an old version of the framework, in which some desirable features were still missing. To add these features, some proprietary code was written. In the new version of Dinner Talk this code was replaced by the new standardized framework features. Other parts of the code have also been replaced by HTML5 standards, e.g. drag and drop and the rotate function.
To enable a multilingual version of Dinner Talk, the database behind the game was rewritten. It is now possible to create one scenario in different languages. The new structure of the database allows interest attributes for each dataset of a character, and with an improved editor it is now easier to add or edit these attributes.


3.3 Current Status of Dinner Talk


A first core variant of the game family Dinner Talk (see Fig. 1) has been completed. This
version allows for both single-player self-controlled learning and multi-player collab-
orative learning, the latter of which is realized via discussions of content. The Interactive
Table stimulates collaborative learning by gathering the learners physically around the table. Jantke et al. [1] consider dynamic placement games in which the players are allowed to take a certain placement back, i.e., to undo an action. This helps to achieve better
results, to reduce frustration, and to increase the fun of playing. Dynamic placement
games are exploratory games. To sum up, the design of the Dinner Talk game family
leads to a form of exploratory Game-Based Learning [2, 3].

Fig. 1. First version of the core variant of the game family Dinner Talk played on the Interactive
Table.

Exploratory Game-Based Learning in Dinner Talk follows four steps:


1. Read text and try to understand the essentials.
2. Place pieces so that neighbors share their interests.
3. Score one point for every match.
4. Try replacements for achieving higher scores.
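Steps 2 and 3 amount to a simple neighborhood-matching rule: one point for every pair of adjacent pieces whose interests match. The following sketch illustrates this rule under simplified assumptions; the grid model, the character data and the notion of a match are invented for the example and do not reproduce the actual game logic.

# Minimal sketch of the placement scoring rule (steps 2 and 3 above).
# Data and neighborhood model are illustrative assumptions, not the game's real rules.

def score(placement, interests):
    """Count one point for every horizontally or vertically adjacent pair of
    characters that share at least one interest."""
    points = 0
    for (row, col), character in placement.items():
        for neighbor_pos in [(row, col + 1), (row + 1, col)]:   # each pair counted once
            neighbor = placement.get(neighbor_pos)
            if neighbor and interests[character] & interests[neighbor]:
                points += 1
    return points

interests = {
    "Anna":  {"music", "travel"},
    "Ben":   {"travel", "cooking"},
    "Clara": {"sports"},
}
placement = {(0, 0): "Anna", (0, 1): "Ben", (0, 2): "Clara"}
print(score(placement, interests))   # -> 1 (Anna and Ben share "travel")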
Figure 2 shows a new design concept for the further development of Dinner Talk. The current focus of the ongoing work is on usability testing and iterative game design of Dinner Talk. Through play-testing techniques and thinking aloud, the authors ensure game element functionality and expand or discover game concepts for increased user enjoyment. To this end, volunteer play-testing groups test the latest prototype and provide vital feedback for subsequent prototypes.


Fig. 2. Concepts for further development of Dinner Talk.

4 Summary and Conclusions

Dinner Talk is a Game-Based Learning approach based on the latest technology development in the field of cubbles technology, and it supports changing teaching styles and methods. The new web framework allows new features and extended elements to be added to the core version of Dinner Talk. This will increase the attractiveness, playability and usability of the game.
The evaluation of Dinner Talk and its impact on language learning is necessarily beyond the scope of this paper. A systematic evaluation has to follow and will be reported separately.

Acknowledgement. Part of the authors' work has been supported by the German Federal
Ministry for Education and Research (BMBF) within the joint project Webble TAG under grant
no. 03WKP41D (Webble TaT).

References
1. Jantke, K.P., Arnold, O., Bosecker, T.: Exploratory game play to support language learning:
dinner talk. In: 8th International Conference on Computer Supported Education, pp. 161–
166 (2016)
2. Arnold, O., Bosecker, T., Hume, T., Jantke, K.P.: Response to the challenging refugee influx:
a potentially infinite family of serious games for learning of foreign languages playfully. In:
Proceedings of the e-Society, Vilamoura, Portugal, 9–11 April 2016


3. Jantke, K.P., Bosecker, T.: Exploratives spielerisches Lernen von Fremdsprachen. magazin.
digitale.schule (2015)
4. Hammond, M.: What is an affordance and can it help us understand the use of ICT in
education? In: Education and Information Technologies, vol. 15, Issue no. 3, pp. 205–217
(2010)
5. https://github.com/cubbles. Accessed 17 Nov 2016
6. Kuwahara, M.N., Tanaka, Y.: Webbles: programmable and customizable meme media
objects in a knowledge federation framework environment on the web. In: Karabeg, D.,
Park, J. (eds.) Second International Workshop on Knowledge Federation, Dubrovnik,
Croatia, 3–6 October 2010
7. Tanaka, Y.: Meme Media and Meme Market Architectures: Knowledge Media for Editing,
Distributing and Managing Intellectual Resources. IEEE Press & Wiley-Interscience,
Hoboken (2003)
8. Dawkins, R.: The Selfish Gene. Oxford University Press, Oxford (1976)
9. Fujima, J., Jantke, K.P.: The potential of the direct execution paradigm: toward the
exploitation of media technologies for exploratory learning of abstract content. In: Urban, B.,
Müsebeck, P. (eds.) eLearning Baltics 2012: Proceedings of the 5th International eLBa
Science Conference, pp. 33–42. Fraunhofer Verlag (2012)
10. Fullerton, T.: Game Design Workshop: A Playcentric Approach to Creating Innovative
Games, 3rd edn. Taylor & Francis Ltd., Boca Raton (2013)
11. Buisine, S., Besacier, G., Aoussat, A., Vernier, F.: How do interactive tabletop systems
influence collaboration? In: Computers in Human Behavior, vol. 28, issue no. 1, pp. 49–59
(2012)
12. Rogers, Y., Lindley, S.: Collaborating Around Large Interactive Displays: Which Way is
Best to Meet? Interact. Comput. 16(6), 1133–1152 (2004)
13. Shneiderman, B.: Touch screens now offer compelling uses. IEEE Softw. 8(2), 93–94, 107
(1991)

The Experimento Game: Enhancing a Players’
Learning Experience by Embedding Moral
Dilemmas in Serious Gaming Modules

Jacqueline Schuldt1(&), Stefan Sachse1, Verena Hetsch2, and Kevin John Moss3
1 Fraunhofer Institute for Digital Media Technology, Ilmenau, Germany
jacqueline.schuldt@idmt.fraunhofer.de
2 Hochschule für Technik und Wirtschaft Berlin, Berlin, Germany
3 Hochschule Osnabrück, Osnabrück, Germany

Abstract. The Experimento Game is part of Experimento, the international educational program of the Siemens Stiftung. The program Experimento is based
on the principle of research-based learning and offers teacher trainings and
curriculum-oriented hands-on experiments from the fields of energy, environ-
ment, and health. With Experimento, the Siemens Stiftung also aims to
strengthen the teaching and formation of values during science and technology
lessons. All Experimento teaching materials and additional media are available as
Open Educational Resources on the media portal of the Siemens Stiftung. The
online portal helps teachers to find age-appropriate media to introduce their
students to global challenges such as the greenhouse effect, renewable energies,
or the production of clean drinking water. To strengthen the formation of values
during experimentation, the Siemens Stiftung is taking a new path: the devel-
opment of a gaming module, which is based on the principle of learning through
discovery. This means that children and young people actively shape their
individual learning processes while playing, discovering and understanding
scientific and technological interrelationships through Moral Dilemma Situations. They themselves develop questions independently, work out answers using
a variety of methods and reflect on the solutions. Thus, the young scientists begin
to recognize that success comes from their own actions – a valuable experience –
which motivates them and strengthens their trust in their own capabilities.

Keywords: Digital game-based learning · Game design · Moral dilemma situations · Open educational resources · Storyboarding · Storyboard interpretation technology · Usability engineering · User experience design

1 Introduction

Learning with digital games will necessarily constitute a basic form of teaching and learning. It must be remembered that digital games have conquered the leisure and entertainment media markets. One cannot ignore Digital Game-Based Learning (DGBL) [1]. Therefore, it is necessary to understand how to implement it and for which purposes it is appropriate. A Digital Game-Based Learning approach for Experimento
is going to support learners in improving their critical thinking skills and their ability to change perspective. Playful modules within the Siemens Stiftung's OER platform encourage the learner to critically reflect upon existing knowledge, independently develop relevant new questions and seek answers. Thus, the Experimento Game boosts the learners' self-confidence. In addition, it provides them with the necessary methodologies to independently find answers to new questions and derive solutions to real-life problems.

2 The Approach of the Experimento Game

It has been established that children typically focus on action, navigation and
interaction in their use of digital media. It is important for children to be able to have a
recognizable influence on games when the completion of the game depends on their
actions, strategies, choices and decisions.

2.1 Digital Game-Based Learning


Digital games, particularly serious games, are nowadays seen as an important feature
for providing stimulation and simulation in educational settings. The information that the learner is studying makes considerably more sense if the learner has the opportunity to see how that information is applied to the world of action and
experience. Players of serious games may gain real expertise in the act of thinking,
acting, valuing and deciding like a professional.
DGBL is promising [1–4]. Interactivity is an underlying key element in computer
games. A player likes to be in control and make decisions. Controlling characters or avatars empowers the player by letting him or her influence the life of the player character and the universe in which it exists. The main purpose of DGBL is to
thoroughly immerse the gamer in the emotional and physical world of their game
character. However, some games take the player for a ‘ride’, ignoring that the balance
between being in control and being driven by the game [2] is what makes a good game.
Children learn better and feel motivated by problem solving and game activities in comparison to more traditional skills and textbook-based materials that focus on discovering and understanding scientific and technological interrelationships. Most importantly, the use of interactivity, collaboration and exploration-based approaches allows them to perform within their own categories of achievement.

2.2 State of the Art of Moral Dilemmas in Computer Games


Some of the most popular contemporary computer games, among them The Witcher 3: Wild Hunt (Namco Bandai Games, 2015), Beyond: Two Souls (Sony Computer Entertainment, 2013), Bioshock 2 (2K Games, 2010), Heavy Rain
(Sony Computer Entertainment, 2010), Call of Duty: Modern Warfare 2 (Activision, 2009) and Fable (Microsoft Game Studios, 2004), have used morality, and especially moral dilemmas, as a marketing strategy, promising that players' moral choices would critically affect the game experience.
While some of these games have been criticized for presenting shallow moral dilemmas that do not reflect the ethical possibilities of aesthetic expression, morality nevertheless is a matter that professional game designers progressively feel the need to address. The literature claims [5, 6] that, in order to design compelling ethical experiences for games, game developers must create ill-defined problems for players. Accordingly, Moral Dilemma Situations have a lot of potential [5] in digital games.

2.3 Moral Dilemma Situations in the Experimento Game


In the Experimento Game, the player deals with two different Moral Dilemma Situations. Moral dilemmas are situations where players weigh the consequences of their choices carefully, because two or more values are battling against each other and there is no optimal answer or choice. The reason why moral psychology in the context of computer games is such an inspiring topic is not that a player is mercilessly confronted with choices between good and evil, but that, through an understanding of different moral values, the way humans think and respond to important issues and questions in the real world can change [5]. Sicart [6] also mentioned that Moral Dilemma Situations, “while being computable and inscribed within the rules of the game, must also force players to apply moral thinking to their decision-making processes, thereby creating ethical gameplay”. The Moral Dilemma Situations implemented in the game are intertwined with exploring science and technology in unison.
• Moral Dilemma Situation 1 is dealing with the topic ‘How to produce drinking
water? Methods of purifying water’ and
• Moral Dilemma Situation 2 is focusing on ‘How does waste separation work?
Separating materials by density and magnetism’.

2.4 Difficulties in Serious Games Design


How can game designers and developers utilize deep and well-established results of
game engineering to make the design and implementation of digital games for learning
a routine process?
The key problem with so-called serious games [7] is that frequently the playing and
learning aspects contradict each other [2]. It takes a lot of effort to smoothly integrate
learning and playing and it requires appropriate technologies to systematically wrap
educational theory into game play [8].


3 Storyboarding of Nonlinear Game Content

Storyboarding is a technology for representing anticipated human-computer interactions, especially intended experiences of DGBL [9, 10]. Storyboards represent the different ways in which individual human beings may explore a virtual world through their avatars. The storyboards' flexibility allows for the anticipation of individualized gameplay and learning experiences.

3.1 Storyboarding for the Experimento Gaming Module


The Experimento Game is an adventure game, which means that its core game
mechanics are similar to those of classic point-and-click adventure games. As in other adventure games, the protagonist has to solve puzzles to progress in the game. Games
of this genre are mostly story-driven [11], meaning that a good story is a fundamental
part of this type of game.
After creating a good story, the next step is to craft a more detailed visualization of
the story. Game designers and film directors use storyboarding to provide a better
understanding of the look and feel of their stories to others [12]. An interactive
medium, like a game, needs a nonlinear storyboard. A basic introduction to storyboarding has been given by Jantke et al. [10]. Storyboards are described as hierarchically structured graphs.
Further work on storyboarding games and on technologies that interpret storyboards can be found in the literature [13, 14]. The Experimento Game uses a storyboard interpretation technology (SIT) that was inspired by Arnold et al. [14].

3.2 The Storyboard Editor


The formal structure of the storyboard was based on the paper [10]. To ensure a good
workflow a storyboard editor has to fulfill these requirements:
• the ability to create the storyboard and save it into a file
• easy editing
• support the author/user (e.g., perform syntax checks)
• accessible to users with a non-technical background
• adaptable
A combination of two software products was used as a storyboard editor for the Experimento gaming module. A spreadsheet program like Microsoft Excel was used as the primary software for storyboard editing, because its files (e.g. .csv) are readable by the parser tool. Excel also fulfilled all the aforementioned requirements for a storyboard editor. For instance, the document can be enhanced with macros that support the author during the creation of a storyboard. While the spreadsheet's layout is intelligible to humans and resembles a classic film storyboard, the document is easily transformed into a version interpretable by a game engine. To convert the spreadsheet into a JSON file, a secondary software product, a Python-based parser, was developed in-house (Fig. 1).
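As a rough illustration of this spreadsheet-to-JSON workflow, the sketch below reads a CSV export and writes a JSON document. The column names and the storyboard structure are assumptions made for the example; they do not reflect the actual in-house parser or the real storyboard schema.

# Illustrative sketch of a CSV-to-JSON conversion step, loosely analogous to the
# in-house parser described above. Column names and structure are assumptions.
import csv
import json

def spreadsheet_to_storyboard(csv_path, json_path):
    """Read storyboard rows from a CSV export and write them as a JSON document."""
    scenes = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            scenes.append({
                "scene_id": row["scene_id"],
                "description": row["description"],
                "choices": [c.strip() for c in row["choices"].split(";") if c.strip()],
            })
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump({"scenes": scenes}, f, indent=2, ensure_ascii=False)

# Example call (assuming a CSV with columns scene_id, description, choices):
# spreadsheet_to_storyboard("storyboard.csv", "storyboard.json")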


Fig. 1. Extract from the Storyboard Editor. On the left is the storyboard that the editor creates,
on the right is a transformed storyboard that the game engine can read.

4 Usability Engineering and User Experience Design

The necessity of Usability Engineering in the development cycle of a user-centered product has long been recognized by academics and a wide variety of industrial branches. As of late, the term User Experience (UX) has gained more and more popularity, especially as a possible career choice in the games industry. Due to this relative surge in popularity, however, the actual definition of UX is often ignored, and the tasks UX designers are asked to do often diverge far from a focus on good UX.
Most prominent is Don Norman’s and Jakob Nielsen’s definition of UX, which
states that it “encompasses all aspects of the end-user’s interaction with the company, its
services, and its products” [15]. Furthermore, Nigel Bevan explains that one has to understand UX as a process extending over time. Bevan continues to explain that the craft of UX is “not just achieving effectiveness, efficiency and satisfaction” [16], which is the case in classical Usability Engineering. It also entails “optimizing the whole user experience from expectation through actual interaction to reflection on the experience” [16]. The aforementioned definitions pave the way for general UX discussions, while Celia Hodent described good UX in a game as the sum of usability and game-flow [17]. Her description is not meant as an accurate definition of UX, but it accurately lays out the aspects of UX the design team had to focus on due to time constraints.
The focus on usability in UX, albeit obvious, is vastly important; if a product is not usable, how can its UX be positive? That is why even the simplest and most generic interaction paradigms in the Experimento Game were tested (in analogue and digital form) and evaluated by the design team before, during and after the implementation. While usability testing in the game is an ongoing process that recurs whenever a new function is designed and added to the software, the planning of the game's pacing and flow was done thoroughly beforehand. Various flowcharts (e.g. Fig. 2) and storyboards of the player's potential experience were drafted to visualize and communicate the game's flow.


Fig. 2. Early flow visualization for the Experimento Game.

Testing the game's flow with potential users has not yet been carried out, but a paper prototype is in preparation. Once the first digital prototype has been developed, user testing will expand to collecting data via questionnaires and guided testing. After evaluating these data, the game's next iteration will be initiated.

5 Creating Game Art, Appealing to a Variety of Cultures

The specifications for the Experimento Game state that the design should have simple
2D graphics and be an abstraction of the real world, as the environment should not
resemble a certain region.
At the beginning of the design process, a survey with 204 participants showed that a total abstraction of environment and character is more suitable for a younger audience than for the targeted 11–13 year olds. Out of a collection of 36 character designs, those with apparent features, such as eyes and mouths, received more positive feedback, showing the importance of being able to read a character's emotions to create empathy.
To compromise on an art style, the “Consensus mapping” process was modified. With the general specifications in mind, each artist searches for environment and character inspiration and selects ten pictures to present to the team. After giving and receiving feedback, the whole group decides on three pictures that serve as guidance for the environment and character design. With this guidance the artists create their own interpretations, setting an art style for the game and the characters that serve as a template for future designs.
As a result of the survey, the characters received facial features, thus giving them humanoid traits. In order to prevent association with a specific cultural area, non-human colours and the omission of noses were chosen (Fig. 3). The players will be able to choose between three characters: a female, a male or a gender-neutral persona.


Fig. 3. Simple and abstracted environment and alienated characters in 2D.

6 Summary and Conclusions

Game development is a prospering industry that is set to grow rapidly in the near future. Therefore, theoretical concepts and practical skills are required for game design and software engineering. The Storyboard Interpretation Technology (SIT) is a
feature to support the development of games, especially for educational contexts.
According to the above-cited sources, storyboards are digital objects that can be stored
in a database. Digital systems, intended to implement the anticipated processes represented in a storyboard, may read the digital document. When humans interact with
the system, the system interprets the storyboard. The Experimento Game is based on
the concepts of SIT. A particular advantage of this technology is the simplicity of
modifying a storyboard in use.
The gaming module aims at supporting the development of a child’s critical
reflection, taking into account that children must be encouraged to understand the deeper
meaning of a problem. Using that knowledge, they are incited to analyze and solve
problems in a scientific context. Furthermore, these experiences do not remain abstract
in that they are directly linked to their everyday lives and everyday problems. However,
the game benefits children most by initiating positive behavioral changes through the
teaching of critical thinking. This is achieved by inviting children to deal with and
analyze Moral Dilemma Situations, which they are confronted with in the game. The
virtual world of the Experimento Game is, metaphorically speaking, the wrapping of the
actual learning content. The more attractive the wrapping is, the more likely it is that a
task is accepted. An attractive wrapping can induce students to learn not only voluntarily, but also with more intensity and more frequent repetition, while at the same time not perceiving learning as a burden. The evaluation of the game's acceptance exceeds this paper's scope. A separate report on the systematic evaluation will follow.


Acknowledgement. The authors gratefully acknowledge the fruitful collaboration with their
customer, Siemens Stiftung. As a non-profit corporate foundation, Siemens Stiftung promotes
sustainable social development, which is crucially dependent on access to basic services,
high-quality education, and an understanding of culture. To this effect, the Foundation’s project
work supports people in taking the initiative to responsibly address current challenges. Together
with partners, Siemens Stiftung develops and implements solutions and programs to support this
effort, with technological and social innovation playing a central role. The actions of Siemens
Stiftung are impact-oriented and conducted in a transparent manner (www.siemens-stiftung.org).

References
1. Prensky, M.: Digital Game-Based Learning. Paragon House, St. Paul (2007)
2. Jantke, K.P.: Digital games that teach: a critical analysis. TUI IfMK, Diskussionsbeiträge
22, August 2006
3. Gee, J.P.: What Video Games Have to Teach us About Learning and Literacy. Palgrave
Macmillan, New York (2007)
4. Söbke, H., Reichelt, M.: “Rat(t)en in der Lehre” - Über die Spiel(un)lust unserer Studierender am Beispiel digitaler Apps. In: Teaching Trends 2016: Digitalisierung in der Hochschule: Mehr Vielfalt in der Lehre. Waxmann, Münster
5. Krebs, J.: Moral Dilemmas in Serious Games. In: 2013 International Conference on
Advanced ICT (2013)
6. Sicart, M.: Moral dilemmas in computer games. Des. Issues 29(3), 28–37 (2013)
7. Ritterfeld, U., Cody, M., Vorderer, P. (eds.): Serious Games: Mechanisms and Effects.
Routledge, New York (2009)
8. Krebs, J., Jantke, K.P.: Methods and technologies for wrapping - educational theory into
serious games. In: 6th International Conference on Computer Supported Education, pp. 497–
502 (2014)
9. Jantke, K.P., Knauf, R., Gonzalez, A.G.: Storyboarding for playful learning. In: World
Conference on e-Learning in Corporate, Government, Healthcare and Higher Education,
e-Learn 2006, AACE, pp. 3174–3182 (2006)
10. Jantke, K.P., Knauf, R.: Didactic design through storyboarding: standard concepts for
standard tools. In: Proceedings of the 4th International Symposium on Information and
Communication Technologies (ISICT) Workshop on Dissemination of e-Learning Tech-
nologies and Applications, Cape Town, South Africa, January 2005, pp. 20–25. ACM Press,
New York (2005)
11. Fernandez-Vara, C., Osterweil, S.: The Key to Adventure Game Design: Insight and
Sense-making. http://meaningfulplay.msu.edu/2010/
12. Cornell University: Storyboarding in Game Design. http://www.cs.cornell.edu/courses/
cs3152/2013sp/labs/design1/. Accessed 16 Nov 2016
13. Jantke, K.P., Knauf, R.: Taxonomic concepts for storyboarding digital games for learning in
context. In: 4th International Conference on Computer Supported Education 2012, pp. 401–
409 (2012)
14. Arnold, S., Fujima, J., Jantke, K.P.: Storyboarding serious games for large-scale training
applications. In: 5th International Conference on Computer Supported Education 2013,
pp. 651–655 (2013)


15. Norman, D., Nielsen, J.: The Definition of User Experience. https://www.nngroup.com/
articles/definition-user-experience/. Accessed 21 Nov 2016
16. Bevan, N.: What is the difference between the purpose of usability and user experience
evaluation methods? In: UXEM 2009 Workshop, INTERACT 2009, Uppsala, Sweden
(2009)
17. Hodent, C.: Developing UX Practices at Epic Games, GDC EU (2014). http://www.
gdcvault.com/play/1020934/Developing-UX-Practices-at-Epic. Accessed 21 Nov 2016

The Finite State Trading Game: Developing
a Serious Game to Teach the Application
of Finite State Machines in a Stock
Trading Scenario

Matthias Utesch1(&), Andreas Hauer2, Robert Heininger2, and Helmut Krcmar2
1 Staatliche Fachober- und Berufsoberschule Technik München, Munich, Germany
utesch@in.tum.de
2 Chair for Information Systems, Technical University of Munich (TUM), Munich, Germany
a.hauer@tum.de, {robert.heininger,krcmar}@in.tum.de

Abstract. In this paper a new methodology to teach the topic Finite State
Machines to upper vocational school students is proposed. A Serious Game
solution was created consisting of nine learning objectives split into categories
about the basics of Finite State Machines, the parallels between Finite State
Machines and stock trading and the application of Finite State Machines in order
to construct Artificial Intelligence. This paper focuses on the existing parallels
between Finite State Machines and the concepts of automated stock trading. The
learning objectives were determined using Bloom's Taxonomy and implemented into the Serious Game “The Finite State Trading Game” (FSTG). In this
turn-based trading game, the user strives to beat a Non-Player Character by
skillfully trading shares at various difficulty levels. In order to evaluate the
Serious Game approach, a pre-test and post-test situation was performed with
students of a local upper vocational school class at the Technical University of
Munich. The analysis of the results showed major improvements of the students’
knowledge about Finite State Machines for every tested statement. Given the
success of this test setting, FSTG appears to be a promising solution to be used
to support or even substitute traditional ways of teaching.

Keywords: Serious game · Finite state machine · Stock trading · Bloom's taxonomy for learning objectives

1 Introduction

Nowadays, computer games are played all over the world: in the United States, for example, 72% of households play video games, and adult gamers have been playing for a staggering average of twelve years of their lives so far [1]. But do they know that all these games – as well as many other applications, like traffic lights – are based on so-called Finite State Machines (FSMs)? FSMs represent the logic behind these applications. The National Institute of Standards and Technology defines an FSM as a
“model of computation consisting of a set of states, a start state, an input alphabet, and
a transition function that maps input symbols and current states to a next state” [2].
FSMs represent a core concept regarding automation. Automation is part of the
curriculum at upper vocational school (UVS) in Bavaria, Germany [3], as well as at
IT-related degree courses, e.g. in the Bachelor’s Program of the Technical University of
Munich (TUM) [4, 5].
The curriculum for 12th graders at UVS in Bavaria, Germany, [3] covers a special
class dedicated to engineering, called ‘Technology and Computer Science’. This class
comprises the modules basics of modern programming languages, programming styles
and data structures, and object-oriented programming. They are the basis for the next
modules capstone project and systems and processes. The capstone project aims at
understanding, analyzing and evaluating a complex technical system like a power station or a car, as well as developing and implementing problem-solving strategies which
are especially suited for this kind of technical system. With systems and processes the
students are expected to gain a better understanding of technical systems and processes.
They should be able to identify the essential elements and processes of a technical
system, and describe their contribution to its overall functionality. The students learn to
understand and apply systems as well as to analyze and evaluate them [6].
As noted above, FSMs represent a core concept regarding automation, and automation is part of the UVS curriculum in Bavaria, Germany [3]. Furthermore, using the so-called State
Pattern, an FSM representation can be developed for almost every IT-based application
ranging from control systems to business simulations and even computer games [7].
These State Patterns are the optimum architecture for the correct implementation of an
FSM. Because of the ubiquity of FSMs in reality, teaching FSMs to UVS students
seems to be a promising and valuable approach to gain a better understanding of real
world scenarios. Yet – which application would be best suited to meet the learning of
UVS students? Traffic lights build on a small number of states and are present
everywhere to control traffic in our daily life. A more challenging FSM application
would be a stock trading system and its processes of buying and selling shares [6].
A share is traded if the offered share price is equal or lower than the limit price and a
potential buyer has the money to invest in the share.
Using educational software is a common trend [8]. Serious Games promise to
support self-regulated personalized learning (SRPL) [9]. Two examples shall clarify
how playing Serious Games may encourage SRPL. As a first example, [10] describes a learning scenario for UVS students in the context of the Pupils' Academy of Serious Gaming. An important goal of the academy is to reduce the dropout rate of students by strengthening their study skills even before they start studying at university. The ERPsim™ Distribution business game is used to improve activities related to, e.g., time management or teamwork. Secondly, [11] presents the learning platform Learn@WU, which aims at better ways of self-learning. Furthermore, the trend towards Serious Gaming is supported by the increasing availability of computers in classrooms. Thus, summarizing the previous considerations, a Serious Game approach based on an FSM example seems to be well suited for learning about systems and processes in ‘Technology and Computer Science’ classes in Bavarian UVS.
The UVS students are young people who have realized at secondary school level: “I
learn best in an application-oriented way” [10]. Therefore, we developed a stock


trading game named ‘The Finite State Trading Game’ (FSTG) [6]. With FSTG the
learners strive to skillfully trade shares in order to beat a Non-Player Character
(NPC) through higher profits. The scenario of automated stock trading was used for
two reasons: Firstly, stock trading is a challenging FSM example out of everyday life.
Secondly, automated stock trading is strongly related to the science of Information
Systems. It has to be taken into account that the IT-related knowledge at UVS is very heterogeneous due to the large variety of the students' learning histories. Thus, the proposed Serious Game solution should address FSMs from scratch. This leads to the following research
question [6]:
How to design, create, and present a Serious Game to teach the topic of Finite
State Machines to upper vocational school students in Bavaria using stock trading as
an example?
This research question was addressed in four steps [6].
• In the first step we reviewed literature to identify the most important learning
objectives. Additionally, we classified and verified them using the Taxonomy of
Bloom [12] in the revision of Krathwohl [13] (Sect. 2).
• In a second step, we integrated these learning objectives into a Serious Game
solution (Sect. 3).
• In the third step, the software was tested with UVS students. The students' performance was measured by a pre-test and post-test scenario.
• In the fourth step we evaluated the software. The students’ learning progress was
quantified and analyzed in order to prove the suitability of the Serious Game
solution for the target group (Sect. 4).

2 The Learning Objectives

To be able to teach students about FSMs, the first essential step is to describe the components of an FSM. According to Yuan and Qu [14], an FSM “consists of a finite set of states, a start state, an input alphabet, and a transition function that defines the next state based on the current state and input symbols.” It is important to note that the naming of the elements of an FSM is not standardized. While the term state is used in practically all of the literature due to its occurrence in the term Finite State Machine, the other elements appear under different names in the literature.
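To make the quoted definition concrete, the following minimal sketch spells out the four components (states, start state, input alphabet, transition function); the traffic-light example is chosen only for illustration and is not taken from the teaching materials.

# Minimal sketch of the FSM definition quoted above: states, a start state,
# an input alphabet and a transition function mapping (state, input) -> next state.
# The traffic-light example is purely illustrative.

states = {"red", "green", "yellow"}
start_state = "red"
input_alphabet = {"timer"}
transition = {
    ("red", "timer"): "green",
    ("green", "timer"): "yellow",
    ("yellow", "timer"): "red",
}

def run(inputs, state=start_state):
    """Feed a sequence of input symbols to the machine and return the final state."""
    for symbol in inputs:
        state = transition[(state, symbol)]
    return state

print(run(["timer", "timer"]))   # red -> green -> yellow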
In order to teach the concept of FSMs, three steps (A), (B) and (C) have to be
conducted.
(A) The students have to remember and understand the basic concepts and attributes
of an FSM [6].
(B) By using stock trading as an example, the students should gain knowledge about a
real world FSM application and its parallels to automation.
(C) The students should analyze the application of FSMs to construct an Artificial
Intelligence as well as evaluate its benefits and limitations.


Consequently, we developed nine learning objectives for these steps by analyzing and evaluating a couple of representative FSM definitions and applications [6]. The learning objectives are equally distributed among the three steps.
A common way to structure learning objectives is to describe them according to a
taxonomy. Curricula in Bavaria are structured following Bloom’s Taxonomy (see
Fig. 1). Thus, in order to create, classify, and verify the learning objectives, this
Taxonomy of educational objectives [12] was used. For a better understanding of the FSTG approach, the relevant aspects of this taxonomy will be briefly summarized in the following.

Fig. 1. Classification of FSTG’s learning objectives according to Bloom’s Taxonomy [6]

Bloom's Taxonomy is a one-dimensional structure providing the six categories Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation [12]. In the year 2001, a revision of the original taxonomy was published by Lorin W. Anderson and David R. Krathwohl [15]. It transformed the one-dimensional structure into a two-dimensional matrix. The vertical dimension is called the Knowledge Dimension, while the horizontal dimension is entitled the Cognitive Process Dimension. Since “statements of objectives typically consist of a noun […] – the subject matter content – and a verb […] – the cognitive process(es)” [13], the two-dimensional table makes it easier to distinguish the subject matter content and the respective cognitive process when classifying a learning objective [13]. Since the first version of the taxonomy was provided by Bloom, the two-dimensional matrix of the taxonomy will also be referred to as Bloom's Taxonomy in this paper.
According to the three steps (A), (B), and (C) of teaching about FSMs, our learning objectives develop the learning content step by step by addressing different Knowledge Dimensions as well as Cognitive Process Dimensions [6]. Each step is addressed by its
related category of learning objectives. Each category covers three learning objectives
– in total we identified nine learning objectives (see Fig. 1).


(A) Basics of a Finite State Machine: LO_1_A1, LO_2_B2, and LO_3_C2
(B) Parallels between Finite State Machines and Stock Trading: LO_4_C2, LO_5_D2, and LO_6_C3
(C) Construction of Artificial Intelligence using Finite State Machines: LO_7_D3, LO_8_D4, and LO_9_D5
Most of the learning objectives are positioned on the left side of the table, namely in the
columns Remember and Understand. However, with increasing use of FSTG, the learning objectives grow more complex, approaching other Cognitive Process as well as
Knowledge Dimensions. Consequently, the learning objectives LO_5_D2 and LO_7_D3
do not only deal with understanding and applying content, respectively, but introduce the
students to Metacognitive Knowledge, the most sophisticated of the Dimensions.
The learning objectives of category A that cover the basics of an FSM were already
discussed in [6]. Therefore, the following sections of this paper focus on the learning
objectives of the category B, the existing parallels between FSMs and stock trading.
The logic of the NPC evolves as more learning objectives are approached. It is important to note that the individual evolution stages of the NPC's logic do not represent a working FSM that could be directly implemented in software.
An FSM may either be represented by a State Diagram, a State Table or the Tuple
Representation [6]. This paper relies on State Diagrams.

2.1 Parallels Between FSMs and Stock Trading


As argued above, stock trading is a suitable example to teach about the application of
FSMs by using a real world scenario, and by showing the existing parallels of stock
trading and automation. Thus, in LO_4_C2, LO_5_D2, and LO_6_C3 these parallels are explained in detail in order to convey the use of automation and FSMs in everyday life.
LO_4_C2 The students shall be able to recognize the parallels between the limit setting
in stock trading and determining conditions of Finite State Machines.

Limit setting is a constantly occurring process in stock trading; options, break-even points, or using a limited budget are just some examples. The similarity to the determination of the conditions when creating FSMs shall be recognized and understood by the students. They have been introduced to the required basics of FSMs in the previous learning objectives LO_1_A1, LO_2_B2, and LO_3_C2 [6].
In order to approximate the FSM logic to real life trading, the absolute striking
limits of the two transitions are replaced by relative striking limits in the third evolution
stage in Fig. 2. These limits depend on the price of the share in the last turn. Relative
striking limits are essential to implement any trading strategies that are similar to real
life trading.
Fig. 2. Third evolution stage of the non-player character finite state machine

Usually a future stock price limit to buy or sell is set based on the current share price (e.g. buy share, if increase greater than 10%). One (cyclical) strategy for a trading system would be to invest in shares that have been rising during the observation process, or analogously to reject the ones that were declining during the observation timespan. Alternatively, an inverse and anticyclical strategy would mean to invest in falling stocks, and to sell rising stocks. This does not necessarily mean that any share at
all is bought or sold. In order to give an example, Table 1 presents an example dataset
of the share prices of three companies over the time of two turns.

Table 1. Example dataset consisting of three companies

                               Company A   Company B   Company C
Share price in turn 1 in euro     80          50          30
Share price in turn 2 in euro     85          50          28

In the following example we assume that the first strategy is applied to the dataset of the three companies. Additionally, we assume a striking limit of five percent share price movement for buying as well as for selling shares. That means that shares are bought if the share price has increased by five percent or more compared to the last turn. Similarly, shares are sold if the share price has decreased by five percent or more compared to the last turn.
In the presented dataset, the trading system would invest in company A's shares and sell shares of company C. Again, it is important to note that the system would not take action unless the limit is met: a striking limit of ten percent for buying and selling would mean that no share would be bought or sold. Figure 3 shows the development of an example share including absolute and relative striking limits. It illustrates that it is more sensible to use relative rather than absolute striking limits, and shows once again how the conditions determined for the FSM can directly be used in a business-related progress chart.
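The decisions discussed for Table 1 can be reproduced in a few lines; the sketch below applies the cyclical strategy with a relative striking limit of five percent to the example dataset. The function and variable names are illustrative assumptions, not part of the FSTG implementation.

# Sketch of the cyclical strategy with a relative striking limit, applied to the
# Table 1 dataset. Names and the decision interface are illustrative assumptions.

prices = {  # share prices in euro for turn 1 and turn 2 (Table 1)
    "Company A": (80, 85),
    "Company B": (50, 50),
    "Company C": (30, 28),
}

def decide(last_price, current_price, limit=0.05):
    """Buy if the price rose by at least `limit`, sell if it fell by at least `limit`."""
    change = (current_price - last_price) / last_price
    if change >= limit:
        return "buy"
    if change <= -limit:
        return "sell"
    return "hold"

for company, (turn1, turn2) in prices.items():
    print(company, decide(turn1, turn2))
# -> Company A buy (+6.25%), Company B hold (0%), Company C sell (-6.67%)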


Fig. 3. Example progress chart with absolute and relative striking limits

LO_5_D2 The students shall be able to understand that buying and selling strategies
that are used for trading can be translated into conditions triggering
transitions in Finite State Machines.

The presented examples for the dataset in Table 1 show that any automated trading at least resembles the behavior of an FSM. In fact, some trading software is created using an FSM model due to the easy implementation and the possible direct translation. Based on the input data, the FSM makes decisions. The raw data may be classified and translated into a machine-readable pattern to enable the FSM to process them. This input is then compared to each available condition of the current state of the machine to check for any transitions to be triggered. The FSM that has been constructed by now uses a certain stock price movement to trigger the transition to the next state. By adding more input and more conditions – like the legal form of the company, the country of origin of the company, or long-term share price analysis – this FSM would approximate more and more closely the complex logic used in real stock trading. The presented FSM can thus be seen as a minimalistic and very simple depiction of the trading logic a broker company uses in real life, although it is important to note that it does not represent a fully functional logic.
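A simplified sketch of this translation is shown below: a two-state machine whose transitions are triggered by conditions on the relative price movement. The states, the five percent limits and the overall structure are assumptions for illustration only and do not reproduce the NPC logic of Fig. 2.

# Sketch of translating buying/selling limits into transition conditions of an FSM.
# Two states and the 5% conditions are simplified assumptions, not the NPC logic.

transitions = [
    # (current state, condition on relative price change, next state, action)
    ("not invested", lambda change: change >= 0.05,  "invested",     "buy"),
    ("invested",     lambda change: change <= -0.05, "not invested", "sell"),
]

def step(state, last_price, current_price):
    """Compare the input (price movement) with the conditions of the current state."""
    change = (current_price - last_price) / last_price
    for current, condition, next_state, action in transitions:
        if current == state and condition(change):
            return next_state, action
    return state, "hold"   # no transition triggered

state = "not invested"
for last_price, price in [(80, 85), (85, 86), (86, 80)]:
    state, action = step(state, last_price, price)
    print(price, action, state)
# -> 85 buy invested, 86 hold invested, 80 sell not invested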
LO_6_C3 The students shall be able to apply that the data foundation used for stock
trading can easily be used as an input for a Finite State Machine.

The usual citizen trying to invest their money in some stocks and highly specialized trading companies have one thing in common: they both try to gather as much data as they can get about a company they plan to invest in. Independently of the very different channels and possibilities each of them has to pursue this goal, both
occasional customers and professional stock traders take the collected data as their foundation to act (or not to act). On the one hand, the usual stock trader trades shares every once in a while; on the other hand, modern stock trading is based on high-end server systems which buy and sell stocks at high frequency throughout the day. Since nowadays almost every share price – regardless of whether from the past or the present – is available, the systems always have some data to work with. Data such as share prices are ideally suited for automated decisions because of their availability and easy processing due to the machine-friendly format. Consequently, the Input Alphabet of the logic constructed so far in Fig. 2 consists of the share prices of the current round and the last round. The FSM can make decisions using only these data. The trading systems that are used in real life, however, are supplied with as much data as possible about every company offering shares, in order to increase the chance of success of the decisions. These data are classified with complex algorithms, for example by using automated news engines to classify news and their impact on the future share price [16]. Based on these classified data, the trading systems decide to buy and sell shares automatically.

3 Application

All of the presented learning objectives were implemented into a Serious Game about
FSMs, which is called The Finite State Trading Game (FSTG). In this turn-based game
the user strives to successfully trade shares on the market to make profit in order to win
the game against an evolving FSM represented by the NPC [6]. Table 2 shows the
learning objectives and the corresponding NPC’s evolution stage side-by-side in order
to allow a quick overview.

Table 2. Learning objectives, evolution stage and test statements

Category B
Learning objective (LO) LO_4_C2: The students shall be able to recognize the parallels between the limit setting in stock trading and determining conditions of Finite State Machines.
Evolution stage (ES) ES3: NPC buys and sells multiple shares following economical rules.
Test statement (TS): TS_5_C2

Learning objective (LO) LO_5_D2: The students shall be able to understand that buying and selling strategies that are used for trading can be translated into conditions triggering transitions in Finite State Machines.
Test statement (TS): TS_6_D2

Learning objective (LO) LO_6_C3: The students shall be able to apply that the data foundation used for stock trading can easily be used as an input for a Finite State Machine.
Test statement (TS): TS_7_C3


In order to verify the FSTG approach the software was tested as part of the Pupils’
Academy of Serious Gaming for 11th grade UVS students [17]. The test lesson itself is
fully described in [6].
According to [6] we created a questionnaire to quantify the test results. The
questionnaire relies on a five-point Likert scale [18]: 1 – Strongly Disagree,
2 – Disagree, 3 – Neither Agree nor Disagree, 4 – Agree and 5 – Strongly Agree. The
participants self-assessed their knowledge in a pre-test before the start and a post-test
after the end of the teaching unit.
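As an illustration of how such Likert answers can be aggregated for a pre-test/post-test comparison, the following sketch groups responses into negative (1-2), neutral (3) and positive (4-5) shares. The sample answers are invented and do not correspond to the recorded data.

# Sketch of aggregating five-point Likert answers (1-2 negative, 3 neutral,
# 4-5 positive) for a pre-test/post-test comparison. Sample data are invented.

def shares(answers):
    """Return the percentage of negative, neutral and positive answers."""
    n = len(answers)
    negative = sum(1 for a in answers if a <= 2) / n * 100
    neutral = sum(1 for a in answers if a == 3) / n * 100
    positive = sum(1 for a in answers if a >= 4) / n * 100
    return round(negative, 1), round(neutral, 1), round(positive, 1)

pre_test  = [1, 2, 2, 3, 2, 1, 3, 2, 4, 2]   # invented example answers
post_test = [4, 5, 4, 4, 3, 5, 5, 4, 4, 5]

print("pre-test  (neg, neutral, pos):", shares(pre_test))
print("post-test (neg, neutral, pos):", shares(post_test))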
The classification of the test statements corresponds to the classification of the
learning objectives in Bloom's Taxonomy (see Fig. 1). The statements are briefly discussed for category B; for category A see [6].
TS_5_C2 I am familiar with the concept of automation using software and the most
relevant steps for the realization.

The statements TS_5_C2 to TS_7_C3 all test for general knowledge about
automation and Artificial Intelligence in real world scenarios, like stock trading. While
a foreknowledge in these areas helps the students to play the Serious Game successfully, the statements are primarily designed to show possible improvements of the
students’ knowledge in the post–test. The higher the scores in the post–test, the more
likely the students have understood the explained theory in the Serious Game and the
associations between FSMs, Artificial Intelligence and share trading. Statement
TS_5_C2 asks for the students' general understanding of automation, which should make it easier to grasp the software's learning content regarding automation. In the
Serious Game the automated stock trading scenario helps the students to understand the
learning content.
TS_6_D2 I know usage situations of Finite State Machines in daily life.

This statement focuses on daily life examples of FSMs like traffic lights or automated stock trading, which students may know without having any experience with
FSMs. After using the software, the students should be able to identify such automated
systems.
TS_7_C3 I am familiar with limit setting in stock trading and its possible automation
using software.

This statement tests the students’ conceptual understanding regarding automation.


Especially in the post-test, a high score in this statement could prove that the students
can apply the direct translation of stock trading limits into conditions of automated
systems.

4 Results

The students filled in the same questionnaire twice, on separate documents. This
means that they assessed their knowledge twice: in a pre-test as well as a post-test of
FSTG [6]. This paper focuses on the test statements TS_5_C2 to TS_7_C3.


4.1 Evaluation of the Pre-test


The answers to the statements of category B in the pre-test show that the students are not familiar with automation in daily life. When asked in test statement TS_5_C2 whether they know the concept of automation and the relevant steps for a software realization, none of the students chose the option 5 – Strongly Agree, while 50% of the students chose a negative answer. Similar results can be found for test statement TS_7_C3, which focuses on the automation of stock trading mechanisms. The lowest results of the category B test statements were recorded for test statement TS_6_D2. This statement asks for knowledge about usage situations of FSMs. The low score of 95% negative answers shows that the students are not even aware that they are constantly in contact with such automata. Additionally, some of the negative answers
can be attributed to the use of the term “Finite State Machine”, since many of the
students were new to this concept. This was tested in category A of the test sheets [6].

4.2 Comparison of the Pre-test and the Post-test


In order to allow a better overview of the two test rounds, the results of the pre-test and the post-test need to be compared. Figure 4 shows the five (dis)agreement levels of the Likert scale as well as the answers to the test statements of category B. Just as for all other test statements, the results show major improvements. The biggest improvement among these test statements is reflected in test statement TS_6_D2, “I know usage situations of Finite State Machines in daily life”. Prior to the teaching unit, none of the students rated this test statement with 5 – Strongly Agree or 4 – Agree. After the teaching unit, the students reported a considerable improvement: 45% selected 5 – Strongly Agree and another 45% selected 4 – Agree. This improvement in the self-assessment can be attributed to the use of the stock trading scenario as a real-life scenario in the software.

Fig. 4. Results of the answers to the test statements of category B in the pre-test and the post-test


4.3 Findings
The results of the post-test seem to verify the success of our FSTG approach of learning about FSMs by playing a Serious Game. The evaluation statements with little or no progress show which parts of the software need some rework.
As a qualitative result, FSTG succeeded in keeping the students motivated during the lesson without further incentives. On the quantitative side, major progress on every statement can be shown [6]. Only 0.05% of all answers in the post-test were negative. This is remarkable when compared to 73.50% negative answers overall in the pre-test. Hence, the share of positive answers shifted from 16.50% in the pre-test to 89.00% in the post-test. However, a shift of the answered options towards the positive side had to be expected due to the fact that most statements are directly addressed in the software. Nevertheless, this test is a success, since big progress was recorded in all test statements. The positive results in category B of the post-test prove that the students understood the parallels between FSMs and stock trading. Additionally, the shift of the results shows that the setting of FSTG in a trading scenario helps the students to grasp the learning content – the application and the concept of automation in a real-life environment, like the trading market.

5 Conclusion

Upper vocational schools (UVS) represent an essential part of the school system in Bavaria, Germany. In this school form, the curriculum of grade 12 contains the modules capstone project as well as systems and processes, alongside the complementary modules basics of modern programming languages, programming styles and data structures, and object-oriented programming. The use of the Finite State Machine (FSM) concept makes it possible to introduce these modules due to the easy scalability and the widespread application fields of FSMs. In combination with the various advantages of Self-Regulated Personalized Learning, a serious game in the form of an IT-based learning solution appears to be a promising way of teaching about FSMs.
As a consequence, the goal of this research was to design, create, and present a
Serious Game to teach the topic of FSMs to UVS students in Bavaria by using stock
trading as an example.
In order to do so, nine learning objectives regarding the topic of FSMs were created, classified and verified using the Taxonomy of Bloom [13]. The learning objectives were split into three categories: the basics of FSMs, the parallels between FSMs and stock trading, and the application of FSMs to construct Artificial Intelligence. This research focused on the most important parallels between FSMs and the mechanisms of a stock trading system. In the next step, the learning objectives were integrated into a Serious Game called “The Finite State Trading Game” (FSTG). In this turn-based game the user strives to trade shares efficiently in order to beat the Non-Player Character (NPC) opponent across several levels with varying difficulty. This computer-controlled opponent uses FSM logic to make its decisions. During gameplay, important facts about the attributes and the behavior of the NPC are explained to the user using a graphical representation of the NPC's FSM.
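Such an opponent can be expressed very compactly as a finite state machine. The following sketch only illustrates the general pattern and is not the actual FSTG implementation; the states, price limits and transition rules are invented for this example.

```csharp
using System;

// Illustrative only: a minimal finite state machine for a trading NPC.
// States, limits and transitions are hypothetical, not the actual FSTG rules.
enum NpcState { Observe, Buy, Sell }

class TradingNpc
{
    const double BuyLimit  = 95.0;  // hypothetical: buy when the price drops to this level
    const double SellLimit = 105.0; // hypothetical: sell when the price rises to this level

    public NpcState State { get; private set; } = NpcState.Observe;

    // Called once per turn; the transition depends only on the current price,
    // mirroring how a stock trading limit becomes a condition of an automaton.
    public void NextTurn(double price)
    {
        switch (State)
        {
            case NpcState.Observe:
                if (price <= BuyLimit)       State = NpcState.Buy;
                else if (price >= SellLimit) State = NpcState.Sell;
                break;
            case NpcState.Buy:   // the buy order is placed, then observe again
            case NpcState.Sell:  // the sell order is placed, then observe again
                State = NpcState.Observe;
                break;
        }
    }
}
```

The point of the sketch is simply that each trading limit appears literally as a transition condition of the automaton, which is exactly the parallel addressed by the category B learning objectives.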


The developed Serious Game was tested in a lesson in the context of the “Pupils’ Academy” [19] with students of the local UVS “Staatliche Fachober- und Berufsoberschule Technik München” at the Technical University of Munich. Before and after using the Serious Game, the students self-assessed their knowledge about FSMs in order to record their progress on the topic. For the creation of the test statements, the five-point scale developed by Likert [18] and, again, the Taxonomy of Bloom were used.
The evaluation of the test sheets proved the test on the target group to be a success. In the pre-test, 73.5% of the answers were rated negative. In the post-test, by contrast, only 0.05% of the given answers were negative. Comparing the two test rounds, a significant shift of the answers to the positive side was recorded for each of the test statements [6].
Since the first test setting of the software was successful, the developed Serious Game seems to be suitable for widespread use in classrooms. The learning objectives appear to fit both the requirements for teaching this concept and the target group. Due to the similarity in teaching style between the German vocational secondary school and the German Gymnasium, the next logical step is to test the game in Gymnasium classes as well. While it is currently more a supplement to existing teaching, the software could work as a stand-alone teaching tool in the future by adding more content and functionality. Additionally, former research [17] has shown a positive impact of playing serious games on the results in the final exam, the German Abitur. Thus, we want to focus our research on this aspect.
Allowing users to construct their own FSM in the program illustrates a possible vertical expansion of FSTG. After successfully playing the levels of the game, the users could get the option to create an FSM themselves in order to play against the created logic. The students could thus instantly apply and deepen their newly learned knowledge, and this would also provide another method to verify the efficacy of the software.
Based on the results of the lesson as part of the Pupils’ Academy, FSTG can be rated as a success and has shown once more that serious games are a valuable didactic approach that can be used to modernize present ways of teaching.

References
1. Entertainment Software Association (ESA): 2011 essential facts about the computer and video
game industry. http://www.isfe.eu/sites/isfe.eu/files/attachments/esa_ef_2011.pdf
2. Black, P.: Finite state machine. http://xlinux.nist.gov/dads/HTML/finiteStateMachine.html
3. Bayerisches Staatsministerium für Unterricht und Kultus, “Lehrpläne für die Fachoberschule
und Berufsoberschule Ausbildungsrichtungen Technik, Agrarwirtschaft, Gestaltung Unter-
richtsfach: Technologie/Informatik Ausbildungsrichtung Wirtschaft, Verwaltung und
Rechtspflege Unterrichtsfach: Technologie Jahrgangsstufen 11 bis 13”, ed (2006)
4. Technical University of Munich: Bachelor’s Program Informatics: Curriculum and Courses
(2012). http://www.in.tum.de/en/current-students/bachelors-programs/informatics/curriculum-
and-courses.html


5. Nipkow, T.: Introduction to Theory of Computation (2011). https://campus.tum.de/tumonline/WBMODHB.wbShowMHBReadOnly?pKnotenNr=454045&pOrgNr=14189
6. Utesch, M., Hauer, A., Heininger, R., Krcmar, H.: An IT-based learning approach about
finite state machines using the example of stock trading. In: Interactive Collaborative
Learning (ICL) 2016, Belfast, UK (2016)
7. Adamczyk, P.: The anthology of the finite state machine design patterns. In: The 10th
Conference on Pattern Languages of Programs (2003)
8. Docebo: E-learning Market Trends and Forecast 2014–2016 Report. https://www.docebo.com/
landing/contactform/elearning-market-trends-and-forecast-2014-2016-docebo-report.pdf
9. Zimmerman, B.J.: Becoming a self-regulated learner: an overview. Theor. Pract. 41, 64–70
(2002)
10. Utesch, M., Heininger, R., Krcmar, H.: Strengthening study skills by using ERPsim as a new
tool within the pupils’ academy of serious gaming. In: 2016 IEEE Global Engineering
Education Conference (EDUCON), pp. 592–601 (2016)
11. Andergassen, M., Ernst, G., Guerra, V., Mödritscher, F., Moser, M., Neumann, G., et al.:
The evolution of e-learning platforms from content to activity based learning: the case of
Learn@WU. In: 2015 International Conference on Interactive Collaborative Learning (ICL),
pp. 779–784 (2015)
12. Bloom, B.S.: Taxonomy of Educational Objectives, vol. 1: Cognitive Domain. McKay,
New York (1956)
13. Krathwohl, D.R.: A revision of Bloom’s taxonomy: an overview. Theor. Pract. 41, 212–218
(2002)
14. Yuan, L., Qu, G.: Information hiding in finite state machine. In: Fridrich, J. (ed.) IH 2004.
LNCS, vol. 3200, pp. 340–354. Springer, Heidelberg (2004). doi:10.1007/978-3-540-30114-
1_24
15. Anderson, L.W., Krathwohl, D.R.: A taxonomy for learning, teaching, and assessing: a
revision of Bloom’s taxonomy of educational objectives. ed: Allyn and Bacon (2001)
16. Groß-Klußmann, A., Hautsch, N.: When machines read the news: using automated text
analytics to quantify high frequency news-implied market reactions. J. Empirical Finance 18,
321–340 (2011)
17. Utesch, M., Heininger, R., Krcmar, H.: The pupils’ academy of serious gaming:
strengthening study skills with ERPsim. In: 2016 13th International Conference on Remote
Engineering and Virtual Instrumentation (REV), pp. 93–102 (2016)
18. Likert, R.: A technique for the measurement of attitudes. Arch. Psychol. 22, 5–55 (1932)
19. Utesch, M.C.: Five years of the pupils’ academy of serious gaming: enhancing the ability to
study. In: 2015 IEEE Global Engineering Education Conference (EDUCON), pp. 189–198
(2015)

A Serious Game for Learning Portuguese Sign
Language - “iLearnPSL”

Marcus Torres2, Vítor Carvalho1,2(&), and Filomena Soares1

1 R&D ALGORITMI Centre, University of Minho, Guimaraes, Portugal
2 IPCA – Polytechnic Institute of Cávado and Ave, Barcelos, Portugal
vcarvalho@ipca.pt

Abstract. Several thousands of people in Portugal use Portuguese sign language (PSL). Children belonging to this community have difficulties with communication and learning processes. There are some applications that focus on teaching/learning PSL but with limited interaction with the user, such as presenting images of a hand gesture or an avatar that the user must mimic in the respective PSL hand gesture. Some other applications are more advanced, using sensors to detect the movements and gestures performed by the user. However, these applications do not implement an automatic interaction with the user of PSL. Following this trend, the main idea of this project is to develop a solution where deaf children can learn PSL in an interactive, dynamic and fun way. Therefore, this paper describes the first insights into the development of an interactive and didactic virtual game tool to learn Portuguese Sign Language, in particular the numbers from 0 to 9. The alphabet and colors will also be included in the future. The target group considers deaf children in the first cycle of education (primary Portuguese school). The developed game promotes automatic interaction with the user through the Leap Motion controller. The system captures the hand and finger gestures performed by the user and the graphical interface returns adequate feedback. The preliminary results obtained show a good level of user experience with therapists.

Keywords: Leap motion · Portuguese sign language · Deaf people · Sign recognition · Serious game

1 Introduction

Sign language is a language which uses a system of hand articulations to mediate a conversation between people with hearing or speech disabilities. Each country has its own sign language and some have more than one, so there is no standard language in this matter.
In Portugal, it is estimated that there are about 120,000 individuals with some degree of hearing loss (including the elderly who are gradually losing their hearing) and about 30,000 with severe and profound deafness [1, 2]. Therefore, Portuguese Sign Language (PSL) was created and developed as a form of communication. This community is not only made up of people with hearing problems but also of family members, professionals and friends who interact with them daily.


There are few resources (such as dedicated dictionaries, games and interactive
stories) available for this community, and there is still little support in this matter. Thus,
there is a need to respond to the special educational requirements of deaf children, who
present difficulties in communication, learning, and social interaction.
In 2013, the Leap Motion Controller was released. This device facilitates computer-supported hand recognition. Moreover, the data gathered by the device is relatively accurate and can be used in several classification methods.
The main motivation of this project is the lack of interactive applications for learning PSL. Following this trend, a serious game was designed to overcome this gap. The first goal of the game is to teach users the numbers from 0 to 9. Other activities related to the curriculum followed in the first cycle of education will be developed later.
This paper is organized in 5 sections. Section 2, “Related Work”, describes some examples of tools, applications and projects being developed; Sect. 3, “Developed Game”, presents the interface, methodology and architecture of the game; Sect. 4, “Preliminary Validation”, presents the first analysis of the game with therapists; and finally, Sect. 5 presents the final remarks.

2 Related Work

This section introduces the Leap Motion Controller with a brief explanation of its characteristics and Application Programming Interface (API). Some tools, applications and projects under development as auxiliary serious games for sign languages are also described. Finally, some sign language recognition systems are presented.

2.1 Leap Motion Controller


The Leap Motion [3, 4] uses two monochrome infra-red (IR) cameras and three IR LEDs, and it observes an approximately hemispherical area to a distance of about 1 m. The device generates almost 300 frames per second of reflected data, which is then sent through a USB cable to the host computer, making it a suitable sensor to accurately identify hand and finger gestures [5].
The Leap Motion system employs a right-handed Cartesian coordinate system whose origin is centered at the top of the Leap Motion Controller. Figure 1 presents the device and its three-dimensional axis system.

Fig. 1. Leap motion controller and its axis system [6].


The Leap Motion API recognizes and tracks hands, fingers and finger-like tools. The device operates in close proximity with high precision and tracking frame rate, and reports discrete positions, gestures, and motion. Each frame of tracking data contains the measured positions and other information about each entity detected in that snapshot. The hand model provides information about the identity (left or right), position (coordinates in the working-area space), palm orientation (directional vector) and other characteristics of a detected hand, the arm to which the hand is attached, and a list of the fingers associated with the hand [6], Fig. 2.
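As a rough illustration of how this tracking data is read through the C# API, the short sketch below enumerates the hands of the current frame and prints the properties described above. It assumes the Leap Motion C# SDK v2 documented in [6]; the program structure itself is only an example and is not part of iLearnPSL.

```csharp
using Leap;   // Leap Motion C# SDK v2 (assumed, see [6])

class HandDump
{
    static void Main()
    {
        var controller = new Controller();      // connects to the Leap Motion service
        Frame frame = controller.Frame();       // most recent tracking snapshot

        foreach (Hand hand in frame.Hands)
        {
            // Identity, position and orientation of the detected hand.
            System.Console.WriteLine(hand.IsLeft ? "Left hand" : "Right hand");
            System.Console.WriteLine("Palm position:  " + hand.PalmPosition);
            System.Console.WriteLine("Palm normal:    " + hand.PalmNormal);
            System.Console.WriteLine("Palm direction: " + hand.Direction);

            // Each finger exposes bones ordered from base to tip.
            foreach (Finger finger in hand.Fingers)
            {
                Bone proximal = finger.Bone(Bone.BoneType.TYPE_PROXIMAL);
                System.Console.WriteLine("Extended: " + finger.IsExtended +
                                         ", proximal bone direction: " + proximal.Direction);
            }
        }
    }
}
```

In a real application the frame would be polled repeatedly (or delivered through a listener), since a single call right after start-up may still return an empty frame.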

Fig. 2. Leap motions hand model “PalmNormal” and direction vectors define the orientation of
the hand [6].

A finger object provides a bone object describing the position (coordinates in the
working area space) and orientation (directional vector) of each anatomical finger bone.
All fingers contain bones ordered from base to tip [6]. Figure 3 presents the finger bone model of a hand.

Fig. 3. Leap motion fingers bone model [6].

The main weakness of the Leap Motion is its lack of accuracy when something obstructs the view between the device and the hand or when some fingers obstruct other fingers [4].

2.2 Sign Language Recognition Approaches


Several researchers developed automatic sign language recognition with the application of a custom sensor glove and a Kinect [7, 8]; this approach had good results, with classification accuracies of 84.1% and 74.8%, respectively. However, it has some limitations, namely that the user must be standing and/or has to put on a glove.
Others conducted their research using a Leap Motion Controller with different approaches. One research work focused on the development of a decision tree algorithm, where the application decides whether a certain hand gesture is correct through elimination and a very small cross-over probability, with an accuracy of 82.7% [9]. Others used a Support Vector Machine (SVM), a supervised learning model that uses training observations to recognize patterns and perform classification or regression analyses, to recognize the gesture. Each of these was developed for its own sign language, with results ranging between 85% and 99% [10–13]. Also, a project using two Leap Motion controllers for an Arabic sign language recognition study was observed, with a classification accuracy of 97.1% [14].

2.3 Auxiliary Applications for Learning Sign Language


Some alternative forms of learning sign language are available, such as storytelling with translation and dictionaries that illustrate the meaning of a word and the corresponding sign [15, 16]. There are many other gaming applications, more or less animated, that allow the learning of sign language [17–20]. However, their teaching mechanics rely on images, videos or animations, where the user merely mimics the gesture presented. On the other hand, there are good applications for smartphones allowing user interaction, where a sensor detects the hand gesture; “RogerVoice” and “MotionSavvy” are two examples [21, 22]. Nevertheless, these are just communication applications and not interactive learning games.
At the official Leap Motion app store website [23], several applications dedicated to this sensor can be found, in various categories such as games, creative tools, educational, experimental and computer controls, among others. Moreover, there is a 3D virtual reality section where the Leap Motion is complemented with the Oculus Rift and HTC Vive virtual reality headsets. As an example, “Fingerspeller” is an experimental application [24] that recognizes American Sign Language hand gestures and compares them against displayed highlighted letters. “ASL Digits” is a mini tool for learning American Sign Language [25] that can recognize and demonstrate number gestures from 0 to 9 in real time.
For PSL, there are some alternative ways of learning [20, 26]. The most used today is an online platform called “Virtual School LGP”, where PSL gestures are taught through educational videos [26]. It teaches the meaning of words and certain possible conversations between two persons. However, it is focused mainly on adults and does not allow user interaction. There are also others oriented towards children, which are presented next.
In 2015, a group from the University of Minho developed a serious game for PSL for helping disabled/deaf children, based on a story and the interaction with a Kinect sensor [27]. The same research group also developed a sign recognition system for learning PSL with the Leap Motion and a gesture and pose recognition framework called LeapTrainer [28], based on the hangman game [29]; the game detects the PSL alphabet letter gestures performed by the player's hand.


Researchers from the Porto Engineering Institute (ISEP, Portuguese acronym), through the research group “Graphics, Interactions and Learning Technology” (GILT), are developing a project named “Virtual Sign” to support communication with deaf and hearing-impaired people [30]. This is a two-way real-time translator that uses the potential of engineering to teach PSL. It uses a glove with sensors, a Kinect and an educational game. This project is currently in the implementation phase.
Some applications can be found in the literature, but on the one hand they are not dedicated to teaching PSL, and on the other hand they are not interactive. Summing up, despite the variety of interesting applications, none contains the idea proposed in this paper. So, as stated previously, considering the gap observed in the available interactive applications for learning PSL, especially those dedicated to children, the research team has started the development of a serious game, which is described in the next section.

3 Developed Game

This section describes the hand gesture recognition model and the developed game
environment.

3.1 Hand Gesture Recognition


As mentioned earlier, one of the most important features of this game is the automatic interaction with children through the exercise of performing sign language hand gestures. The Leap Motion sensor is used in this work to address this issue. With the data collected it is possible to create variables that distinguish between hand gestures. Through careful analysis and multiple tests, the most relevant aspects of each gesture are extracted by assigning values to these variables. In this way, each hand model (the sign language gestures from 0 to 9) was created. Figure 4 shows a representation of the PSL numbers.

Fig. 4. Number gestures of PSL. On the left, the representation of the hand gesture [33]. On the right, the same hand representation displayed in the Unity interface through the Leap Motion.


In the game, when a hand gesture is performed in real time, these variables are compared, with a studied margin of error, to check whether they match the respective hand gesture model. This margin was determined by multiple tests and analyses of the variables' results; the margin limits were defined by what we considered appropriate for the respective hand gesture. The variables can contain the following information:
• Finger extension: the Unity.Leap Dynamic Link Library (DLL) has a function that can determine whether a finger is extended or not;
• Bone orientation: this is determined by the angle of a certain bone vector of a finger with the palm normal vector and palm direction vector. This is also a function from the Unity DLL that determines angles between three-dimensional vectors. Example: if the game is waiting for a finger to be extended, the bone orientation angle must be greater than 75° to be considered approved;
• Finger contact: this determines whether there is contact between two particular bones of different fingers. It is considered true if the (virtual) distance between these two bones is less than 20 mm.
The user's hand can float around in the Leap Motion detection space because all of these variables take the palm of the hand as a reference. As an example, consider the detection of the hand gesture associated with the number “2”: the hand model of number “2” is obtained when the thumb and pinky fingers are extended and the other fingers are not. So, in the game scenario, if the player is asked to represent the number “2” hand gesture, the game algorithm checks, while the player executes that gesture, whether the fingers extended by the player's hand in real time match the respective hand gesture model, in this case the number “2” gesture. Table 1 shows the variables considered to detect the hand gesture.

Table 1. Hand Model Variables

Hand model variable      | Description
Thumb finger extension   | See if thumb finger is extended
Index finger extension   | See if index finger is extended
Middle finger extension  | See if middle finger is extended
Ring finger extension    | See if ring finger is extended
Pinky finger extension   | See if pinky finger is extended
Thumb finger orientation | See if thumb finger is oriented upwards
Pinky finger orientation | See if pinky finger is oriented leftwards

Table 2 shows the functions that classify the hand model variables as approved, for the same case, i.e. the classifier for the number “2” gesture. These results can be given as a margin of values or as a Boolean value.


Table 2. Hand model variable classifier for gesture number “2” in PSL.

Hand model variable      | Function                                                                     | Margin/IO
Thumb finger extension   | Finger extended                                                              | True
Index finger extension   | Finger not extended                                                          | True
Middle finger extension  | Finger not extended                                                          | True
Ring finger extension    | Finger not extended                                                          | True
Pinky finger extension   | Finger extended                                                              | True
Thumb finger orientation | Angle between thumb's Proximal bone vector and Palm normal vector           | <110°, >85°
                         | Angle between thumb's Proximal bone vector and Palm directional vector      | <60°, >40°
                         | Angle between thumb's Intermediate bone vector and Palm normal vector       | <110°, >85°
                         | Angle between thumb's Intermediate bone vector and Palm directional vector  | <60°, >30°
Pinky finger orientation | Angle between pinky's Proximal bone vector and Palm normal vector           | >75°
                         | Angle between pinky's Intermediate bone vector and Palm normal vector       | >75°

All finger orientation classifications are determined by the angles of the finger's Proximal and Intermediate bone direction vectors with the palm's normal vector. In the case of the thumb, the palm's directional vector is also used, because the thumb can bend both forwards and sideways. This methodology was chosen to improve accuracy and to reduce the conflict between gestures with very similar patterns.
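Put together, classifying a single gesture reduces to a handful of Boolean and angle checks against the margins of Table 2. The sketch below is only a schematic rendering of that logic; the input values (extension flags and bone-to-palm angles, already converted to degrees) are assumed to have been computed from the Leap Motion hand model beforehand, and the function name is invented for this example.

```csharp
// Schematic check for the PSL number "2" gesture against the margins of Table 2.
// All angle parameters are assumed to be in degrees, pre-computed from the
// corresponding bone direction vectors and the palm normal/direction vectors.
static bool IsPslNumberTwo(
    bool thumbExtended, bool indexExtended, bool middleExtended,
    bool ringExtended, bool pinkyExtended,
    double thumbProxToPalmNormal, double thumbProxToPalmDirection,
    double thumbInterToPalmNormal, double thumbInterToPalmDirection,
    double pinkyProxToPalmNormal, double pinkyInterToPalmNormal)
{
    // Finger extension pattern: only the thumb and pinky are extended.
    bool extensionOk = thumbExtended && pinkyExtended &&
                       !indexExtended && !middleExtended && !ringExtended;

    // Thumb orientation within the margins of Table 2.
    bool thumbOk =
        thumbProxToPalmNormal     > 85 && thumbProxToPalmNormal     < 110 &&
        thumbProxToPalmDirection  > 40 && thumbProxToPalmDirection  < 60  &&
        thumbInterToPalmNormal    > 85 && thumbInterToPalmNormal    < 110 &&
        thumbInterToPalmDirection > 30 && thumbInterToPalmDirection < 60;

    // Pinky orientation within the margins of Table 2.
    bool pinkyOk = pinkyProxToPalmNormal > 75 && pinkyInterToPalmNormal > 75;

    return extensionOk && thumbOk && pinkyOk;
}
```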
As studied in the research papers mentioned above [10–13], even though the hand gesture models can recognize, with some precision, the gestures executed by the player, some gestures are difficult for the Leap Motion to capture if the hand is in its regular position in space. So, for some of these gestures the player needs to orient the hand towards the Leap Motion so that it can capture all the fingers clearly. This occurs because the Leap Motion cannot determine the fingers with precision if they are not all clearly visible, giving false data in that case.

3.2 Game Environment


The game iLearnPSL was developed using the Unity software, a powerful platform to create 2D or 3D games [31, 32]. This game engine was chosen because the Leap Motion community has created a DLL to interact with the software, allowing friendly game development. It takes the hand tracking data from the Leap Motion device and allows the user's hand to interact with a virtual scene. Scripts in the Leap.Unity namespace interact directly with “GameObjects” and other UnityEngine components, where they take tracking data and put it to use in Unity.
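A rough sketch of how such a script might be structured is shown below. It is purely illustrative: the class name and the challenge hook are invented, and it talks to the Leap.Controller of the C# SDK directly instead of using the project's actual Leap.Unity integration scripts.

```csharp
using UnityEngine;
using Leap;   // Leap Motion C# SDK (assumed, see [6])

// Illustrative MonoBehaviour, not the actual iLearnPSL code: every frame it reads
// the latest tracking data and would hand it to the condition-based classifier
// of Sect. 3.1 for the gesture currently requested by the challenge.
public class GestureChallengeChecker : MonoBehaviour
{
    private Controller controller;

    void Start()
    {
        controller = new Controller();   // connects to the Leap Motion service
    }

    void Update()
    {
        Frame frame = controller.Frame();      // latest tracking snapshot
        if (frame.Hands.Count == 0) return;    // no hand over the sensor

        Hand hand = frame.Hands[0];
        Debug.Log(hand.IsLeft ? "Left hand visible" : "Right hand visible");

        // Hypothetical hook: pass the hand to the gesture check of Sect. 3.1.
        // if (CurrentChallenge.Matches(hand)) { /* mark the challenge as solved */ }
    }
}
```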
In this case, it is a 3D game to teach users the numbers from 0 to 9 in PSL. The player passes multiple levels by executing numeric gestures in PSL. The different challenges involve executing a certain number or giving the result of various types of calculations such as addition and subtraction, among other challenges. Figure 5 presents the overall game architecture.

Fig. 5. Predicted schematic of the game architecture.


As the game is aimed at children, the interface was developed using an environment suitable for them in a user-friendly way. It will be employed in a school environment, so the usage of bright colors and animated objects and images is adequate. The playing mechanics are those of a first-person game where the player can walk around, interact with objects and enter several rooms using the mouse and/or keyboard. These classrooms are where the challenges take place.
The level management is based on unlocking the next classroom, where the player can find new challenges with different gestures to accomplish and different/harder math problems.
In the challenge mechanism, the strategy is to ask for one single gesture at a time, to avoid conflicts in the game algorithm when detecting multiple hand gestures, so that there is no cross-over, as well as to allow the player to concentrate on only one gesture. Figure 6 presents the scheme of a game challenge.

Fig. 6. Scheme of the game challenge.


In Fig. 7, one can see the game interface while the player is trying to execute a challenge in which he/she must make the PSL number “2”, the solution of adding one apple to another apple.

Fig. 7. On the left: example of the game interface of a challenge asking the result of the
equation. On the right: game interface of the challenge succeeded.

As observed in Fig. 7, while the user is trying to execute the hand gesture, a virtual simulated hand appears in the game environment. This feature is very helpful because it provides a feedback mechanism: the hand can be monitored, and if a certain gesture is being performed incorrectly, the virtual hand indicates which finger is not being executed correctly.
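A minimal, purely illustrative way to derive such per-finger feedback from the same per-finger checks used for classification could look as follows (the method name and the finger ordering are assumptions, not the actual iLearnPSL code):

```csharp
using System.Collections.Generic;

// Illustrative only: compare the expected extension pattern of the target hand
// model with what is currently measured, and report which fingers to highlight.
// Finger order assumed: thumb, index, middle, ring, pinky.
static List<int> MismatchedFingers(bool[] expectedExtended, bool[] actualExtended)
{
    var wrong = new List<int>();
    for (int i = 0; i < expectedExtended.Length; i++)
        if (expectedExtended[i] != actualExtended[i])
            wrong.Add(i);   // index of a finger the virtual hand should mark as incorrect
    return wrong;
}
```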
The game has a help system that includes a list of all gestures, so the player can see and practice them in the same way as in the game challenges. It also has a score log where the player can track his/her progress throughout the game. The score mechanism is based on challenge accomplishment, where points are earned by the player.

4 Preliminary Validation

This project was submitted to a preliminary validation of the user interface and of the activities performed, by consulting a sign language interpreter as well as therapists and assistants for special-needs and deaf children.
From the opinions collected, the feedback was very positive and the game was well accepted. Overall, they described it as an interesting, autonomous and helpful tool to learn PSL, and also a good option for typically developing children who are interested in learning PSL. It could have a good impact by allowing deaf children to be more included in the community [34].
Some aspects were pointed out for future development, such as the game challenge strategy. The game must be very well explained so that the children and therapists can have a good understanding of what the challenges are about (which hand gesture must be accomplished). Also, the difficulty and content of the challenges should take into account the age and school grade of the children.
Deaf children do not have much support at school or at home. They merely use conversation and some simple images in their learning of PSL and do not use any kind of auxiliary application. Also, the majority come from hearing parents, whose lack of knowledge of PSL aggravates this situation, so the children find little or even no help at home. Moreover, the experts believe that this project could improve the children's communication and social interaction.


5 Final Remarks

The goal of this research is the development of a serious game (using the Unity software) for gesture recognition in Portuguese Sign Language, focused on assisting deaf and disabled children using the Leap Motion sensor. It is a multi-level 3D game (iLearnPSL) where the player must overcome challenges within the game focused on learning the numbers from 0 to 9.
The sign language recognition is a condition-based system, where each hand gesture has multiple relevant characteristics (variables) that must match certain values within a studied error margin.
Despite the potential of the Leap Motion, which makes it an adequate device to help solve this problem, it has some difficulties in detecting all the fingers of a hand in some hand gestures. To work around this, the player must move his/her hand from the regular position and face it towards the device while executing the same gesture.
Nowadays, there are few interactive learning applications or games for Portuguese Sign Language, so this project is a step forward in this matter. The advantage of the proposed application is that children gain higher integration, interactivity and dynamism from the game, allowing them to learn Portuguese Sign Language through gestures. Moreover, it includes a feedback system for error detection in the execution of a certain gesture, ensuring that the user will not be led into error by a poorly executed gesture. These reasons make this a more effective and automatic learning game.
As future work, it is expected to develop more robust hand gesture models to increase the game accuracy, as well as to extend the content of the game to other types of challenges such as the alphabet and colors. Also, the goal is to develop a hand recognition system using machine learning, where Support Vector Machines (SVM) are a possibility.
To improve its validation and credibility, real-world experiments with deaf children will also be considered.

Acknowledgments. This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT – “Fundação para a Ciência e Tecnologia” within the Project Scope: UID/CEC/00319/2013.

References
1. Carvalho, C.A.F.: A narrativa em crianças surdas: papel da Língua Gestual Portuguesa
(2013). (in Portuguese)
2. Informação - Comunidade. http://www.apsurdos.org.pt/index.php?option=com_content&
view=article&id=43&Itemid=57. Accessed 7 July 2015. (in Portuguese)
3. Weichert, F., Bachmann, D., Rudak, B., Fisseler, D.: Analysis of the accuracy and
robustness of the leap motion controller. Sensors (Basel) 13(5), 6380–6393 (2013)
4. Potter, L.E., Araullo, J., Carter, L.: The leap motion controller. In: Proceedings of the 25th
Australian Computer-Human Interaction Conference on Augmentation, Application, Inno-
vation, Collaboration, OzCHI 2013, pp. 175–178 (2013)


5. Guna, J., Jakus, G., Pogačnik, M., Tomažič, S., Sodnik, J.: An analysis of the precision and
reliability of the leap motion sensor and its suitability for static and dynamic tracking.
Sensors 14(2), 3702–3720 (2014)
6. API Overview—Leap Motion C# SDK v2.3 documentation. https://developer.leapmotion.
com/documentation/csharp/devguide/Leap_Overview.html#hands. Accessed 4 Apr 2016
7. Sun, C., Zhang, T., Xu, C.: Latent support vector machine modeling for sign language
recognition with kinect. ACM Trans. Intell. Syst. Technol. 6(2), 1–20 (2015)
8. Zafrulla, Z., Brashear, H., Starner, T., Hamilton, H., Presti, P.: American sign language
recognition with the kinect. In: Proceedings of the 13th International Conference on
Multimodal Interfaces - ICMI 2011, p. 279 (2011)
9. Funasaka, M., Ishikawa, Y., Takata, M., Joe, K.: Sign language recognition using leap
motion controller. In: International Conference on Parallel and Distributed Processing
Techniques and Applications, PDPTA 2015, pp. 263–269 (2015)
10. Khelil, B., Amiri, H.: Hand gesture recognition using leap motion controller for recognition
of arabic sign language. In: 3rd International Conference on Automation, Control,
Engineering and Computer Science, ACECS 2016, Proceedings of Engineering and
Technology (PET), pp. 233–238 (2016)
11. Quesada, L., López, G., Guerrero, L.A.: Sign Language Recognition Using Leap Motion,
pp. 277–288. Springer, Heidelberg (2015)
12. Chuan, C.-H., Regina, E., Guardino, C.: American Sign Language Recognition Using Leap
Motion Sensor. In: 13th International Conference on Machine Learning Application,
pp. 541–544 (2014)
13. Simos, M., Nikolaidis, N.: Greek sign language alphabet recognition using the leap motion
device. In: Proceedings of the 9th Hellenic Conference on Artificial Intelligence - SETN
2016, pp. 1–4 (2016)
14. Mohandes, M., Aliyu, S., Deriche, M.: Prototype Arabic sign language recognition using
multi-sensor data fusion of two leap motion controllers. In: 2015 IEEE 12th International
Multi-conference on Systems, Signals and Devices, SSD15, pp. 1–6 (2015)
15. LIVPSIC - Livraria de Psicologia e Ciências da Educação. http://www.livpsic.com/v4/
detalhe01.php?id=1340. Accessed 8 July 2015. (in Portuguese)
16. ASP - Associação de Surdos do Porto. http://www.asurdosporto.org.pt/artigo.asp?idartigo=
1250. Accessed 8 July 2015. (in Portuguese)
17. Four Online Kids’ Games to Learn Sign Language. http://www.brighthubeducation.com/
special-ed-hearing-impairments/2910-learning-asl-with-internet-browser-games/. Accessed
8 July 2015
18. Sign 4 Me for iPad - A Signed English Translator para iPhone, iPod touch e iPad na App
Store no iTunes. https://itunes.apple.com/pt/app/sign-4-me-for-ipad-signed/id383462870?
mt=8. Accessed 8 July 2015
19. ASL American Sign Language – Aplicações Android no Google Play. https://play.google.
com/store/apps/details?id=com.teachersparadise.aslamericansignlanguage. Accessed 8 July
2015
20. Wix.com culturas_surdos created by leticia_nadia_lgp based on kindergarten. http://leticia-
nadia-lgp.wix.com/culturas_surdos#!page-8. Accessed 16 Apr 2016
21. RogerVoice, An Android App That Helps The Deaf Have A Conversation On The Phone -
Forbes. http://www.forbes.com/sites/federicoguerrini/2014/09/26/tech-that-matters-rogervoice-
will-allow-the-deaf-finally-have-a-conversation-on-the-phone/. Accessed 8 July 2015
22. Tech Tackles Sign Language – MotionSavvy - Forbes. http://www.forbes.com/sites/
karstenstrauss/2014/10/27/tech-tackles-sign-language-motionsavvy/. Accessed 8 July 2015
23. Leap Motion App Store|Leap Motion Apps for Motion Control. https://apps.leapmotion.
com/. Accessed 16 Apr 2016


24. Fingerspeller|Leap Motion Developers. https://developer.leapmotion.com/libraries/462.


Accessed 16 Apr 2016
25. Leap Motion App Store|ASL Digits. https://apps.leapmotion.com/apps/asl-digits/windows.
Accessed 16 Apr 2016
26. Escola Virtual de Língua Gestual Portuguesa - Página Inicial. http://www.lgpescolavirtual.
pt/index.php?module=home. Accessed 2 Oct 2015. (in Portuguese)
27. Soares, F., Sena Esteves, J., Carvalho, V., Lopes, G., Barbosa, F., Ribeiro, P.: Development
of a serious game for Portuguese Sign Language. In: 7th International Congress on Ultra
Modern Telecommunications and Control Systems and Workshops (ICUMT), Brno, Czech
Republic, 6–8 October 2015
28. O’Leary, R.: LeapTrainer (2013)
29. Soares, F., Sena Esteves, J., Carvalho, V., Moreira, C., Lourenço, P.: Sign language learning
using the hangman videogame. In: 7th International Congress on Ultra Modern Telecom-
munications and Control Systems and Workshops (ICUMT), Brno, Czech Republic, 6–8
October 2015
30. ISEP. http://www.isep.ipp.pt/new/viewnew/4146. Accessed 6 Oct 2015
31. Unity - Game engine, tools and multiplatform. http://unity3d.com/pt/unity. Accessed 4 Apr
2016
32. SDK Libraries—Leap Motion Unity SDK v3.1 documentation. https://developer.
leapmotion.com/documentation/orion/unity/devguide/Leap_SDK_Overview.html#unity.
Accessed 4 Apr 2016
33. Wix.com culturas_surdos created by leticia_nadia_lgp based on kindergarten. http://leticia-
nadia-lgp.wix.com/culturas_surdos#!alfabeto-e-números. Accessed 16 Apr 2016
34. Torres, M., Carvalho, V., Soares, F.: iLearnPSL – development of an interactive application
for learning Portuguese sign language: first insight. In: CISPEE16, Vila Real, Portugal,
19–21 October 2016

The Implementation of MDA Framework
in a Game-Based Learning in Security Studies

Jurike V. Moniaga, Maria Seraphina Astriani, Sharon Hambali,
Yangky Wijaya(&), and Yohanes Chandra

School of Computer Science, Bina Nusantara University, Kemanggisan, Indonesia
Jurike@binus.edu, Seraphina@binus.ac.id, sharon_hambali@hotmail.com,
yangkywijaya@gmail.com, yohaneschandra95@gmail.com

Abstract. Many studies have already confirmed the effectiveness of applying Game-Based Learning in certain environments. However, few studies examine the design process or the game model of Game-Based Learning. This paper explains how the framework is implemented in a game best suited for Game-Based Learning in a Security Studies classroom. The aim of this study is to define and develop a game best suited for enhancing the learning outcomes of the National Defense Strategy subject in the International Relations Department. The research implements the MDA Framework in the development process of the game. The implementation process is done using the IT-BluTric framework, which lists all the processes that need to be done. After several processes are done, feedback from field experts is collected. The questions for the feedback are linked to the 4 key characteristics of a learning game (challenge, curiosity, fantasy, control), which have a deep correlation with the MDA Framework. The final result is an assessment of using the IT-BluTric framework in developing a Game-Based Learning application, along with feedback for the game. The conclusions are hoped to give insights into how the MDA Framework is implemented in Game-Based Learning, and the whole development process could serve as a reference for other Game-Based Learning applications.

Keywords: Game-based learning · MDA framework · Video game · Game engineering · IT-BluTric framework

1 Introduction

In this day and age, where society is made up of “digital natives”, many uses of IT applications have emerged beyond our expectations. Technology has indeed become crucial in assisting various aspects of human life. Nowadays it is very common to see technology being applied in different fields, including medicine, business, the military and education. One of these applications, which has very recently and suddenly caught the attention of scientific communities, is a concept called Digital Game-Based Learning (DGBL) [1].


In its simplest definition, Prensky defines DGBL as any learning process through video games on a computer or any other medium, whether online or not. In addition, he stated that applying DGBL is not strictly limited to the classroom, but that it can be used to teach various things, including in business or law [2]. As of now, the application of DGBL has been tested and proven effective by many (see [3–5]). However, many are still in doubt about the effectiveness of DGBL. Unfortunately, this doubt is understandable, as it would be hard to conduct a comprehensive analysis of this issue [1].
That is why, in order to ensure the authenticity of DGBL research, the design and development process of the game must be done in accordance with proven studies and frameworks. It is also helpful to have an expert in the related field give constant feedback on the game. This research aims to monitor the implementation process of a previously defined DGBL design. The DGBL will be specifically used to enhance the learning process in Security Studies within the International Relations major. The game was designed using the MDA Framework, whereas the implementation is based on a framework for creating IT-based learning methods called IT-BluTric [7].
The choice of this topic is due to the applicability of video games in addressing the learning content of security studies. Essentially, security studies, also known as strategic studies, is a subfield within International Relations that studies the potential security threats of a nation and how to solve or prevent those threats [8]. The process of considering which strategies should be used to address a particular threat is similar to the resource allocation process in many games. It was due to this that this topic was deemed suitable to be taught using DGBL.
The result of this research is an assessment of the IT-BluTric framework implementation in the development process of a DGBL application. This paper illustrates the whole process of DGBL development in this research. Furthermore, feedback from field experts is given for each iteration of the game in order to make sure the game still delivers the desired learning objectives.

2 Literature Reviews

2.1 MDA Framework


Gamification is a term derived from the digital media industry. The term has been well known since 2010, when it became widely used by developers. A study that exclusively examines the definition of gamification states that “it is the use of game design elements in non-game contexts” [9]. This means that many things can be psychologically turned into games. For example, gamification can be tied or linked to a traditional learning process, resulting in a serious game.
One of the game design elements mentioned is the MDA Framework, a framework accepted in gamification. The MDA Framework is a bridge that links game designers and players; it consists of Mechanics, Dynamics, and Aesthetics [10] (Figs. 1 and 2).


RULES SYSTEM “FUN”

Fig. 1. Formal game consumption [9]

MECHANICS DYNAMICS AESTHETIC

Fig. 2. Formal game consumption (2) [9]

As can be seen from the figures above, Mechanics in the MDA Framework refer to the game system, such as algorithms, levels, rules, points, badges, high scores, etc. In short, mechanics are what drive the users' actions. Dynamics refer to what the users can see on the screen based on their inputs.
Aesthetics, on the other hand, go down to a deeper level, where game designers have to convey emotional and psychological messages to the users [9]. Aesthetics are often seen as the “fun” factors of the game; they make up the emotional connection created during the interaction between the player and the game [10].

2.2 Key Characteristics of a Learning Game


A study conducted by Kim and Lee that compares the MDA Framework with other design frameworks identified 4 key factors that computer games must have. These 4 factors, namely challenge, curiosity, fantasy, and control, are the answer to “what makes things fun to learn?” [9]
In the explanation of the MDA Framework, mechanics are responsible for game or level difficulties, which are heavily related to the concept of “challenge”. The game needs to have enough pressure and rewards to attain the users' attention. In addition to challenges, the game's aesthetics that evoke emotions can trigger the users' “curiosity”. The attention acquired shows positive results in proving that “curiosity” and “challenge” have great roles in the gamification of learning.
It has been repeatedly stressed that the main aim of game aesthetics is to convey emotions to users. Narration, graphics, sound, and atmosphere must be remarkable in order to imprint a “fantasy” in the users' minds. Kim and Lee state that fantasy is intimately related to reward and feedback, which are parts of game mechanics.
The final key factor, “control”, is heavily related to gameplay. Gameplay comprises rewards, graphics, sound, users' actions, narration, and difficulties. Noting this fact, gameplay is the parent of mechanics, dynamics, and aesthetics, which leads to the conclusion that “control” covers the entire MDA Framework (Fig. 3).


Fig. 3. Fundamental primary factors of digital mobile game learning [9]

2.3 IT-BluTric (IT Blueprint Metric)


IT-BluTric is a framework created to help determine IT Initiatives for each pillar (application, infrastructure, operation, governance, and security) of the IT Blueprint framework. Following this framework may prevent, or at least decrease, the possibility of failure in an IT project. To understand this framework better, we must first understand what the IT Blueprint framework and IT Initiatives are, before implementing them using IT-BluTric [7].
The IT Blueprint framework is a framework initially created for the purpose of helping the development process of any IT-based project. It was originally created from best practices and the IS Strategic Planning method [12–14]. Based on [11], creating an IT Blueprint begins with 5 initial steps: (1) define the goal of the project; (2) assess the current position in the context of the technology environment; (3) decide which potential technologies could be used to reach the goal; (4) detail the possible ways to achieve the goal; (5) determine how to decide when the project is considered done.

Fig. 4. IT blueprint framework [10]


As can be seen in Fig. 4, the 5 pillars help the IT Blueprint framework translate “the business”, in the form of an information flow, into the technology. To gain a deeper and better understanding of each pillar of the IT Blueprint framework, a list of IT Initiatives is assigned to each pillar. An IT Initiative itself is a guideline of which features are going to be in the project. Applying these IT Initiatives to the IT Blueprint framework according to certain rules is referred to as IT-BluTric [7].
IT-BluTric divides each IT Initiative according to its relation to one of the IT Blueprint pillars. Every initiative is then ranked using 4 aspects: urgency, importance, cost and timeframe (see Table 1). By doing so, developers know which feature needs to be done first. This also makes it a little easier to monitor the project so that it does not stray from its original purpose.

Table 1. IT blueprint - legend


Urgency Importance Cost Timeframe
^ = not too urgent ! = less important $ = small * = less than 1 week
^^ = urgent !! = important $$ = medium ** = 1 week–1 month
^^^ = highly urgent !!! = very important $$$ = much *** = more than 1 month

1. Urgency
Urgency refers to how soon an IT Initiative should be done. When the urgency value of an initiative is high, it means that the initiative should be prioritized over those with lower urgency values. The urgency value of an initiative (U) can be calculated using the project time (PT) and the spare time (ST) (see Eq. (1)). The maximum urgency value is 100% [11].

U = (PT / (PT + ST)) × 100    (1)
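As a worked example with invented numbers: an initiative that needs PT = 4 weeks of work inside a slot that leaves ST = 1 week of slack gets U = (4 / (4 + 1)) × 100 = 80%. The same calculation as a small helper:

```csharp
// Urgency of an IT Initiative as in Eq. (1): the share of the available time
// that the work itself consumes. With no spare time the value is 100%.
static double Urgency(double projectTime, double spareTime)
{
    return projectTime / (projectTime + spareTime) * 100.0;
}

// Example with invented numbers: Urgency(4.0, 1.0) == 80.0
```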

2. Importance
Importance is very similar to urgency, as both aspects are used to decide which initiative should be done first. The difference is that, while urgency focuses on whether an initiative should be done as soon as possible, importance reflects the impact of a process. A high importance value means a process has a big impact on other processes. This means that processes with lower importance values cannot be done until those with higher importance values are done [11] (Table 2).

Table 2. IT blueprint - importance

Importance     | Explanation
Less important | It can be done later
Important      | Should be done
Very important | Should be done; has a big impact, as other processes depend on this process; other processes will be interrupted if this process is yet to be done


3. Cost
The cost aspect is the budget needed to finish a particular initiative/process. Cost does not refer to the value of the budget required but rather to the ability to provide it. A higher cost value means that it will take longer for the money to be provided, meaning that the process will take longer to finish. This can be due to many reasons, including making detailed calculations and the organizational bureaucracy. Defining the cost value does not follow any particular formula or metric, simply because of the differences between organizations [11].
4. Timeframe
Timeframe is the time needed to complete an initiative/process. Usually, the timeframe can be categorized into 3 groups: short, medium and long periods. In this framework, the timeframe is defined as in the table below [11] (Table 3).

Table 3. IT blueprint - timeframe

Timeframe         | Explanation
Less than 1 week  | Short timeframe. The process should be done in less than 1 week
1 week–1 month    | Medium timeframe. The process can be done up to 1 month after the project started
More than 1 month | Long timeframe. Process needs more than 1 month to be finished. Sometimes, a big and complex project needs more than 1 year to complete the whole thing

3 Implementation

3.1 IT-BluTric
The vision of this research and application is to create a video game based on a
predefined design that will serve as an alternative learning method in Security Studies.
The design of the game was already defined in the previous research [6]. It mainly uses the MDA Framework [10], along with expert consultations, as the guideline for creating the design (for details, see [6]).
This research implements the design into a complete game application using the
IT-BluTric framework. In order to do this, the IT Initiatives of each pillar are defined
based on the needs of the application. Then IT-BluTric is implemented in order to
determine the working order of application development. The IT-BluTric implemen-
tation can be seen in Tables 4, 5, 6, 7 and 8 below.


Table 4. IT BluTric - application


Application ^ ! $ *
Assets creation ^^^ !!! $ ***
Gameplay ^^^ !! $ ***
Opening scene ^^ !! $ *
Report scene ^^ ! $ **
Main menu ^ ! $ *
Quality assurance (feedbacks from experts) ^^ !! $$ *

Table 5. IT BluTric - infrastructure


Infrastructure ^ ! $ *
Device compatibility ^^ !! $$ *
Mobile and desktop version ^ ! $$ *

Table 6. IT BluTric - operation


Operation ^ ! $ *
Installation ^^^ !! $ *
Tutorial ^ !! $ *

Table 7. IT BluTric - governance


Governance ^ ! $ *
Disclaimer ^ ! $ *

Table 8. IT BluTric - security


Security ^ ! $ *
Game data protection ^ ! $ *

Each of the IT Initiatives is measured using the 4 aspects of the IT-BluTric framework: urgency (^), importance (!), cost ($) and timeframe (*). This metric helps in drawing up the timeline of the development process. Typically, processes with high urgency values are prioritized above others, though there are some cases where a process with a high importance value is done first, seeing as other processes are dependent on it. Next in consideration is the timeframe aspect, while cost is considered last. The process of IT-BluTric is not necessarily in this order; the order may change from project to project, and this ordering was chosen specifically for the development of this game. After implementing IT-BluTric as described above, a timeline of the development process can be determined.

Table 9. Timeline

IT Initiative iteration    | Estimated time spent | Status
Assets creation            | 9 weeks              | Done
Gameplay                   | 10 weeks             | Done
Installation               | 2 days               | Done
Report scene               | 2 weeks              | Done
Opening scenes             | 3 days               | Done
Device compatibility       | 2 days               | Done
Quality assurance          | 1 day                | Done
Main menu                  | 2 days               | Not yet
Tutorial                   | 3 days               | Not yet
Game data protection       | 1 day                | Not yet
Mobile and desktop version | 2 days               | Not yet
Disclaimer                 | 1 day                | Not yet

Table 9 shows every IT Initiative in order from the first one to the last. Each initiative entry is given an estimated time to finish. This research effectively started in July, making it 5 months since the first process was done. This is the reason some of the processes are still yet to be done. However, the progress up to now is sufficient to provide relevant data on the framework's performance in implementing the design of the game.

3.2 Quality Assurance


Quality assurance is a very important step in this research, as it monitors the development progress of the game. With expert feedback, researchers can keep track of possible deviations or problems occurring in the development with respect to the initial vision of the game. Kim and Lee stated that there are 4 key characteristics of a learning game that encompass the MDA Framework among other game design frameworks [9]. With this idea, the questions given to the experts for feedback are drawn from these characteristics. The questions can be seen in the table below (Table 10).


Table 10. Feedback questions

Characteristics | Questions
A. Challenge | A1 Does the game depict a clear goal for the player that is relevant to the learning outcomes of the subject?
             | A2 Does the game provide different difficulties in each level/stage?
             | A3 Is there any hidden knowledge that the players might acquire from the game that is not explicitly shown in the game?
             | A4 Does the game provide any factors of randomness/uncertainty within it?
B. Curiosity | B1 Does the game project motivation for players to enhance the players' willingness to learn more about the related subject?
             | B2 Does the game (particularly the art and visual effects) produce enough interest to keep the players' involvement in the learning process?
C. Fantasy   | C1 What kind of atmosphere does the game project?
             | C2 Is the atmosphere produced by the game relevant enough to increase the players' motivation and engagement in the learning process?
D. Control   | D1 Does the game produce enough determination for the player to repeat the game in order to get a better result and fully grasp the learning intention of the game?

4 Results

The results shown in this research are the products of the IT Initiative entries up to quality assurance. This is the reason why most of the completed processes are in the Application pillar. The results of the scenes in the game can be found in the following figures, with the exception of the quality assurance process, whose result is the feedback from lecturers of the related subject. The feedback is related to the MDA Framework [10] and the 4 key characteristics of a learning game [9].

4.1 Game Screenshots


Screenshots of the game are shown in Figs. 5, 6, 7 and 8.

Fig. 5. Gameplay screen I


Fig. 6. Gameplay screen II

Fig. 7. Report screen

Fig. 8. Opening screen

4.2 Feedbacks
Feedback results from the three experts regarding the game can be found in Table 11.
The experts are experienced people who understand the heuristics of this
area; they put themselves in the shoes of a user to test the game and give feedback to
the researchers [15, 16].


Table 11. Feedbacks

Questions  Answers
A1  Yes; the game is built on the existing theory of defense strategies. This theory is introduced in class prior to the game testing
A2  Yes; for each ‘security threat’, the game provides 3 rounds for the players to play. The second and third rounds reflect the strategies that the players chose in the previous round. As a result, the player gets fewer options, prompting them to do better calculations in the next round
A3  Yes; the game provides actual information about Indonesia’s geographic and demographic situation
A4  No; it needs rational logic based on defense theory and an exact quantitative calculation of the strategies
B1  It is projected to be that way. The game includes the elements of challenge and fun in an appropriate amount
B2  Yes; it involves characters, symbols, and pictures that are relevant to the actual decision-making process in a country
C1  The game attempts to incorporate the actual setting in which a defense strategy is being formulated. In addition, players are given an opportunity to reflect on their strategies through an interactive report and statistics provided at the end of the game
C2  The game emphasizes the aspect of “consequence” in each strategy, meaning that each strategy chosen has a significant impact, either positive or negative, on the security threat. When the impact is negative, the player consequently loses resources, which limits what they can choose
D1  Yes, since the game provides limited chances for the player to address the threat

To guide the experts in collecting the feedback, the DECIDE framework was chosen
because it fits the heuristic approach. The steps of the DECIDE framework are:
determine goals, explore questions, choose the paradigm, identify practical issues, decide
how to deal with ethical issues, and evaluate and present data [17, 18].
From the feedback collected, it is clear that the game is still on track with its
initial goal and design. The questions asked were based on the definition of each of the four key
characteristics of a learning game [9]. Since most of the questions were answered
positively (yes), it can be said that the design successfully encompassed the
MDA Framework, as these characteristics cover the entire framework. With these
results, game development will proceed without many changes to the initial
plan. The implementation of the MDA framework is well suited to game-based learning in security
studies because it can deliver the learning objectives.

5 Conclusion

DGBL has been proven to be an effective learning method. Unfortunately, not many works
provide a comprehensive analysis of the validity of these studies. That is why, in order
to reduce the possibility of failures in developing the game, this research combines and
implements the IT-BluTric framework in the development process of the game.


The development of the game is in accordance with the previously defined design from a
related study. That design focuses on defining the three components of the MDA Framework
(Mechanics, Dynamics, Aesthetics) and identified four key characteristics of a learning game
that are deeply related to the MDA Framework. These characteristics are then used, along
with the initial MDA Framework, as parameters for feedback from the field experts. This
feedback is used to measure whether or not the game has deviated from its original
purpose.
Based on the feedback, the game is still in line with the original goal. This means
that the game successfully implemented the intended MDA Framework. The game is also
considered effective enough to depict and deliver the intended learning objectives,
though an actual classroom test still needs to be conducted.
Furthermore, this research effectively started in July 2016. Looking at the timeline
drawn from the IT-BluTric framework and the results of the research progress, this
research is on track and on schedule. The researchers were able to deliver each IT
initiative intended for this five-month period. Consequently, considering the positive
feedback, it can be concluded that IT-BluTric is indeed suitable for creating a digital
game-based learning application in security studies.

References
1. Van Eck, R.: Digital game-based learning: it’s not just the digital natives who are restless.
Educause Review, March/April 2006
2. Prensky, M.: Digital Game-Based Learning. Paragon House, St. Paul (2007)
3. Erhel, S., Jamet, E.: Digital game-based learning: impact of instructions and feedback on
motivation and learning effectiveness. Comput. Educ. 67, 156–167 (2013)
4. Guillén-Nieto, V., Aleson-Carbonell, M.: Serious games and learning effectiveness: the case
of It’s a Deal! Comput. Educ. 58(1), 435–448 (2012)
5. Kickmeier-Rust, M.D., Albert, D.: Educationally adaptive: balancing serious games. Int.
J. Comput. Sci. Sport 11(1), 15–28 (2012)
6. Ayu Asih Kusuma Putri, R., Wijaya, Y., Moniaga, J.V.: A design model for digital
game-based learning in international relations study developing an innovative learning
method for defense strategy course in Bina Nusantara University. In: International
Conference on Game, Game Art, and Gamification (ICGGAG) (2016). Unpublished
7. Astriani, M.S., et al.: Delivering an interactive presentation in supporting of dynamic
teaching method with an IT blueprint framework: IT initiative-ITBluTric. In: International
Conference on Information Management and Technology (ICIMTech) (2016). Unpublished
8. Elkus, A.: Professor, Tear Down This Wall: Is the Divide Between Security Studies and
Strategic Studies Permanent? War on the Rocks, 18 April 2016. http://warontherocks.com/2016/04/professor-tear-down-thiswall-is-the-divide-between-security-studies-and-strategicstudies-permanent/
9. Kim, J.T., Lee, W.-H.: Dynamical model for gamification of learning (DMGL). Multimedia
Tools Appl. (2013)
10. Ruhi, U.: Towards a descriptive framework for meaningful enterprise gamification. Technol.
Innov. Manage. Rev. 5(8), 5–16 (2015)
11. Astriani, M.S., Pradono, S., Moniaga, J.V.: IT Initiative for creative interactive teaching
presentation based on IT blueprint framework. In: Advances in Educational Technologies
(2014)


12. Astriani, M.S., Pradono, S.: IT Blueprint and school. In: Proceedings of the 10th WSEAS
International Conference on Computational Intelligence, Man-Machine Systems and
Cybernetics, and Proceedings of the 10th WSEAS International Conference on Information
Security and Privacy, pp. 160–167. World Scientific and Engineering Academy and Society
(WSEAS) (2011)
13. Astriani, M.S.: IT Blueprint – jembatan bisnis dan teknologi. In: Binus Information
Communication and Technology Conference (2011)
14. Cassidy, A.: A Practical Guide to Information Systems Strategic Planning, 2nd edn.
Auerbach Publications, Boca Raton (2006)
15. Nielsen, J.: Enhancing the explanatory power of usability heuristics. In: Conference
Proceedings, CHI 1994 (1994)
16. Molich, R., Nielsen, J.: Improving a human-computer dialogue. Commun. ACM 33(3),
338–348 (1990)
17. Smith-Atakan, S.: The FastTrack to Human-Computer Interaction. Thomson Learning,
Boston (2006)
18. Shneiderman, B.: Designing the User Interface: Strategies for Effective Human-Computer
Interaction. Addison-Wesley, Boston (2005)

Industrial Virtual Environments
and Learning Process

Jean Grieu, Florence Lecroq(&), Hadhoum Boukachour,


and Thierry Galinho

LITIS (Laboratoire d’Informatique, de Traitement de l’Information


et des Systèmes), Normandy Le Havre University, Le Havre, France
{jean.grieu,florence.lecroq}@univ-lehavre.fr

Abstract. Today, we are in the fourth industrial revolution, which includes
Industry 4.0. The connectivity of all objects, in our daily life or in
industry, with sensors and actuators connected to the industrial
network, creates Industry 4.0.
Others propose another definition of Industry 4.0: the simulation of
industry. They build a virtual factory, with all the
sensors, actuators, networks, Programmable Logic Controllers (PLCs), and so
on; they study the simulation of the process reaction, refine the
system, and then build the real factory in another part of the world.
On the other hand, students’ disaffection with engineering
studies obliges us to change our way of teaching. This means that we have to adapt
the structure of our courses to the new e-native students. Considering this trend,
our research team has built a teaching tool, based on video game technologies,
to attract and keep students. This tool is a virtual campus, similar to the
real one at our University.

Keywords: Simulation · Learning scenarios based on virtual worlds · Games engineering · Collaborative work in virtual environments · Virtual and remote laboratories · Industry 4.0

1 Introduction and Background

Today, we are in the fourth industrial revolution, which includes Industry 4.0. The
connectivity of all objects, in our daily life or in industry, with the sensors and the
actuators connected to the industrial network, creates Industry 4.0.
Others propose another definition of Industry 4.0: the simulation of the
industry. They build a virtual factory, with all the sensors, actuators, networks,
Programmable Logic Controllers (PLCs) and so on; they study the simulation of the
process reaction, refine the system, and after validation, they can build the real
factory in another part of the world.
Since the early 1990s, technologies of virtual environments - virtual reality, augmented
reality, multi-user 3D platforms - have continued to offer new tools for a better
understanding of the complex systems in various areas, particularly in industrial
activities. The rise of communication networks, the wide dissemination of tablets and


mobile phones of the latest generation, allow a new approach to the learning process
and teaching methods. In practice, a user or a group of users can immerse themselves
in such an environment via a visualization tool adapted to the context, such as a computer
screen, a projection screen, or a headset. Today, we use these devices to simulate
industrial systems for learning, training or decision-making. This new research area
results from the union of two multidisciplinary teams of researchers at Le Havre
University: the first one interested in Decision Support Systems (DSS) and industrial
risk prevention, the second one working on 3D video game technologies applied
to learning and training.
This contribution is organized in four sections. In the first one, we describe a DSS
for risk prevention using a Case-Based Reasoning (CBR) approach linked to a
MultiAgent System (MAS). The second section presents an Intelligent Tutoring System
(ITS) designed to prevent students’ dropout risks. This module is linked to the virtual campus
GE3D described in the following section, in which we explain how a 3D virtual
campus can be used for simulating and teaching PLCs. Finally, in the last section, we
discuss the most relevant elements of our research and the next developments.

2 Risk Prevention

2.1 Decision Support System Using Case Based Reasoning


Our research is within the framework of DSS (Fig. 1), which allows representing, following
and analyzing the evolution of a dynamic situation. The DSS is a tool whose main
objective is to help decision-makers manage the decision process in case of a crisis or
before it occurs. To this end, the DSS analyzes a current situation dynamically and
compares it to past situations.

Fig. 1. Decision support system representation.

This system represents not only the observed situation but also its evaluation.
The situation can be evaluated by calculating its possible consequences. This
can be carried out using previous situations whose consequences are known. Thus, analogical
reasoning relies on the following hypothesis: if situation A looks like situation B, the
consequences of situation A ought to be similar to those of situation B.
Case-Based Reasoning (CBR) [1] is a problem-solving methodology based
on the reuse of past experience to solve new problems. Decision Support
Systems are among the most promising applications of CBR. Thanks to CBR, human
problem-solving abilities are enhanced by the power of the computing system.


The reasoning approach of CBR uses past cases. A case is defined by a problem
and its solution. The problem to be solved is called the target case. The solved problems
are called source cases and are stored in a case base. The CBR cycle [2] is composed of
five steps (Fig. 2):

Fig. 2. Case-Based Reasoning cycle.
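To make the retrieval step of this cycle concrete, here is a minimal sketch in Python. It assumes a simple attribute-vector representation of cases and a weighted similarity measure; the attribute names, weights and example cases are hypothetical and are not taken from the DSS described in this paper.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Case:
    """A case couples a problem description with its known solution."""
    problem: Dict[str, float]   # e.g. normalized indicators describing a situation
    solution: str               # e.g. the action that was taken in that situation

def similarity(target: Dict[str, float], source: Dict[str, float],
               weights: Dict[str, float]) -> float:
    """Weighted similarity in [0, 1]; attribute values are assumed normalized to [0, 1]."""
    score = sum(w * (1.0 - abs(target[k] - source[k])) for k, w in weights.items())
    return score / sum(weights.values())

def retrieve(target: Dict[str, float], case_base: List[Case],
             weights: Dict[str, float]) -> Case:
    """Retrieval step of the CBR cycle: return the source case most similar to the target."""
    return max(case_base, key=lambda c: similarity(target, c.problem, weights))

# Hypothetical usage: two source cases and one target case (the situation to evaluate).
case_base = [
    Case({"temperature": 0.9, "pressure": 0.8}, "evacuate the zone"),
    Case({"temperature": 0.2, "pressure": 0.3}, "routine monitoring"),
]
weights = {"temperature": 0.6, "pressure": 0.4}
best = retrieve({"temperature": 0.85, "pressure": 0.7}, case_base, weights)
print(best.solution)  # reused (and possibly adapted) as a first proposal for the target case

The retrieved solution then feeds the reuse and revision steps of the cycle; the adaptation logic is deliberately left out of this sketch.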

We distinguish various types of CBR:


• Static case and static cycle;
• Dynamic target and static cycle;
• Dynamic target case and dynamic cycle (our approach).
Usually, the monitored situation contains a great number of dynamic parameters, i.e.,
the values of these parameters change very often. Systems allowing the management of
such situations must be dynamic in order to be able to handle these evolutions. As a
consequence, to design these systems, we need a flexible and adaptive architecture. The
complexity of this category of systems has led us to choose MultiAgent Systems
(MAS) [3].

2.2 The Role of the MultiAgent System (MAS)


First, let us give a short definition of a MAS: it is a system made of agents. Most of the
time an agent is a computer process - agents can be distributed on different machines
[4] - but it can also be a robot, a human, etc. An agent is an autonomous and adaptive
entity, able to communicate and act [5, 6]. Our initial MAS development
was intended as a Preventive Monitoring Information System. The objective of such a
system is to permanently watch the current state of industrial risky zones, called the current
situation - or, in our case, the CBR target case - in order to allow decision makers to gather
information as soon as possible about the potential risk generated by this current
situation [7].
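As a purely illustrative sketch of this idea, the following Python fragment models a monitoring agent as an autonomous thread that periodically observes a risky zone and communicates an alert message when a risk indicator crosses a threshold. The zone names, the threshold and the random "sensor" reading are hypothetical stand-ins; they do not describe the actual preventive monitoring system.

import queue
import random
import threading
import time

class MonitoringAgent(threading.Thread):
    """Autonomous agent: perceives its zone, decides, and communicates by messages."""

    def __init__(self, zone: str, mailbox: queue.Queue, threshold: float = 0.8):
        super().__init__(daemon=True)
        self.zone = zone
        self.mailbox = mailbox        # shared channel towards decision makers / other agents
        self.threshold = threshold    # hypothetical risk threshold

    def perceive(self) -> float:
        # Stand-in for a real sensor reading of the zone's risk indicator.
        return random.random()

    def run(self):
        for _ in range(5):            # bounded loop, just for the example
            risk = self.perceive()
            if risk > self.threshold: # decision: only significant observations are reported
                self.mailbox.put((self.zone, risk))
            time.sleep(0.1)

mailbox = queue.Queue()
agents = [MonitoringAgent(z, mailbox) for z in ("storage area", "loading dock")]
for a in agents:
    a.start()
for a in agents:
    a.join()
while not mailbox.empty():
    zone, risk = mailbox.get()
    print(f"Alert from {zone}: risk indicator {risk:.2f}")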

3 The ITS AI Module in GE3D

In this section we present an intelligent tutoring system which aims at decreasing
students’ dropout rate by offering the possibility of a personalized follow-up. We
address the specific problem of the evolution of the large amount of data to be processed
and interpreted in an Intelligent Tutoring System (ITS). In this regard we present
the architecture of our decision support system, used as the core of the intelligent
tutor, which could be applied to a variety of teaching fields.
An ITS is a solution that helps decrease students’ dropout rate by offering a personalized
follow-up in either blended or distance learning courses [8]. Quoting Hafner
[9], an ITS “is educational software containing an artificial intelligence component.
The software tracks students’ work, tailoring feedback and hints along the way. By
collecting information on a particular student’s performance, the software can make
inferences about strengths and weaknesses, and can suggest additional work.”
For the prototyping, the MAS has been applied to dropout prevention in
distance learning. The goal is to prevent the risk of abandonment by the learner. The next
adaptations aim at individualizing the learning process, including in a 3D virtual environment,
in order to build an ITS. As a result, we combine work on the 3D virtual campus
(GE3D), the CBR system and the MAS, as illustrated by the following diagram (Fig. 3):

Fig. 3. ITS integration in the virtual campus.


In this system [10], we recognize the generic MAS engine (independent of the
domain), the knowledge part (in green), which allows specifying the domain, and finally
the 3D immersion part, necessarily dependent on the learning area. It is noteworthy that,
depending on the case, the human tutor can intervene in the decision process as
planned by the CBR paradigm. We propose an intelligent tutoring system with
its 3D student interface and its internal decision support system designed to face the
challenge of processing the evolution of a large amount of data.

4 A Virtual Campus for Teaching Engineering

On the other hand, students’ disaffection with engineering studies obliges
us to change our way of teaching to encourage them to return to our training courses.
Considering this trend, our research team has built an online learning platform.
There are many online learning systems all over the world. But these conventional
systems are for students who are used to working alone or who are advanced students.
After studying the results of interviews with students, we developed GE3D [11],
an online learning system in accordance with their wishes and our criteria.
Even if this tool uses some elements coming from 3D video games, it remains a
pedagogical tool.
We use a hybrid solution, a compromise between real-time 3D and precalculated 3D,
which allows us to get the best of both approaches for creating computer graphics. The
choice of a virtual campus rather than a traditional platform for distance learning was
obvious for the students of the department of Electrical Engineering and Industrial
Computing at the IUT of Le Havre. These “e-native” students adopted the technologies
used in video games a long time ago. GE3D is a multi-user tool with
synchronous technology; this means that any action of a user in the virtual world is
perceived at the same time by the other users of the platform.
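To illustrate this synchronous multi-user principle, here is a minimal sketch, assuming a plain TCP broadcast server written with Python's asyncio: every message (a user action) received from one connected client is immediately relayed to all the others, so that everyone perceives it at the same time. This is only an illustration of the principle, not the networking layer actually used by GE3D; the port number and line-based protocol are arbitrary.

import asyncio

clients = set()  # stream writers of all currently connected users

async def handle_user(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    """One connection per user: relay every received action to all the other users."""
    clients.add(writer)
    try:
        while data := await reader.readline():       # one user action per line
            for other in list(clients):              # copy: the set may change while we await
                if other is not writer:
                    other.write(data)                # broadcast so everyone perceives the action
                    await other.drain()
    finally:
        clients.discard(writer)
        writer.close()

async def main():
    server = await asyncio.start_server(handle_user, "0.0.0.0", 8765)  # arbitrary port
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())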
The technical choices were made according to the following specifications:
• A fluid 3D web technology;
• An open source technology to enable our own developments;
• A platform with a client-server architecture;
• On the client side: a hardware and network resource-efficient system.
We will describe the use of this virtual space with a course on PLCs (Programmable
Logic Controllers).
The typical pedagogical scenario of a course on PLCs is as follows:
• An appointment is given to all the students in the amphitheater (Fig. 4);
• Before coming to the amphitheater, the students download a document with blanks
that they must complete during the lecture;
• During the presentation, the teacher can use the screen for slides or videos. He can
also use a whiteboard and a microphone for the audio part. At the same time, the
students write on their documents and ask questions if necessary using the public
chat;


Fig. 4. Amphitheater in the virtual campus GE3D.

• After the course, the students take an online multiple-choice test. If they
pass the test, they can reach the exercise room. If not, they retake the
test in the virtual examination room;
• In the exercise room (Fig. 5), the teacher gives the students some exercises
and the students propose their solutions using the whiteboard;

Fig. 5. Exercise room with interactive whiteboard.

• When all the exercises are completed, everybody can join the industrial room
(Fig. 6). Here, the teacher shows how the equipment runs and he stays with
the students, answering their questions;


Fig. 6. Training room with students.

• When the students have succeeded in all the exercises, they can download specifications of
various industrial processes described in the videos available in the next room
(Fig. 7);

Fig. 7. Training room with specifications of operative parts.

• After programming, the students join the teacher in the real PLC room (Fig. 8)
to validate their solutions. They use a 3D simulator of the operative part
(Fig. 9) [12] (a toy sketch of this scan-cycle idea is shown after this list).
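The following toy sketch illustrates the idea behind such a validation: a very small simulated operative part (a conveyor carrying one part) is driven by a PLC-style scan cycle that reads inputs, solves the control logic and writes outputs. The conveyor model, the input/output names and the student logic are invented for illustration; they do not reproduce the simulator of [12] or the students' actual programs.

class ConveyorSimulator:
    """Tiny operative-part model: a part advances along the conveyor while the motor runs."""

    def __init__(self, length: float = 1.0, speed: float = 0.25):
        self.position = 0.0          # position of the part along the conveyor, in meters
        self.length = length
        self.speed = speed           # meters per second when the motor is on

    def step(self, motor_on: bool, dt: float) -> dict:
        if motor_on:
            self.position = min(self.length, self.position + self.speed * dt)
        # Inputs the "PLC" can read: a start button (held pressed here) and an end-of-travel sensor.
        return {"start_button": True, "end_sensor": self.position >= self.length}

def plc_program(inputs: dict) -> dict:
    """Student logic for one scan: run the motor until the part reaches the end sensor."""
    return {"motor": inputs["start_button"] and not inputs["end_sensor"]}

sim, outputs = ConveyorSimulator(), {"motor": False}
for scan in range(12):               # scan cycle: read inputs, solve the logic, write outputs
    inputs = sim.step(outputs["motor"], dt=0.5)
    outputs = plc_program(inputs)
    print(f"scan {scan:2d}: position = {sim.position:.2f} m, motor = {'ON' if outputs['motor'] else 'OFF'}")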


Fig. 8. Students programming PLCs in the real room.

Fig. 9. Operative part simulator controlled by a PLC.

5 Conclusion and Perspectives

At the time of the fourth industrial revolution, Programmable Logic Controllers
(PLCs) occupy an increasingly important place in Industry 4.0, or Smart Industry.
Indeed, all the elements of the operative parts have to communicate their data through
industrial networks to PLCs and other supervising devices.
Furthermore, Industry 4.0 integrates 3D simulation. Before producing components
or engines, before installing a chain of conveyors or even a complete workshop,
decision makers work on the results obtained with simulators, to finally validate their
creations.
It is in the framework of teaching automation (PLCs) that the virtualisation of
industrial processes brings all its specificity and efficiency:


• First of all, cost: the acquisition of a real operative part, built with
hydraulic cylinders, motors, sensors and other actuators, represents a highly prohibitive
expenditure for a training center, whereas virtualisation, at a much lower cost,
allows numerous operative parts to be replicated without limit;
• Second, safety: a virtualised machine can be safely operated by learners,
with nothing to fear from wrong handling, either for their own safety or for the
integrity of the equipment;
• Finally, realism: the simulation of the industrial process allows students to
see the effect of their program immediately. Previously, their attention was limited
when facing a blinking LED representing the start-up of a conveyor. Today,
when their program is validated on the simulator, they are proud of their work and all
the more motivated.
However, there are currently some drawbacks linked to the use of
these technologies:
• The ad hoc modelling of 3D objects remains a time-consuming activity;
• Currently available VR headsets are still heavy and can cause nausea;
• The choice and design of interfaces allowing interactions remain a technological
bottleneck which has to be removed;
• A learner’s lack of maturity can cause confusion between virtuality and
reality.
At present, our work focuses essentially on driving virtualised
industrial installations with Programmable Logic Controllers (Fig. 9). Next, we
will consider the behavioural analysis of learners using sensors (eye tracking and
EEG headsets) and the design of adapted Human-Machine Interfaces.
We expect that techniques for processing massive data, “Big Data”, will
allow us to better identify the needs and behaviours of learners, personalizing their
pedagogic process so as to reduce dropout risks.
Finally, 3D technologies have shown their utility in the comprehension of complex
systems, inasmuch as they already contribute to the approach of “Simplexity”, which
involves creating simple, intuitive and ergonomic interfaces to help
decision-making in increasingly complex universes.

Acknowledgments. This project has been supported by the European Commission under the
ERDF (European Regional Development Fund) Programme through the 5.5 Action of the
GRR CLASSE Programme.


References
1. Kolodner, J.: Case-Based Reasoning. Morgan Kaufmann Publishers, San Francisco (1993)
2. Aamodt, A., Plaza, E.: Case-based reasoning: foundational issues, methodological
variations, and system approaches. AI Commun. 7(1), 39–59 (1994). IOS Press Amsterdam,
The Netherlands
3. Jennings, N., Wooldridge, M., Sycara, K.: A roadmap of agent research and development.
Auton. Agent. Multi-agent Syst. 1(1), 7–38 (1998). Kluwer Academics Publishers, Boston
4. Moulin, B., Chaib-draa, B.: An overview of distributed artificial intelligence. In:
Foundations of Distributed Artificial Intelligence, pp. 3–55. Wiley, New York (1996)
5. Ferber, J.: Les systèmes multi-agents. InterEditions, Paris (1995)
6. Wooldridge, M.: An Introduction to Multiagent Systems. Wiley, Chichester (2002)
7. Boukachour, H.: Système de veille préventive pour la gestion de situation d’urgence:
modélisation par organisations d’agents. Application aux risques industriels. Ph.D.
dissertation. Le Havre University, France (2002)
8. Willging, S.: Factors that influence students’ decision to dropout of online courses.
J. Asynchronous Learn. Netw. (JALN) 8(4), 105–118 (2004)
9. Hafner, K.: Software tutors offer help and customized hints (2004). http://www.nytimes.
com/2004/09/16/technology/circuits/16tuto.html?_r=0
10. Person, P., Galinho, T., Lecroq, F., Boukachour, H., Grieu, J.: Intelligent tutor design for a
3D virtual campus. In: 6th IEEE International Conference on Intelligent Systems (IS),
pp. 74–79. IEEE Publisher, Sofia (2012). doi:10.1109/IS.2012.6335194
11. Grieu, J., Lecroq, F., Person, P., Galinho, T., Boukachour, H.: A virtual campus for
technology-enhanced learning. In: Education Engineering (EDUCON), IEEE 2010, pp. 725–
730. IEEE Publisher, Madrid (2010). doi:10.1109/EDUCON.2010.5492506
12. Riera, B., Vigaro, B.: Virtual systems to train and assist control applications in future
factories. In: 12th IFAC Symposium on Analysis, Design, and Evaluation of
Human-Machine Systems, pp. 76–81. Elsevier publisher, Las Vegas (2013)

How Game Design Can Enhance Engineering
Higher Education: Focused IT Study

Olga Dziabenko1(&), Valentyna Yakubiv2, and Lyubov Zinyuk2


1 University of Deusto, Bilbao, Spain
olga.dziabenko@deusto.es
2 Vasyl Stefanyk Precarpathian National University, Ivano-Frankivsk, Ukraine
yakubiv.valentyna@gmail.com, lyubov.zink@gmail.com

Abstract. This paper seeks to report on the current state of, and attitudes towards,
the higher education (HE) curriculum for the creative (game) industry sector in
Ukraine. It is based on preliminary findings from higher education and industry
surveys, which examined, on one hand, the competences demanded by this important sector
of the Ukrainian economy and, on the other, what HEIs offer to develop them.
Moreover, a review of the literature was performed to define the
core employee profiles and their competences in the field’s job market. This
paper explores the competences, professional and transversal, that are important for
the Ukrainian game industry and how students should be taught for
“creative” tasks. This paper offers interested parties an analysis of how HE in
Ukraine can develop a relevant curriculum and deliver “industrial” education for
students who intend to operate in this sector. The study results could be helpful
for HE and policy makers to respond to current and future education needs.

Keywords: Engineering curricula · Game design · Game development · Creative industry

1 Introduction

The world IT market is growing annually by 5–20%, including the game industry [1].
According to the Global Games Market Report [2], the worldwide game industry will generate a
total of $99.6 billion in revenues in 2016, an increase of almost 8.5% compared to
2015. Moreover, the global market is expected to grow by up to 6.6% annually toward 2020,
eventually reaching $118.6 billion. It is no surprise that IT products as a whole, and
computer games in particular, have become a major industry and are one of the fastest
growing application markets in Ukraine [3]. The development scene of the creative sector
of the world economy is expanding; therefore, the number of startups and companies based in
European countries and worldwide is increasing. This movement influences the
adoption of new education policies which offer new degrees, courses and curricula
corresponding to contemporary challenges and demands. For example, every EU technical
university has at least one program devoted to game design [4], and approx.
280 bachelor and master programs on game design are available at 385 US colleges
and universities [5].


A primary goal of the work presented in this paper is to identify the knowledge
and skills required on the local and national game sector market in Ukraine. A side
benefit of this work will be the development of learning modules and their implementation
in a curriculum that meets the demands and interests of future game
engineers.
Game developers produce games for different operating systems, using existing
game engines or creating new ones. Game production can involve from a few
employees to large studios and take several months or even years, from creating ideas
and characters to programming and testing. Each stage of digital game development
involves various tasks for different roles, such as:
• designer – creates the game flow and how the game is played; the game can be based
on original ideas or on an existing concept;
• artist – creates the game’s visual characters, objects and scenery, and produces
concept art and drawings (storyboards) at the planning stage;
• animator – brings the characters, objects and scenery to life with computer modelling
and animation software during the production stage;
• programmer – creates the code to make the game work; at this stage the work
can include graphics, artificial intelligence, or gameplay software.
The literature review shows that a candidate for a position as a digital game
developer needs to demonstrate the following competences:
• excellent computer skills
• a wide knowledge and understanding of computer games
• creativity and imagination
• a logical approach to problem-solving
• good teamwork and communication skills
• flexibility and adaptability
• the ability to work under pressure and meet deadlines
• patience and attention to detail
• willingness to keep up with industry developments and learn new skills.
A survey consolidates and relates the above-mentioned competences – the knowledge
and skills needed from students of engineering schools who aim to work in the digital
game industry. It was oriented mainly towards the game sector as well as higher education in
Ukraine, organized in all parts of the country, and performed within the framework of the Erasmus+
project “GameHub: University-enterprises cooperation in game industry in Ukraine”.
The analysis of the recent survey will result in the integration of computer game
design into a software engineering course. The modules for the course will be
implemented in the next steps of the project, along with additional course
material including a syllabus, slides, projects, and other course materials specific to game
design in software engineering.
This paper is presented as follows: Sect. 2 introduces the already mentioned
GameHub Initiative – the project goals and its wider and specific objectives,
target groups, and partnership; Sect. 3 outlines a profile of IT engineers working in the
game industry in Ukraine, as well as the development of knowledge and skills of IT students
using the potential of existing national higher education; Sect. 4 describes the study on
the common (core) and specific (professional-oriented) competences for game industry
job positions and discusses preliminary results; Sect. 5 introduces the didactic approach
recommended for use in the GameHub pilot action; and finally, Sect. 6 summarizes our
conclusions and introduces possible future work.

2 GameHub Initiative

The GameHub project was created to modernize the existing engineering education in
Ukraine by enhancing students’ knowledge and skills in the creative game development
sector. The project started in autumn 2015 and is co-financed by the ERASMUS+
programme, under the Cooperation for Innovation and the Exchange of Good Practices key
action and the Capacity Building in Higher Education action.
The main goal of the project is to build a bridge connecting universities and the game
industry in mutually beneficial cooperation, by fostering and investing human capital in the
emerging Ukrainian ICT creative business sector.
The paper delivers three main outcomes: (1) a preliminary analysis of higher education
and industry surveys concentrating on the competences, both professional and
common (core), important for this sector of the Ukrainian economy; (2) a review of the literature
performed to define the main employee profiles and their competences in the field’s job
market; (3) recommendations for HE in Ukraine concerning methods and
instruments for the development of a relevant curriculum and the delivery of “industrial”
education for students who intend to operate in this sector.
The IT market in Ukraine is growing every year and requires more and more qualified
specialists. High salaries, a large number of jobs, and opportunities for career
growth (including traveling abroad) attract many young professionals and unemployed
persons to the IT industry. However, employees do not always meet the employers’
needs and possess the necessary knowledge and skills. The reason is that most educational
programs in IT specialties are out of date and, as a result, do not correspond to the
labor market requirements of the rapidly developing IT sector.
In order to understand the needs of the game market in Ukraine and build a contemporary
profile of the university IT student, several surveys were performed.

3 IT Engineer Profile in Game Industry, Ukraine

A complex analysis of IT specialist competences, complemented by social portraits,
helps to determine the social and professional characteristics of employees needed in the game industry
in Ukraine, as well as the main competence level demanded in this field for teaching
staff and students majoring in “Information Technologies” [6, 7].
To obtain the social and professional characteristics of an IT specialist, we used the
results of a social survey conducted by IT Outsourcing News (2015), the platform Rabota.ua
in the IT sector, and the Ukrainian community of programmers. These data draw the
following portrait: mainly a man aged from 19 to 36, with higher education
in science or computer science. He actively applies his knowledge and skills and
constantly improves them with offered training, workshops or courses. His main job is
software development, in a position of developer, manager or tester. Although he
appreciates the possibility to develop a high-tech field, he works mainly for the high salary and
the possibility of career growth. He is mobile, i.e. ready to change his place of residence. The
average experience is from 3 to 5 years [8, 9].
To study the state of the art in digital game design in Ukrainian higher education,
two surveys have been conducted: (1) for academic staff; and (2) for university students
majoring in “Information Technologies” or “Computer Sciences”. Together they
cover 100 university teachers and around 500 students from different
regions of Ukraine, such as Donetsk, Kharkiv, Odessa, Kherson, Kiev, and
Ivano-Frankivsk.
Analysis of the academic staff’s competences and skills shows that the training of future
game design practitioners depends entirely on their background in the field and on applied
teaching methods and tools that fit the constantly changing IT market requirements.
The survey shows that the average age of lecturers is around 49, with
20 years of work experience. 67% of the academic staff are men, while the majority of them
work in the field of information technology. Only half of the respondents have sufficient
knowledge of foreign languages (mostly English). They have average mastery
of Java, C++, PHP and SQL and a basic level of JavaScript, Python, Objective, Perl and
Ruby [12]. Among graphic environments, a high level of skills is observed in Adobe
Photoshop and Adobe Illustrator, an average one in 3ds Max and Blender, and a basic one in
Maya, Cinema 4d and Vuex Stream. 27% of the respondents have previous work
experience in digital game design. In conclusion, the academic staff of Ukrainian
universities has some knowledge and skills in game design, programming languages, and
graphic environments.
The analysis of the university students’ survey, with more than 500 respondents,
builds an average portrait of the Ukrainian university student - the future game industry
representative. The common portrait includes basic knowledge of programming
languages such as Java, JavaScript, C++, PHP and SQL (33%), use of graphic environments such as
Adobe Illustrator, Adobe Photoshop, 3ds Max, Blender, Maya and Cinema 4d (70%), and
work experience in the game industry (26%). They consider the game design industry one
of the most promising for employment and are interested in interface development,
project design, programming and project development [12].

4 Professional Competency Based on Gaming Industry


Analysis

In order to identify the knowledge and skills needed by game studios, questionnaires
were designed. 41 game business representatives were interviewed and questioned. The
respondents evaluated the common (core) and specific (professionally-oriented) knowledge
and skills which are necessary for a digital game design employee [10]. To simplify
the process, we suggested evaluating the eight most popular job positions: content
manager, storyteller, scriptwriter, sound programmer, web-client programmer,
sketcher, 3D Character Artist/3D Environment Artist, QA tester, JS programmer [11].
For each job position, the knowledge and skills presented below were rated on a
scale from not important at all to very important. The common capacities include, e.g.: to
identify and solve problems; to work in a team and achieve mutual goals; to apply gained
knowledge and understanding of the subject area/profession in practice; to adapt to different
situations and be flexible; to work independently; to accept constructive feedback
on one’s work; to pay attention to details and quality evaluation; creative and
imaginative capabilities; self-education and self-development; diversity sensitivity;
effective communication/interpersonal skills; analysis and synthesis; task
planning and time management; excellent verbal and written communication in
foreign languages; and leadership and decision-making.

Table 1. Specific knowledge and skills necessary for game development


# Specific knowledge and skills
1 Ability to apply the principles, methods and algorithms of computer graphics
2 Ability to apply object-oriented approach for design of complex systems
3 Ability to use technology and tools for intelligent systems building
4 Ability to design rules and mechanics of a game
5 Ability of scriptwriting, storyboarding and concept art
6 Knowledge of drawing techniques both traditional and digital
7 Deep understanding of capabilities and benefits of different hardware platforms
8 Knowledge of various programming languages
9 Knowledge and working skills with databases
10 Knowledge and skills in system programming
11 Basic understanding of compilers, linkers and interpreters
12 Build automation and test automation skills
13 Knowledge and working skills in algorithms, dynamic programming tasks
14 Ability and skills in code organization within file and between files
15 Skills in problems and systems decomposition
16 Other (please specify)

For each job position we created two competence profiles: one for common knowledge
and skills, the other for specific ones (Table 1). As an example, the diagram in Fig. 1
shows that, for a content manager, leadership and decision-making as well as teamwork
dominate over working independently (common knowledge and skills).
In addition, the study of the curricula of the GameHub European academic partners
demonstrates that all competences may be compiled into four clusters: Design,
Programming, Creative skills, and Transversal skills.
Design combines competences used for the preparation of preliminary models and
sketches for a digital game, and the planning of the digital game’s form and structure.
Programming includes competences necessary for computer programming, software
development, analysis, content development, algorithm generation, testing algorithm
requirements, and algorithm/architecture solutions.
Creative skills describe competences used for the development of digital game
sketches, images and music according to aesthetic principles and with a high level of attractiveness.


Fig. 1. Content Manager: competences (very important – green, important – yellow, slightly
important – red, and not important at all – blue)
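As a purely illustrative sketch, the Python fragment below shows how rating counts like those summarized in Fig. 1 can be aggregated from individual answers; the competence names, the votes and the four-level scale labels are invented for the example and are not the survey data themselves.

from collections import Counter
from typing import Dict, List

SCALE = ["not important at all", "slightly important", "important", "very important"]

def competence_profile(votes: Dict[str, List[str]]) -> Dict[str, Counter]:
    """Count how many respondents chose each importance level for each competence."""
    return {competence: Counter(answers) for competence, answers in votes.items()}

# Invented example: ratings given by a few game-industry respondents.
votes = {
    "teamwork": ["very important", "very important", "important"],
    "work independently": ["slightly important", "important", "not important at all"],
}
for competence, counts in competence_profile(votes).items():
    summary = ", ".join(f"{counts.get(level, 0)} x {level}" for level in reversed(SCALE))
    print(f"{competence}: {summary}")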

Transversal skills include traditional knowledge and skills that contribute to the IT
specialist’s personal fulfilment, such as: communication skills; foreign language proficiency;
basic knowledge in mathematics, physics and natural sciences; ability to study;
social responsibility; capacity for entrepreneurship, cultural literacy and creativity. Skills in
project management, knowledge of planning and control methods, and project monitoring
and analysis can also be included here.
Based on the above-mentioned GameHub surveys and analyses, the
competence profile of an IT specialist in the digital game industry was created. It includes, but
is not limited to:
Common competences
– Responsibility, care about quality of work;
– Adaptability and interpersonal skills;
– Efficiency and capacity for self-improvement;
– Creativity, capacity for systems thinking;
– Focus on achieving a success.
Instrumental competences
– Capacity for research work, analysis and synthesis of technical information;
– Teamwork;
– Outstanding computer/programming skills;
– Capacity for written and oral communication in their native language.


Specific/professionally-oriented competences
– Ability to develop user’s requirements specifications to software;
– Ability to perform requirements analysis, develop specification of software
requirements, conduct their verification and certification;
– Basic understanding of the fundamentals of software simulation/design, types of
models, main concept of unified modeling language UML;
– Ability to simulate different system aspects for which the software is developed;
– Ability to develop algorithms and data structures for software products;
– Understanding of current tendencies in software structure and architecture, software
design methods;
– Basic understanding of modern psychological principles of human-machine interaction
and methods of human-machine interface development;
– Ability to analyze and design human-machine interfaces and create prototypes;
– Ability to develop and apply reusable components;
– Knowledge of basic methods and techniques of visual programming;
– Ability to solve mathematical, physical and economic problems via the development of
appropriate applications;
– Ability to exploit hardware capabilities.
The obtained competence profile allows us to establish methodological approaches for developing the
didactic basis for the curriculum modification. In other words, the conducted
competence analysis of employers’ requirements in the digital game industry makes it
possible to justify the preliminary structure of the training and methodological resources of the GameHub
laboratory.

5 Didactical Approaches and Methods

The GameHub project is devoted to developing modern IT educational programs at partner
Ukrainian universities, based on successful European experience and directed at
acquiring skills and knowledge in digital game design that fully correspond to
the needs and requirements of IT employers. This part of the paper explains the innovative
teaching methods and tools that fit the Ukrainian HEI style of lecturing.
For this purpose, the didactic methodology and instruments currently applied
at Ukrainian universities for teaching IT disciplines were studied at six universities.
The study was performed from the point of view of the main tendencies in the
development of innovative teaching tools and the application of complex methods. It
determined the peculiarities, advantages and disadvantages of full-time, part-time,
distance, online and blended forms of study. Based on the analysis results, we can say
that all these forms of study are suitable for delivering our modified educational
program, although the teaching methods and instruments should be revised and extended
in order to support various creative tasks [13].
The review of innovative teaching methodologies successfully applied in European
universities, as well as the didactic approaches offered by the Tuning Academy [15], provides a
great number of techniques that could be used for teaching practical skills in digital
game design. In our opinion, the most suitable could be project-based teaching, which
efficiently develops several competences at the same time. The project may incorporate
teamwork involving creative capacity and directed at real-life problems, e.g.,
the production of a working prototype of a STEM game for the secondary education level.

6 Conclusion

In this paper we presented results that allow us to create a set of common and
specific (professionally-oriented) competences and to determine methodological approaches
for the development of a didactic base to improve the engineering curricula in
Ukrainian technical schools and universities. The conducted competence analysis of
employers’ requirements in the digital game industry makes it possible to establish a
GameHub laboratory [14] - the structure of educational equipment for building meaningful
final student projects in cooperation with the national and international creative
industry.
Moreover, based on the above-mentioned analyses, studies and surveys, we have
developed recommendations on how to apply innovative teaching methods for the
development of the needed common and specific competences in digital game design when
training specialists for a given job position.
In the paper we show that the knowledge and skills required on the local and national
game sector market in Ukraine have been established. The paper proposes applying “creative” tasks and
a project-based approach as the most effective methods for teaching students in this field.
In the future the consortium is planning to create learning modules and materials
in the format of open education resources to modernize the engineering curricula,
matching real-life tasks and objectives in the creative sector of Ukraine.
The developed open education resources will be tested and evaluated through a pilot
action at six engineering schools. The results of this trial will be published on the
project website (http://gamehub-cbhe.eu/) and on the Facebook group discussion wall.
The study results could be helpful for HE and policy makers to respond to current
and future education needs.

Acknowledgement. This work was partially funded by the European Union in the context of the
GameHub project (Project Number: 561728-EPP-1-2015-1-ES-EPPKA2-CBHE-JP) under the
ERASMUS+ programme. This document does not represent the opinion of the European Union,
and the European Union is not responsible for any use that might be made of its content.
We want to thank all GameHub partners who contributed to the interviews, surveys and
discussion of the analysis of the labor market for the game industry in Ukraine.

References
1. PwC forecasts (2016). http://venturebeat.com/2016/06/08/the-u-s-and-global-game-industries-
will-grow-a-healthy-amount-by-2020-pwc-forecasts/. Accessed 10 Aug 2016
2. Global Games Market Report (2016). https://newzoo.com/insights/articles/global-games-
market-reaches-99-6-billion-2016-mobile-generating-37/. Accessed 20 Aug 2016


3. International Factfile 2015: Ukraine, Games industry news, The Market for computer &
video games, 9 November 2015. http://www.mcvuk.com/news/read/international-factfile-
2015-ukraine/0158786. Accessed 20 Aug 2016
4. Animation Career Review (2016). http://www.animationcareerreview.com:8080/careers-
animation. Accessed 20 Aug 2016
5. The 2016 Essential Facts About the Computer and Video Game Industry, Entertainment
Software Association (ESA), April 2016, Ipsos MediaCT for ESA. http://essentialfacts.
theesa.com/Essential-Facts-2016.pdf. Accessed 20 Nov 2016
6. Official data of the State Statistic Service of Ukraine. http://www.ukrstat.gov.ua. Accessed 3
Nov 2016
7. IT specialist portrait in Ukraine. https://dou.ua/lenta/articles/it-portrait-2015/. Accessed 12
Aug 2016
8. Official data of the Association of the Ukrainian Outsourcing Companies. Exploring
Ukraine. IT Outsourcing Industry. http://hi-tech.org.ua/exploring-ukraine-it-outsourcing-
industry-the-volume-of-it-outsourcing-services-provided-in-ukraine-has-grown-by-a-factor-
ten/. Accessed 15 Nov 2016
9. Official data of the Association: Information Technologies in Ukraine. http://itukraine.org.
ua/analitychni-materialy. Accessed 3 Nov 2016
10. International standard of education classification. http://www.uis.unesco.org/Education/
Documents/isced-fields-of-educationtraining-2013RU.pdf. Accessed 3 Nov 2016
11. National standard of education classification (project). http://naps.gov.ua/uploads/files/sod/
NSKO.pdf. Accessed 3 Nov 2016
12. GameHub report: Task Analysis; Development of Competence Profiles. http://gamehub-
cbhe.eu/project-results/. Accessed 20 Nov 2016
13. GameHub report: Developed Didactical Approach in Training. http://gamehub-cbhe.eu/
project-results/. Accessed 20 Nov 2016
14. GameHub report: Elaborated Technico-Pedagogical Requirements on Game Laboratory.
http://gamehub-cbhe.eu/project-results/. Accessed 20 Nov 2016
15. Tuning Academy. http://www.unideusto.org/tuningeu/publications.html. Accessed 20 Nov
2016

Physioland - A Serious Game
for Rehabilitation of Patients
with Neurological Diseases

Tiago Martins1,2, Vítor Carvalho1,2(&), and Filomena Soares1


1 R&D ALGORITMI Centre, University of Minho, Guimaraes, Portugal
2 IPCA – Polytechnic Institute of Cavado and Ave, Barcelos, Portugal
vcarvalho@ipca.pt

Abstract. Current society has observed an increasing number of victims of
neurological disease with reduced mobility, leading to a need to perform
physical therapy to optimize their quality of life. This results in physiotherapeutic
programs filled with repetitive exercises, often tedious, that lead
to the demotivation of patients and consequent poor adherence and withdrawal.
As a result of technological evolution, new tools such as serious games are
emerging, and their use in the field of physical therapy can modify the way
patients face their treatments, promoting their motivation. Thus, we have
developed a serious game based on image processing techniques to motivate and
monitor patients with neurological diseases in their physical therapy practice.

Keywords: Serious games · 3D sensor · Neurological disease · Reduced mobility · Physical therapy · Motivation

1 Introduction

The number of people affected by neurological diseases, that is, diseases related to the
Nervous System, whose basic functional unit is the neuron or nerve cell, is increasing
daily. By receiving and transmitting the nerve impulses, the neurons conduct the
collected information [1]. The occurrence of lesions in neurons, whether genetic,
congenital or acquired, triggers cellular dysfunctions, compromising the transmission
of electrical signals, culminating in neuronal death and, consequently, configuring a
neurological disease [2]. Some examples of neurological diseases are multiple sclerosis,
stroke, Friedreich’s ataxia and Parkinson’s disease.
Although different in their genesis, because the causes of neuronal damage differ,
the neurological diseases mentioned above have something in
common: the transmission of the electrical signals for movement ceases, the muscles
lose activity and atrophy, compromising the mobility of the affected patients [3].
Neurons are mostly amitotic cells, so when they are injured, they cannot be repaired
by cell division. However, it is common to observe a recovery, although partial, of the
lost functions, even in cases where the consequences of the injuries are severe, which
shows that the Nervous System has the capacity to develop mechanisms that allow it to
adapt the possibilities of the individual to the challenges of the environment. Thus,


when there is loss of neurons, new alternative neural circuits are sought which can
replace the injured structures. These adaptive mechanisms are the manifestation of an
intrinsic property of the Nervous System, neuroplasticity or neuronal plasticity, which
can be defined as the ability it has to modify its structural and functional organization,
as long as it is subjected to repeated stimuli [4].
There seems to be a consensus in the literature on this subject that the practice of motor
exercises induces neuronal plasticity. Therefore, people with mobility problems
resulting from neurological diseases should have adequate treatment programs that include
functional activities: since their training is believed to encourage neuroplasticity, patients
can achieve a positive clinical evolution, which will give them independence and,
consequently, a better quality of life [5].
Therapeutic resources based on movement, applied to patients with neuronal
lesions, can stimulate new connections with the Central Nervous System, contributing
to its reorganization: new dendrites can sprout; it is possible to extend branches already
present; existing synapses can be altered or new synapses can be created; changes in
axons may occur; new neurotransmitters can be produced [6]. It is here that physical
therapy becomes essential to promote sensory stimuli able to encourage neuroplasticity
and, thus, to contribute to the recovery of the functional movements of patients who
have suffered sequelae of neurological diseases [7].
This paper is organized as follows: Sect. 2 introduces Physical Therapy; Sect. 3
refers to Serious Games and Motivation; Sect. 4 presents the purpose of this work;
Sect. 5 mentions the physiotherapeutic exercises adapted to electronic game situations;
Sect. 6 details the monitoring of physical therapy exercises using image processing
techniques; Sect. 7 describes the development of the Physioland game, namely the
general architecture, the Physioland concept and the special features of development;
Sect. 8 focuses on the results obtained from the experience; Sect. 9 presents the
conclusion of this work.

2 Physical Therapy

According to the World Confederation for Physical Therapy, physical therapy is a


health specialty that provides targeted services to develop, maintain, and restore
maximum movement and functional capacity throughout life, including physical,
psychological, emotional and social well-being [1]. It studies, prevents and treats the
functional kinetic disturbances that occur in organs and systems of the human body,
originated by genetic mutations, traumas or acquired diseases [8].
Most patients with neurological disorders require physical therapy programs to
maintain their independence for as long as possible. But the exercises of traditional
therapy, although necessary, are monotonous and boring, which leads participants
to feel discouraged and to abandon treatment prematurely, especially if they
do not see short-term improvements [9]. Therefore, it is necessary to create alternatives
that offer patients pleasant environments capable of encouraging them to perform
with satisfaction the programs that have been prescribed to them, leading them to
abstract from the therapy itself and enter a dimension of fun, even forgetting that they
are performing therapy [10, 11].


3 Serious Games and Motivation


The results of several studies have shown that serious games, when placed in the
service of physical therapy and rehabilitation, have enormous potential to create such
environments, arousing the motivation of patient players, who end up fulfilling all the
time provided for their therapy, and so the chances of success are high.
Patient motivation is an important challenge that has direct implications for the
quality of the patient's involvement in the treatment process. It gives rise to dynamic, active and
persistent attitudes, leading the patient to value the idea of success over the
obstacles encountered. The motivated patient shows enthusiasm in participating in the tasks,
looks for new opportunities and shows willingness to start new challenges [12, 13].
A playful environment, rich in possibilities for patients to interact, communicate,
take risks, develop skills, and increase self-esteem, socialization and persistence, will probably be a
motivating factor that leads them to overcome the setbacks that their treatments imply
[14]. One way of introducing this recreational component into physical therapy and
rehabilitation centers is through the use of serious games.
These technological resources provide several elements able to promote patient
motivation. This is the case of the various forms of performance feedback that these games
typically integrate, such as numerical scores, progress bars, dialog boxes, controller vibration/force
feedback and sound. The immediate feedback helps the player to see his/her progress,
to understand his/her mistakes and to learn from them, and encourages him/her to improve
performance. It constitutes a rewarding incentive, which can lead to increased
motivation and pleasure, creating a greater desire to successfully complete special tasks
and achieve the desired results [15, 16].
These technological tools can be adjusted to the individual needs of each person,
and can reproduce any situation, under the same conditions and as many times as one
wishes [17]. The fact that a player who fails an exercise is able to repeat it is a
motivating factor. Thus, with these technologies, even the most difficult exercises become
activities that give the player pleasure and challenges to overcome [10].
Serious games can awaken patients’ fantasy and curiosity by providing environ-
ments that offer opportunities to collaborate, compete, relate to other players, take risks
without fear of failure, manipulate virtual worlds and interact with them, to create
expectations that will not always be confirmed [18–20].

4 Purpose
As mentioned, serious games entertain players and reinforce healthy movements, while
creating engaging and pleasant environments and providing challenging and rewarding
experiences. However, we have verified that there is a gap with regard to serious games
specifically designed to support traditional physical therapy. To overcome this gap,
the general objective of this work was to develop a serious game, called
Physioland, to motivate and monitor the practice of physical therapy by patients with
neurological disease. The non-invasive system uses image processing techniques to
monitor patients and it adapts to electronic game situations some of the exercises proposed
by traditional physical therapy. This game is intended for all people with neurological
disorders in mild or moderate condition, whether or not they have balance.


5 Physiotherapeutic Exercises

After analyzing several exercises with health professionals, six exercises were selected.
They are interesting from a technological and physiotherapeutic point of view, and
have different dynamics suitable for adaptation to electronic game situations, in order to motivate
patients to perform them. The chosen exercises were as follows (Fig. 1):
• Glenohumeral joint abduction/adduction (Fig. 1(a));
• Glenohumeral joint flexion/extension (Fig. 1(b));
• Radioumeral joint flexion/extension (Fig. 1(c));
• Hip joint abduction/adduction (Fig. 1(d));
• Cross-movement (Fig. 1(e));
• Pulleys (Fig. 1(f)).

Fig. 1. Set of exercises chosen for the Physioland Game. (a) – Glenohumeral joint
abduction/adduction; (b) – Glenohumeral joint flexion/extension; (c) – Radioumeral joint
flexion/extension; (d) – Hip joint abduction/adduction; (e) – Cross-movement; (f) – Pulleys

The glenohumeral joint abduction can be defined as the movement occurring in the
frontal plane, around a horizontal axis directed dorsoventrally, that moves the arm
away from the midline of the body. The glenohumeral joint adduction is the movement
in the opposite direction to that of abduction, that is, it takes place in the same frontal
plane, around the same axis, but it moves the arm toward the midline of the body [21].
The glenohumeral joint flexion occurs in the sagittal plane, being performed forwards
and upwards, around a transverse axis. The movement in the same plane, but
in the opposite direction, representing the return of this flexion, is called the gleno-
humeral joint extension [21].
The radioumeral joint flexion occurs in the frontal plane and consists of the
movement of the arm towards the shoulder, which results in a decrease of the
arm-forearm angle. In the opposite direction, the radioumeral joint extension is performed,
which increases the same angle, since the arm moves away from
the shoulder [21].


The hip joint abduction is performed in the frontal plane, around a horizontal axis in
the anteroposterior direction, and consists of the lateral elevation of the leg, that is, it
moves away from the midline of the body. As with the glenohumeral joint, for the
hip joint the return movement of abduction, which moves the leg towards the midline of the
body, is called the hip joint adduction [21].
Cross-movement is a diagonal movement, with each arm alternately moving for-
ward and up or down, attempting to reach a target appearing on the opposite side,
placed randomly by the physiotherapist.
The exercise that uses the pulleys consists of a combined movement of the
glenohumeral joint abduction/adduction and the radioumeral joint flexion/extension
and, therefore, it is performed in the frontal plane.

6 Monitoring of Physical Therapy Exercises Using Image Processing Techniques

As mentioned earlier, the intention was to create a simple, non-invasive system. At the
outset, anything that might complicate the interaction with the patient, such as
sensors, markers or anything else that had to be placed on the patient's body, was
eliminated. Priority was given to patient comfort, since the purpose of the system was
to analyze the patient's motivation.
Given these requirements, a motion monitoring system based exclusively on image
processing was devised, preferably using a single device so as not to complicate the
assembly/calibration process in a real environment.
Compared with traditional intensity sensors, depth cameras offer some advantages:
they work at low light levels; provide a calibrated scale estimate; are invariant to color
and texture; resolve silhouette ambiguities; and greatly simplify the task of subtracting the
background from the image. But the most important advantage is their ability to synthesize realistic
depth images, thus allowing a large and inexpensive set of training data to be built [22, 23].
To monitor the therapeutic exercises performed by the patients, the Kinect sensor
from Microsoft was used, relying exclusively on the depth image, that is, on the Kinect depth
sensor. The Kinect (Microsoft) camera returns an image of 640 × 480 pixels, at 30
frames per second, with a depth resolution of a few centimeters [22].
To recognize the joints of the human body, we resort to some functions already
available in the Microsoft Kinect SDK, namely the skeleton function, the main function
for recognition of human skeletal joints.
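As an illustration of this step, the following minimal sketch shows how tracked joint positions can be read from the skeleton stream of the Kinect for Windows SDK (v1). It is a stand-alone C# example written for this text; the actual integration of the SDK with Unity 3D used in Physioland is not reproduced here, and the class and method organization is an assumption.

using Microsoft.Kinect;

// Minimal sketch (Kinect for Windows SDK v1): read tracked skeletal joints
// from the sensor's skeleton stream. The actual Physioland/Unity 3D
// integration is not shown; names here are illustrative only.
public class SkeletonReader
{
    private Skeleton[] skeletons;

    public void Start()
    {
        KinectSensor sensor = KinectSensor.KinectSensors[0];
        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        sensor.Start();
    }

    private void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;
            if (skeletons == null || skeletons.Length != frame.SkeletonArrayLength)
                skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);

            foreach (Skeleton s in skeletons)
            {
                if (s == null || s.TrackingState != SkeletonTrackingState.Tracked) continue;
                // Joint positions (in metres) fed into the monitoring algorithms,
                // e.g. shoulder, elbow and wrist for the upper-limb exercises.
                SkeletonPoint shoulder = s.Joints[JointType.ShoulderLeft].Position;
                SkeletonPoint elbow = s.Joints[JointType.ElbowLeft].Position;
                SkeletonPoint wrist = s.Joints[JointType.WristLeft].Position;
                // ... compute angles, alignment and compensation from these coordinates
            }
        }
    }
}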
Initially, specific algorithms for monitoring the chosen exercises were developed,
namely, monitoring of compensation, alignment, speed, acceleration and performance
technique, based on the coordinates of the joints involved in each exercise.
Throughout the research process and after having participated in several physical
therapy sessions accompanied by physical rehabilitation professionals, the research

team realized that in the technique of performing each exercise there are four main
characteristics to detect if it is correctly performed: angles defined by different body
segments, alignment, compensation and speed [24].
Regarding the angles, the ideal values are specific to each exercise. Let A, B and C
be three joints; from their coordinates it is possible to define two vectors $\vec{v}_1$ and $\vec{v}_2$,
according to Eqs. (1) and (2):

$\vec{v}_1 = \overrightarrow{BA} = A - B$  (1)

$\vec{v}_2 = \overrightarrow{BC} = C - B$  (2)

The amplitude $\theta$ of the angle between the two vectors is given by Eq. (3):

$\theta = \cos^{-1}\dfrac{\overrightarrow{BA} \cdot \overrightarrow{BC}}{\lVert \overrightarrow{BA} \rVert \, \lVert \overrightarrow{BC} \rVert}$  (3)

The alignment, with few exceptions, requires that some joints align with each other in
the plane of execution of the exercise. It is necessary to compare the coordinates of the involved
joints relative to a given axis. If $h$ is this axis, and A and B are any two joints, the
comparison is made by determining the absolute value of the difference between
the coordinates of A and B relative to $h$, that is, by calculating the value of
expression (4):

$\lvert h_A - h_B \rvert$  (4)

After making the comparisons two by two, the largest of the values obtained is
recorded. Ideally, for the alignment to be correct, this value should be as close to 0
(zero) as possible, which means that the coordinates of all the compared joints, relative
to the axis in question, are close to each other, which symbolically is
translated by expression (5):

$h_A \approx h_B$  (5)

Compensation refers primarily to leaning the body to the right or to the left (laterally),
forward or backward. The procedure for checking whether or not there is compensation
is the same as that described for alignment, although the joints involved and the axis
to be considered may be different.
The average angular speed, $\omega$, of execution of an exercise, in radians per second, over a
time interval $\Delta t$, in seconds, during which the angle described by a body segment undergoes
a change $\Delta\theta$, in radians, is given by Eq. (6). It should remain approximately constant
while the exercise is being performed, which corresponds to an average acceleration
approximately equal to zero. This acceleration, $\gamma$, expressed in radians per second
squared, is obtained by Eq. (7), where $\Delta\omega$ represents the variation of the mean angular
speed in the time interval $\Delta t$ [24].

$\omega = \dfrac{\Delta\theta}{\Delta t}$  (6)

$\gamma = \dfrac{\Delta\omega}{\Delta t}$  (7)

These equations will be used to determine the speed and the acceleration of the exe-
cution of each monitored exercise. An exception is the cross-movement, for which only
compensation is monitored, because it is a very free movement.
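To make Eqs. (1)–(7) concrete, the sketch below shows one possible C# implementation of the joint angle, the alignment/compensation deviation and the average angular speed and acceleration computed from joint coordinates. For instance, for the radioumeral joint flexion/extension, A, B and C would be the shoulder, elbow and wrist joints, respectively. The vector type, method names and the clamping of the cosine are our own illustrative choices, not the exact Physioland code.

using System;

// Illustrative implementation of Eqs. (1)-(7): joint angle, alignment and
// average angular speed/acceleration computed from joint coordinates.
// Joint positions are assumed to be simple (x, y, z) triples in metres.
public struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }

    public static Vec3 operator -(Vec3 a, Vec3 b) =>
        new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);

    public double Dot(Vec3 o) => X * o.X + Y * o.Y + Z * o.Z;
    public double Norm() => Math.Sqrt(Dot(this));
}

public static class ExerciseMonitor
{
    // Eq. (3): amplitude (in radians) of the angle at joint B,
    // defined by the vectors BA and BC (Eqs. (1) and (2)).
    public static double JointAngle(Vec3 a, Vec3 b, Vec3 c)
    {
        Vec3 v1 = a - b;   // Eq. (1)
        Vec3 v2 = c - b;   // Eq. (2)
        double cos = v1.Dot(v2) / (v1.Norm() * v2.Norm());
        return Math.Acos(Math.Max(-1.0, Math.Min(1.0, cos)));
    }

    // Eqs. (4)-(5): largest pairwise deviation of the joint coordinates
    // along one axis; values close to zero indicate correct alignment
    // (the same check is used for compensation, with other joints/axes).
    public static double AlignmentDeviation(params double[] axisCoordinates)
    {
        double max = 0.0;
        for (int i = 0; i < axisCoordinates.Length; i++)
            for (int j = i + 1; j < axisCoordinates.Length; j++)
                max = Math.Max(max, Math.Abs(axisCoordinates[i] - axisCoordinates[j]));
        return max;
    }

    // Eq. (6): average angular speed (rad/s) over an interval dt (s).
    public static double AngularSpeed(double deltaTheta, double dt) => deltaTheta / dt;

    // Eq. (7): average angular acceleration (rad/s^2) over an interval dt (s).
    public static double AngularAcceleration(double deltaOmega, double dt) => deltaOmega / dt;
}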

7 Physioland Game

The Physioland has a medieval concept, a topic that can interest the extended target
audience of the game – people with neurological disorders in mild or moderate con-
dition. It has a 3D environment, and it was developed in the game engine Unity 3D [25].
Our main task was to develop a narrative that could be easily understood and
interpreted by neurological patients. It was also decided to create game
situations as close as possible to what the player is actually doing
when he/she performs the physiotherapeutic exercises, without distracting him/her and
without losing his/her interest and motivation, that is, game situations directed to the
needs and restrictions of the patients.

7.1 General Architecture


The overall architecture of the system is based on five main components: the game, the
hardware, the peripherals, the remote database and the player (Fig. 2).
The game consists of a graphical interface, game logic, scenes, characters and set-
tings. For their construction, physics, sound, graphics, artificial intelligence engines and
user input control methods are required. All of these features are integrated and are made
available with the Unity 3D game engine. The game has a SQLite database, with a simple
structure, which stores some information regarding the sessions of health professionals
and the best results of the players, in each level, when they play the game in free mode.
This game consists, in general, of game modes, settings and playing levels. After the
Physioland loading screen (Fig. 3(a)), the main menu (Fig. 3(b)) appears with the
options: play, free mode and settings. In the settings, the user can define the sound
volumes, both for the music and for the effects, and the language to be used in the game,
can disable or activate the alerts given while the exercises are being executed, and can check for
new updates (Fig. 3(c)). Login is required to play in normal mode. In
practice, the function of this login is to authenticate the clinic and not a user. Any
healthcare professional at a particular clinic can use his/her credentials to log in, being
automatically assumed the clinic of this professional. It is necessary to highlight that the


Fig. 2. General system architecture

Fig. 3. Physioland screenshots: (a) Physioland loading screen; (b) Main menu; (c) Settings


patients must be always accompanied by their health professional who is responsible for
all the settings and for putting the game to work. The patient should only run the game.
The game can be run in play or free mode, the first being the most versatile and
complete. When a health professional chooses this mode (the clinic login in the settings
and an Internet connection are required), the list of health professionals of the
authenticated clinic appears. After the selection of the health professional, the list of
patients of the selected professional is displayed and, when one of these is selected,
the professional can see the summary table of that patient. The levels are then loaded with their
settings for that patient. At the beginning of each level, a screen with information is
displayed along with an animation that shows how the exercise should be performed.
At the end of each level a summary is presented with the score obtained. After the
sequential execution of all levels intended for the player, he/she is redirected to the main
menu.
In this game mode, all information about the player is loaded from the remote
database, and all data about each player's performance are sent to the
health professional. Depending on the specifications of each level and on the player, the
challenges can be performed with the upper or lower limbs, on the right side, on the left
side or on both sides, for 2.5 min or 5 min each.
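For illustration only, one possible C# representation of these per-patient level settings is sketched below; the names and defaults are assumptions and do not reflect the actual remote database schema used by Physioland.

using System;

// Illustrative (assumed) representation of the per-patient settings loaded
// for each level: which limbs and side are exercised and for how long.
public enum LimbGroup { Upper, Lower }
public enum BodySide { Left, Right, Both }

public class LevelSettings
{
    public int LevelNumber;     // 1 = «the village», ..., 6 = «the fishing»
    public LimbGroup Limbs;     // upper or lower limbs
    public BodySide Side;       // left, right or both sides
    public TimeSpan Duration;   // 2.5 min or 5 min per challenge

    public static LevelSettings Default(int level) => new LevelSettings
    {
        LevelNumber = level,
        // level 4 is the only level of the game exercising the lower limbs
        Limbs = level == 4 ? LimbGroup.Lower : LimbGroup.Upper,
        Side = BodySide.Both,
        Duration = TimeSpan.FromMinutes(2.5)
    };
}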

7.2 Physioland Concept


Physioland has a simple narrative, which is a requirement for this target audience. This
is based on the story of a family consisting of three characters: the father, the mother
and the son. The father is responsible for supporting the family, having to collect the
largest number of coins, in return for which he has to overcome several challenges:
«the village», «the fishing», «the boat», «the sunset», «the waterfall» and «the bridge».
These names came from the patients during the game development process because
they have a direct relation to the scenery that can be observed. The mother and the child
have the function to help the main character to achieve the proposed goals. The
narrative is easy to interpret since the player recognizes him/herself in the philosophy of
the game. Although this narrative is situated in a historical period different from the
present one, it is timeless.
The initial design of the game was based on building a small world that is no more
than a medieval village, with elements from that era. At various levels, the main
character appears in distinct places, where he has to make challenges to obtain the
family’s livelihood.
In the first level, «the village», the player is impelled to perform the exercise of the
glenohumeral joint abduction/adduction, in order to pick up the coins that appear in the
arch described by the hand of the limb that the character is exercising (Fig. 4(a)), which
is the one of the opposite side of the player, to give the player the sensation he is seeing
himself in a mirror. As the arm moves away from the midline of the body, the character
picks up coins, with increasing amounts. When he reaches the last coin, upward (ab-
duction), new coins appear, to be collected in the opposite movement. When the last
coin goes down (adduction), the process restarts, repeating itself until the stopwatch
reaches zero. When the patient is in an incorrect position, the coins disappear and
cannot be collected.


Fig. 4. Game screenshots: (a) First level; (b) Second level

The second level, «sunset», refers to the exercise of the glenohumeral joint
flexion/extension, which leads the player to pick up coins that, just as on the previous
level, are positioned in the arch described by the limb hand that matches the player and
the character (Fig. 4(b)). The logic of the score is the same as in the previous level, as is
the procedure that occurs when an incorrect position of the player is detected.
In the third level, the «boat», the player will have to perform the exercise of the
radioumeral joint flexion/extension to pick up the coins that are in the arch described by
the hand of the limb that the character is exercising (Fig. 5(a)), which is, again, the one
of the opposite side of the player. The patient score procedure follows the standards of
the levels already described and the patient must guarantee a minimally correct posture
to be able to execute the game.

Fig. 5. Game screenshots: (a) Third level; (b) Fourth level

The fourth level was called «the waterfall» and refers to the exercise of the hip joint
abduction/adduction. It is the only level of the game that provides the patient with the
exercising of the lower limbs. The character has to pick up the coins that appear in the
arch described by the foot of the limb that is being exercised (Fig. 5(b)), opposite to the
one of the player. The farther the coins are from the midline of the character’s body, the
greater their value. The most valuable coin is obtained when the patient completes
the abduction. Then, new coins come up and he begins to perform the exercise in the
opposite direction, that is the adduction. In this sense, the coins appear with values in
descending order. The situation repeats until the stopwatch reaches zero.


The fifth level, «bridge», gives the patient an exercise with freer movements
(Fig. 6(a)). As the game progresses, the main character has to pick up the coins that
appear randomly in front of him (more to the right, to the left, to the top, to the bottom,
more or less distant), which requires the player to perform the cross-movement exer-
cise. The main character (and therefore the player) must alternate between the left upper
limb and the right, regardless of where the coin appears. However, this model is not
rigid: the exercise can be executed with only one arm, or with the other if the patient cannot reach a
coin with the first, since it is important that the problems of each patient are respected.
Whenever one coin is collected, another appears elsewhere, after a few seconds.
Theoretically, whenever the player picks up a coin, he/she should return his/her arm
to the starting position. Since not all patients can do it, the game does not check this
situation, making it more versatile. If the patient is in compensation, which, with
respect to performance, is the only variable monitored, the error procedure is the same
as in previous exercises.

Fig. 6. Game screenshots: (a) Fifth level; (b) Sixth level

«Fishing» is the nickname for the sixth level (Fig. 6(b)). The character, who is in a
small boat, has to pick up the coins that are falling from the top, alternately, to the left
and to the right. However, on each side, there is a margin of randomness in the fall of
the coins. To overcome this challenge, the patient has to move the boat to the side
where they will appear, performing the exercise of the pulleys. This challenge can be
run on several levels of difficulty (easy, medium, difficult, and very difficult). These
levels of difficulty are reflected in the speed of the falling of coins. All coins have the
same point value. If the patient drops a coin into the water, he/she is penalized
and points are deducted. Whenever the patient adopts an incorrect posture, an alert is
given and the coins disappear during this time, a situation similar to what happens in
the other levels.
On the right side of the game screen there is a graph corresponding to the acceleration of
the patient's movement, which varies in color: green, yellow, orange and red. On the
left side of this graph, an arrow moves up and down according to
the acceleration of the movement in one direction of execution or the other. The patient
can be guided by this graph to confirm that he/she is achieving speed control.


7.3 Special Features of Development


During the execution of a given exercise, the system stores, every second, the alignment
and compensation values (maximum deviations), as well as the values of speed,
acceleration and the monitored angles, to be sent later to the remote database.
Whenever the system detects that the patient is at rest during the execution of a challenge,
through his/her posture (values similar to the initial ones) and speed of execution
(approximately equal to zero), it discards the respective values.
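The once-per-second sampling and the discarding of samples taken while the patient is at rest could be organized as in the following C# sketch; the thresholds and member names are illustrative assumptions, not the values used in Physioland.

using System;
using System.Collections.Generic;

// Minimal sketch of the once-per-second sampling described above: each second
// a sample of the monitored values is stored, and samples taken while the
// patient is at rest (posture close to the initial one, speed near zero)
// are discarded.
public class ExerciseSample
{
    public double Alignment;      // maximum deviation, Eq. (4)
    public double Compensation;   // maximum deviation along the compensation axis
    public double Angle;          // monitored joint angle, Eq. (3)
    public double Speed;          // average angular speed, Eq. (6)
    public double Acceleration;   // average angular acceleration, Eq. (7)
}

public class SessionRecorder
{
    private readonly List<ExerciseSample> samples = new List<ExerciseSample>();
    private readonly double initialAngle;           // posture at the start of the level
    private const double PostureTolerance = 0.05;   // illustrative thresholds
    private const double SpeedTolerance = 0.05;

    public SessionRecorder(double initialAngle) { this.initialAngle = initialAngle; }

    // Called once per second while the level is running.
    public void Record(ExerciseSample s)
    {
        bool atRest = Math.Abs(s.Angle - initialAngle) < PostureTolerance
                      && Math.Abs(s.Speed) < SpeedTolerance;
        if (!atRest)
            samples.Add(s);   // kept; sent later to the remote database
        // samples taken while the patient is at rest are simply discarded
    }

    public IReadOnlyList<ExerciseSample> Samples => samples;
}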
The game has a backoffice that manages the health clinics, their staff and their
patients (Fig. 7); it is the place where it is possible to configure the patients' sessions
and to consult the results they obtained in each of them.

Fig. 7. Generic communications architecture with the backoffice

As previously indicated, Unity 3D was the game engine used to create Physioland.
It enables the development of games in 3D environments and programming in the C# and
JavaScript languages; it is very flexible and has libraries with which it is possible to
integrate the Microsoft Kinect SDK.
Among the software used to develop the graphical components of Physioland, there
are: Blender, a complex but versatile open-source tool for 3D development; Adobe
Fuse, for the creation of characters and their composition, texturing, rendering and
skinning; Adobe Photoshop, for the texturing and creation of 2D elements of the
graphical interface; IClone Pro, for animation; IClone 3dXchange, for the rigging and
export of models for Unity 3D.
The Steinberg Cubase software, a digital audio workstation, was used for Physioland's
sound design, and video editing was done with Adobe Premiere Pro,
professional software for editing audiovisual content.


8 Results

Physioland underwent a ten-week clinical experiment with a
sample of eleven patients of both sexes, aged between 17 and 83 years, with
mild to moderate neurological diseases.
As previously indicated, the goal of this study was to try to understand if Physi-
oland would be able to motivate the players to practice physical therapy.
At the end of the ten weeks, they were given a questionnaire to collect their opinion
on several aspects of Physioland. This questionnaire consisted of three parts: the first
one to collect data to characterize the respondents; the second, on a seven-point Likert
scale, subdivided into three groups, associated with the ease of use of Physioland, the
appearance and performance of the game and satisfaction of use; the third one con-
sisting of questions of open response to allow respondents to be able to express
themselves freely about Physioland.
The analysis of the collected opinions revealed that the patients were unanimous in
considering that Physioland is easy to use, has a nice appearance and performs
well. They also considered Physioland an interesting game, which provides a
pleasant, fun-filled environment and whose usefulness is indisputable.

9 Conclusion

The serious game Physioland was developed to motivate and monitor patients with
neurological diseases in their physical therapy practice. The use of the Unity 3D game
engine in combination with the Microsoft Kinect sensor for the detection of physio-
therapeutic movements proved to be a very reliable solution and with very good
accuracy for the proposed objective, without freezes and with real-time response.
The results obtained with the questionnaire given to the eleven patients show that
Physioland is a game able to challenge the patient to do more and better, to motivate
him/her to continue the treatment, to encourage him/her to complete the exercises and to help
him/her abstract from the annoyance that therapy causes. All the patients
interviewed prefer to use it for physical therapy exercises rather than traditional
practice alone. When they were asked to give an overall appreciation of Physioland, the word
"motivator" was the one they used most.
Thus, it can be said that the developed game was able to respond positively to the
goal defined for this research: to develop a serious game, based on image processing
techniques, to motivate and monitor the practice of physical therapy of patients with
reduced mobility, as a consequence of neurological disease, providing a good com-
plement to traditional physical therapy. In the course of the experience, the patients
showed more and more enthusiasm, wishing to play Physioland over and over.

Acknowledgments. This work has been supported by COMPETE: POCI-01-0145-FEDER-007043
and FCT – “Fundação para a Ciência e Tecnologia” within the Project Scope:
UID/CEC/00319/2013 and also by FCT – “Fundação para a Ciência e Tecnologia” within the
Project Scope: SFRH/BD/74852/2010.


References
1. Santos, K.G.L.: Bases anatomofisiológicas do corpo humano I, Rio de Janeiro. Universidade
de Castelo Branco, Brasil (2010). (in Portuguese)
2. Shieh, C.-C., Coghlan, M., Sullivan, J.P., Gopalakrishnan, M.: Potassium channels:
molecular defects, diseases, and therapeutic opportunities. Pharmacol. Rev. 52(4), 557–594
(2000)
3. Villar, F.A.S.: Alterações centrais e periféricas após lesão do sistema nervoso central.
Considerações e implicações para a prática da fisioterapia. Rev. Bras. Fis. 2(1), 19–34
(1997). (in Portuguese)
4. Ribeiro Sobrinho, J.B.: Neuroplasticidade e a recuperação da função após lesões cerebrais.
Acta Fisiátrica 2(3), 27–30 (1995)
5. Borella, M.P., Sacchelli, T.: Os efeitos da prática de atividades motoras sobre a
neuroplasticidade. Revista Neurociências 17(2), 161–169 (2009). (in Portuguese)
6. Mulder, T., Hochstenbach, J.: Adaptability and flexibility of the human motor system:
implications neurological for rehabilitation. Neural Plast. 28(1–2), 131–140 (2001)
7. Oliveira, C.E.N., Salina, M.E., Annunciato, N.F.: Fatores Ambientais que influenciam a
plasticidade do SNC. Revista Acta Fisiátrica 8(1), 6–13 (2001). (in Portuguese)
8. Peres, C.P.A.: Estudo das sobrecargas posturais em fisioterapeutas: Uma abordagem
biomecânica ocupacional. Universidade Federal de Santa Catarina, Florianópolis (2002). (in
Portuguese)
9. Smith, S.T., Talaei-Khoei, A., Ray, M., Ray, P.: Electronic games for aged care and
rehabilitation. In: Proceedings of the 11th International Conference on e-Health Networking,
Applications and Services, Sydney, NSW (2009)
10. Martins, T., Carvalho, V., Soares, F.: Application for physiotherapy and tracking of patients
with neurological diseases – preliminary studies. In: IEEE 2nd International Conference on
Serious Games and Applications for Health (SeGAH), Vilamoura, Portugal (2013)
11. Martins, T., Carvalho, V., Soares, F.: Monitoring of patients with neurological diseases:
development of a motion tracking application using image processing techniques. Int.
J. Biomed. Clin. Eng. (IJBCE) 2(2), 37–55 (2013)
12. Alcará, A.R., Guimarães, S.É.R.: A instrumentalidade como uma estratégia motivacional.
Revista de Psicologia Escolar e Educacional 11(1), 177–178 (2007). (in Portuguese)
13. Sprinthall, N.A., Sprinthall, R.C.: Psicologia educacional - Uma abordagem desenvolvi-
mentalista, Lisboa: McGraw-Hill (1993). (in Portuguese)
14. Kaufmann-Sacchetto, K., Madaschi, V., Barbosa, G.H.L., Silva, P.L., Silva, R.C.T., Filipe,
B.T.C., Souza-Silva, J.R.: O ambiente lúdico como fator motivacional na aprendizagem
escolar. Cadernos de Pós-Graduação em Distúrbios do Desenvolvimento, vol. 11(1), pp. 28–
36 (2011) (in Portuguese)
15. Burke, J.W., McNeil, M.D.J., Charles, D.K., Morrow, P.J., Crosbie, J.H., McDonough, S.
M.: Optimising engagement for stroke rehabilitation using serious games. Vis. Comput. Int.
J. Comput. Graph. 25(12), 1085–1099 (2009)
16. Neves, D.E., Santos, L.G.N.O., Santana. R.C., Ishitani, L.: Avaliação de jogos sérios casuais
usando o método GameFlow. Revista Brasileira de Computação Aplicada 6(1), 45–59
(2014). (in Portuguese)
17. Sik-Lányi, C., Brown, D.J.: Design of serious games for students with intellectual disability.
In: Proceedings of the 2010 International Conference on Interaction Design and International
Development, Bombay, India (2010)


18. Martins, T., Araújo, M., Carvalho, V., Soares, F., Torrão, L.: PhysioVinci – a first approach
on a physical rehabilitation game. In: Proceedings of the Fifth International Conference on
Serious Games Development and Applications, SGDA 2014, Berlin, Germany (2014)
19. Malone, T.W.: What makes things fun to learn? Heuristics for designing instructional
computer games. In: Proceedings of the 3rd SIGSMALL Symposium and the First SIGPC
Symposium on Small Systems, New York, NY, USA (1980)
20. Talug, D.Y.L: Lifelong learning throughout today’s occasions namely social media and
online games. In: Procedia – Social and Behavioral Sciences, 4th World Conference on
Educational Sciences, WCES-2012, Barcelona, Spain (2012)
21. Sharkey, J.: The Concise Book of Neuromuscular Therapy: A Trigger Point Manual. Lotus
Publishing, Chichester (2008)
22. Shotton, J., Girshick, R., Fitzgibbon, A., Sharp, T., Cook, M., Finocchio, M., Moore, R.,
Kohli, P., Criminisi, A., Kipman, A., Blake, A.: Efficient human pose estimation
from single depth images. IEEE Trans. Pattern Anal. Mach. Intell. 35(12), 2221–2840 (2013)
23. Shotton, J., Fitzgibbon, A., Cook, M., Sharp, T., Finocchio, M., Moore, R., Kipman, A.,
Blake, A.: Real-time human pose recognition in parts from single depth images. In: IEEE
Conference on Computer Vision and Pattern Recognition (CVPR) 2011, Colorado Springs,
CO, USA (2011)
24. Martins, T., Carvalho, V., Soares, F.: Tracking of physiotherapy exercises using image
processing techniques. In: Controlo 2016 - Proceedings of the 12th Portuguese Conference
on Automatic Control, Guimarães, Portugal (2016)
25. Martins, T., Araújo, M., Lopes, O., Carvalho, V., Soares, F., Matos, D., Marques, J.,
Machado, J., Torrão, L.: PhysioVinci – Solução Integrada para Reabilitação Física de
Pacientes com Patologias Neurológicas, Video Jogos 2014, 6–7 Novembro 2014, Barcelos,
Portugal (2014). (in Portuguese)

Human Computer Interfaces, Usability,
Reusability, Accessibility

The Development of ICT Tools
for E-inclusion Qualities
An Early Case Study

Dena Hussain(&)

Department of Engineering Science, University West, SE-461 86 Trollhattan, Sweden
Dena.hussain@hv.se

Abstract. With the diversity and increasing use of information and
communication technologies (ICT) in the educational sector, new pedagogic
approaches are being introduced and have had a major impact on education,
including improved educational methods. In both schools and homes, ICT are
widely seen as enhancing learning, which explains their rapid diffusion and
acceptance throughout developed societies. However, utilizing ICT tools to
support and guide educators in finding the right support for students with
special individual needs remains a challenge, and investigating the different
challenges that teachers face in their working environment is an ongoing matter.
One of the challenges teachers face frequently is creating an inclusive
environment. "Inclusive education" is a process of strengthening the capacity of
the education system to reach out to all learners involved. It changes education
in content, approaches, structures and strategies, with a common vision that
covers all children of the appropriate age range. Inclusion is thus seen as a
process of addressing and responding to the diversity of needs of all children.
An inclusive education system can therefore only be created if schools become
more inclusive, in other words, if they become better at educating all children
in their communities according to their individual needs. Creative forms of
communication should therefore be encouraged to promote personalized care;
hence, the focus of this research is to investigate the use of a data process
flow map with the aim of guiding the teacher towards an inclusive way of
thinking.

Keywords: Information and communication technologies · Inclusion ·
Education system

1 Introduction

The utilization of ICT tools has been investigated and introduced in many studies in
different contexts. These include the potential for social inclusion and exclusion that technology can
offer, and the way in which technology can facilitate access to information sources,
learning opportunities and personal agency [1]. The World
Declaration on Education for All, adopted in Jomtien, Thailand (1990) [2], sets out an

overall vision: universalizing access to education for all children, youth and adults, and
promoting equity. This means being proactive in identifying the barriers that many
encounter in accessing educational opportunities and identifying the resources needed
to overcome those barriers [2]. Flexible teaching-learning methodologies necessitate
shifting away from long theoretical, pre-service-based teacher training to continuous
in-service development of teachers [2]. A survey regarding ICT in education was
commissioned in 2011 by the European Commission Directorate General Communi-
cations Networks, Content and Technology to benchmark access to, use of and attitudes
towards ICT in schools in the EU27, Croatia, Iceland, Norway and Turkey.
More than 70% of teachers surveyed
at all grades expressed a positive or very positive opinion about the relevance and
positive impact of ICT to support different students’ learning processes (working
collectively, autonomously, practicing, etc.) and objectives (motivation transversal
skills, higher order thinking skills, etc.) [3]. The possibility of people participating in
the Information and Knowledge Societies is dependent on the availability and
affordability of ICTs and relevance of contents and services, but also on their acces-
sibility: ‘users must be able to perceive, understand and act upon ICT interfaces’ [3].
The objective of this study is to investigate the utilization of Information and
Communication Technologies (ICT) in creating a digitalized process that can assist
educators in finding the right support for pupils with special individual needs, where
generalized teaching methods cannot be applied and student needs are challenging to
recognize. ICT is a particularly valuable tool for children with special needs and can
improve their quality of life, reducing social exclusion and increasing participation.
The aim of the platform is to guide the teacher towards an inclusive way of
thinking, creating a balance between three main factors: the student, the
environment and the associated activities. The resulting full participation is the central
concept of this research project for creating an inclusive environment for every child with
special educational needs, leading to a unified model for inclusion.

2 Background and Method

This study is part of a European project whose objective is to utilize
the Index of Inclusion as a mind map. Several associates are involved, including a
municipality representing Sweden as a strategic partnership, together with municipalities
in Germany and Iceland. The project involves different schools (pupils aged
8–18), primary schools (pupils aged 5–12) and secondary schools (pupils aged 11–18).
The average age of pupils involved in the project is 9.5 years in the primary schools and
14.5 in the secondary schools. The overall project focus is to learn from each other by sharing
experiences of the inclusion work carried out in each country; teachers act as a
gateway, and their skills development and curriculum resources need increased support.
The drive towards equity in education through the support of accessible
ICT is a main concept and hence the main research goal of this project. The focus of this
paper is to create an ICT tool, "Digi-Flow", that can assist educators in finding the right
support for students with special educational needs. The goal is to design
"Digi-Flow" in such a way that it can help determine what is best for children with

special needs. It is important to focus on creating an optimum learning environment so
that all children can learn and achieve their individual potential. Therefore, the research
approach for this project was based on a collaborative teaching environment involving
educators with different pedagogic backgrounds, which proved to be a positive side effect
of the collaboration. The process of exchanging information between groups increases
knowledge of the study group and helps to widen perspectives and provide accurate
knowledge about the study group involved.
During this process, the term "full participation" was identified as a main pedagogic
goal for creating inclusion qualities and a unified model for inclusion. In order to
achieve this, three main factors were identified: the student, the environment and the
associated activities.
A Scrum-agile approach helped to identify the main users of the tool, as well as
different usability requirements and characteristics, which led to an initial prototype;
this approach also helped to verify existing requirements and identify new and changed ones.
Over several sprints, the team was able to identify the main concepts used to create a
unified model for inclusion among all associated partners, which in turn defined the
main functional requirements of the ICT tool "Digi-Flow".
With the concept of "full participation" as the core requirement, the objective of the
ICT tool is to help the teacher investigate and determine the different factors to assess,
and to create a statistical overview of the main factors behind an unbalanced environment.
This helps the teacher determine the needed actions and resources, creating an "action
plan" that clarifies how to achieve balance and therefore "full participation", as shown
in Fig. 1.

Fig. 1. Dimensions of full participation.

3 Result and Discussion

Several fundamental outcomes were achieved during the early stages of this
research, including a full pedagogical assessment of the Index of Inclusion, bench-
marking the concept of "inclusion" between all European partners, and creating a unified
model that can be adapted to all countries. The development of the ICT tool
"Digi-Flow" was informed by reflections and information collected via the
investigations and surveys that were performed, giving a
clearer objective for the tool and for the data to be included.


Since the ICT platform was created in partnership with three different European
countries, it was important to understand the different national requirements and regulations;
therefore, three data clusters were considered:
1. Country legal, regulatory and programmatic commitments.
2. Country capacity to implement and apply the introduced solution.
3. Country actual results for children with special needs.
The objective of this research is to create an effective platform that can help
determine what is best for children with special needs; therefore, the most important
element is a logical structure of questions. The designed platform utilizes
the Index of Inclusion as a mind map in a form unified across all three countries, and
follows the inclusion stages as the process flow for the digital tool. In a
previous study, the dimensions of inclusion were identified and categorized into three
main categories [4]:
1. Equivalence: the school’s capability to see/recognize and understand the pupils’
preconditions and needs.
2. Accessibility: the school’s capability to adapt teaching, localities and the social com-
munity to a diversity of needs.
3. Participation: the school’s capability to stimulate pupils to ‘take part’; learning to be
led, to lead oneself and to lead others.
To ensure quality and effectiveness, an auditing process was used during the development
of the ICT tool. The main objective of this process was to confirm the data
collected and to help verify the platform requirements and specifications during
development, creating a more efficient and effective environment for all partners
in the project and reducing redundancy in both the development method
and the data collected. This process was integrated within the development
method used and therefore had an incremental nature. As shown in Fig. 2, the audit process
consists of several main stages and sub-stages:

Fig. 2. Audit process


1. Planning: The aim of this phase was to help create a focus point, selecting specific
features from a list of requirement definitions and identifying which
requirement set to verify and develop further into specifications. Sub-stages
included:
(a) Select priority from list
(b) Review objectives
(c) Set standard
2. Data Collection: During this phase the defined requirement sets were expressed in
different data forms, reviewing the objective of the different data to be
included and why. This was achieved via three sub-stages:
(a) Design audit
(b) Collect data
(c) Analyze data
3. Reporting: The objective of this phase was to verify and validate, via prototyping,
the requirements that were translated into specifications, by collecting feedback from all
partners and participants.
4. Implementation and Monitoring: All feedback gathered in the previous stage was
assessed and evaluated, redefining requirements and introducing changes when
needed, and therefore reviewing the initial requirement standard and creating action
plans.
5. Review and Re-audit: The aim of this phase was to review all decision making and
create additional plans.
Early results confirm the need for such a tool, but also show that
its potential users can vary. Using this process also helped to identify which
types of data can and must be included in the ICT platform and why; such data included
information regarding the child's perspective and related social background. As shown
in Fig. 3, different ideas were gathered and analyzed to help create the data sets
required for the ICT tool "Digi-Flow" [4].

Fig. 3. Related information


The results also show different user groups, indicating that the group that
found most need for such a platform is that of special educators, who work directly with
children with special needs at different levels, as shown in Fig. 4.

Fig. 4. Potential users

The study also emphasizes that the tool can be utilized as a platform to improve
communication not just for the different educators working with children with special
needs but also as a platform for communication between the school and parents.
The developed platform consists of different questions, which are organized by
categories, weights and bases. The categories are used as factors to link resources,
and the bases are used to filter the questions. 'Base' in this project means two things:
1. Base-questions (or parent-questions) can have questions that depend on them,
i.e. sub-questions (or child-questions).
2. Base-questions act as the filtering process.
As the platform fetches a question, it examines whether it is a base-question. If it is,
then, depending on the answer provided, it identifies any sub-questions linked to it. The
'weights' in this ICT platform are used to determine the importance of the questions:
weights are stored in every answer and affect the questions in a 'positive' or 'negative'
way. Every question and resource is linked to a category, which maintains a good
structure and makes the objective of each question easier to understand.
The main categories of full participation are displayed with a numerical representation
of their relevancy to the current evaluation. The results obtained by the different user
groups utilizing the tool include a main resource page, which contains links and
information about persons and national organizations that can be contacted. Additional
studies and prototyping are in progress as part of this study and will form part of future results
verifying the advantages of ICT tools in this context. The extent to which the ICT tool
spreads will be measured from a long-term perspective and assessed accordingly.
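As an illustration of the question structure described above (base-questions, sub-questions, weights and categories), the sketch below gives one possible object model and filtering step. It is written in C# purely for consistency with the other examples in this volume; the implementation language and schema of Digi-Flow are not stated in this paper, so all names here are assumptions.

using System.Collections.Generic;
using System.Linq;

// Illustrative data model: base-questions carry sub-questions that are only
// asked for certain answers, every answer stores a weight, and questions
// are linked to a category (e.g. student, environment, activities).
public class Answer
{
    public string Text;
    public int Weight;                                   // positive or negative contribution
    public List<Question> SubQuestions = new List<Question>();
}

public class Question
{
    public string Text;
    public string Category;                              // used to link resources
    public List<Answer> Answers = new List<Answer>();
    public bool IsBase => Answers.Any(a => a.SubQuestions.Count > 0);
}

public static class Evaluation
{
    // Base-questions act as the filter: the chosen answer decides which
    // sub-questions are asked next, and its weight updates the category score.
    public static void Apply(Question q, Answer chosen,
                             Queue<Question> pending, IDictionary<string, int> categoryScores)
    {
        categoryScores[q.Category] = categoryScores.TryGetValue(q.Category, out var s)
            ? s + chosen.Weight
            : chosen.Weight;
        foreach (var sub in chosen.SubQuestions)
            pending.Enqueue(sub);
    }
}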


4 Conclusion

In conclusion, using ICT tools to link schools and different resources can deliver
substantial educational benefits, for both teachers and children with special needs.
Assisting teachers in the process towards an inclusive environment helps create a
sustainable and effective platform that can help determine what is best for the students,
helping the teacher determine the needed actions and link resources to relevant
information.

References
1. Sheehy, K.: ICT and special educational needs: a tool for inclusion. Br. J. Learn. Disabil. 33
(4), 206–207 (2005)
2. Policy Guidelines on Inclusion in Education Published by the United Nations Educational,
Scientific and Cultural Organization 7, place de Fontenoy, 75352 Paris 07 SP, France,
UNESCO (2009)
3. Policy Guidelines on Inclusion in Education Published by the United Nations Educational,
Scientific and Cultural Organization, UNESCO, p. 7 (2013)
4. Hussain, D.: The utilization and development of ICT tools for inclusion qualities in cases of
special need children. In: Book of Industry Papers, Posters and Abstracts, International
Conference on Health and Social Care Information Systems and Technologies (2016)

Insights Gained from Tracking Users’
Movements Through a Cyberlearning
System’s Mediation Interface

Daniel Stuart Brogan, Debarati Basu(&), and Vinod K. Lohani

Virginia Tech, Blacksburg, USA


debarati@vt.edu

Abstract. Cyberlearning has the ability to connect learners from diverse set-
tings to learning resources regardless of the learners’ proximity to traditional
classroom environments. Tracking users’ movements through a cyberlearning
interface provides data that can be used both to interpret students’ level of
engagement in the learning process and to improve the cyberlearning system’s
user mediation interface. The Online Watershed Learning System (OWLS),
which serves as the end user interface of the Learning Enhanced Watershed
Assessment System (LEWAS), is an open-ended guided cyberlearning system
that delivers integrated live and/or historical environmental monitoring data and
imagery. Anonymous user tracking in the OWLS helped to identify students
from various courses as ‘groups of users’ across the world and assisted in
providing information about the importance of various components of the
mediation interface. A pilot test of this tracking capability was conducted in two
first-year engineering courses at Virginia Western Community College during
the fall 2015 semester. During this pilot test, tracking data was collected from a
total of roughly 80 students from a total of four course sections. The data
collected included the amount of time that each student spent using each
component of the OWLS, the paths that he or she used to navigate through these
components and how frequently each student returned to the OWLS. Sugges-
tions for system modifications based on comparison of the time students spent
using various system components with students’ post-test evaluation of the
educational value of these components are included. To address the limitation of
the data collected during the pilot study, which could not identify a user across
different devices, a user login system is being developed for investigating
individualized learning. The current system will address the need to understand
in real-time the learner-specific pathways of content and progression, and these
learners’ levels of engagement within the system.

Keywords: Cyberlearning systems  Mediation interfaces  User tracking


data  Student learning

1 Introduction

Cyberlearning systems are innovative learning infrastructures that use communication
technology and networked computing to support teaching and learning [1, 2]. The NSF
Taskforce on Cyberlearning notes that cyberlearning has the ability to help learners
from academic, commercial and political organizations, as well as independent learn-


ers, to learn from anywhere across the globe and at any time, either from inside or
outside of traditional classroom spaces [3, 4]. The literature emphasizes that many
cyberlearning environments actively engage learners in the learning process and pro-
vide learners with personalized learning experiences [2, 5]. Advancing such person-
alized learning is one of the NAE 14 Grand Challenges of Engineering [6].
The Online Watershed Learning System (OWLS), which serves as the end user
interface of the Learning Enhanced Watershed Assessment System (LEWAS), is an
open-ended guided cyberlearning system that was developed to deliver integrated live
and/or historical environmental monitoring data and imagery from the LEWAS to end
users. Within the theoretical framework of situated learning, this system was designed
to use data and imagery from the OWLS to situate users at the LEWAS site. By using
an HTML5-driven interface that runs in both traditional and mobile devices’ web
browsers, the OWLS is available to users across hardware and software platforms from
any internet-connected location of their choice. The OWLS has several components
and features including live video of the measurement site, an interactive graph, local
weather radar, background information and several case studies [7]. In this paper, we
discuss a pilot test of user tracking functionality added to monitor students’ interactions
with the OWLS in order to assess students’ engagement with the system.

2 Literature Review

Engagement can be defined as the interaction between the students and their learning
environment, and the literature emphasizes that students’ learning is directly correlated
to how meaningfully engaged they are in a task [8]. Engagement can have three
interrelated dimensions: behavioral, emotional and cognitive [9]. According to Kuh
[10], time on a task is a construct for measuring engagement, including several other
constructs such as, student involvement, social and academic integration, good prac-
tices in graduate education, etc. For traditional classrooms, engagement has been
measured with class attendance [11], which is the only easily quantifiable visible
indicator of time on task. Attendance or participation in class has also been determined
as an important variable for measuring success [11] and behavioral engagement [9].
Similarly, for cyberlearning systems, interaction with the interface can be a measure for
behavioral engagement [12].
Various approaches for monitoring interface interactions have been used in the past.
Traditionally, observational methods such as observations, video recording, question-
naires, etc. [13, 14] have been used to monitor users’ interactions from an external
viewpoint. However, given that one of the strengths of cyberlearning systems is that
they can be used anywhere at any time, this type of monitoring is often not possible.
Therefore, it is advantageous to store/log the digitized traces of the users from within a
cyberlearning system [15]. These traces can be represented by the time sequence of
actions completed by a user/learner, such as, mouse clicks, typed keys, navigation
through web-pages, etc. [16]. Analyzing large amounts of these traces to digitally track
students’ usage of a cyberlearning tool assists in identifying their preferences and the
bottlenecks faced by each learner [5]. This also aids in identifying the patterns of

enquiry of various students while solving a given problem [17] and in identifying
students’ participation in the learning environment by providing the degree of usage of
the system by the student [18]. This participation explains the level of engagement of
the students with the cyberlearning platform [18] and identifies their study habits [19].
Hence, by implementing a user tracking system within a cyberlearning tool, the two
components of the active learning process, i.e., student activity and engagement of an
individual user [20], can be better understood. These deep insights obtained from user
tracking data allow educators to (1) predict students’ performances, (2) understand the
efficacy of the learning materials, and (3) validate/evaluate the teaching strategies.
These insights will improve the quality of education and lay the foundation for a more
effective education system [21].
The patterns of students’ activities within a cyberlearning environment provide
evidence to cluster similar students and to develop predictive student models or task
models. This information aids in making personalized, intelligent, online learning
systems, which are efficient for engaging various types of learners [21]. For example,
according to the knowledge level of the learners, adaptive tests and personalized
learning materials can be recommended as scaffolding to help learners complete tasks
[22, 23]. Therefore, integrating a cyberlearning system, such as the OWLS, with user
tracking capability is an innovative approach that can be incorporated to advance the
current engineering education system. This addresses the need to understand in
real-time what each learner is doing within the learning environment as well as the need
to know learner-specific pathways of content and progression [4]. This allows
advancement of technology that adapts learning materials based on assessment of an
individual’s learning experiences and his or her level of engagement with the OWLS.

3 User Tracking in the OWLS

Anonymous tracking to store/log the digitized traces of users was added to the OWLS
to monitor the paths that users take through the OWLS, how long they spend using
various components of the OWLS and how frequently they return. This information
provides insights into the importance of various components of the OWLS. This
tracking was accomplished using Google Analytics. Additionally, the UUID.js library
was used to assign an alpha-numeric universally unique identifier (UUID) to each
device-browser pair. This UUID was stored in a tracking cookie along with the time in
ms. This approach does have some tracking limitations. For example, a single user
accessing the OWLS from multiple browsers on one or more devices will appear as
multiple users because the UUID is assigned to the device-browser pair rather than to
an individual. Other limitations are that users whose cookies are erased after a session
cannot be tracked from one session to the next, that some users may have disabled
tracking cookies completely, and that browser updates often erase tracking cookies.
Due to these limitations, user logins are suggested for future versions.


4 Identifying Groups of Users

Once the data is collected, it must be analyzed, and one challenge in analyzing the data
collected by Google Analytics is to cluster and identify groups of users. This was
accomplished by collecting 15 different pieces of identifying information from each
OWLS page visit, i.e., the UUID, the UTC time at page load in ms, the local URL, the
user’s current country, region (e.g., state) and city, the device category (e.g., desktop,
tablet, mobile), the operating system and version, the browser and version, the screen
resolution, the internet service provider (ISP), the referring website, and if the next
webpage viewed by the user was outside of the OWLS. For analysis, each page view
from October 9, 2015 to May 25, 2016 was assigned a known user ID and a user
group. Known LEWAS team members were identified and separated from other groups
of users. Remaining known users and user groups were identified by noting their
physical location and the use or not of a school ISP. A known user appearing in
multiple locations was assigned to the highest ranking group he or she appeared in from
the list of user groups.
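To make this grouping step concrete, the short Python sketch below illustrates how exported page-view records might be assigned to user groups from the UUID, location and ISP fields. The file name, column names, group labels and matching rules are illustrative assumptions, not the exact procedure used in this study.

import csv
from collections import defaultdict

# Hypothetical group labels, ordered from highest to lowest rank.
GROUP_RANK = ["LEWAS team", "VWCC course", "VT course", "Unknown"]

def classify(view, known_team_uuids):
    """Assign a single page view to a user group (illustrative rules only)."""
    if view["uuid"] in known_team_uuids:
        return "LEWAS team"
    if "virginia western" in view["isp"].lower():
        return "VWCC course"
    if view["city"].lower() == "blacksburg" and "virginia tech" in view["isp"].lower():
        return "VT course"
    return "Unknown"

def group_users(rows, known_team_uuids):
    """Collect the groups seen for each UUID and keep the highest-ranking one."""
    groups_per_uuid = defaultdict(set)
    for view in rows:
        groups_per_uuid[view["uuid"]].add(classify(view, known_team_uuids))
    return {uuid: min(groups, key=GROUP_RANK.index)   # smallest index = highest rank
            for uuid, groups in groups_per_uuid.items()}

with open("owls_pageviews.csv", newline="") as f:      # hypothetical export file
    rows = list(csv.DictReader(f))
user_groups = group_users(rows, known_team_uuids={"team-uuid-1", "team-uuid-2"})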
During this period there were 11,231 page views within the OWLS. Figure 1 shows
the number of pageviews per day for the largest user groups. Note that students in
particular courses use the OWLS extensively for no more than a few days before
moving on to other topics but the LEWAS team members (red) have a regular back-
ground presence. Thus, it is important to remove known LEWAS team members’
pageviews from analysis of the intended users.

Fig. 1. OWLS page views per day by user groups

5 User Tracking Pilot Test

One of the groups of users in Fig. 1 was a pilot test of roughly 80 students from a total
of four sections of two first-year engineering courses at Virginia Western Community
College (VWCC, purple). Students in these courses participated in the user tracking
pilot test on October 13, 20–21 and 26–30, 2015. A total of 173 device-browser pairs
(UUIDs) were recorded from these implementations. However, one UUID was
excluded from analysis as being that of an instructor who visited the OWLS before
introducing it in various course sections. One challenge to analyzing this data is that the
campus computers that students used erased tracking data at browser exit. This explains
both a lack of multiple sessions for a single UUID and the large number of UUIDs
relative to the numbers of students enrolled in these courses (n = 78). It also prevented
analysis of how often users returned.
Some insights can be gained from the ways that these users interacted with the
OWLS. Approximately 27.9% of these users viewed three pages within the OWLS
before leaving. At the other extreme, 10 users (5.8%) viewed 17 or more pages with the
two longest user paths being 40 and 44 pages before leaving the OWLS. Figure 2
shows two concurrent course sections on the day that they were first introduced to the
LEWAS. The blocks with smaller heights at the right ends of users’ interactions with
the OWLS are of unknown temporal duration and are indicated by a reduced vertical
thickness of constant length. These are users’ final pages in the OWLS. The fact that
several users transitioned between OWLS pages at approximately the same times
suggests that they were following the actions of the instructor on the projected display.
Thus, designed instruction, rather than open-ended exploration, drove which pages
students visited when first introduced to the OWLS. This also accounts for the large
percent of users who viewed three pages in the OWLS before leaving. Accordingly, the
homepage, the interactive graph and the data download page were the three most
visited OWLS pages by the students in these courses.
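The path-length statistics above can be derived directly from the per-UUID page-view counts; a minimal sketch of that calculation is shown below, again with an assumed export file and column name rather than the actual tracking log.

import csv
from collections import Counter

# Count how many OWLS pages each device-browser pair (UUID) viewed.
with open("vwcc_pageviews.csv", newline="") as f:
    views_per_uuid = Counter(row["uuid"] for row in csv.DictReader(f))

path_lengths = Counter(views_per_uuid.values())        # e.g. {3: 48, 4: 20, ...}
total = sum(path_lengths.values())
short_paths = sum(n for length, n in path_lengths.items() if length == 3)
long_paths = sum(n for length, n in path_lengths.items() if length >= 17)
print(f"{100 * short_paths / total:.1f}% of UUIDs viewed exactly three pages")
print(f"{100 * long_paths / total:.1f}% of UUIDs viewed 17 or more pages")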

Fig. 2. Use of the OWLS on October 21, 2015 in two courses at VWCC with users 1-20 clearly
in course 1 and users 26–45 clearly in course 2

6 Ranking OWLS Features

In order to evaluate the components and features of the OWLS, the post-tests given to
students in the pilot test at VWCC included an item worded “What was the learning
value of the following components of the OWLS (circle your choices):” followed by
ranking levels for several OWLS features. For the purposes of comparing the features
of the OWLS, the ranking levels were assigned integer numeric values of 1 to 4 for
each group, and these values were averaged for each group to determine scores for each
feature with a score of 1 being the worst and a score of 4 being the best. The results are
shown in Fig. 3. These results imply that access to real-time data ranks as the most
important of these features followed by the anywhere/anytime access that cyberlearning
systems provide. These data availability features are followed by the data visualization
features (live camera and interactive graphs) and supporting information features (case
studies, overhead view/map and background information).

Fig. 3. VWCC courses’ students’ OWLS features’ scores

In order to compare user tracking data with students’ self-perceived learning, the
OWLS web pages were grouped according to the OWLS features. Figure 4 shows the
numbers of pageviews for these features and the corresponding ranks (in red) for those
features students rated for their learning value. Because the students viewed the
Interactive Graph and Live Camera during class time when they were directed to view
them according to the results shown in Fig. 2, they gained increased exposure to these
pages, which may have increased their rankings of these components compared to
features that they did not interact with very much. This is a potential source of bias in
the ranking of OWLS features.

Fig. 4. Number of page views from the full VWCC implementation for each OWLS feature.
The numbers on the right are the rankings of the OWLS features given by students in these courses
following “real-time data” (1) and “anywhere/anytime access” (2).


7 Conclusions, Limitations and Future Work

The OWLS user tracking was able to identify groups of users (who each typically used
the OWLS for no more than a few days) from courses at several different institutions
across the globe. It was also able to show how students who used the OWLS during
class time followed the actions of the instructor rather than exploring the system in an
open ended way. Furthermore, this directed use of certain components may have biased
students’ rankings of these features of the OWLS. This tracking also showed how much
time students spent on each page. However, security measures in the computer lab at
VWCC prevented the OWLS from tracking repeat visitors to know how frequently
people returned to the system. Further research is needed to resolve these issues.
Planned future OWLS tracking enhancements include addition of a user login
system with user tracking capabilities. Each user will be identified using his or her
username, and each of their requests to the server will be stored with timestamps in a
database for further analysis. These requests may include navigating to a different web
page or changing the page focus to various features within a page. Storing the tracking
information in a LEWAS database will aid in securing student information and
addressing the security issues of proprietary products like Google Analytics. With this
system in place, each student in a class will be tracked while solving a particular
problem using the OWLS. The students’ trace information in the database will be
explored to identify their activity streams and their levels of engagement with the
OWLS while problem solving.
Additionally, if students’ use of the OWLS is directed by the tasks that they are
asked to complete with it, it is possible to create profiles for different groups of users
based on the ways that they use the OWLS. This can be used as training data for future
courses to predict how a particular group of users will interact with the OWLS to
complete a given task. This predictive modeling can be used as a foundation to develop
group-specific interface modifications. After the user login system is implemented, this
process can be adapted to develop a personalized learning environment for each user.

References
1. Alvarez, I.B., Silva, N.S.A., Correia, L.S.: Cyber education: towards a pedagogical and
heuristic learning. ACM SIGCAS Comput. Soc. 45, 185–192 (2016)
2. London, J.S.: Exploring cyberlearning through a NSF lens. In: 2012 Paper Presented at
ASEE Annual Conference and Exposition, San Antonio, Texas. https://peer.asee.org/21371
(2012)
3. Borgman, C.L.: Fostering Learning in the Networked World: The Cyberlearning Oppor-
tunity and Challenge. DIANE Publishing, Darby (2011)
4. Madhavan, K., Lindsay, E.D.: Use of information technology in engineering education. In:
Johri, A., Olds, B.M. (eds.) Cambridge Handbook of Engineering Education Research.
Cambridge University Press, Cambridge (2014)
5. Johri, A., Olds, B.M.: Situative frameworks for engineering learning research. In: Johri, A.,
Olds, B.M. (eds.) Cambridge Handbook of Engineering Education Research. Cambridge
University Press, Cambridge (2014)


6. National Academy of Engineering: NAE Grand Challenges of Engineering (2012). http://engineeringchallenges.org/challenges.aspx
7. Brogan, D.S., McDonald, W.M., Lohani, V.K., Dymond, R.L., Bradner, A.J.: Development
and classroom implementation of an environmental data creation and sharing tool. Adv. Eng.
Educ. 5, 1–34 (2016)
8. Stark, J.S., Lattuca, L.R.: Shaping the College Curriculum: Academic Plans in Action. Allyn
& Bacon, Needham Heights (1997)
9. Fredricks, J.A., Blumenfeld, P.C., Paris, A.H.: School engagement: potential of the concept,
state of the evidence. Rev. Educ. Res. 74, 59–109 (2004)
10. Kuh, G.D.: The national survey of student engagement: conceptual and empirical
foundations. New Dir. Inst. Res. 141, 5–20 (2009)
11. Douglas, I., Alemanne, N.D.: Measuring student participation and effort. In: Proceedings of
the IADIS International Conference on WWW/Internet, p. 299 (2007)
12. Beer, C., Clark, K., Jones, D.: Indicators of engagement. Curriculum, technology &
transformation for an unknown future. In: Steel, C., Keppell, M., Gerbic, P., Housego, S.
(eds.) ASCILITE 2010 Proceedings: Curriculum, Technology & Transformation for an
Unknown Future, pp. 75–86, Sydney (2010)
13. Shernoff, D.J., Kelly, S., Tonks, S.M., Anderson, B., Cavanagh, R.F., Sinha, S., Abdi, B.:
Student engagement as a function of environmental complexity in high school classrooms.
Learn. Instr. 43, 52–60 (2016)
14. Boekaerts, M.: Engagement as an inherent aspect of the learning process. Learn. Instr. 43,
76–83 (2016)
15. Rieh, S.Y., Collins-Thompson, K., Hansen, P., Lee, H.-J.: Towards searching as a learning
process: a review of current perspectives and future directions. J. Inform. Sci. 42, 19–34
(2016)
16. Omar, L., Zakaria, E.: Clustering methods applied to the tracking of user traces interacting
with an e-Learning system. In: Proceedings of World Academy of Science, Engineering and
Technology, vol. 68, pp. 894–894 (2012)
17. Kinnebrew, J.S., Biswas, G.: Identifying learning behaviors by contextualizing differential
sequence mining with action features and performance evolution. In: International
Educational Data Mining Society (2012)
18. Baltierra, N.B., Muessig, K.E., Pike, E.C., LeGrand, S., Bull, S.S., Hightow-Weidman, B.:
More than just tracking time: complex measures of user engagement with an internet-based
health promotion intervention. J. Biomed. Inform. 59, 299–307 (2016)
19. Branch, K.J., Butterfield, A.E.: Paper presented at ASEE Annual Conference & Exposition,
Seattle, Washington. doi:10.18260/p.23553 (2015)
20. Prince, M.: Does active learning work? A review of the research. J. Eng. Educ. 93, 223–231
(2004)
21. Romero, C., Ventura, S.: Educational data mining: a review of the state of the art. IEEE
Trans. Syst. Man Cybern. Part C Appl. Rev. 40, 601–618 (2010)
22. Wu, B., Chen, P.P.: Personalized recommendation research in e-Learning systems. Appl.
Mech. Mater. 433–435, 603–606 (2013)
23. Baylari, A., Montazer, G.A.: Design a personalized e-learning system based on item
response theory and artificial neural network approach. Expert Syst. Appl. 36, 8013–8021
(2009)

Practical Use of Virtual Assistants and Voice
User Interfaces in Engineering Laboratories

Michael James Callaghan(&), Victor Bogdan Putinelu, Jeremy Ball,


Jorge Caballero Salillas, Thibault Vannier, Augusto Gomez Eguíluz,
and Niall McShane

Intelligent Systems Research Centre, Ulster University,


Derry, Northern Ireland, UK
mj.callaghan@ulster.ac.uk

Abstract. Automatic Question-Answering (QA) systems and speech recognition/synthesis
functionality and accuracy have improved dramatically over the last
decade, allowing the use of voice interactions for increasingly complex tasks.
Virtual assistants based on speech-based services are growing in popularity and
are now entering the mainstream. These services and devices come with a set of
built-in capabilities and in some instances allow the creation and addition of new
functionality and abilities facilitating their use in a range of diverse application
areas.
Practical electronic and electrical engineering laboratories for undergraduate
students are evolving incrementally driven by affordable instrumentation and
hardware kit with internet access but remain fundamentally unchanged. This
paper explores the practical use of virtual assistants and voice user interfaces in
electronic and electrical engineering laboratories to tutor and assess students while
accessing and controlling test instrumentation and circuits. The re-purposing of
existing teaching resources and material for use in this context is discussed and a
case study and practical working example of a virtual assistant enabled laboratory
demonstrating the viability of this approach is shown.

Keywords: Automatic Question-Answering (QA) systems · Speech recognition and synthesis · Virtual assistants · Voice user interfaces · Practical laboratories

1 Introduction

Automatic Question-Answering (QA) systems and speech recognition/synthesis functionality
and accuracy have improved dramatically over the last decade, allowing the use
of voice interactions to automate and organize complicated tasks and to directly answer
domain-specific questions using natural language; such systems have the potential to
revolutionize human interactions with devices and data [1, 2]. Virtual assistants based on
speech-based services are growing in popularity and are now entering the mainstream,
e.g. Apple Siri, Amazon Echo, Google Assistant and Microsoft Cortana [3]. These
services come with a set of built-in capabilities and in some instances allow the creation
and addition of new functionality and abilities e.g. the Amazon Alexa Skills Kit [4]


provides a development environment and app store where developers can publish and
distribute their voice based applications while the Amazon Web Services Internet of
Things (AWS IoT) platform allows the connection of devices to cloud based services
[5]. These flexible, highly functional development environments and backend archi-
tectures allow the use of the Amazon Echo and similar voice enabled services in a
range of diverse application areas including home automation and education.
Practical engineering laboratories for undergraduate students are evolving incre-
mentally driven by the availability of more affordable instrumentation and hardware
kit, the growth of the Internet, access to virtual and remote laboratories and a move
towards student-centered pedagogies but are at their core fundamentally unchanged [6].
This paper explores the use of virtual assistants in electronic and electrical engineering
laboratories to tutor students; guiding them through experiments; presenting supple-
mentary teaching resources when requested; accessing, controlling and configuring test
instrumentation and hardware and providing feedback through summative and for-
mative assessment. A case study and practical working example of a virtual assistant
approach is demonstrated based on the modification of an existing remote laboratory
and teaching resources for fundamental electronic and electrical engineering circuits
suitable for the first year of an undergraduate degree. The process of integrating test
instrumentation, the board under test, a switching matrix, additional teaching resources
and the Amazon Echo virtual assistant is discussed. Section 2 of this paper provides an
overview of the Amazon Echo platform and the creation of voice user interfaces using
the Alexa Skills Kit. Sections 3 and 4 discuss challenges related to the re-purposing
of an existing laboratory for voice interactions and provides a practical example of this
process. Section 5 looks at the practicalities of implementation, assessment and the
creation of a help system to provide feedback to the student. Section 6 presents the
conclusion and possible future work in this area.

2 Creating Voice User Interfaces Using the Alexa Skills Kit

Amazon Echo is a smart speaker with cloud based voice recognition and speech syn-
thesis capabilities based on natural language processing (NLP) algorithms and
speech-unit selection technology [7]. It consists of a cylindrical speaker with a seven-microphone
array and internet connectivity. The Echo device is capable of processing a
range of voice commands and user interactions (via cloud based services) and can be
used as a home automation hub to access and control smart devices. When operating in
default mode the Echo continuously listens to all speech in its general vicinity and
responds/becomes active when it detects the use of a “wake word” i.e. “Alexa” or
“Echo”. The voice commands/interactions detected after activation are sent to the cloud
for processing by the Alexa Voice Service (AVS)/developer services and relevant
responses are generated (Fig. 1). Third party developers can create voice experiences/
custom skills that extend the capabilities of any Alexa-enabled device using the Alexa
Skills Kit (ASK) which is a collection of self-service APIs, tools, documentation and
code samples. User created custom skills have an invocation name which is a key word
used by the end user to initiate a set of voice interactions/responses with the Echo
device.


Fig. 1. Architecture/interactions of Amazon Voices Services with Alexa Skills Kit

The voice interactions and responses are defined by an interaction model (Fig. 2)
which manages communications between the parties involved using an intent schema,
slots and sample utterances [8]. Intents are the core functionality of a skill and are a
list of common actions the skill can accept and process. Slots are parameters or values
passed with an intent. Sample utterances specify the spoken words and phrases users
can say to invoke intents.
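As a concrete illustration of these elements, a hypothetical interaction model for a laboratory skill might be declared roughly as follows, shown here as a Python dictionary mirroring the JSON intent schema; the intent, slot and utterance names are invented for illustration and do not reproduce the skill described later in this paper.

# Hypothetical interaction model for a laboratory-selection skill.
intent_schema = {
    "intents": [
        {"intent": "SelectLaboratoryIntent",
         "slots": [{"name": "LabName", "type": "LIST_OF_LABS"}]},
        {"intent": "ProvideResistorValueIntent",
         "slots": [{"name": "Resistance", "type": "AMAZON.NUMBER"}]},
    ]
}

# Sample utterances map spoken phrases to intents; {LabName} and {Resistance}
# are slots filled in from what the user actually says.
sample_utterances = [
    "SelectLaboratoryIntent open the {LabName} laboratory",
    "SelectLaboratoryIntent start {LabName}",
    "ProvideResistorValueIntent set resistor one to {Resistance} ohms",
]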

Action | Voice User Interaction (Interaction model)
Make a request | User says, “Alexa, open practical engineering laboratories”
Collect more information from the user | Alexa replies “Which laboratory?” and then waits for a response.
Provide required information | User replies, “Series Parallel laboratory.”
User request is completed | Alexa initializes the Series Parallel laboratory and responds
“Welcome to laboratory one, the Series Parallel…”

Fig. 2. Interaction model for Alexa Skills kit

3 Re-purposing an Online Laboratory for Voice Interactions

The architecture of a typical online laboratory allows the user, located in a separate
geographical location, to access, control and conduct experiments remotely (Fig. 3).
The hardware control element is usually facilitated by the use of GPIB (General Purpose
Interface Bus) or similar communication standards connected to a switching matrix
allowing test instrumentation and experimental boards to be selected, connected and
configured during experiments using a software client-server approach [9, 10]. Figure 4
shows a range of experiments and accompanying learning outcomes/objectives for an
existing online remote laboratory which focused on teaching applied fundamental cir-
cuit theory and suitable for students on the first year of an undergraduate electronic and
electrical engineering degree.


Fig. 3. Typical architecture for online remote laboratories

The lab, related teaching resources, test instrumentation and the circuit board under
test were accessible through the web using a client/server approach where the host PC
acts as server, running scripts and hosting a database to store instrumentation config-
urations and general settings. An application running on the host computer and utilizing
the Keysight IO Libraries Suite [11] facilitated access to the physically connected
hardware (Fig. 3) and exchanged data with the client. The approach taken for this
project was to reuse the existing local communication and control protocols but to
remove the frontend web/client based UI and remote access functionality from the
laboratory (Fig. 5) and replace it with a voice driven Virtual Assistant using the
Amazon Echo hardware and built with the Alexa Skills Kit and backend Amazon web
services [12].

Fig. 4. Learning outcomes and test instrumentation for existing remote laboratory


Fig. 5. Re-purposed laboratory with Amazon Echo

4 Series Parallel Voice Assisted Laboratory

The Series Parallel laboratory (Fig. 4) was selected in the first instance to develop as a
prototype voice assisted practical to demonstrate the viability of the approach and to
understand the process of re-purposing existing teaching resources and material for use
in this context. The existing laboratory was game and time based where the student was
given values of the input voltage Vin and resistors R2/R3 and a target output voltage Vo
to achieve. The student used the formulas provided for the equivalent resistance Req
and voltage out Vo to calculate the correct value of resistor R1 to achieve the target
output voltage. A score was awarded dependent on how close the calculated value was
to the target value of Vo and the time taken to complete the calculations [13].
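For reference, the calculation the student performs can be sketched as follows, assuming the usual series-parallel topology in which R1 is in series with the parallel pair R2 and R3 and Vo is measured across that pair (the figures should be consulted for the exact circuit used in the laboratory); the component values in the example are arbitrary.

# Sketch of the student's calculation under the topology assumption stated above.
def required_r1(vin, vo_target, r2, r3):
    req = (r2 * r3) / (r2 + r3)        # equivalent resistance of R2 parallel R3
    # Voltage divider: Vo = Vin * Req / (R1 + Req)  =>  R1 = Req * (Vin - Vo) / Vo
    return req * (vin - vo_target) / vo_target

# Example: Vin = 10 V, R2 = 2200 ohms, R3 = 3300 ohms, target Vo = 4 V
print(round(required_r1(10.0, 4.0, 2200.0, 3300.0), 1))  # prints 1980.0 (ohms)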
When the laboratory was in progress, the game user interface/client communicated
with the underlying hardware, switching matrix and instrumentation (Fig. 3) and
connected the selected value of resistor R1 to physically complete the circuit. The game
was initially designed in Microsoft Excel (Fig. 6) and has a high level of replay value
and longevity as each time it was played, different combinations of resistors R2 and R3
were selected randomly and connected from an existing bank of physical components
resulting in a range of different target values of voltage out to achieve (Fig. 7).
Using the existing laboratory as a starting point the next step in the process was to
create a structured series of interactions suitable for a voice driven experience which
included an overview of the laboratory, access to help if needed, control and config-
uration of the instrumentation and circuits in the hardware layer, assessment and
provided feedback to the student (Fig. 8). This structure was used to create the inter-
action model for the Series Parallel skill and to identify the intents, prompts, slots,
sample utterances and communications with the hardware layer (Fig. 9) which would
then be used for development and implementation using the Alexa Skills Kit.


Fig. 6. Microsoft Excel model of Series Parallel game based laboratory

Fig. 7. Hardware instrumentation and component values for Series Parallel laboratory


Fig. 8. High level overview of user interactions and hardware communication


Fig. 9. Interaction model for Series Parallel skill


5 Implementation and Student Help/Feedback System

Echo skills are hosted on Amazon web services and require an Amazon developer
account and have two main components, Amazon Lambda and the Alexa Skills kit.
Amazon Lambda is an event driven cloud computing platform which runs and executes
code on demand when triggered. Running an Echo skill requires the creation of a
Lambda function which is executed from an event source (in this case the Amazon
Echo). Skills are created using the Alexa Skills kit and require a name that will be
displayed to users of the Amazon app store and an invocation name which allows the
user to activate the skill through the Echo hardware. Skills can be written in Node.js,
Python or Java using the interaction model part of the interface to tell the skill which
intents (in a JSON structure) it supports and the sample utterances that will trigger each
intent (Fig. 10). The skill is then linked to the Lambda function using the Lambda
Amazon Resource Name (ARN) and can be tested using the service simulator or the
physical hardware device before publishing on the Echo app store.
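A minimal sketch of such a Lambda handler, written in Python against the plain JSON request/response format used by the Alexa Skills Kit, is shown below; the intent and slot names are illustrative rather than the actual Series Parallel skill.

# Minimal custom-skill handler sketch (illustrative intent and slot names).
def build_response(speech_text, end_session=False):
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Welcome to the practical engineering laboratories. "
                              "Which laboratory would you like to open?")
    if request["type"] == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "SelectLaboratoryIntent":
            lab = request["intent"]["slots"]["LabName"].get("value", "unknown")
            return build_response(f"Starting the {lab} laboratory.")
        return build_response("Sorry, I did not understand that request.")
    return build_response("Goodbye.", end_session=True)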

Fig. 10. Amazon developer console

Using this approach the Series Parallel skill was created and made available on the
physical Amazon Echo device [14] which was placed in the laboratory (Fig. 11). The
student starts the interaction with the Echo using the app invocation name. The Echo
then welcomes the student to the laboratory, provides an overview of the experiment
and sets out their objectives (Fig. 9). It then guides the student through the laboratory
with a series of voice prompts e.g. help provision (formulas for equivalent resistance
Req and voltage out Vo) and the selection of the value of resistor R1 based on the given
values of resistors R2, R3 and input voltage Vin. The student responds to the voice
prompts, and the responses are processed by the Echo/Alexa Skills Kit based on the
sample utterances (Fig. 9).

Fig. 11. Amazon Echo, physical hardware and host PC

In the backend, the Lambda function passes/receives a range of configuration
variables to/from the server and database running on the host PC (Fig. 5) depending on
which intent is called (Fig. 9). These values are written to/read from the database by an
application (developed in C++) running on the host PC which uses the Keysight IO
Libraries Suite to communicate with the physically connected hardware, e.g. the
“app.launch” intent initiates the process of powering up/configuring the instrumentation
(Fig. 7) while the “circuitcomplete” and “readytostart” intents facilitate the physical
connection of the selected values of resistors R1, R2 and R3. Supplementary visual and
teaching material to support the laboratory is presented on the host PC monitor. This
material is synchronized with voice output from the Echo using the “selectFormula”
and “provideFormula” intents.
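The host-side logic can be pictured as a simple polling loop over a request table; the sketch below expresses that idea in Python with an SQLite database, whereas the actual application is written in C++ and drives the instruments through the Keysight IO Libraries Suite. The database file, table, column and payload names here are invented for illustration.

import sqlite3
import time

def apply_configuration(intent, payload):
    """Placeholder for the instrument/switching-matrix commands issued per intent."""
    print(f"configuring hardware for intent={intent} with {payload}")

def poll_requests(db_path="lab_host.db", interval_s=0.5):
    conn = sqlite3.connect(db_path)
    while True:
        rows = conn.execute(
            "SELECT id, intent, payload FROM requests WHERE handled = 0"
        ).fetchall()
        for request_id, intent, payload in rows:
            apply_configuration(intent, payload)
            conn.execute("UPDATE requests SET handled = 1 WHERE id = ?", (request_id,))
        conn.commit()
        time.sleep(interval_s)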
The scoring mechanism created for the original game/laboratory was extended and
used for summative and formative assessment. This was possible as there is a direct
correlation between the score achieved, the calculated/target value of voltage out,
current flow and the resistors R1, R2 and R3 as determined by Ohm’s law and the
formulas for the equivalent resistance Req and voltage out Vo. The formative assess-
ment and feedback given (Fig. 12) is dependent on the score achieved and the impact
of increasing or decreasing the value of resistor R1 on current flow in the circuit.


Fig. 12. Summative and formative assessment with feedback

6 Conclusion and Future Work

This paper explored the feasibility of using virtual assistants and voice user interfaces
in campus based engineering laboratories to tutor and assess students. A practical case
study was presented based on the modification of an existing remote laboratory where
the remote access functionality was removed and replaced with a local control system.
The working example shown demonstrated how this approach could be used to guide a
student through an experiment, providing supplementary teaching resources and help
when requested while accessing and controlling test instrumentation and hardware. It
also demonstrated how voice user interfaces could be used for summative and for-
mative assessment and to provide feedback to students. This area of research is set to
grow rapidly as virtual assistants and related devices become mainstream driven by low
cost consumer hardware and cloud based services. Future work on this project will
focus on developing formal, structured approaches to the creation of virtual
assistant/voice user interfaces for engineering laboratories, extending the approach to
all of the practical experiments shown previously, investigating how this approach
could be integrated with existing and widely used remote laboratory infrastructures and
frameworks and exploring possible uses of these technologies to improve accessibility
and access to teaching resources for students with disabilities.


References
1. Bouziane, A., Bouchiha, D., Doumi, N., Malki, M.: Question answering systems: survey and
trends. Proc. Comput. Sci. 73, 366–375 (2015). ISSN 1877-0509
2. Khillare, S.A., Pundge, A.M., Mahender, C.N.: Question answering system, approaches and
techniques. Int. J. Comput. Appl. 141(3), 34–39 (2016)
3. Microsoft: Conversational intelligence (2016). https://www.microsoft.com/en/mobile/
experiences/cortana/. Accessed 30 Nov 2016
4. Amazon: Getting Started with the Alexa Skills Kit (2016). https://developer.amazon.com/public/
solutions/alexa/alexa-skills-kit/getting-started-guide. Accessed 30 Nov 2016
5. Amazon: Amazon Web Services Internet of Things (2016). https://aws.amazon.com/iot-
platform/how-it-works/. Accessed 30 Nov 2016
6. Feisel, L.D., Rosa, A.J.: The role of the laboratory in undergraduate engineering education.
J. Eng. Educ. 94, 121–130 (2005)
7. Black, A.W., Taylor, P.: Automatically clustering similar units for unit selection in speech
synthesis. In: Proceedings of Eurospeech 1997, Athens, Greece, pp. 601–604 (1997)
8. Amazon: Amazon Echo Interaction Model (2016). https://developer.amazon.com/public/
solutions/alexa/alexa-voice-service/reference/interaction-model. Accessed 30 Nov 2016
9. Lindsay, E., Liu, D., Murray, S., Lowe, D.: Remote laboratories in engineering education:
students’ perceptions. In: Proceedings of 18th Annual Conference Association for
Engineering Education (AaeE 2007) (2007)
10. Lowe, D., Murray, S., Lindsay, E., Liu, D.: Evolving remote laboratory architectures to
leverage emerging internet technologies. IEEE Trans. Learn. Technol. 2(4), 289–294 (2009).
doi:10.1109/TLT.2009.33
11. IO Suite: Keysight IO (2016). http://www.keysight.com/en/pd-1985909/io-libraries-suite?
cc=GB&lc=eng. Accessed 30 Nov 2016
12. Amazon: Amazon web services (2016). https://aws.amazon.com/. Accessed 30 Nov 2016
13. Callaghan, M.J., McCusker, K., Losada, J.L., Harkin, J., Wilson, S.: Using game based
learning in virtual worlds to teach electronic and electrical engineering. IEEE Trans. Ind.
Inform. 9(1), 575–584 (2013)
14. Amazon Alexa/Echo demo: Practical lab for Series Parallel circuit (2016). https://www.
youtube.com/watch?v=tDpx9jZ-WKs&spfreload=10. Accessed 30 Nov 2016

Approaching Emerging Technologies: Exploring
Significant Human-Computer Interaction
in the Budget-Limited Classroom

James Wolfer(B)

Indiana University South Bend, South Bend, IN 46634, USA


jwolfer@iusb.edu

Abstract. There has been an explosion of sensor, presentation, and


display technology available for exploration in Human-Computer Inter-
action. While much of this technology is readily available, approachable,
and/or inexpensive, such as cell phone or Web display, other technol-
ogy remains relatively expensive in the context of classroom instruction.
This work presents an approach to exposing students to the principles
encapsulated in expensive technologies using less expensive alternatives.
Arranged in four broad categories, Brain-Computer Interfacing, Haptics,
Augmented/Virtual Reality, and General interfaces, we survey a collec-
tion of devices and emerging technologies appropriate for student use in
group and individual Human-Computer Interaction projects.

Keywords: Emerging technology · Human-Computer Interaction · Pedagogy · Virtual Reality

1 Introduction
There has been an explosion of sensor, presentation, and display technology
available for exploration in Human-Computer Interaction. While much of this
technology is readily available, approachable, and/or inexpensive, such as cell
phone or Web display, other technology remains relatively expensive in the con-
text of classroom instruction. This is especially true for classes that are only
taught occasionally at any given institution.
One class that has particular interest in emerging technologies is the Human-
Computer Interaction course. Designed to explore the requirements acquisition,
design, development, deployment, and outcomes measurement of user interfaces,
the class blends the assessment of software-driven interfaces such as web pages
and e-commerce sites with emerging hardware interfaces [1]. This work describes
a curated collection of hardware and software designed to provide a laboratory
experience exposing the principles of select emerging technologies to computing
students. By carefully choosing the devices used to cover important technologies
we provide a significant laboratory experience within budget constraints.



Fig. 1. NeuroSky Mindwave

2 Brain-Computer Interfacing
One of the emerging technologies that holds potential to have impact in areas
ranging from entertainment to medicine is that of Brain-Computer Interfacing
(BCI). Historically the purview of medical research, BCI capability has recently
become approachable at the consumer level. There are several BCI interfaces
available commercially at this point in time. Perhaps the most prominent is the
Emotiv line of interfaces [2]. While capable, the Emotiv series of devices are
relatively expensive. As an alternative we elected to work with the NeuroSky
Mindwave electroencephalograph (EEG) sensors [3].
Figure 1 shows the NeuroSky Mindwave. While limited to a single sensor,
this lightweight headset is capable of collecting raw EEG output between three
and 100 Hz with a sampling rate of 512 Hz. In addition, the Mindwave produces
EEG power spectrums for commonly defined bands such as Alpha, Beta, Delta,
etc. Quality analysis data is also available. The device interfaces to the computer
via Bluetooth, and is powered by a single AAA battery.
NeuroSky provides SDKs for most major operating systems, and the open
source community provides several alternatives for language and device interfac-
ing [4]. In addition, there are a variety of applications that can readily capture
the raw data. One such application, for the Android system, is eegID, by Isomer
Programming, LLC [5]. This application acquires a comprehensive set of data
including signal quality, raw EEG values, EEG values in volts, NeuroSky propri-
etary data such as attention level, meditation level, and a range of EEG signal


Fig. 2. eegID NeuroSky plot

bands. The data recorded by the application is stored in a comma-separated
value (csv) file which can be imported into programs or spreadsheets for addi-
tional processing. Figure 2 shows an example of the EEG voltage signal for a
short length of time. In the classroom setting, student projects using the Mind-
wave headset include projects such as processing to detect and use eye-blink
gestures to substitute as mouse clicks.
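As an example of this kind of post-processing, the sketch below treats large, short-lived excursions in the recorded raw EEG values as candidate eye blinks; the file name, CSV column name, threshold and refractory period are assumptions rather than properties of the eegID export format.

import csv

def detect_blinks(path, column="raw_eeg_uV", threshold=150.0, refractory=100):
    """Return sample indices whose raw EEG magnitude exceeds the threshold,
    separated by at least `refractory` samples (candidate eye blinks)."""
    blinks, last = [], -refractory
    with open(path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            if abs(float(row[column])) > threshold and i - last >= refractory:
                blinks.append(i)
                last = i
    return blinks

# Each detected blink could then be mapped to a mouse-click event.
print(len(detect_blinks("eegid_recording.csv")))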

3 Haptics
Another fundamental technology for interacting with the computer, haptics,
exploits the sense of touch. Perhaps the most common experience of haptic feed-
back is that of the vibrational cellphone ring signal. More sophisticated interfaces
provide a realistic sense of touch, and can be used to model phenomena ranging
from spring-mass demonstrations to arterial pulse palpation [6,7]. Using haptics
in an educational setting has been described in [6].
There are many haptic devices available. With six degrees of freedom, the
Sensable Phantom [8] series of devices are among the most popular for research
purposes, are very capable, but are correspondingly expensive, limiting their
availability for occasional classroom use. A variety of projects have been used to
provide less expensive alternatives. For example, Carneiro et al. describe a one
degree of freedom haptic device suitable for educational activities [9].
For much of our haptics work we elect to use a Novint Falcon [10], a haptic
game controller. The Novint Falcon, shown in Fig. 3 provides three degrees of
translational freedom, has a maximum force of over nine Newtons, and can
sustain a refresh rate of 1000 Hz. In addition to the Novint drivers and SDK,
there are open source alternatives that allow programming the device across a
variety of computer and operating system platforms [11]. In addition, high-level
scenegraph tools are available supporting rapid prototyping with the Falcon [12].
Student projects involving the Novint Falcon include using it as the basis for the
haptic representation of aortic surface and pressure profiles [7,13].
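A typical introductory exercise is rendering a virtual spring; the following sketch shows the structure of such a haptic loop independently of any particular SDK, with read_position() and send_force() standing in for whatever device calls the chosen driver or open-source binding provides.

import time

STIFFNESS = 400.0          # N/m, virtual spring constant (illustrative value)
RATE_HZ = 1000             # target haptic update rate

def haptic_loop(read_position, send_force, anchor=(0.0, 0.0, 0.0)):
    """Render a virtual spring anchored at `anchor` using Hooke's law on each axis."""
    period = 1.0 / RATE_HZ
    while True:
        px, py, pz = read_position()                   # metres, device frame
        fx = -STIFFNESS * (px - anchor[0])
        fy = -STIFFNESS * (py - anchor[1])
        fz = -STIFFNESS * (pz - anchor[2])
        send_force((fx, fy, fz))                       # should be clamped to the device's ~9 N limit
        time.sleep(period)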


Fig. 3. Novint Falcon

4 Augmented and Virtual Reality

Augmented and Virtual Reality are approaching mainstream, with Oculus pro-
viding high-end hardware to the consumer market [14]. As a less expensive alter-
native for classroom use we adopt a Google Cardboard [15] approach. Leveraging
the incredible power of the modern smart phone we provide a series of experi-
ments and demonstrations of stereographic, or 3D, content.
The smart phone, mounted in an optical container such as that shown in
Fig. 4a, is used to project 3D images as well as sound from the smart phone
speakers. There are a variety of applications online that take advantage of this
arrangement to provide a VR experience at moderate cost [16]. In addition to
entertainment, projects like RADSVRx represent a serious effort to deploy this
technology for cost effective medical training [17].
Custom content creation is another aspect of such a system. Developing syn-
thetic content is relatively straightforward using standard development tools for
computer graphics. An example of an artery extracted from computed
tomography, rendered as a red-cyan anaglyph to make it suitable for viewing
with colored glasses, is shown in Fig. 5a.
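Composing a red-cyan anaglyph from a stereo pair is itself a small student exercise; a possible sketch using NumPy and Pillow is given below, assuming two aligned, equally sized input images with placeholder file names.

from PIL import Image
import numpy as np

def make_anaglyph(left_path, right_path, out_path):
    """Red channel from the left view, green and blue from the right view."""
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    anaglyph = right.copy()
    anaglyph[:, :, 0] = left[:, :, 0]          # replace red channel with left view
    Image.fromarray(anaglyph).save(out_path)

make_anaglyph("artery_left.png", "artery_right.png", "artery_anaglyph.png")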
Custom content is more challenging for photographic material. Recently,
small “toy” beam-splitters have become available to clip onto cell phone


(a) Cell Phone VR (b) Stereographic Adapter

Fig. 4. VR examples

cameras for just such purpose as shown in Fig. 4b [18]. Figure 5b shows a side-
by-side stereographic image, along with the corresponding anaglyphic image
(Fig. 5c) of Colossus, the worlds first programmable computer at The National
Museum of Computing in the UK [19]. Viewing the side-by-side images on a
smart phone cardboard interface provides a sharp, color, image of the computer
in depth.
The smart phone also forms the basis for projects in augmented reality, com-
bining the synthetic and real environments. Often used to augment human capa-
bility, student projects include geographic and structural mapping of the cam-
pus providing interactive identification of buildings and other structures while
walking.

5 General and Gestural Devices

We now turn to some of the readily available interfaces for general interaction.
Of course there are many that are used daily and taken for granted. Examples
include the keyboard, mouse, and trackpad provided with modern computers.
Other examples include touch surfaces on tablets and mice, as well as a variety of
game controllers with impressive capabilities, such as the Microsoft Kinect. In
this section we profile a series of devices less familiar to the students.

5.1 Cell Phone

It could be argued that the cell phone is the most successful HCI device in history.
In addition to basic communication capability, the modern cell phone features
a host of sensor information readily available for subsequent analysis. Examples
include location, orientation, acceleration, rotation, pressure, proximity, light,
sound, and magnetism. In addition, many cell phones have displays with resolu-
tions rivaling computer displays. The sensor capabilities of cell phones have been
used for applications ranging from navigation to mental health assessment [20].
There are a variety of approaches to access the sensor data from cell phones.
For students adept at software development, programs can be written to extract


(a) CT Artery

(b) Side-By-Side (c) Anaglyph

Fig. 5. VR examples

the data. This has the advantage of complete control of the application, with
the disadvantage of long development times. In the interest of obtaining data
quickly for post-acquisition processing and analysis, we elected to use existing
phone apps to extract the data. One such app for Android, AndroSensor [21],
records all of the sensor data to a comma-separated value (csv) file suitable for
import into a spreadsheet or program for subsequent processing. A sample plot
is shown in Fig. 6.
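As a simple example of working with such an export, the sketch below computes and plots the acceleration magnitude from the three accelerometer axes; the file and column names are assumptions about the CSV layout rather than the exact AndroSensor format.

import csv
import math
import matplotlib.pyplot as plt

# Compute acceleration magnitude per sample from an exported sensor log.
times, magnitudes = [], []
with open("androsensor_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        ax, ay, az = (float(row[k]) for k in ("ACC_X", "ACC_Y", "ACC_Z"))
        times.append(float(row["TIME_MS"]) / 1000.0)
        magnitudes.append(math.sqrt(ax * ax + ay * ay + az * az))

plt.plot(times, magnitudes)
plt.xlabel("time (s)")
plt.ylabel("acceleration magnitude (m/s^2)")
plt.show()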

5.2 Projective Keyboard


A laser keyboard, shown in Fig. 7a, is a small device that projects a full-size com-
puter keyboard on any flat, reflective surface using a red laser. A built in camera
detects hand and finger motion and location with respect to the individual keys
projected and activates the corresponding character. The device also emits an
audio indication that a key was “pressed,” substituting for the mechanical key-
click. The device is powered by an internal battery charged via a micro-USB
port and communicates with the computer via a Bluetooth HID protocol.


Fig. 6. Cell phone acceleration plot

(a) Laser Keyboard (b) Leap Motion Controller

Fig. 7. General and gestural interfaces

When evaluating the keyboard, students liked the idea of the projection
keyboard, but they viewed it as a novelty. They found it awkward to type at
accustomed speed due to the lack of expected mechanical feedback, and while they
found the audible key-clicks helpful, they also found them irritating over time. Thus
the projective keyboard filled a vital role demonstrating a contemporary device
with significant limitations as an object for their first critical review.

5.3 Leap Motion Controller

The Leap Motion Controller (Fig. 7b) is a gesture capture device that is sensi-
tive enough to acquire the position of fingers on the hand within its view [22].
Leap provides an SDK to support software development on the device which
makes the device extremely useful for advanced students. Examples provided by
Leap include interactive Chess, using the hand in more-or-less natural motions,
such as pinch and release, to move individual game pieces. Student projects using
the Leap Motion Controller used the SDK to develop interactive grasping and
moving in a mouse-like interface.

6 Instructional Use
As indicated in the description of the technologies contained in the collection
supporting the HCI class, many of these technologies supported student projects
with durations ranging from a single semester to multi-semester undergraduate
research. In addition to their use in projects, the technology served several
ongoing instructional roles. Specifically, they served as the basis for classroom
demonstrations with interactive exploration of the strengths and limits of each
technology, followed by discussion on limitations and improvements.
The emerging technologies described here also formed the basis for a series of
individual assignments. For example, in HCI, usability design principles form an
important component of the discipline. Encapsulating both theory and empirical
observations, these principles form an assessment basis. Over the course of a
semester each of the devices and technologies are assessed by students in terms of:

1. Visibility. How visible, or transparent, are the components of the interface?
Are any of them hidden? How much effort is required by the user to discover
them?
2. Feedback. Does the user of the interface receive appropriate feedback?
3. Constraints. Are there intentional constraints? Are these constraints neces-
sary for the proper use of the device or the safety of the user?
4. Affordance. Does the organization of the device provide intrinsic clues to its
proper operation? For example, buttons afford pressing and door knobs imply
turning to unlatch; does the technology under evaluation afford obvious
usage?

Over the course of the semester students develop a collection of reports as
they evaluate each device, which, in turn, provides an experience base for future
human-computer interface design.

7 Conclusion

This work has described the assemblage of a carefully curated set of devices
representing a selection of emerging technologies appropriate to form the hardware
basis for a human-computer interface course. When combined with appropriate
design principles, targeted assignments, and sustained projects they combine to
provide significant experience at a reduced cost when compared with research-
grade equipment.


References
1. Preece, J., Sharp, H., Rogers, Y.: Interaction Design. Wiley, Hooboken (2015)
2. EMOTIV Brainware. http://www.emotiv.com
3. NeuroSky. http://neurosky.com
4. Singh, S.: NeuroPy. http://neurosky.com
5. Isomer Programming: eegID. http://www.isomerprogramming.com/downloads/
android-apps/eegid
6. Restivo, M.T., Cardoso, A., Lopes, A.M.: Online Experimentation Emerging Tech-
nologies and IoT. IFSA Publishing, Barcelona (2016)
7. LeClair, A., Wolfer, J.: Haptic representation of aortic pressure waveforms using
synthetic ECG derived time intervals. In: Proceedings International Conference on
Online Experimentation (2015)
8. Sensable Phantom Omni. http://www.dentsable.com/haptic-phantom-omni.htm
9. Carneiro, F., Quintas, P., Abreu, P., Restivo, M.T.: Design and test of a 1 DOF
haptic device for online experimentation. Int. J. Online Eng. 12(4), 55–57 (2016)
10. Novint: Falcon. http://www.novint.com/index.php/novint
11. Danieau, F.: Libnifalcon. https://github.com/libnifalcon/libnifalcon
12. Open Source Haptics: H3D H3DAPI. www.h3dapi.org
13. Gordon, S.L., Wolfer, J.: Vascular-haptic interaction: a student project case study
in computer graphics. In: International Conference on Engineering and Computing
Education, March 2009
14. Oculus: Oculus rift. https://www.oculus.com/
15. Google VR: Google cardboard. https://vr.google.com/cardboard/
16. Wake County Public Schools: Google cardboard virtual reality. http://www.
wcpss.net/cms/lib/NC01911451/Centricity/Domain/3791/Google%20Cardboard
%20VR.pdf
17. Hernandez-Rangel, E.: RADSVRX, Radiology Advanced Educational System with
Virtual Reality Experience. http://www.alexandriavr.com/radsvrx
18. Amazon: 3D cellphone camera lens. https://www.amazon.com/Universal-Mobile-
Camera-iPhone-Samsung/dp/B01CP2VFPE/ref=pd lpo 107 tr t 3? encoding=
UTF8&psc=1&refRID=FQB721ASYKJZ8GPXVMZB
19. The National Museum of Computing. http://www.tnmoc.org
20. Saeb, S., et al.: Mobile phone sensor correlates of depressive symptom severity in
daily-life behavior: an exploratory study. J. Med. Internet Res. 17(7), e175 (2015)
21. Fivasim: AndroSensor. http://www.fivasim.com/androsensor.html
22. Leap Motion: Leap motion controller. https://www.leapmotion.com/G

Touching Is Believing - Adding Real Objects
to Virtual Reality

Paulo Menezes(B) , Nuno Gouveia, and Bruno Patrão

Department of Electrical and Computer Engineering,


Institute of Systems and Robotics, University of Coimbra,
Polo II, 3030-290 Coimbra, Portugal
PauloMenezes@isr.uc.pt
http://www.isr.uc.pt/~paulo

Abstract. This article presents the idea of adding representations of real
objects to Virtual Reality as a way to improve the immersive experience.
To this end, a low-cost hand tracking device and an instrumented cube
based on the use of inertial measurement units are presented. Some pre-
liminary results showing the use of the hand tracker for the animation
of a virtual hand model are shown. The fusion of inertial measurements with
the outputs of a vision-based marker detector will be performed with the help
of a Kalman filter to provide smooth and bias-corrected estimates of
the object pose. The developed solutions offer the flexibility of interact-
ing either with local or remote systems, as they have been designed as
wireless Internet connected objects.

1 Introduction
Virtual Reality (VR), thanks to the recent introduction on the market of low cost
visualization devices, has attracted consumers, companies, researchers, media
producers, game developers, etc., who are seeing opportunities in the most
varied application fields.
Besides the interest in content creation, there have been two main areas of
development for supporting VR, which are displays and motion trackers. On
one side the display devices try to achieve the best quality in terms graphics,
but also in terms of capturing the view point motion that result from the user
movements. As most head mounted displays (HMD) are still linked to computers
through wired connections, this restricts the movement freedom of the users. The
increasing power of portable and wearable computers are making possible to use
wireless HMDs with embedded computing capabilities, that by consequence do
not impose such movement restrictions. This has resulted in various devices
that are being released by companies such as HTC,1 Oculus,2 Samsung,3 or
Microsoft.4 These products and their predecessors have pushed researchers to
1. HTC and Valve virtual reality systems: www.htcvalve.com.
2. Oculus virtual reality systems: www.oculus.com.
3. Samsung VR: http://www.samsung.com/global/galaxy/gear-vr/.
4. Microsoft Hololens: https://www.microsoft.com/microsoft-hololens/en-us.


create Virtual Environments Applications and Toolkits, for example [9,10], for
various purposes.
Naturally, several schools and educators are enthusiastically trying to under-
stand how to explore it as an effective learning tool. In fact these technologies
may create opportunities for the students to improve or reinforce the learn-
ing of specific subjects, for example using virtual laboratories, enabling them
to practice at home before or after performing the real laboratory experiments.
Besides the new rehearsal possibilities that VR opens for preparing the students,
there are experiments which may not be performed directly due to the associ-
ated dangers, costs or impracticalities. In some of these cases simulation may
be a solution, or remote control of the experiment may be the only possible or
acceptable option.
By coupling realtime simulations with 3D graphic engines, it is possible to
make the student visually observe the experiment's evolution by mocking the
real systems. In the case of remotely controlled experiments, traditionally VR-
related devices may also enable us to explore the idea of telepresence in tele-
manipulation or teleoperation tasks. This brings the student to the center of the
task to be performed, using one of two different approaches with respect to
the way the user perceives himself: (1) an embodied agent that sees by himself
and acts using some robotic body, or as (2) an embarked pilot that controls the
device, robot, or vehicle from the inside. Both ideas share the principle that the
user sees him/herself as being present at the remote or virtual place. The main
differences come from the way that he/she perceives the physical body and the
way its actions are controlled.
For the embodied case, the physical (robotic) or virtual body must be seen
as the own body, in a way that the nearby elements must be perceived as being
proximal to one’s body. The control of the (acquired) body must be intuitive and
automatic; in other words, to control it, it suffices to think about doing
and not about acting upon some input (button, joystick), i.e. without
requiring any kind of high-level reasoning. The embarked pilot case is slightly
different in the perception it provides. The physical body is no longer perceived
as one’s body, but instead it is seen as a vehicle that the operator controls
from the inside, like a car, a plane, etc. The perception of being inside of a
vehicle, on some kind of virtual cockpit, with the possibility of looking in most
frontal directions as needed, enable the reuse of the mental models constructed
for driving cars, as an example. In these cases, the operator can see him/herself
driving the vehicle through some set of controls, which may include steering
wheels, joysticks, or other, as appropriate. Therefore, parts of the vehicle no
longer need to be perceived as the own body. But perceiving the own body
and its spatial and contact relations with the cockpit devices is required to
make use of those mental models. In previous works developed at ISR-UC,
both embodied and embarked approaches were developed and evaluated with
convincing results [1,9], where it became clear that the need to perceive the
spatial relationships, namely the position of the controls with respect to the
hands of the operators, is of vital importance for an intuitive operation. This


takes us to the need to track the operator's hand, and eventually body,
postures, or even the related 3D point clouds, to generate a representation
with more or less fidelity, but precise enough to induce the required perception
of presence. Recently some new products have been introduced in the market
with the capability of being tracked and represented in the virtual environment,
such as controllers like Reactive Grip5 or full body suits like the HaptX Skeleton6.
The motion tracking methods based on accelerometers, gyros or magnetic
sensors, either individually or combined, are well documented in the literature, with
their use mostly restricted to orientation tracking due to the drifting problems of
position estimation using these sensors. On the other side, vision-based tracking
of objects and markers also has its limitations in terms of occlusions and rate,
especially when dealing with low-cost solutions that do not make use of synchro-
nous active markers and/or high-speed cameras. This paper presents a low-cost
solution that makes use of the fusion of visuo-inertial sensor data to enable the
tracking of instrumented objects to be included in the virtual reality scenarios.
This aims at fulfilling an important missing feature in most immersive setups,
which is the ability to touch the objects the user is interacting with.

2 Better Immersive Experiences Using Passive Haptics


Driving a remote vehicle from its interior is more than seeing through a (real
or virtual) on-board camera whose orientation follows the orientation of the
HMD worn by the user. Besides observing, the driver needs to actuate by issuing
commands via some kind of input devices like driving wheels, joysticks, push-
buttons, etc. While wearing the HMD, a user sees only what is displayed on
its screens and cannot see the surrounding physical space, the input devices, or
his/her own body parts. In some cases, such as holding a game controller, this may be easy
and intuitive, as typically one does not need to look at the controller while manipulating its
joysticks or buttons; but operating a complex panel or multiple devices may be
more complicated, as moving the hand from one button to the next by guessing
its location may not be an easy task.

Fig. 1. Left: hand tracking device animating a hand model. Right: user hands repre-
sented by point cloud for the control of an underwater robot.
5 Tactical Haptics Reactive Grip, available at: http://tacticalhaptics.com/products/.
6 AxonVR HaptX Skeleton, available at: http://axonvr.com/.


One solution can be the use of a cockpit that is completely mapped into the
VR scene, along with a user body representation, so that the user may perceive himself
approaching and touching its elements. There is one final detail that needs
to be handled to fulfil all the necessary elements for a complete sense of presence,
which is the perception of one's own body parts inside the VR representation, notably
hands and forearms. Here there are two possibilities, both represented in Fig. 1:
(1) use models to represent these body parts, or (2) add the user's own body-part
representations to the view. For the first case, we need to be able to track not
only the hands but also the fingers, so that we can consistently animate the
hand models. For the second approach, we can use some 3D capturing device
(e.g. Kinect™ or Structure™) to generate the necessary point cloud, which after
appropriate segmentation may be used to represent those body parts.

3 Estimating Hand Configuration and Pose


A human hand kinematic model consists of links that represent human bones
and joints, which describe the constraints of motion between the links. Sev-
eral authors have developed more or less complex articulation models for the
hand [5] that in some cases reach 27 degrees of freedom. In our implementation,
we decided to use a simplified kinematic model composed of three joints for each
finger and one joint for the wrist, whose virtual bone structure is depicted in
Fig. 2.

Fig. 2. Left: Wireless Internet connected hand tracker device. Right: hand model with
virtual bone structure.

Direct kinematic equations are used to provide the position and orientation
of each fingertip:

$E_i = {}^{b}T_{w} \times {}^{w}T_{fb_i} \times {}^{fb_i}T_{t_i}$    (1)

where $E_i$ is a matrix containing the pose (position and orientation) of the
fingertip (i = 1, 2, ..., 5), ${}^{b}T_{w}$ is the transformation between the wrist frame, w, and the
base reference frame, b, which is typically the forearm, ${}^{w}T_{fb_i}$ is a matrix
that represents the transformation between the wrist and the i-th finger base reference
frame, $fb_i$, and finally ${}^{fb_i}T_{t_i}$ is the transformation between the i-th finger base and
the corresponding fingertip, $t_i$.
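For illustration only, the following short Python sketch composes homogeneous transformation matrices in the spirit of Eq. (1); the single-axis joint model, joint angles and bone lengths are hypothetical values chosen for the example, not the parameters of the actual hand model.

import numpy as np

def rot_z(theta):
    # Homogeneous rotation about z, modelling a simple 1-DoF revolute joint.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans_x(d):
    # Homogeneous translation along x, modelling a bone of length d (metres).
    T = np.eye(4)
    T[0, 3] = d
    return T

# Hypothetical wrist and finger joint angles (radians) and segment lengths.
T_b_w  = rot_z(0.10)                                 # base (forearm) -> wrist
T_w_fb = rot_z(0.20) @ trans_x(0.09)                 # wrist -> finger base
T_fb_t = rot_z(0.30) @ trans_x(0.04) @ rot_z(0.20) @ trans_x(0.03)  # finger base -> fingertip

E = T_b_w @ T_w_fb @ T_fb_t   # composition as in Eq. (1)
print(E[:3, 3])               # fingertip position in the base frame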


With the proposed hand model, and in order to obtain the pose of each
joint, the ideal hardware solution would be to place an inertial
measurement unit (IMU) on each one and read the corresponding pose as the user
moves his hands around. This would result in a solution with 16 IMUs, making
the final prototype complex and probably not comfortable to use. Therefore,
in order to decrease the number of sensors placed on the hand, a study was
performed [3] on the most common hand gestures and motions and the impact
these have on finger articulation. In this way we determined that we are able to
use only one sensor per finger without losing so many gestures and motions
that the immersion would be disrupted, resulting in a solution with only 6 IMUs.
The developed prototype (Fig. 1) has two main goals: low production cost and
wireless capability (not constraining user motions). In the experimental studies
undertaken, we obtained Root-Mean-Square Error (RMSE) values
of 0.0586 (quaternion units) and 0.9585° (Euler angles).

3.1 Inertial Tracking

Inertial tracking systems calculate the relative change of a moving target in position,
$\Delta p_n$, and orientation, $\Delta q_n$, between two consecutive sampling times, from
the acceleration and angular velocity given by the IMU. Several pose representations
are used in inertial navigation; we use the standard Euclidean coordinate
system for position, p, and quaternions [2] for orientation, q. By solving
the quaternion differential equation [2] with the Euler method we end up with the
quaternion update equation:

$q_{n+1} = q_n \otimes \left(1 + \tfrac{1}{2}\,\Omega_n\,\delta t\right)$    (2)

where $\Omega = [0, \omega_1, \omega_2, \omega_3]^T$ is the quaternion representation of the angular
velocity in the moving frame, $\delta t$ is the sample interval and $\otimes$ denotes quaternion
multiplication. Double integration of the kinematic motion equation with
reference-frame position, p, and acceleration, a, leads to the position update
equation:

$p_{n+1} = p_n + v_n\,\delta t + \tfrac{1}{2}\,a_{n-1}\,\delta t^2$    (3)

Since the acceleration, $a^m$, is measured in the moving frame, it must be
transformed into the reference frame before the position update calculation.
In addition, gravity compensation is needed since the accelerometer
is sensitive to it. This leads to the following acceleration update equation:

$a_n = q_n \otimes a^{m}_{n} \otimes q_n^{*} - g$    (4)

Inexpensive IMUs are usually poorly calibrated, resulting in measurements
accompanied by random and systematic errors. In order to reduce those errors,
the semi-automatic calibration presented by Pretto et al. [8] was used.
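A minimal sketch of how the update equations (2)-(4) can be chained in code is given below; the [w, x, y, z] quaternion layout, the gravity vector and the use of the current (rather than the previous) acceleration sample are simplifying assumptions for illustration, not the calibrated pipeline used in the prototype.

import numpy as np

def quat_mult(q, r):
    # Hamilton product of quaternions stored as [w, x, y, z].
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def imu_step(q, p, v, omega, a_m, dt, g=np.array([0.0, 0.0, 9.81])):
    # Eq. (2): orientation update from the angular velocity quaternion.
    Omega = np.array([0.0, *omega])
    q = quat_mult(q, np.array([1.0, 0.0, 0.0, 0.0]) + 0.5 * Omega * dt)
    q = q / np.linalg.norm(q)                 # keep q a unit quaternion
    # Eq. (4): rotate the measured acceleration to the reference frame and
    # remove gravity (assumed here along +z with magnitude 9.81 m/s^2).
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    a = quat_mult(quat_mult(q, np.array([0.0, *a_m])), q_conj)[1:] - g
    # Eq. (3): position update by double integration (velocity kept as state).
    p = p + v * dt + 0.5 * a * dt**2
    v = v + a * dt
    return q, p, v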


4 Estimating Pose Using Vision and a Strapped-Down IMU


Object tracking in three-dimensional space (3-D) requires a consistent estimation
of the object pose at a suitable sample rate. The fusion of vision-based and inertial
pose estimation gives us the capability of accurately tracking the object for long
periods of time, without the strong drift caused by biased measurements from the IMU,
while also solving problems like occlusion and the low sample rate of video tracking.
Using the visual tracking based on fiducial markers presented by Kato
et al. [6], the pose of the cube can be calculated using a single camera, assuming
that the dimensions of the marker are known and a non-symmetric marker pattern
is used. Theoretically, the maximum sample rate of the visual tracking is the
camera's maximum frame rate, but due to occlusion and other errors the average
rate of poses is lower. On the other hand, the IMU's higher sample rate, insensitive
to occlusions and shadowing, complements the vision tracking.
The proposed sensor fusion method is an extended Kalman filter [11] that
operates in a predictor-corrector manner, presented in Fig. 3, giving us more
robust performance in the presence of noise. The state x holds information about
acceleration, a, velocity, v, position, p, orientation, q, and sensor bias, a_b. In
the prediction step, the relative changes in position, Δp, and orientation, Δq,
calculated from the inertial measurements a_i and ω_i, are used to update the
state estimate x̂ in the interval between two sequential measurements from
the vision tracker. In the update step, the position, p_v, and orientation, q_v, are
applied to correct the state estimate from the prediction step. In this fashion,
we get a higher sampling rate, while depending only on the inertial tracker
between two vision tracker updates.
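The sketch below illustrates only the predictor-corrector structure of such a filter on the position part of the state (predicted at the IMU rate, corrected whenever a marker pose arrives); the orientation and accelerometer-bias states of the actual filter, as well as its tuned noise covariances, are omitted, and all numeric values are assumptions.

import numpy as np

class FusionFilter:
    # Simplified Kalman filter: state x = [p, v] (3-D position and velocity),
    # propagated at the IMU rate and corrected by the visual marker tracker.
    def __init__(self, dt):
        self.dt = dt
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                      # p <- p + v*dt
        self.Q = 1e-3 * np.eye(6)                            # process noise (illustrative)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])    # vision measures position
        self.R = 1e-2 * np.eye(3)                            # measurement noise (illustrative)

    def predict(self, a_world):
        # Propagate with the gravity-compensated acceleration from the IMU.
        dt = self.dt
        u = np.concatenate([0.5 * a_world * dt**2, a_world * dt])
        self.x = self.F @ self.x + u
        self.P = self.F @ self.P @ self.F.T + self.Q

    def correct(self, p_vision):
        # Correct with a position measurement from the fiducial-marker tracker.
        y = p_vision - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

In use, predict() would be called for every IMU sample and correct() only when the camera provides a new marker pose, which is exactly the asymmetry described above.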

Fig. 3. Left: Wireless Internet connected cube with IMU, touch sensor and fiducial
markers. Right: diagram of sensor fusion.

4.1 Results
Figure 4 shows the result of the fusion of the IMU and visual marker-based pose
estimations. As expected, the IMU tends to accumulate errors due to poor sensor
bias estimation, but it can provide data at a much higher rate than a conventional
video camera. The visual-marker pose estimation is very stable during static
poses or slow movements, but fails when the markers are occluded or become
blurred during fast movements. This makes the two techniques a perfect combination,
as the cases where one has weaknesses are those where the other presents its best results.

5 Seeing, Touching, Holding, and Manipulating Objects

Manipulation of objects is one of the most fundamental tasks in everyday life.


To further enhance the immersion of the user in virtual reality, objects and
their virtual representations were added, making the experience more authentic.
In order for the objects to be reached, touched and manipulated while using the
HMD, the user's real hand movements are emulated in the virtual environment
through the use of devices like the one described in Sect. 3. Visual perception
also influences the user's ability to perceive the object, as demonstrated in [4].

5.1 Manipulating Object Models and Induced Perception

Using the wireless touch-sensitive cube presented in Fig. 3 in a similar way to
isometric devices, like the Spacetec Spaceball™ presented in [7], the perceived
compliance of objects presented in the virtual environment can be altered.
This perception modification can be achieved by amplifying or diminishing the
ratio between the force applied and the visual effect experienced through
the HMD, leading the subject to perceive the object as stiffer or softer.
The manipulation of this perception can be used in many educational applications,
for example, while studying classical mechanics.
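As a rough, purely illustrative sketch of this pseudo-haptic idea (the function, stiffness value and gain are assumptions, not the authors' implementation), the deformation displayed in the HMD can be computed from the force read by the cube's touch sensor through an adjustable gain; lowering the gain makes the object look, and therefore feel, stiffer.

def visual_indentation(measured_force, nominal_stiffness, gain=1.0, max_depth=0.02):
    # Pseudo-haptic compliance: the displayed deformation is the physical
    # interaction scaled by 'gain'. gain < 1 -> the object appears stiffer,
    # gain > 1 -> the object appears softer than the real cube.
    depth = gain * measured_force / nominal_stiffness
    return min(depth, max_depth)        # clamp to the model geometry (metres)

# Example: 2 N on the touch sensor, nominal stiffness 500 N/m.
print(visual_indentation(2.0, 500.0, gain=0.5))   # perceived as stiffer
print(visual_indentation(2.0, 500.0, gain=2.0))   # perceived as softer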

[Fig. 4 consists of three plots, Pos. X, Pos. Y and Pos. Z, showing position (mm) versus time (s) for the camera-only estimate and the IMU+Camera fusion.]

Fig. 4. Pose estimation using the fusion of a visual marker pose extraction and IMU
pose estimation


6 Conclusions and Future Directions


The present article proposes a possible solution to push further the immersion
of the user while experiencing and operating within virtual environments. Compromises
were made to keep the solution low-cost and the constraints on the user low,
making the devices easy to use while achieving the intended performance.
Further work will investigate how the perception of object characteristics
such as shape, compliance and smoothness can be manipulated by exploring
combinations of haptic, visual and even sound stimuli.

References
1. Almeida, L., Patrão, B., Menezes, P., Dias, J.: Be the robot: human embodiment
in teleoperation driving tasks. In: Ro-Man 2014: The 23rd IEEE International
Symposium on Robot and Human Interactive Communication, Edinburgh (2014)
2. Chou, J.C.K.: Quaternion kinematics and dynamic differential equations. IEEE
Trans. Robot. Autom. 8(1), 53–64 (1992)
3. Cobos, S., Ferre, M., Ortego, J.: Efficient human hand kinematics for manipulation
tasks. In: Direct, pp. 22–26 (2008)
4. Dominjon, L., Lécuyer, A., Burkhardt, J., Richard, P., Richir, S.: Influence of
control/display ratio on the perception of mass of manipulated objects in virtual
environments. In: IEEE Proceedings Virtual Reality, pp. 19–26 (2005)
5. Gustus, A., Stillfried, G., Visser, J., Jörntell, H., van der Smagt, P.: Human hand
modelling: kinematics, dynamics, applications. Biol. Cybern. 106(11–12), 741–755
(2012)
6. Kato, H., Billinghurst, M.: Marker tracking and HMD calibration for a video-
based augmented reality conferencing system. In: Proceedings 2nd IEEE and ACM
International Workshop on Augmented Reality, pp. 85–94 (1999)
7. Lécuyer, A., Coquillart, S., Kheddar, A., Richard, P., Coiffet, P.: Pseudo-haptic
feedback: can isometric input devices simulate force feedback? In: Proceedings of
the IEEE Virtual Reality Conference, pp. 83–90 (2000)
8. Pretto, A., Grisetti, G.: Calibration and performance evaluation of low-cost IMUs.
In: 18th International Workshop on ADC Modelling and Testing (2014)
9. Sanchez, J.G., Patrão, B., Almeida, L., Perez, J., Menezes, P., Dias, J., Sanz, P.:
Design and evaluation of a natural interface for remote operation of underwater
robots. IEEE Comput. Graph. Appl. 11(99), 1 (2016)
10. Sanfilippo, F., Hatledal, L.I.: A fully-immersive hapto-audio-visual framework for
remote touch, November 2015
11. Welch, G., Bishop, G.: SCAAT: incremental tracking with incomplete information.
In: Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 333–344
(1997)

The Importance of Eye-Tracking Analysis
in Immersive Learning - A Low Cost Solution

Paulo Menezes(B) , José Francisco, and Bruno Patrão

Department of Electrical and Computer Engineering,


Institute of Systems and Robotics, University of Coimbra,
Polo II, 3030-290 Coimbra, Portugal
PauloMenezes@isr.uc.pt
http://www.isr.uc.pt/~paulo

Abstract. This article discusses the use of Virtual Reality as a tool
for supporting learning and some of the opportunities it offers. The importance
of using gaze tracking in immersive learning setups is also discussed.
This serves as motivation for the construction of a low-cost eye
tracker adapted to a head mounted display (Oculus Rift DK2), which
is described. The eye-tracking algorithm as well as the calibration
procedure are described, and some results are presented.

1 Introduction

Immersive learning systems are attracting the interest of many people, and naturally
that of teachers and students. The most appealing features of these systems
are, on the one hand, the interactive level of control that the trainee/student has
over the virtual environment and, on the other hand, the amount of information
they can provide about the user, which can be recorded for later analysis or
processed in real time if needed. To acquire such information, a number of sensors
may need to be installed in the surrounding space, on the user's body, or
both. For systems where the user can move freely in space, a combination of
a body tracking system and a head mounted display (HMD) is typically required
to provide accurate data on the user's body and head movements and poses. For
other systems, where the user sits down to perform the experiment, the HMD's
information alone may be enough to provide data on the evolution
of the user's head position and orientation over time. However, in order to track
the user's gaze direction, special-purpose modifications to the HMD are required,
and this is normally a challenge due to spatial constraints, illumination artifacts
caused by the displayed scenes, etc. In most learning scenarios it is very important
to track the user's gaze direction inside the virtual environment because of
its importance to the instructor/teacher, as gaze patterns may help to determine
whether the trainee/student struggled to understand the subject, so that the instructor can act accordingly,
or whether he just paid attention to the important details at the right time. Examples
of the use of this range from adjusting the displayed information, i.e.
giving extra information to users who seem to be struggling [7], to providing the



instructor/teacher with information about which areas of the text, or which visual assets,
need to be reworked in order to improve their comprehensibility [3,4]. In the
literature we can find many studies on eye-tracking devices, but there are still
very few dedicated to HMD-based immersive systems. Only recently have
companies like SMI and NVIDIA, among others, worked on this subject,
but mostly with the purpose of improving the visualization experience through
new techniques like foveated rendering [5].
In this paper we address the importance of eye tracking in learning environments
and experimentation, and then present a low-cost
eye-tracking solution for HMDs, including both the hardware and the algorithm,
followed by the calibration procedure and the presentation of results.

2 Virtual Reality in Learning and Experimentation

The use of Virtual Reality as a learning tool is becoming increasingly accepted
given the advantages and possibilities it introduces. Using it, the concept of the
virtual laboratory may gain another dimension, where the student or trainee,
beyond learning to control the experiment or system, may feel the sensation of
being in its presence.
The training of tower crane operators has been studied before, and it is an example
of a case that may clearly benefit from the use of this technology to
avoid the financial losses resulting from accidents that may occur during the training
of novices [6].
Virtual labs also introduce the possibility of extending practice beyond
school time, enabling students to prepare some experiments in advance,
or to repeat them later at home. Certain types of experiments may require the
presence of a specialist to verify that the appropriate manipulations are being
done properly, or just for the sake of the participants' safety, or even to prevent
damage to some equipment or expensive materials. Here, simulated experiments,
although not providing the same type of guidance, intrinsically solve the problems
related to personal, equipment and material safety.
Where simulations are concerned, the most common approach is to visualise them
on the computer screen as plots, charts, tables, or animations, but with the
availability of VR/AR systems we will certainly see a change towards their use,
and there are very good justifications for that. The first is that they make it possible
to train manipulation abilities, in order to learn not only the sequence of a
procedure but also the exact gestures to be performed. Examples are the
preparation of medical students to gain suturing or palpation skills. This can
be done using a VR system composed of a head mounted display (HMD) and
some haptic devices that enable the student not only to see but also to sense the forces
and vibrations of the procedure, as if he/she were handling the needle holder.
Another reason to use VR or AR for learning is linked to the known advantage
of learning by doing versus learning by seeing. In practice this creates opportunities
for the student to (virtually) touch, or to observe in place by exploring the subject
from every possible viewing angle, as in the real world [2].


3 Learning to Look at the Right Things at the Right Times as Part of Some Training Processes

It is common sense that vision plays a central role in any interaction activity
developed by humans and most animals. One of the reasons is that through
vision we can learn very different types of information about a subject without
reaching out and touching it. In addition, it enables us to sneak a glance at some
interesting object and continue performing the main task without interruption.
Let us consider an example: while driving a car, we keep our eyes on the road,
but when a speed limit sign appears we need to check whether our current speed is
below the limit or not. For this we peek at the speedometer on the dashboard to
get the intended information, but immediately return our attention to the road.
The design of panels and dashboards is normally guided by knowledge
of the cognitive processes related to human vision. This influences the choice of
the distribution of instruments, or the way to attract attention to the right
information at the right moment.
In complex systems, the number of gauges, together with the complexity of the
task itself, may render the operator's activities very stressful and demanding. In
many such cases it is not viable to use beeps or blinking lights to attract
attention to the information displays, as they tend to turn into noise that may
contribute to degrading the driver's performance and lead to a desensitisation
process that makes the operator stop noticing them.
For this reason, during the training process the operators/pilots must learn
some mandatory procedures, which include the periodic verification or reading of
some displays or gauges in predefined sequences. This is typically important to
confirm whether the related quantities are within acceptable limits or whether some corrective
operation must be executed.
During normal training sessions, the instructor may observe directly whether the
trainee does, as expected, the periodic checking of the important displays, or
consults the appropriate one prior to doing some action or manoeuvre, as a way to
choose the appropriate commands or verify the related safety.

3.1 Immersive Learning Systems

It is known that immersive systems based on HMDs can be an important learning
tool, given that they introduce the possibility of repeating the training at will
in simulated situations, without physical, financial or health risks, either for the
trainee or for third parties. Using this type of system, the training process, which
is typically done in the presence of an instructor, may now also be done individually,
eventually at home. Nevertheless, these systems still have some limitations,
such as: (1) The resolution and field of view of the display devices, although
improved a lot recently, are still below human capabilities. For this
reason, and to have sufficient detail to enable easy reading of the displayed elements,
either their minimal size or their maximum distance has to be limited. This has the
consequence of reducing the visible area of the dashboard for a given static head
pose, unlike in the real world, where a pilot or operator can see most, if not
all, of the dashboard just by moving the eyes left and right. The use of current
HMDs still imposes the rotation of the head to visualise the remaining parts;
as a result, a trainee may easily forget about reading what is out of sight. (2) The
HMD occludes the eyes of the trainee, so the instructor cannot see them. This
does not allow an instructor to know where the trainee is looking. As
a result, it is not possible to know if the appropriate instrument readings were
done when they were supposed to happen.
The first limitation could be overcome by using higher-resolution
displays, but these impose higher demands on the graphics unit and the required
bandwidth. As already applied in some HMDs for reducing cybersickness
effects, the detail of the peripheral regions is reduced, but the peripheral region
is defined with respect to the display center and not to the gaze direction. This
is called foveated rendering and some companies have been working on it [5], still
considering very high-resolution displays receiving full-resolution video, even if
a great part of it is blurred. An interesting solution would be to reduce the
dimensions of the generated rasterizations and vary their position on the screen
so that they are always centred on the user's gaze direction. From this it becomes
obvious that the introduction of an eye-tracking mechanism would serve as a
basis for solving both of these problems. Hereafter we will focus on the
second case.

4 Eye-Tracking in Learning

An eye tracker suitable for use inside an HMD can be a valuable tool for VR-based
training processes. For sessions conducted by an instructor, he/she can
have a view of the trainee's focus of attention at every instant in
time. This can be used, for example, to understand whether there are elements
that take the student longer to understand and that require some intervention or
explanation from the instructor [1].
With its help, the instructor can also observe, either directly, with
the help of heat maps, or by another method, whether the trainee has done the required
readings of the instruments at the appropriate times or in the right sequence
before some particular actuations, as defined in the procedures to be learnt.
In the case where the system is to be used for practicing in an unsupervised
manner, the procedures to be learnt may be used to generate a set of rules
that will serve to evaluate the trainee's performance. Here, the gaze information
can be used to automatically check whether the verifications were done properly and
whether the required gauges were read as expected prior to some actuations. This
enables the identification of different cases: (1) The trainee did not read the
necessary gauges/displays and, as a consequence, did not perform a required action,
performed the action incorrectly, or even performed the action correctly by
chance. (2) The trainee did the required readings but chose the wrong
actions or an incorrect way of doing them. (3) The trainee did the required readings
and used them to choose the appropriate actions.
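A hedged sketch of how such rule checking might be automated is shown below; the data structures, instrument names and time window are hypothetical and are not taken from the system described in this paper.

def check_action(rules, fixations, action, action_time, window=10.0):
    # fixations: list of (timestamp, area_of_interest) pairs from the eye tracker.
    # rules: dict mapping an action to the set of gauges to be read beforehand.
    required = rules.get(action, set())
    recent = {aoi for t, aoi in fixations
              if action_time - window <= t <= action_time}
    missing = required - recent
    if missing:
        return "readings missing before action: " + ", ".join(sorted(missing))
    return "required readings done before action"

rules = {"extend_flaps": {"airspeed", "altitude"}}
fixations = [(2.1, "airspeed"), (4.8, "horizon"), (7.3, "altitude")]
print(check_action(rules, fixations, "extend_flaps", action_time=9.0))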


5 A Low-Cost Eye-Tracking Solution for Oculus Rift DK2

Several researchers and developers have acquired Oculus Rift pre-release development
kits (DK1 or DK2) to anticipate the development of new applications for
these devices or for research purposes. Compared with those that preceded
them, these devices have very interesting characteristics in terms of field of view
and development support. The configuration of the device, in terms of available
space and detachable lenses, creates a perfect opportunity for introducing
the modifications necessary to support an eye-tracking solution.

5.1 Customization of the HMD for Eye Tracking

A preliminary analysis of the characteristics of the device and of the typical usage
conditions was necessary to elaborate a solution that respects the requirement
of introducing no or minimal interference to the user during usage.
To get an adequate view of each of the user's eyes, a pair of cameras was
installed under the respective lenses, in a position that minimises the occlusion
of the viewed scene. In order to provide good illumination of the user's eyes,
infrared illumination was chosen, which facilitates the segmentation procedure by
increasing the contrast between the pupil and the iris (dark pupil effect). To place
the cameras on the HMD, a spacer was designed and produced using 3D printing.
Figure 1 shows the assembled lens, spacer and infrared illumination ring,
and the details of the parts used. It should be noted that the chosen cameras
are low-cost endoscopy-like ones, which, as with most cameras on the market, came
with an IR-suppressing filter. This had to be removed and replaced with
one that attenuates the visible light reflected on the eyes while allowing the IR
light to pass.

Fig. 1. Left: Camera board and sensor with 90° bent connections; Lens; IR filter (film);
Lens on spacer with IR ring; Spacer with camera below. Right: Oculus Rift DK2 with the
left lens out, showing the spacer and camera installed; on the right side the
illumination ring can be seen mounted on the respective lens.


The resulting setup can be seen on the right side of Fig. 1, where the left
illumination ring and lens were left out to show the respective spacer and camera
in place. The cabling of the cameras and of the illumination ring can also be seen.

5.2 Algorithmic Support for the Eye-Tracking

Using the installed hardware, it is now possible to obtain clear near-IR images
of each of the user's eyes. These images are initially filtered using a low-pass
Gaussian filter. A common approach is to perform pupil segmentation by applying
a binarization to grey-level images; these, however, tend to compress into the same range
distinct zones that may produce different responses in the various colour channels
of RGB cameras but have similar luminance levels. This typically
leads to the generation of artifacts that need to be removed using morphological
operations or other techniques.
A careful analysis of the histograms obtained from the 3 colour channels showed
that (as expected) the red channel provides the best segmentation results,
due to its proximity to the IR band.
After the segmentation, the contours of the pupil are extracted by finding
the chain of connected border pixels. Due to eye movements, the shape of
the pupil area can vary from circular to elliptic. Therefore, the use
of an ellipse-fitting algorithm enables the extraction of the pupil center at the
intersection of the two axes. As no instantaneous variations of the threshold
value are expected under normal conditions, failure to detect the elliptical region
indicates the occurrence of a blink. The basic algorithm is described in Fig. 2.

Fig. 2. Top: Block description of the processing algorithm; Bottom: Example result of
detection of eye pupil and ellipse fitting
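A compact sketch of this processing chain using OpenCV is given below; the threshold, kernel size and minimum contour area are illustrative assumptions rather than the tuned values of the actual system, and the OpenCV 4 contour API is assumed.

import cv2

def track_pupil(frame_bgr, threshold=40):
    # The red channel is the closest to the IR band, so it gives the best contrast.
    red = frame_bgr[:, :, 2]
    blurred = cv2.GaussianBlur(red, (7, 7), 0)
    # Dark-pupil effect: the pupil is the darkest region under IR illumination.
    _, binary = cv2.threshold(blurred, threshold, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    candidates = [c for c in contours
                  if len(c) >= 5 and cv2.contourArea(c) > 100]
    if not candidates:
        return None                   # no elliptical region found: treat as a blink
    ellipse = cv2.fitEllipse(max(candidates, key=cv2.contourArea))
    (cx, cy), _, _ = ellipse          # the ellipse centre estimates the pupil centre
    return cx, cy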

5.3 Calibration Procedure

As the purpose of using this type of eye tracker is to know which region
of the scene the user is looking at, a mapping must be obtained between the
coordinates of the observed pupil center and the screen coordinates of the point
the user is looking at.


For obtaining correspondences between screen coordinates and tracker coordinates,
the typical procedure is to show some dot or cross on the screen,
ask the user to look at it, and subsequently register the two pairs of coordinates.
In order to improve the stabilization of the user's gaze during calibration,
instead of displaying a static target we opted to use a shrinking one, whose
movement attracts the attention of the user towards its center.
This calibration typically has to be performed for each usage, as the position
of the HMD with respect to the eyes may vary slightly. For this, a set of 9 points is
shown sequentially on the "screen" and the pupil center coordinates are recorded for
each of them. This is performed first for the left eye and then for the right eye,
using a dark screen for the opposite eye. From the obtained pairs of coordinates,
a mapping function f may be estimated, depending on the type of mapping chosen
(in most cases linear or polynomial). The mapping is then performed as

$S = f(P)$,    (1)

where $S = (S_x, S_y)$ are the screen coordinates of the point the user is looking at and
$P = (P_x, P_y)$ are the corresponding image coordinates of the pupil center. Note
that different mapping functions will be obtained for the left and right eyes.
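A least-squares sketch of this calibration step is given below, assuming the 9 recorded (pupil, screen) correspondences are available as arrays; the second-order polynomial basis shown here is an illustrative choice, not necessarily the exact basis used in the system.

import numpy as np

def fit_poly2_mapping(pupil_xy, screen_xy):
    # pupil_xy, screen_xy: (N, 2) arrays of calibration correspondences.
    px, py = pupil_xy[:, 0], pupil_xy[:, 1]
    # Second-order polynomial basis in the pupil coordinates.
    A = np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)   # (6, 2) coefficients
    return coeffs

def map_to_screen(coeffs, pupil_centre):
    px, py = pupil_centre
    basis = np.array([1.0, px, py, px * py, px**2, py**2])
    return basis @ coeffs                                    # S = f(P), Eq. (1)

# Usage with hypothetical calibration data, per eye:
# coeffs_left = fit_poly2_mapping(pupil_points_left, screen_points)
# gaze_xy = map_to_screen(coeffs_left, current_pupil_centre)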

6 Calibration Results and Analysis

The calibration of the eye tracker for obtaining a mapping function between
eye and screen coordinates was done using both linear and polynomial
approaches. The linear mapping consists of reducing the above function to
S = FP, where F is a 3-by-3 matrix whose coefficients can be estimated using a
least-squares solution. The calibration errors were then compared for the global
linear, linear per quadrant, and polynomial cases. The following table shows
the obtained values of the errors, in pixels, between the target locations and
those obtained via the mapping functions.

Method                  Mean error   Abs max error
Linear global           27.3331      58.9140
Linear per quadrant      8.3500      27.8960
Polynomial 2nd order     4.9010      16.2896
Polynomial 3rd order     3.4066      11.3599
Polynomial 4th order     2.4244       9.1497

The difference between the linear and the linear-per-quadrant mappings is that,
for the second, the display region is divided into 4 quadrants, producing 4 separate
mapping matrices.


These results show, as expected, that the global linear mapping presents the
worst results, but when applied locally (per quadrant) it improves substantially.
The polynomial approach shows, as expected, much better results. The reason is
clearly understood from the fact that the iris locations are obtained from the projection
of a point on an approximately spherical surface onto a plane through a
projective projection.
Using the obtained mapping functions it is now possible to infer the region
of the scene the user is looking at, as can be seen in the example shown in Fig. 3.

Fig. 3. Example of the output of eye tracker (red cross) mapped onto a plane cockpit
shown to the user.

7 Conclusion and Future Work


The paper presents the development of an eye tracker for inclusion in an Oculus
Rift DK2, in terms of both mechanical construction and image processing algorithms.
Preliminary results in terms of mapping the estimated gaze onto the
screen image plane were shown.
In future work, other types of mapping functions will be evaluated, namely
polynomial approximations, together with the development of a method for selecting
the zones the user is looking at, at the screen level in an initial stage and
in the 3D scene at a later one. For the 3D case this implies the development of
models for the gaze directions based on light beams, given the uncertainty of the
estimates and the discrete nature of the rasterization process.

References
1. Ha, J.S., Byon, Y.J., Baek, J., Seong, P.H.: Method for inference of operators’
thoughts from eye movement data in nuclear power plants. Nuclear Eng. Technol.
48(1), 129–143 (2016)
2. Menezes, P., Chouzal, F., Urbano, D., Restivo, T.: Augmented reality in engineering.
In: 19th International Conference on Interactive Collaborative Learning and 45th
International Conference on Engineering Pedagogy (2016)
3. Móro, R., Bieliková, M.: Utilizing gaze data in learning: from reading patterns detec-
tion to personalization. In: UMAP Workshops (2015)


4. Oglesby, J.L.M.: An adaptive visual learning approach for waterborne disease pre-
vention in rural West Africa. Master’s thesis, W. Kentucky University (2016)
5. Patney, A., Kim, J., Salvi, M., Kaplanyan, A., Wyman, C., Benty, N., Lefohn, A.,
Luebke, D.: Perceptually-based foveated virtual reality. In: SIGGRAPH Emerging
Technologies (2016)
6. Patrão, B., Menezes, P.: An immersive system for the training of tower crane oper-
ators. In: Exp.at 2013: The 2nd Experiment@International Conference, Coimbra,
Portugal (2013)
7. Song, J.C., Nam, S., Song, K.S., Park, S.Y.: Development of a personalized learning
system using gaze tracking system. WSEAS Trans. Comput. 14, 264–271 (2015)

Simulation

Augmented Reality-Based Interactive
Simulation Application in Double-Slit
Experiment

Tao Wang1, Han Zhang1, Xiaoru Xue1, and Su Cai1,2(&)


1
Faculty of Education, School of Educational Technology,
Beijing Normal University, Beijing 100875, China
caisu@bnu.edu.cn
2
Beijing Advanced Innovation Center for Future Education,
Beijing Normal University, Beijing 100875, China

Abstract. Experimental teaching is an essential link in teaching and learning
activities, holding an important position in modern education. However, it is
impossible or difficult for some physical phenomena to be demonstrated in the
classroom. With the advantages of portability and of combining the real and
virtual worlds, mobile devices and Augmented Reality (AR) technology are
having a positive influence on the creation of cognitive tools. In this paper, we
develop DSIAR, an AR-based interactive application for mobile devices, to
simulate a physical experiment, the double-slit experiment. DSIAR allows students
to control and interact with a set of 3D models of laboratory apparatus through
markers, changing the parameters to observe dynamically varying phenomena
which are not easy to observe in the real world. The results of pilot testing show
that DSIAR can have a positive impact on assisting teaching and learning,
attracting students' attention and stimulating their interest, suggesting significant
potential for this learning application in practice.

Keywords: Augmented reality · Simulation experiment · Mobile device · Double-slit experiment

1 Introduction

Due to limited opening hours and a lack of physical materials, it is difficult to access
relevant resources in middle or high school laboratories. As a consequence,
many experiments cannot be carried out, especially in the teaching of physics, such as
the double-slit experiment. Therefore, students in such schools still learn physics
through traditional learning models, including attending lectures, taking notes on
what the teachers write on the blackboard, and memorizing facts from books or slides.
However, abstract concepts such as optical waves are formidable to high school students,
and their imaginative abilities are limited, so it is a challenge for them to imagine
the experimental process based only on the information they get from books or slides.
This can be a barrier to in-depth understanding of the subject matter. Yet these
problems could be solved by introducing alternative teaching resources such as
Augmented Reality-based learning tools (Jamali et al. 2015).


Augmented Reality (AR), an extension of Virtual Reality (VR), creates an
enhanced reality by bridging the virtual and real worlds. With the coexistence of virtual
objects and the real scenes around them, AR allows learners to visualize complex spatial
relationships and abstract concepts, to observe phenomena which are not easy to observe in
the real world, and to interact with the virtual objects in the most natural way, for instance
manipulating the interposed virtual objects just by moving a marker. Augmented Reality,
therefore, can enhance students' interest and motivation, as well as their learning experience
(Gausemeier et al. 2003; Nincarean et al. 2013).
Mobile learning based on smart devices is a new learning method. Moreover, with the
integration of Augmented Reality technology and mobile devices, a new trend of
applying AR to disciplinary teaching has appeared.

2 Literature Review

AR-based applications in teaching and learning are most applicable in the following two
cases: (a) when the phenomenon is not easy to simulate in reality, such as
inquiry-based micro-particle interactive experiments (Cai et al. 2014); (b) when the real
experiment is limited by various factors which are hard to deal with, such as the convex
imaging experiment (Cai et al. 2013), as it is dangerous to keep a lighted candle in a
classroom.
Creating a mixed and enhanced reality, AR has compelling features for educational
purposes, such as presenting learning content in 3D perspectives, offering learners senses of
presence and immersion, and visualizing the invisible (Wu et al. 2013). Additionally,
these features coincide with ideas in education theories. For instance, the theory of
situated learning insists that actual and complete knowledge is acquired in real
learning situations, which AR technology can create by bridging the virtual and real
worlds. Behaviorism, which holds that learning is the result of associations formed between
stimuli and responses, is another example. Within an AR-based learning environment,
learners can receive corresponding feedback immediately as they interact with the
environment or the objects in it, while stimulus-response ties are forming and the
corresponding knowledge is grasped. Besides, in an AR-based learning environment,
learners can gradually construct their cognitive structures by conducting various
activities, which satisfies both Piaget's assumption and practice of "bringing laboratories
into classes" and the argument of constructivism that "learning is embedded in
authentic social experiences" (Cai et al. 2013).
A considerable number of AR-based learning and teaching tools have been developed and
used in physics teaching (Castillo et al. 2015; Cai et al. 2013; Cai et al. 2016; Kaufmann
and Meyer 2008); what's more, a great number of researchers have designed
suitable activities to test the influence of using these tools on students'
learning performance (Akcayir et al. 2016; Cai et al. 2016; Wang et al. 2014).
Kaufmann and Meyer (2008) introduced an AR-based application for teaching
mechanics. They developed a computer game to simulate experiments in the field of
mechanics. Involved in the 3D virtual world created by this application, students
engaged themselves in their own experiments. What is important is that this application

offered students a considerable number of tools to measure mass, force and other
physical properties of an object, during and after the experiments.
In the convex imaging experiment (Cai et al. 2013), learners need to (1) operate
2D-code cards to change the object distance and the distance between the object and
the lens; and (2) imagine that the 2D-code cards are the experimental facilities. The
learning effects could have been compromised by the increased cognitive load
caused by this information migration. The experiment would have been more interesting
if not only the virtual objects had been integrated into a real scenario with AR, but
the learner's interactive operation behaviours had also been the same as in the real
experimental condition.
AR-based applications have also been developed for teaching magnetism. Cai et al.
(2016) implemented an AR and motion-sensing learning technology to teach magnetic
fields, where the magnetic model and the magnetic induction lines are simulated and
presented in real time. It demonstrated that the AR-based motion-sensing software can
improve students' learning attitude and learning outcomes.
As mentioned above, AR technology can improve the development of simulation
systems and foster students' learning of science. Therefore, our research targets
the double-slit experiment, whose phenomenon is not easy to observe and which is difficult to
carry out in most high schools. It is for this reason that we decided to develop an
AR-based interactive simulation application. With a video recording of such an
experiment, learners can only observe the phenomenon instead of interacting with it
by changing relevant parameters. Furthermore, constructivism advocates that "knowledge
originates from activities and recognition starts from practice". In the proposed AR
environment, learners can change the relevant parameters with markers to observe the
changing phenomenon and, furthermore, comprehend the process of the experiment. Our
research aims to design and develop a physical AR cognitive tool named DSIAR for
the double-slit experiment and to measure its reliability and usability.

3 Augmented Reality-Based Interactive Simulation Application

3.1 DSIAR Overview


DSIAR integrates the double-slit experiment with AR on mobile devices. The development
can be divided into three phases: capturing the real scene, tracking and comparing the marker, and
compositing rendering, as shown in Fig. 1. We built a fluorescent screen model, a
point light source and a slits model according to the double-slit experiment using 3DS
Max. Then we imported the models into the Unity3D environment and adjusted the coordinate
system and the mode of interaction between users and the models. Through the Vuforia AR
software development kit (SDK), DSIAR can render the virtual models and the
real scene together to create a mixed real-time interactive environment.
DSIAR consists of three cards (as shown in Fig. 2(a), (b) and (c)) and a
mobile smart device (cellphone or tablet) (as shown in Fig. 2(d)). The three cards are used
as markers to represent the corresponding virtual models. The mobile smart device is


[Fig. 1 diagram: the camera embedded in the mobile device captures and tracks the marker and gets its information; this information is compared against a database of the original markers; the corresponding model is adjusted accordingly and composited with the captured scene (compositing rendering).]

Fig. 1. DSIAR overview

Fig. 2. Markers of DSIAR

used to capture and present the real world and the virtual models while the camera
embedded in it detects markers.
DSIAR runs on an Android mobile smart device (cellphone or tablet). In particular,
it focuses on in-depth understanding of the relation between the phenomenon and the relevant
parameters (including the distance between the slits, the distance between the slits and the
fluorescent screen, and the wavelength), and on visualizing how the phenomenon changes as
these parameters are manipulated.

3.2 User Operation


With DSIAR, the double-slit experiment can be directly simulated using three different
cards to stand in for the point light source, the slits and the fluorescent screen. The 3D models
of the point light source, slits and fluorescent screen, together with the values of the relevant
parameters, are displayed on the mobile device's screen as the camera captures all three cards.
Denote the distance between the slits as d, the distance between the slits and the fluorescent
screen as L, the wavelength as λ, and the spacing of the fringes as Δx. According to
double-slit theory, Δx = λL/d (a short numerical example follows the list below):

(1) when d and L are constant, Δx increases as λ increases and decreases as λ decreases, as shown in Fig. 3(a).
(2) when λ and d are constant, Δx increases as L increases and decreases as L decreases, as shown in Fig. 3(b).
(3) when λ and L are constant, Δx decreases as d increases and increases as d decreases, as shown in Fig. 3(c).
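For instance, with illustrative values (a 600 nm light source, slits 0.2 mm apart and a screen 1 m away, which are example numbers and not parameters hard-coded in DSIAR), the fringe spacing follows directly from Δx = λL/d:

wavelength = 600e-9   # lambda in metres (600 nm)
L = 1.0               # slits-to-screen distance in metres
d = 0.2e-3            # slit separation in metres

delta_x = wavelength * L / d
print(delta_x)        # 0.003 m, i.e. fringes spaced 3 mm apart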

Fig. 3. The phenomena with operation of cards (adjusting relevant parameters)

3.3 Pilot Testing


User pilot testing was conducted to measure the reliability and usability of this
application. One teacher and three students who had never experienced an AR-based application
before were interviewed. First, a brief introduction to and training on the system's functions
was conducted. The teacher and students were exposed to and familiarized with the
simulation application for around 30 min, during which they carried out a simple teaching and
learning activity with it, as shown in Fig. 4. Then we interviewed them,

Fig. 4. Teacher and students are using DSIAR

expecting them to share their feelings as well as their comments on the application in the
experimental operation.
From the interview, we can draw the following conclusions:
(1) The change of phenomena with the operation of the cards (adjusting the relevant
parameters) conforms to reality.
After experiencing this AR-based application, the teacher expressed a wish to apply
it in her class: "It's a wonderful teaching aid for this chapter. With it, the content will
not be dull and abstract," which suggests that the simulation matches reality.
(2) All students felt that this application is very novel and interesting.
In the pilot testing, we offered the students the AR-based simulation application,
which they had never experienced before. For them, therefore, it was very innovative
and interesting. They remarked: "Great, it's a brand-new way of interaction, I
can operate the object on the screen with my hands"; "I have never learned anything
with such an application; if it were applied in class, my friends would be very interested!"
The students were impressed by the application, and it attracted their attention.

4 Conclusion

In this paper, an AR-based experiment simulation integrating augmented
reality technology and mobile smart devices (cellphone or tablet) was developed.
The double-slit experiment using augmented reality, assisting teaching and learning, can
be performed at any time and any place with just a mobile smart device.


Based on the preliminary results of the pilot testing, DSIAR can have a positive influence
on assisting teaching and learning, attracting students' attention and stimulating their
interest. Although the development of the Augmented Reality-based interactive simulation
application is finished, the sample size is not yet large enough. In order to
further explore the effect of AR-based simulation applications, future work will involve
a larger sample under rather more naturalistic conditions to collect enough data to verify
the effect and potential of the AR-based simulation application, while an inquiry-based
learning activity will also be designed.

Acknowledgments. This work is supported by the National Natural Science Foundation of


China (grant no. 61602043).

References
Akcayir, M., Akcayir, G., Pektas, H.M., Ocak, M.A.: Augmented reality in science laboratories: the
effects of augmented reality on university students’ laboratory skills and attitudes toward science
laboratories. Comput. Hum. Behav. 57, 334–342 (2016). doi:10.1016/j.chb.2015.12.054
Castillo, B., Iván, R., Sánchez, C., Guadalupe, V., Villegas, V., Osiris, O.: A pilot study on the
use of mobile augmented reality for interactive experimentation in quadratic equations. Math.
Probl. Eng. 2015, 1–13 (2015). doi:10.1155/2015/946034
Cai, S., Chiang, F.-K., Wang, X.: Using the augmented reality 3D technique for a convex
imaging experiment in a physics course. Int. J. Eng. Educ. 29(4), 856–865 (2013)
Cai, S., Chiang, F.K., Sun, Y., Lin, C., Lee, J.J.: Applications of augmented reality-based natural
interactive learning in magnetic field instruction. Interact. Learn. Environ. 25(6), 1–14 (2016)
Cai, S., Wang, X., Chiang, F.-K.: A case study of augmented reality simulation system
application in a chemistry course. Comput. Hum. Behav. 37, 31–40 (2014). doi:10.1016/j.
chb.2014.04.018
Gausemeier, K., Bruseke, U., Wortmann, R.: Virtual and augmented reality in education and
training: an interactive, multimedia training and information system for use in an exhibition.
In: Third International Conference on Virtual Reality and Its Application in Industry,
vol. 4756 (2003). 304-313428
Jamali, S.S., Shiratuddin, M.F., Wong, K.W., Oskam, C.L.: Utilising mobile-augmented reality
for learning human anatomy. Procedia Soc. Behav. Sci. 197, 659–668 (2015). doi:10.1016/j.
sbspro.2015.07.054
Kaufmann, H., Meyer, B.: Simulating educational physical experiments in augmented reality. In:
Applications of Mixed Reality, pp. 1–8 (2008)
Nincarean, D., Ali, M.B., Halim, N.D.A., Rahman, M.H.A.: Mobile augmented reality: the
potential for education. In: 13th International Educational Technology Conference, vol. 103,
pp. 657–664 (2013). doi:10.1016/j.sbspro.2013.10.385
Wang, H.-Y., Duh, H.B.-L., Li, N., Lin, T.-J., Tsai, C.-C.: An investigation of university
students’ collaborative inquiry learning behaviors in an augmented reality simulation and a
traditional simulation. J. Sci. Educ. Technol. 23(5), 682–691 (2014). doi:10.1007/s10956-
014-9494-8
Wu, H.-K., Lee, S.W.-Y., Chang, H.-Y., Liang, J.-C.: Current status, opportunities and
challenges of augmented reality in education. Comput. Educ. 62, 41–49 (2013)

Developing Metacognitive Skills for Training
on Information Security

Jesus Cano1,2(&), Roberto Hernandez1, Rafael Pastor1, Salvador Ros1,


Llanos Tobarra1, and Antonio Robles-Gomez1
1
Department of Communication and Control System, UNED, Madrid, Spain
jscano@yahoo.es
2
San Pablo CEU University, Madrid, Spain
jesus.canocarrillo@ceu.es

Abstract. This paper aims to describe a change in the teaching practice of information
security in college by introducing metacognitive aspects. From a constructivist
view, students have to find a way for themselves and teachers have to
serve as guides in that teaching-learning process. Here we show some
strategies, within a framework, to achieve this in the field of cybersecurity, based on our
experience.

Keywords: Information Security · Education · Cyber security · Meta-cognitive · Didactic · Instructional · Motivation

1 Introduction

We might wonder whether it is good to think aloud, to work in small groups (preferably
pairs of students) to encourage team complicity, to participate with other groups at the right time, to
exchange opinions, perhaps spontaneously opening up creative ideas, to share
experiences in the same sense, to discuss solutions and to propose critical thinking. Yet
all of this is not usually covered by a lesson list, the concept contents, or the
laboratory practice agenda for the subject (be it Information Security or another
subject) that we teach in the classroom.
Metacognition is known as the educational phenomenon of thinking about thinking,
which provides a way to explore the reasoning behind concepts. Each person has different
metacognitive abilities that are acquired as he or she grows and matures in life
(Willingham 2007). Some of these skills are to monitor one's own learning, to know
what one does not know, to predict performance, to plan, to manage study time, and to
manage cognitive resources to achieve a successful outcome. So, in order to maximize learning,
students need to know how metacognitive aspects can improve their skills in a subject,
including content, procedural and conditional knowledge.
Studies on metacognition and cognitive monitoring point out that young students
have very limited knowledge of their own cognitive processes. Metacognitive
knowledge is thought to consist of the thoughts that each person has about himself or herself,
but also about others, in the process of learning. The same can be said about the results of any kind of intellectual
activity. Thus, metacognitive experiences are conscious experiences. Then a nuclear


idea is to meet the challenge of finding effective ways to teach metacognitive knowledge
and cognitive monitoring skills. Here, we would like to deal with metacognition, an
area of booming scientific research in recent years, but applied to education in
Information Security.
Our aims with this paper are twofold. First, to study the relevance of
metacognition and its interest for information security education. Secondly, once this
research context has been established, to provide a framework for educational
intervention that will allow us to advance a little in this line of research. To achieve this,
in the remainder of the text we introduce some related works that we consider
relevant in Sect. 2 and the research methodology in Sect. 3, then move to Sect. 4 with the
results and discussion, a case study in Sect. 5 and, finally, the conclusions.

2 Related Works

In related work, we would suggest to take into account three points of reference in
order to focus this paper: what about training on security of information, about
classroom didactics and what is related to metacognition.
Around our subject, there are inquiries, such as Puhakainen and Siponen (2010),
which suggests that training in information security needed educational approaches
based on theory and not excessively falling into anecdotal or superficial contents. It was
considered to employees as a measure to solve a known problem related to training, but
we can consider these reflections to a certain extent in university environment to
properly plan a course didactically.
In addition, in information security courses, it is significant practical tasks to learn
by doing and to participate actively in classroom, even further in working out
non-technical learners or young people learning first information security. Really
cybersecurity is a concern into a classroom, but also outside: at home and on the streets.
It is a cross-cutting issue that involves all groups in society and almost all aspects of
human activity with the rapid spread of information technologies.
Some authors have emphasized that it is possible to bring technical topics closer to
several groups by designing a relevant instructional and motivational course and
assessing it in the area of the social sciences (non-technical skills). As Cano et al. (2016)
have analyzed, the nature of this subject transcends, per se, the engineering branch and
coexists with Law, Business Management, Sociology, Psychology and Criminology,
among others.
Nowadays, a key point to keep in mind is that students who begin their degrees are
digital natives. In a similar vein, Prensky (2001) said that students are all native speakers
of the digital language of computers, video games and the Internet. That study classified
people into two groups, those born before and after 1980, in order to accept that there
are digital natives as opposed to digital immigrants. An interesting debate on digital
natives has been illustrative at two academic levels (Bennett et al. 2008): firstly,
questioning the existence of a generation of digital natives and how to define it, including
age range and other characteristics; and secondly, asking how education should change
given this evidence.


On another front, since the first seminal studies on metacognitive capabilities in
Flavell (1976, 1979) and Brown (1978), scientific interest in metacognitive development
has increased enormously. According to these authors, metacognition deals with
knowledge of the processes and products of cognition applied to oneself, through active
supervision of cognition, i.e., of the learning-relevant properties of information or data.
As they put it, there is metacognition when we notice that we have more trouble learning
one thing than another, when we check something again before finally accepting it, or
when we write something down before we can forget it, and so on. Many of these
metacognitive activities are needed in problem-solving techniques. The concern to ask
oneself about the state of one's knowledge of something while solving a problem is
essential, whether in the laboratory, at school, or in daily life, as Brown noted.
Metacognition therefore demands introspection into one's own performance in order to
learn and to discern between our point of view and that of others. It should include the
ability to make efficient problem-solving decisions: to predict the limits of one's own
capacity, to know heuristic routines and their usefulness in the learning process, to
identify and characterize the problem being addressed, to plan and schedule the approach
to it, to monitor and supervise how effective the routines are, and to dynamically evaluate
our cognitive activities in the face of success or failure so as to finish in a strategically
timed way. In short, it can be seen as a set of decision-making keys for solving problems.
Some years later, a study for the US Army (Geiwitz 1994) proposed a conceptual
model for the training of metacognitive abilities that highlights the relationship between
these abilities and cognitive performance, together with suggestions for their evaluation.
It has also been interesting to us for its review of the advances in the field over the
preceding decade and for its justification by means of an experiment with questions on
physics, tension and acceleration. In Reed and Giessler (1996) another experience can be
seen, applied to a software development course: the authors studied twelve graduate
students and set up a comparison between metacognition and experience in working with
hypermedia environments.
As for our preferred line on the subject of Information Security, especially at the
undergraduate level, it should be emphasized that the traditional lecture approach has
raised controversy about how teachers choose the right topics and awaken the interest of
learners, who play a rather passive role in the classroom. Despite being the typical way
to teach Information Security, other cognitive strategies are also valid, such as the
Tutorial approach for self-learning; the Scribes approach, assigning student scribes to
take careful notes and subsequently present them; the Mentor approach of having experts
instruct; the Project approach, which can include an activity at the end, an experiment or
other classroom preparation; the Synergy approach, in which research and teaching
combine to attract the student's attention; and the Attack/defend approach, where
students are divided into offensive and defensive teams with different goals, as shown in
Yurcik and Doss (2001).
A leading author on this subject, M. Bishop, in his "Education in Information
Security" (Bishop 2000), described the objective of university education in Information
Security as learning general principles and how to apply them; at best, teachers can take
case studies and generalize them to aid understanding.


However, taking all these studies together, we still feel that there is a gap between
metacognition and educational practice in Information Security, which has the added
complexity of being a multidisciplinary subject.

3 Hypothesis and Methods

Our method in this work starts with a comprehensive review of the literature on
metacognitive abilities. We have therefore decided to take the initiative and introduce a
schematic overview that facilitates a suitable understanding of the context in which we
move. This has been important for us since, as far as we know, there is insufficient
research on the metacognitive aspects of education in Information Security, covering
both technical and non-technical aspects, which may be influenced by multiple
disciplines such as Computer Science, Telecommunications, Law, Business Management
or Criminology, among others.
Our first hypothesis was that a scenario can be built for knowing how significant
scientific research on metacognition has been over time and what its current state is. It
seems reasonable that the right environment should provide a context in which we, or
another teaching team, can find guidelines and activities that enable students to develop
a positive learning experience.
Secondly, our hypothesis is that it should be possible to integrate a parallel
metacognitive curriculum into Information Security. We framed this hypothesis in terms
of training students, including undergraduates, and it should incorporate metacognitive
learning alongside cognitive content. Moreover, we believe that practical activities can
be a catalyst for reaching these skills, in every way empowering students to take
responsibility for their own learning and to internalize the importance of developing
metacognitive processes. Asking ourselves the questions covered by this perspective, we
can design instructional and motivational activities and strategies that allow us to
implement metacognitive processes. We can then sum up a set of metacognitive practices
in education that contribute to the curricula but also to lifelong learning.
The methodology of the instructional design consists of three blocks. First of all, the
activity is presented with an introduction to capture the students' attention and
enthusiasm; the course objectives and a personal and professional motivation should be
presented. Secondly, students are divided into pairs, or three-member groups if needed,
to facilitate cooperative work. Talking out loud is advisable as long as no more than two
people speak at a time, conversations do not cross, and, in any case, nobody interrupts
when the teacher addresses the whole class. Finally, each student should individually
make a results-based evaluation of the activity.
Metacognitive instruction will essentially consist of recording the time each student
takes at each moment of the teaching-learning process. To this end, in the line of time
monitoring and temporal awareness of the thinking process, students must give a
description of, and note the minutes spent on, each sufficiently small piece into which
they consider the proposed activity can be divided.
This way of learning through activities has a clear didactic vision. We aim to draw
on problem-based learning (ABP) and case-based learning (ABC) techniques. The
strategy pursues autonomous learning, with the necessary teaching guidance, by
proposing an
exercise similar to those that can be found in the real world. With these problem-based
techniques, knowledge of cybersecurity as content is attained, but also metacognitive
skills such as creative thinking, meaningful learning and learning to learn.
At the spatial level, learning monitoring will consist of making drawings or graphic
diagrams that help with solving the problem, such as assembly, disassembly or
construction sketches and lists of steps to follow, and, better still, taking pictures to
remember things that would be difficult to write down. This reinforces the idea that, in
many cases, a picture is worth a thousand words. Given the digital environment we see
in our classrooms, it is easy to find our universities full of students carrying smartphones
with cameras today.
Practical activities on computer security clearly call for active learning dynamics.
These processes are difficult to carry out through lectures and seminars (understood as
an oral presentation of a topic by a teacher in the classroom). In terms of skills, it is
beneficial to run practice sessions in a lab format, using a dedicated laboratory or at least
a computer room upgraded for that purpose. That allows students to develop in a
non-formal learning environment which facilitates especially significant innovation in
such a dynamic setting. We believe that teaching improves if we try to introduce tasks
through simulations, case studies, practical problems, workshops and, in this context,
computer forensic experiments. In short, this can give students a meaningful experience.

4 Results and Discussion


4.1 From the First Hypothesis of Metacognition
To get an idea of the emergence of research on these issues, a search for research papers
containing the word "metacognition" was carried out, querying article titles, abstracts
and keywords in the peer-reviewed Scopus database. Figure 1 shows the number of hits
per year, 4743 in total.
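For reproducibility, such a search can be expressed in Scopus advanced-search syntax
roughly as follows; the exact field code and year limit are our assumption, since the paper
does not reproduce its query string:

    TITLE-ABS-KEY ( metacognition ) AND PUBYEAR < 2017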

Fig. 1. Evolution of research on metacognition in the peer-reviewed literature


As can be appreciated, the maximum impact of metacognitive issues in this area has
been reached in the last three years, with growth that has been slightly exponential over
the last decade. Since the measurement was taken at the end of 2016 but without the
month of December, it is foreseeable that the maximum levels in this matter will be
maintained, at least as they have been for the last three years. It is also remarkable that,
to date, fifty percent of the scientific production on metacognition has been published in
the last five years.
As a further matter, it would be very good to know who is interested in inquiring into
metacognition and which branches of knowledge contribute the greatest advances here.
As an emerging topic in science, we expected Psychology and Pedagogy to be postulated
as leaders in this field. Psychology does indeed appear with around forty-six percent of
the literary output, but that important share is followed closely by the Social Sciences
(41%), Medicine (23%), Arts and Humanities (15%) and Computer Science, with almost
12%, in fifth place. Additionally, Neuroscience has 9.5% and Engineering just over five
percent, joined by Mathematics and Business Management with approximately three
percent each, to name but a few. Figure 2 shows this distribution for comparison.

Fig. 2. Subject areas interested in metacognition in the literature

However, one intellectually curious finding is the intensity of the competition
between the Social Sciences and Psychology, separated by a very short distance. This
small difference made us reflect on the reasons behind this tendency, intuiting that it was
indeed a real tendency.


To address this concern, we selected the top nine reporting specialties today in order
to obtain a historical view of the relevance that each one has had over time. For this, we
compiled a series of data with the intention of observing whether there is some change in
the evolution that allows us to make an assessment. This was done in two parts: first,
noting the proportion of scientific articles on metacognition in each discipline for each of
the last five years (the period 2010–2015); and second, taking a journey back in 5-year
steps to the 1980s, i.e. the milestones 2005, 2000, 1995, 1990 and 1985. This compilation
is expressed in the bar graph of Fig. 3.
In Fig. 3 there are empty cells where there are no data to show. Also, where the
cumulative sums of percentages are greater than one hundred percent, there are articles
that deal with several disciplines at a time.
Academic interest in metacognitive issues has gradually been linked to Psychology
and the Social Sciences, effectively since the early 1980s. At the end of that decade,
Medicine as well as Arts and Humanities began to wake up, along with Neuroscience and
Engineering in small proportion. This pattern has remained in place until today.

Fig. 3. Historical percentage data in various disciplines (2016–1985)


As can be seen in Fig. 3, it was not until the mid-1990s that Computer Science became
aware of the concept. With the advent of the year 2000, IT issues appear with greater
force than Engineering itself and, curiously, by the early years of the 2010s Computer
Science is positioned just behind the pioneering branches of Psychology, the Social
Sciences and Medicine, and even the Humanities.
All these results confirm that Computing is experiencing an emerging creativity in
applying metacognitive concepts, at least in the light of the articles compiled from the
literature.
At this point, the IT field is broad enough to allow us to ask about Information
Security specifically. It is valuable to have an instrument with which to position ourselves
and see to what extent this is the case. Furthermore, this survey of metacognition reveals
a gap that would be interesting to develop both in academia and among practitioners.

4.2 From the Second Hypothesis of Instructional Design


Some key outcomes from the previous section justify the start-up effort of finding a
framework in the security area we are dealing with. Although, as we have seen, the
literature offers some experiences to learn from, as far as we can see there is a lack of a
metacognitive approach in this subject.
We have to take into account that the conceptual and procedural contents that make
up cognitive learning processes are our main target. Students should pass the course
having learned a series of competences and knowledge that certainly have applicability
in their future professional area. In turn, effective learning for everyday life is also
relevant, even though the metacognition discussed above depends on the maturity of each
student's personal development.
It is appropriate to include instruction in metacognitive skills that facilitates strategies
involving the monitoring, modeling and control of cognition, because we think it will
always be positive to know one's own abilities and to know how to schedule tasks to
make the best use of resources.
Table 1 shows the list of lessons that concentrates the subject contents for Information
Security. The list of curriculum lessons is based, with slight adaptations, on the subject
syllabus shown in Cano et al. (2014).
Within this curricular context, the cognitive activities should be designed. A valid
objective model for the core instructional design (without metacognition) can be found
by following the scheme proposed by Cano et al. (2014).
To set up a general framework for instructional design, we give a view of the teacher's
organization of our course for IS training, as shown in Fig. 4.
Our course is composed of thematic units, as seen in Table 1. Each unit then has a
series of instructional artifacts, in the sense of educational objects, that are designed
methodologically to become learning activities, as exposed earlier. In this way, a training
activity is composed of, and involves, the following items: Content learning, Practical
activities, Cognitive motivational aspects, and Instructional design process
characteristics, identified as cognitive elements; as well as Strategies, Monitoring,
Metacognitive motivation aspects, and Self-assessment, as metacognitive elements.


Table 1. List of lessons in Information Security subject


Units Description
Unit 1 Principles of IS
Unit 2 Policies, plans and procedures
Unit 3 Human factor
Unit 4 Vulnerabilities, threats and malware
Unit 5 Cyberterrorism, cyber-espionage and criminal organizations
Unit 6 Incident response and computer forensics
Unit 7 Access to information and applied cryptography
Unit 8 Network security
Unit 9 Wi-Fi
Unit 10 Electronic payment
Unit 11 Computer crimes
Unit 12 Forgery, fraud and phishing
Unit 13 Cryptography and data protection

Fig. 4. Representation of framework for IS training

Each activity is determined by these eight variables, motivation being a common
element in the background, although formally oriented towards motivating on knowledge
(the first one) and on how the student is learning that knowledge (the second one).
Focusing on the metacognitive block, we define each activity so as to incorporate two
clear metacognitive strategies. The first is on the temporal plane: marking out each piece
of the activity and taking note of how long it takes to do it. The second is on the spatial
plane: diagramming or photographing the situation of things at each step of the
development of the activity. Both are instructed by the teacher, so that students learn to
perform the proposed metacognitive tasks. The teacher also addresses the motivational
aspects of these metacognitive tasks. Thus, the more the course advances, the more
autonomous the students become. Monitoring is carried out through close observation of
these strategies until their completion.
Our activity within the framework is rounded off with a self-assessment carried out
by each student, or rather by the small group (2 or 3 members) to which he or she belongs.
The assessment is then compared with that of classmates, at least two of them, so that
they can compare the execution and monitoring of the metacognitive strategies and the
cognitive outcomes.

5 A Case Study

A case study corresponding to the 2015–2016 course is presented, in which an activity
of "Extraction of a hard disk for forensic analysis" is carried out. As an additional
complexity, a list of thirty specific elements of the computer has to be located and
described (power supply, memory, motherboard, clock battery, disk cable, etc.). The
activity corresponds to Lesson 6 (Table 1) on incident management and computer
forensics, and is framed in a set of activities related to the forensic analysis of a computer
and to recording information so as to maintain the proper chain of custody over the
gathered evidence.
The sample consisted of N = 29 students, 23 women and 6 men, in their third academic
year, with a modal age of approximately 22, all of them enrolled at San Pablo CEU
University in Madrid (Spain). The environment is a laboratory consisting of a classroom
of computers with resources specifically prepared for cybersecurity.
At the beginning of the activity, the groups were formed, preferably in pairs, although
three-member groups were also allowed. Next, the issue of incident management and the
need for a forensic analysis were introduced to capture interest. The teacher then
presented the objectives and mechanics of the two metacognitive strategies from our
framework: on the times taken and on the spatial situation of the things they do.
In this way, the students first had to think about how they were going to divide the
whole activity into smaller blocks of tasks, measure the time each block took and describe
it (taking photos where appropriate). This activity had a special motivation, since each
student was given a tool box (mainly screwdrivers) for disassembly and subsequent
reassembly, as can be seen in the two photos corresponding to Fig. 5 (disassembly) and
Fig. 6 (assembly).
At the end, each team performed a self-assessment and compared it with peers of their
choice to exchange points of view on what was completed or not. Before the end of the
class, the teacher checked that each reassembled computer was working, which requires
turning it on. The correctness of the methodology followed in the process is taken into
account by the teacher, but the exercise is not treated as a time competition.


Fig. 5. Two students working in the lab

Fig. 6. A teacher guiding a student to assembly

To give an idea of the evaluation of the activities, 5 out of the 29 students were rated
excellent and 5 poor, but in general the class obtained an average score of 7.7. Figure 7
shows the overall results graphically.


Fig. 7. Final evaluation of whole group

6 Conclusions

In summary, students can feel that they are facing real problems related to Information
Security and Communications in a guided environment. Learning knowledge is an
important and demanding goal, but in responding to the activities we have also developed
metacognitive strategies that enrich active learning by doing. The strategies we set out,
aimed at making students aware of how they learn, have been satisfactory, and we feel
that they complete and refine our educational mission over and above other
considerations. That is, not only is content developed, but also metacognitive skills that
students will carry throughout their lives and that can, moreover, serve as a model for
meeting other challenges.
Finally, our contribution really begins a path where we have tried to cover a gap
with regard to metacognition in a field as multidisciplinary as Information Security.

References
Bennett, S., Maton, K., Kervin, L.: The ‘digital natives’ debate: a critical review of the evidence.
Br. J. Educ. Technol. 39(5), 775–786 (2008)
Bishop, M.: Education in information security. IEEE Concurrency 8(4), 4–8 (2000)

Brown, A.L.: Knowing When, Where, and How to Remember: A Problem of Metacognition
(1978)
Cano, J., Hernández, R., Ros, S.: Bringing an engineering lab into social sciences: didactic
approach and an experiential evaluation. IEEE Commun. Mag. 52(12), pp. 101–107 (2014)
Cano, J., Hernández, R., Ros, S., Tobarra, L.: A distributed laboratory architecture for game
based learning in cybersecurity and critical infrastructures. In: 2016 13th International
Conference on Remote Engineering and Virtual Instrumentation (REV), Madrid, pp. 183–185
(2016)
Geiwitz, J.: Training Metacognitive Skills for Problem Solving (No. ASC-TR-051-3). Advanced
Scientific Concepts Inc., Pittsburgh (1994)
Flavell, J.H.: Metacognitive aspects of problem solving. Nat. Intell. 12, 231–235 (1976)
Flavell, J.H.: Metacognition and cognitive monitoring: a new area of cognitive–developmental
inquiry. Am. Psychol. 34(10), 906 (1979)
Puhakainen, P., Siponen, M.: Improving employees’ compliance through information systems
security training: an action research study. MIS Q. 34(4), 757–778 (2010)
Prensky, M.: Digital natives, digital immigrants: part 1. Horiz. 9(5), 1–6 (2001)
Reed, W. M., Giessler, S.F.: Prior computer-related experiences and hypermedia metacognition.
Comput. Hum. Behav. 11(3), 581–600 (1996)
Willingham, D.T.: Critical thinking. Am. Educ. 31, 8–19 (2007)
Yurcik, W., Doss, D.: Different approaches in the teaching of information systems security. In:
Proceedings of the Information Systems Education Conference, November 2001

Optimization of the Power Flow in a Smart Home

Linfeng Zhang(✉) and Xingguo Xiong

Department of Electrical Engineering, University of Bridgeport, Bridgeport, CT 06604, USA
lzhang@bridgeport.edu

Abstract. With IT technology, the traditional power grid is being upgraded to
the smart grid (SG), with two-way communication and power flow between
utilities and customers. In addition, the SG includes new technologies in
distributed energy generation (DEG) and distributed energy storage (DES),
advanced measurement and sensing, controls, cyber security, consumer-side
energy management, and environmental protection. Thus, it shows advantages in
efficiency, reliability, and security. A smart home is a mini power system with
renewable energy resources and local energy management. Therefore, emissions
and power consumption can be reduced while the system efficiency is improved.
In this paper, particle swarm optimization is used to manage the power flow in a
smart home with the objectives of minimum cost and maximum comfort. Results
from two homes with different sizes of PV systems are compared and discussed.
The PV size for a stand-alone home is determined.

Keywords: Demand response · Distributed generation · Electric vehicle

1 Introduction

With IT technology, the traditional power grid is being upgraded to the smart grid (SG),
with two-way communication and power flow between utilities and customers. In addition,
the SG includes new technologies in distributed energy generation (DEG) and distributed
energy storage (DES), advanced measurement and sensing, controls, cyber security,
consumer-side energy management, and environmental protection. Thus, it shows
advantages in efficiency, reliability, and security. DEG plays an increasingly important
role in the SG through peak demand reduction, congestion alleviation, and reliability. It
comes mainly from solar, hydro, wind, biomass, and geothermal renewable energy, and
13% of electricity was generated from renewable energy in 2015. In the U.S., 37% of
electricity is consumed in residences, where heating and cooling account for about 48%
of the home utility bill [4]. Similar to the power grid, residential buildings are being
upgraded to smart homes with incorporated devices to achieve the goals of the home,
such as energy consumption, comfort, security, and home-based health care [1]. A smart
home typically includes an HVAC system, electric vehicles (EVs), a home backup battery,
and other appliances. It also includes DEG from renewable energy, except hydro and
biomass. Due to the possible noise of wind turbines and the particular terrain required by
geothermal systems, both are restricted from installation close to residential buildings.
Solar energy, the remaining option, is clean and sustainable with little maintenance for the
photovoltaic (PV) system. Although PV panels are less effective in cold regions, their
advantages still make them a viable source of electricity generation. Furthermore, with
technology development and consumer awareness, the cost of residential installed PV
projects fell from $5.71/W in 2010 to $3.09/W in 2015, and the national residential PV
capacity increased from 250 MW in 2010 to 2,250 MW in 2015 [2–4].
Besides the development in renewable energy, there is a significant change in
transportation. Today, more than 180,000 electric vehicles (EVs) are on the roads
worldwide, and they are attractive to families due to their high efficiency and low or zero
emissions. On the market there are the Tesla Model S, Honda Fit, Ford Focus, and Nissan
Leaf, and their miles-per-gallon-equivalent (MPGe) rating is around 100 according to the
U.S. Environmental Protection Agency [5–7]. These EVs can be Hybrid Electric Vehicles
(HEV), Plug-in Hybrid Electric Vehicles (PHEV), or pure Battery Electric Vehicles
(BEV) [8]. With enough spare generating capacity in the current power grid
infrastructure, the EV number is expected to exceed 20 million by 2020 [9, 10]. At a
smart home, charging an EV overnight is convenient and avoids trips to the gas station.
A vehicle-to-grid (V2G) car with fully charged batteries (85 kWh capacity) could supply
10 kW to the grid for 4.25 h at 50% discharge. Thus, a large amount of DES can supply
power to critical facilities in response to power shortages due to storms and other
disasters [11].
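As a quick check of this figure, using the 85 kWh pack capacity and 10 kW output stated
above, the backup duration is simply the usable energy divided by the delivered power:

    t = (0.5 × 85 kWh) / 10 kW = 4.25 h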
For residential DEG, the output of a stochastic PV system strongly depends on the
weather and can have significant negative impacts on grid voltage and frequency stability.
Therefore, the PV system has no scheduling freedom. In order to compensate for the
negative impact of DEG on the grid, DES techniques have been developed to mitigate
this variability over short periods. DES includes rechargeable batteries, super capacitors,
and hot water tank technologies, which have different ratings for power and discharge
time [12, 13]. For a smart home, home backup batteries are a possible choice to mitigate
power-quality issues by scheduling the batteries' charging and discharging. On the
market, home backup batteries are available at about $3,500 for a 10 kWh unit and
$25,000 for a 100 kWh unit [11]. When combined with a rooftop photovoltaic (PV)
system, the utility cost can be completely eliminated. In addition, vehicles stay in garages
most of the time, so the EVs' large batteries can also be used as distributed energy storage
(DES) to smooth out the fluctuating production of electricity and to improve the stability
of the power system. Similar to DEG, DES has its own behavior pattern, and the control
strategies for DES are based on this behavior as well as on the energy conversion
efficiency. For instance, when a rechargeable battery is charged at constant current, the
charge voltage increases, and its capacity is affected by the ambient temperature, among
other factors.
The power from the grid, DEG, or DES is consumed by the different devices and
appliances of a smart home. Their electricity demand is a random variable with a
probability distribution over an operation time window, and it can be categorized into
different types or patterns based on whether it can be separated, shifted, interrupted,
decreased, or cancelled [14]. However, the pattern of a demand may change. For
example, the charging of an electric vehicle can be interruptible and shiftable within the
night, but it becomes non-interruptible and non-shiftable if a long-distance trip is planned
for the next early morning. With advanced sensing and metering, residential energy
management systems can adjust consumption levels as
demand response, responding to the electricity price, correcting voltage sags and flickers,
or helping to stabilize the system frequency [15]. This energy management can also help
utility companies and power plants reduce cost through reductions in peak demand.
In this paper, Sect. 2 focuses on the physical and mathematical model of a smart home
and provides the details of the power flow optimization. Section 3 presents the simulation
results and a discussion of the power flow, and Sect. 4 gives the conclusion.

2 Physical and Mathematical Models

The structure of a smart home is shown in Fig. 1. The home consists of several rooms.
In addition, there are a roof-top PV system, a set of home backup batteries, an AC system,
and an electric vehicle in the garage. One central control panel is used for the energy
management of the house.

Fig. 1. The structure of a smart house

For the temperature control of each room, there are at least one electronic grill
damper, a thermometer, a motion detector, and one RF transceiver. The damper is a valve
or plate that regulates the flow of air inside a duct to the room. Thus, there are multiple
zones in a smart home.
Among the different devices, heating, cooling, and lighting account for over 50% of
the utility bill. A 2.5-ton central unit (about the right size for a typical 1,500 to 2,000
square foot home) uses about 8.7 kW, while a 5-ton unit draws around 17 kW. Two-stage
cooling means the air conditioner or heat pump has a compressor with two levels of
operation: high for hot summer days and low for milder days. A comfortable summer
room temperature is 25 °C and the winter room temperature is 23 °C.
Zoning allows residents to precisely control the temperature in every room of the
home. With a thermometer in each room, an automatic air vent grill damper in the duct is
used to control the airflow to that room. Motion detectors help the central controller
determine whether a room is occupied or not. Thus, the temperature is differentiated and
energy is saved. Most large homes feature multi-zone heating and air conditioning
systems with one single furnace or one central air conditioner. With electronic dampers
and thermostats, the rooms are maintained at different temperatures, and energy is saved
because unoccupied parts of the home are kept at a temperature setback. In fact, most
traditional multi-zone duct systems do not save energy and their reliability is low. If the
residents are away, the heating or air conditioning should not be shut down completely,
in order to maintain the temperature and control humidity and thus minimize damage to
furniture and to the wall compounds in sheetrock.
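As an illustration of the zoned control described above, the following minimal sketch (our
own illustration, not the authors' controller) opens or closes a room damper around an
occupancy-dependent cooling set point with a dead band; the 23 °C and 27 °C set points
and the 1 °C dead band are taken from the scenario discussed later in this paper, while the
function and variable names are assumptions.

    OCCUPIED_SET, UNOCCUPIED_SET, DEAD_BAND = 23.0, 27.0, 1.0   # degC, cooling mode

    def damper_command(room_temp, occupied, damper_open):
        # Return True to open the damper (send cool air), False to close it.
        set_point = OCCUPIED_SET if occupied else UNOCCUPIED_SET
        if room_temp > set_point + DEAD_BAND / 2:
            return True                    # too warm: open the damper
        if room_temp < set_point - DEAD_BAND / 2:
            return False                   # cool enough: close the damper
        return damper_open                 # inside the dead band: keep the previous state

    # Example: an unoccupied bedroom at 27.6 degC gets its damper opened.
    print(damper_command(27.6, occupied=False, damper_open=False))   # True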

2.1 Dynamic Price


The house is connected to the power grid, and there are two-way information and
power flows between the home and the grid. The information consists only of the
dynamic electricity prices: one on the grid side and the other from the house. Figure 2
shows the day-ahead hourly price on the grid, which should be higher than the one from
the house due to the extra cost of power transmission and distribution. In order to
minimize the cost, the two-way power exchange should be kept low.

[Figure: day-ahead hourly price curve; Price ($/kWh) vs. Time (hour)]
Fig. 2. The price of the electricity from the grid

2.2 Loads

In this model, there are two types of loads: flexible and fixed. The power supplied to
the fixed load is guaranteed, but not the power to the flexible load; the flexible loads can
be reduced or completely cancelled, and their original profile is shown in Fig. 3. In this
work, two kinds of fixed loads are implemented. One, also shown in Fig. 3, is due to
devices such as the router and the home security system, which always run. The second
is the shiftable load, which can be regarded as separate, fine-grained tasks, each completed
in one time slot; this kind of load includes the dish washer, washer, and dryer. The EV
battery is also a shiftable load for charging within a specific time window. In our
calculation, the daily total usage is around 27.8 kWh, which is close to the average daily
electricity consumption of an American home [20]. There are two peaks in the load
profile, at 7 AM and 6 PM, similar to [21].


[Figure: daily profiles of the desired fixed and desired flexible loads; Load (kW) vs. Time (h)]
Fig. 3. The daily profiles of the fixed and flexible loads

In this work, only AC unit operation in the summer is discussed, not household
heating in winter, because nearly half of the houses use natural gas for heating [22]. The
power demand of the AC unit is a combination of fixed and flexible loads. If a room is
occupied, its temperature has to be maintained at the desired value, and this demand is
fixed. For an unoccupied room, the temperature can be kept a little higher than the desired
temperature, and this load is flexible.
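To make the load categories concrete, the sketch below (our illustration, not the authors'
data model) represents the always-on fixed load, the reducible flexible load, and shiftable
tasks that must run once inside a time window; the appliance names, powers, and windows
are assumed example values.

    from dataclasses import dataclass

    @dataclass
    class ShiftableTask:
        name: str
        power_kw: float
        duration_slots: int          # each slot is one hour
        window: tuple                # (earliest start hour, latest finish hour)

    fixed_kw = 0.25                  # router, home security system, always on
    flexible_desired_kw = [0.6] * 24 # can be reduced or cancelled by the controller
    tasks = [
        ShiftableTask("dish washer", 1.2, 1, (20, 24)),
        ShiftableTask("dryer", 2.5, 1, (9, 17)),
        ShiftableTask("EV charging", 3.3, 4, (0, 7)),   # overnight charging window
    ]

    def feasible(task, start_hour):
        # A start hour is feasible if the whole task fits inside its window.
        return task.window[0] <= start_hour and start_hour + task.duration_slots <= task.window[1]

    print([t.name for t in tasks if feasible(t, 1)])    # -> ['EV charging']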

2.3 Energy Management


In order to evaluate the performance with the consideration of electricity price, and
comfort, the comfort is converted to uncomfortableness and an overall performance
index Q is the weighted total of cost index, C, and discomfort index U [27, 28]:

Q = w1 C + w2 U (1)

Here, C is calculated through the price and the power flow between the grid and the
home. This power can be positive if it flows from the grid to the home and negative if
it flows from the home to the grid. U is proportional to the shortage of the power supplied
to the flexible loads. w1 and w2 are weighting factors and the sum of them is 1. The
objective of the control is to minimize the Q and the room temperature settling time.
In this paper, an optimal energy management strategy based on particle swarm
optimization (PSO) minimizes Eq. (1) to provide the most cost-effective and
comfort-aware service to a residential consumer. In the implementation there are one
array for the response of the flexible loads, one array for the power flow between the
home and the grid, and two arrays for the charging and discharging strategies of the home
backup battery and the EV battery. The particles, each a potential solution to the problem,
consist of these four arrays as their positions. In case there is no home backup battery,
the elements of the corresponding array are set to zero.
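A minimal sketch of this idea is given below; it is our own illustration rather than the
authors' implementation, it optimizes only two of the four decision arrays (the served
flexible load and the battery power), and the price, load, PV, and weighting values are
assumed placeholders rather than values from the paper.

    import numpy as np

    T = 24                                           # hourly time slots
    price = np.full(T, 0.18); price[17:21] = 0.24    # $/kWh, assumed day-ahead profile
    fixed = np.full(T, 0.4)                          # kW, always-served fixed load (assumed)
    flex_desired = np.full(T, 0.6)                   # kW, desired flexible load (assumed)
    pv = np.zeros(T); pv[6:19] = 3.0                 # kW, assumed PV output
    w1, w2 = 0.5, 0.5                                # weighting factors, sum to 1

    def objective(x):
        # x packs the served flexible load (kW) and the battery power (kW, + = discharge).
        flex = np.clip(x[:T], 0.0, flex_desired)
        batt = np.clip(x[T:], -3.0, 3.0)
        grid = fixed + flex - pv - batt              # + = import from grid, - = export
        C = float(np.sum(price * grid))              # cost index over one day (1 h steps)
        U = float(np.sum(flex_desired - flex))       # discomfort: unmet flexible demand
        return w1 * C + w2 * U                       # Eq. (1)

    def pso(f, dim, n=30, iters=200, inertia=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(0)
        pos = rng.uniform(-3.0, 3.0, (n, dim)); vel = np.zeros((n, dim))
        pbest = pos.copy(); pbest_val = np.array([f(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            vals = np.array([f(p) for p in pos])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, f(gbest)

    best_x, best_q = pso(objective, dim=2 * T)
    print("best performance index Q:", round(best_q, 3))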


3 Results and Discussion

Figure 4 shows the load profiles on the summer solstice. Compared with Fig. 3, there
is a difference between the scheduled fixed load and the real load on the summer solstice
because of the extra power drawn by the central AC unit for cooling. The peak values in
both cases are around 3.25 kW, about 1.5 kW higher. The real flexible loads are the same
during the daytime but not at night: more power is supplied to the flexible load at the
home with the 100 m2 PV system.

[Figure, two panels (a) and (b): shiftable, flexible and fixed loads, Load (kW), together with the ambient temperature (°C), vs. Time (hour)]
Fig. 4. Loads and the ambient temperature on the summer solstice for the houses with 50 m2 (a)
and 100 m2 (b) PV systems

In the summer, the temperature of the occupied rooms is set to 23 °C; otherwise, it is
set to 27 °C. In Fig. 5, the cooling power delivered to the bedrooms and to the other
rooms is the same before 20:00 in both cases. Since the sizes and the thermal properties
of the rooms are similar, the temperature is maintained at 27 °C.
With the same dead band of 1 °C, the temperatures of the rooms fluctuate
simultaneously. After 20:00, the living room and the kitchen are occupied, their
temperature is kept at 23 °C, and the power increases, while the temperature of the
bedrooms is still maintained at 27 °C. After 21:00, the temperature set point of the
bedrooms is changed to 23 °C and the temperature of the other rooms is set to 27 °C.

[Figure, two panels (a) and (b): cooling power, Power (kW), and temperature (°C) of the bedroom and the living room vs. Time (hour)]

Fig. 5. The power and the temperature in the bedroom and the living room for the 50 m2 (a) and
100 m2 (b) PV systems

Figure 6 shows the power flows of the homes with 50 m2 and 100 m2 PV systems on
the summer solstice. This is the longest day, with sunrise at 5:00 and sunset at 18:00, and
the PV system generates electricity between these hours. Since the ambient temperature
is high, the peak power is only 5 kW and 10 kW for the two cases. The power from the
PV system is supplied to the loads and to the backup battery.
For the home with the 50 m2 PV system, the PV output power is supplied to the loads and
to the EV battery after 15:00. Between sunset at 18:00 and 21:00, the backup battery is
discharged. Then power from the grid, with the peak falling between the prices of
$0.136/kWh and $0.139/kWh, is the only source to meet the demand of the home. For
the home with the 100 m2 PV system, the peak PV output power is 10 kW between 10:00
and 15:00, and the power flow is similar to that of the home with the 50 m2 PV system.
After sunset, the backup battery is discharged and the power flows to the load and the EV
battery. In contrast to the first case, no power is needed from the grid. Thus, a home with
a 100 m2 PV system can be a stand-alone system.

[Figure, two panels (a) and (b): power profiles, Power (kW) vs. Time (hour), of the grid, PV, fixed, flexible and shiftable loads, backup battery and EV battery]
Fig. 6. The power profiles on summer solstice with the PV size as 50 m2 (a) and 100 m2 (b)


4 Conclusions

In this paper, particle swarm optimization is used to manage the power flow in a smart
home with the objectives of minimum cost and maximum comfort. Results from two
homes with different sizes of PV systems are compared and discussed. The fixed loads
are guaranteed their power supply and the temperature of the occupied rooms is
maintained. The PV size for a stand-alone home is determined.

References

1. Pedrasa, M.A.A., Spooner, T.D., MacGill, I.F.: Coordinated scheduling of residential
distributed energy resources to optimize smart home energy services. IEEE Trans. Smart Grid
1(2), 134–143 (2010)
2. Solar Energy Industries Association: Solar Industry Data. http://www.seia.org/research-
resources/solar-industry-data
3. National Renewable Energy Laboratory: US Photovoltaic Prices and Cost Breakdowns: Q1
2015 Benchmarks for Residential, Commercial, and Utility-Scale Systems. http://
www.nrel.gov/docs/fy15osti/64746.pdf
4. National Renewable Energy Laboratory: Residential, Commercial, and Utility-Scale
Photovoltaic (PV) System Prices in the United States: Current Drivers and Cost-Reduction
Opportunities. http://www.nrel.gov/docs/fy12osti/53347.pdf
5. Tesla Model S. http://www.teslamotors.com/models
6. Chukwu, U.C., Mahajan, S.M.: V2G electric power capacity estimation and ancillary service
market evaluation. In: 2011 IEEE Power and Energy Society General Meeting, San Diego,
CA (2011)
7. Compare Side-by-Side. http://www.fueleconomy.gov/feg/Find.do?action=sbs&id=32557
8. Wang, Z., Paranjape, R.: An evaluation of electric vehicle penetration under demand response
in a multi-agent based simulation. In: 2014 IEEE Electrical Power and Energy Conference
(EPEC), Calgary, AB, pp. 220–225 (2014)
9. Mets, K., et al.: Optimizing smart energy control strategies for plug-in hybrid electric vehicle
charging. In: 2010 IEEE/IFIP Network Operations and Management Symposium Workshops
(NOMS Wksps), Osaka, pp. 293–299 (2010)
10. Rigas, E.S., Ramchurn, S.D., Bassiliades, N.: Managing electric vehicles in the smart grid
using artificial intelligence: a survey. IEEE Trans. Intell. Transp. Syst. 16(4), 1619–1635
(2015)
11. Ke, B., Shuhui, L., Huiying, Z.: Battery charge and discharge control for energy management
in EV and utility integration. In: 2012 IEEE Power and Energy Society General Meeting
(2012)
12. Conejo, A.J., Plazas, M.A., Espinola, R., Molina, A.B.: Day-ahead electricity price
forecasting using the wavelet transform and ARIMA models. IEEE Trans. Power Syst. 20(2),
1035–1042 (2005)
13. You, S.: Developing Virtual Power Plant for Optimized Distributed Energy Resources
Operation and Integration, in Department of Electrical Engineering. Technical University of
Denmark, Lyngby (2010)
14. Nguyen, H.K., Song, J.B., Han, Z.: Distributed demand side management with energy storage
in smart grid. IEEE Trans. Parallel Distrib. Syst. (99), 1–13
15. Demand Response Discussion for the 2007 Long-Term Reliability Assessment (2007)

16. Mohsenian-Rad, A.H., et al.: Autonomous demand-side management based on game-theoretic
energy consumption scheduling for the future smart grid. IEEE Trans. Smart Grid 1(3),
320–331 (2010)
17. Mohamed, F.A., Koivo, H.N.: System modelling and online optimal management of
MicroGrid using mesh adaptive direct search. Int. J. Electr. Power Energy Syst. 32(5), 398–
407 (2010)
18. Wang, L., Singh, C.: Stochastic combined heat and power dispatch based on multi-objective
particle swarm optimization. Int. J. Electr. Power Energy Syst. 30(3), 226–234 (2008)
19. Wu, Z., Gu, W., Wang, R., Yuan, X., Liu, W.: Economic optimal schedule of CHP microgrid
system using chance constrained programming and particle swarm optimization. In: 2011
IEEE Power and Energy Society General Meeting, San Diego, CA, pp. 1–11 (2011)
20. US Energy Information Administration: How much electricity does an American home use?
https://www.eia.gov/tools/faqs/faq.cfm?id=97&t=3
21. InsideEVs: Average Hourly Electric Usage – EV Households Versus Non EV Households. http://
insideevs.com/average-hourly-electric-usage-ev-households-versus-non-ev-households/
22. US Department of Energy: Tips: Heating and Cooling (2016). http://energy.gov/energysaver/
tips-heating-and-cooling
23. Gajda, J.: Energy Use of Single-Family Houses With Various Exterior Walls. Portland
Cement Association and Concrete Foundations Association (2001)
24. US Energy Information Administration: Homes show greatest seasonal variation in electricity
use. https://www.eia.gov/todayinenergy/detail.cfm?id=10211, http://www.fao.org/docrep/
x0490e/x0490e07.htm, http://www.wcc.nrcs.usda.gov
25. Zhang, L., Gari, N., Hmurcik, L.: Energy management in a microgrid with distributed energy
resources. Energy Convers. Manag. 78, 297–305 (2014)
26. Zhang, L., Xiang, J.: The performance of a grid-tied microgrid with hydrogen storage and a
hydrogen fuel cell stack. Energy Convers. Manag. 87, 421–427 (2014)
27. Eberhart, R.C., Shi, Y.: Particle swarm optimization: developments, applications and
resources. In: Proceedings of the 2001 Congress on Evolutionary Computation, Seoul, pp.
81–86 (2001)
28. Dong, Y., et al.: An application of swarm optimization to nonlinear programming. Comput.
Math. Appl. 49(11–12), 1655–1668 (2005). http://www.nrel.gov/gis/solar.html

A Virtualized Computer Network for Salahaddin
University New Campus of HTTP Services
Using OPNET Simulator

Tarik A. Rashid1(✉) and Ammar O. Barznji2

1 Department of Computer Science and Engineering,
University of Kurdistan-Hewler, Kurdistan, Erbil, Iraq
tarik.ahmed@ukh.edu.krd
2 Computer Science and IT Department, Salahaddin University-Erbil, Erbil, Kurdistan, Iraq
ammar.hasan@su.edu.krd

Abstract. The factors of any computer network must be studied carefully, and the
specific characteristics of each type of network must be taken into consideration, in
order to design an effective and competent computer network and to avoid future
deadlocks. There are many such factors, as they reflect the network's performance
and capabilities; examples are time delay, throughput, bandwidth, data transfer,
packet transfer, packet delay, HTTP transfer, congestion, collision, Ethernet
specifications, VoIP, etc. This paper relies on OPNET to develop a computer
network simulation for Salahaddin University-Erbil and to measure HTTP (Hyper
Text Transfer Protocol) service values. Moreover, the simulation is built without
high-speed media and devices, while keeping the design reliable and with
performance sufficient to satisfy the university users' requests.

Keywords: Virtualized computer network · OPNET Modeler · Computer network
performance · HTTP

1 Introduction

This research starts with a practical, virtual example of designing the computer network
that will be installed for the new university campus, using the Optimized Network
Engineering Tool (OPNET) simulator. A core switch (Cisco 6500) is used to interconnect
all the other switches, which are placed in each college. A switch can only connect LANs
of the same type (Ethernet to Ethernet, FDDI to FDDI, or Token Ring to Token Ring).
The factors mentioned above must be considered to achieve maximum workable
efficiency and uninterrupted productivity; in this way, a network without collisions and
with less congestion can be achieved. This project is conducted with the OPNET
simulator (Modeler version 14.5). The duration of the simulation is set to 20 min, a period
sufficient for the network devices to stabilize after addresses have been distributed to all
workstations.

This is clearly demonstrated by the results obtained in this paper. The results are
practical, close to real behavior, and depend in particular on the devices and media used
to interconnect the network. OPNET provides a complete environment that supports
distributed systems and the modeling of communication networks. Both the behavior and
the performance of the modeled systems can be analyzed by performing discrete event
simulations. The OPNET environment integrates different technologies and can be used
for different aspects of education systems; this covers simulation, data collection, model
design, and data investigation.
OPNET allows the selection of any device or component needed in the design, and
each one is then configured and programmed individually according to the application
simulated in the design. There is also the capability to address these devices, to establish
connections between them and all other nodes, and to troubleshoot the whole network,
with all its devices and media, through the RUN command. Finally, OPNET monitors
the communication to obtain feedback for determining the performance of the network.
By using OPNET simulation, the designer can choose devices and components from
different models, categories, vendors, and manufacturers to determine the best
components for each computer network. This work also aims at examining the
components used in the design and implementing them so as to avoid deadlocks and
congestion in the network. Each device can be chosen and configured individually in
OPNET. Also, all the connections or links in the design must be considered by the
network designer according to their technical specifications.

2 Background

This research relies on OPNET to design a computer network for the Salahaddin
University-Erbil campus and to measure HTTP service values, aiming at reliable
performance sufficient for the users. The researchers in [1] designed a simulation of
Mosul University via OPNET. Three applications were added to test the network design:
FTP (File Transfer Protocol), HTTP (Hyper Text Transfer Protocol) and VoIP (Voice
over Internet Protocol). The results showed that their proposed model was effective for
designing and managing the targeted network and can be used to view the data flow. The
researchers in [2] presented the design and implementation of a hybrid network for
different IP routing protocols in a low-load campus network. In this network model,
generic LAN models and WLANs were used, and a simulated environment was created
in which many applications run at a time. The network model was based on the OPNET
IT GURU Academic Edition, which was used to develop a new model suitable for a
campus/university environment. The model was tested against various types of
applications (FTP, ATM, and Remote Login) in hybrid networks. Two routing protocols,
RIP and IGRP, were used to check the performance of the hybrid network for different
applications, and the OPNET simulation showed the impact of the IP routing protocol on
hybrid networks for different types of applications. The researchers in [3] designed a
computer network infrastructure to support various administrative and academic
activities at Tarumanagara University. The built infrastructure covered all areas of the
buildings and floors. With an increasing range of services, Tarumanagara University
needed to enhance its local networks to accommodate its
A Virtualized Computer Network for Salahaddin University 733

needs. The new network infrastructure to be built had to guarantee the quality of its
services, which must be reliable, scalable and able to support future expansion. Network
design is the most important and critical part before developing a new network
infrastructure, so an analysis of user and network requirements was carried out to design
the network.
The above research works demonstrate that there is a real need to design a network
efficiently and then simulate it via OPNET [4, 5] before starting to build it in the field.
From the obtained results, several decisions can be taken to make the design more
efficient and reliable regarding the media and devices used, switch categories,
transmission technologies and protocol types.

3 Campus Network Interconnection

To explain the network interconnection, it is important to list its hardware components
as follows: 17 switches, each with 64 RJ-45 ports suited to connection with STP cables.
Each switch is installed at its specified college location to distribute services to the entire
college using STP cable (100 Base T).
All these switches are connected via 1000 Base X fiber optic cable to the core switch
located at the center of the university campus; this represents the backbone of the
network. Six-core single-mode fiber optic cables (KSA made) connect the college
switches to the core switch. The core switch is connected to a server to provide internet
services such as VoIP, email and HTTP. The main switch is connected to the router to
isolate the university network from the outside cloud, which

Table 1. The components which are used in the project.


No. | Component | Description | Pieces (Qt.)
1 | Total no. of computers | Personal computers | 342
2 | Core switch: CISCO Catalyst 6509 | Ethernet connections at the specified data rate (10, 100, 1000 Mbps) | 20
3 | Workstation | Personal computers with Ethernet LAN card, each of 100 Mbps and RJ-45 termination, connected to the 64-port switch by 100 Base 100 STP cat6 | 240
4 | Router | The Cisco device (Cisco 7507), Ethernet to Ethernet, FDDI to FDDI | 1
5 | Firewall | Ethernet connection | 1
6 | IP Cloud | Device name: eth64_sl64_atm16_fr16_adv | 1
7 | 1000 Base X fiber optic cable | Sufficient for 1000 m and a speed of 1000 Mbps, 4 twisted-pair single wires | 5000 m
8 | 100 Base 100 STP cat6 | Sufficient for 100 m and a speed of 100 Mbps, 4 twisted-pair STP | 24 rolls, each 305 m
9 | 16-port switch | Cisco 2948, data rate (10, 100, 1000 Mbps) | 2


provides the internet service to the network project. The server can be connected to a
router to achieve more security and to protect the entire network from intruders, and a
firewall is used to protect the internal network computers from intruders and hackers.
This design can easily add wireless distribution services by simply replacing the college
switches with wireless routers that maintain the speed and connectivity. Table 1 shows
the components used in this project.
The cloud is used to provide services to the hosts of the entire network, and the
network is then configured in the profile definition to produce the services of interest,
such as HTTP, VoIP, packets transferred per second and delay. Similarly, each computer
from which we wish to obtain results should be configured to simulate that service, so
that the results can be collected at the end.

4 The College Network

The computer network is installed over an area of 3000 m2 (3 km2), with the colleges
distributed around the central building of the university. The presidency location is
assumed to be one of the college networks (see Figs. 1, 2 and 3). In this way, 16 college
networks are obtained, which is sufficient for the Salahaddin University hierarchy.

Fig. 1. The campus network overall topology.


Fig. 2. The college computer network topology.

Fig. 3. Topologies of some adjacent college networks.

In this way, each college switch is within 900 m of the central switch, which fulfills the
cable media specification of the physical layer: 1000 Base X fiber optic cable can connect
devices separated by up to 950 m.
Each college network consists of 20 computers distributed around the college location.
The college computer network is wired with STP cable interconnecting each computer
with the college's central 64-port switch, leaving ports free for future expansion, as shown
in Fig. 2.
All the college switches are connected to the main switch by 1000 Base X fiber optic
cable. The main switch is connected to the main router and, through a further switch, to
the firewall and application servers. In the Profile Definition, the applications that we
want to simulate in this network project are configured. Each application must also be
configured on every node (PC); otherwise, the PC cannot benefit from that specific
application. Likewise, the server must be configured to provide these application profiles.
Addresses can be configured for all devices at layer 3 (the network layer). In this project,
IP addresses are assigned to each device in the network.


5 Results and Discussion

This paper presents several results on the ability of OPNET to simulate the Salahaddin
University-Erbil campus with all its sites (locations) and types of services such as HTTP
and FTP, measuring network parameters such as the Ethernet time delay and the
throughput. The Ethernet delay decreased slightly until it finally stabilized at 0.00007 s,
as shown in Fig. 4. The cloud-to-switch point-to-point queuing delay started at
0.0000078 s, as shown in Fig. 5. The global statistics for traffic sent reached 50000 byte/s,
as shown in Fig. 6. The voice traffic sent stabilized at 480000 byte/s, and the global
statistics for VoIP traffic sent reached 500,000 byte/s. The curves obtained show that the
HTTP data traffic received started at zero, increased to 85000 byte/s after 4 min, and then
stabilized at approximately 17000 byte/s, as shown in Fig. 7. The global statistics for RIP
traffic sent reached a maximum of 105 bit/s and then decreased after an interval of 20 min
to only 2 bit/s, as the network stabilized once each connected device had obtained an IP
address (see Fig. 8). The server-to-switch point-to-point throughput reached
3000000 bit/s and then stabilized at 2700000 bit/s, as shown in Fig. 9. This high bit rate
was sufficient to provide the services to all the university users; moreover, as specified in
Table 1 and mentioned in Sect. 3, the cabling, router and switch used in this portion of
the network have the capacity to carry a bit rate of 100000000 bit/s. The global statistics
for heavy web browsing server HTTP traffic received stabilized at 4000 byte/s (see
Fig. 10). All these values are sufficient to serve the university users for applications such
as HTTP. Thus, it is important for future work to study and simulate other applications
such as FTP, VoIP, etc.

Fig. 4. The Ethernet delay decreased slightly until it stabilized at 0.00007 s.


Fig. 5. The cloud-to-switch point-to-point queuing delay, starting at 0.0000078 s.

Fig. 6. The global statistics Traffic Sent Byte/sec reached 50000 byte/sec.


Fig. 7. The global statistics web browsing HTTP traffic received Byte/sec reached
17000 Byte/sec.

Fig. 8. The network stabilized after each connected device obtained an IP address.


Fig. 9. The Server to Switch point to point throughput in bit/sec for the network speed reached
3000000 bit/sec then stabilized at 2700000 bit/sec.

Fig. 10. Global statistics for heavy web browsing server HTTP traffic received (byte/s),
stabilized at 4000 byte/s.

6 Conclusion

In this paper, different factors were studied and examined to determine a computer
network's performance and capabilities. Examples of these factors are time delay,
throughput, bandwidth, data transfer, packet transfer, packet delay, HTTP transfer,
congestion, collision, Ethernet specifications, VoIP, etc. OPNET was used to develop a
computer network simulation for Salahaddin University-Erbil and to measure HTTP
service values. In addition, the model was built without high-speed media and devices,
yet the design remains dependable and satisfactory for handling the university users'
requests.


OPNET can produce more abundant, comprehensive and accurate results for networks
(and their components such as routers, switches, etc.) than other programs used for the
topological representation of these networks. In other words, OPNET takes into
consideration the specific technical features of each component used to connect the
networks, for example how each component or device is connected, through what type of
cable, and what features, advantages and disadvantages that cable has.

References

1. Hammoudi, M.: Building model for the University of Mosul computer network using OPNET
Simulator. Tikrit J. Eng. Sci. 18(2), 34–44 (2011)
2. Sharma, S.: Design and implement the hybrid network for different IP routing protocols and
comparative study thereof. Inf. Assur. Secur. Lett. 1, 035–040 (2010)
3. Mulyawan, B.: Campus network design and implementation using top down approach: a case
study Tarumanagara University. In: Proceedings of the 1st International Conference on
Information Systems For Business Competitiveness (ICISBC) (2011)
4. Jaswal, K., Jyoti, K.V.: OPNET based simulation and investigation of Wimax network using
different Qos. Int. J. Res. Eng. Technol. 3(5), 575–579 (2014)
5. Mehta, P., Baghla, S.: Performance evaluation of heterogeneous networks for various
applications using OPNET modeller. Int. J. Recent Innov. Trends Comput. Commun. 3(6),
4003–4006 (2015)

Online Engineering

GIFT - An Integrated Development and Training System
for Finite State Machine Based Approaches

Karsten Henke (✉), Tobias Fäth, René Hutschenreuter, and Heinz-Dietrich Wuttke



Ilmenau University of Technology, Ilmenau, Germany


{karsten.henke,tobias.faeth,rene.hutschenreuter,
dieter.wuttke}@tu-ilmenau.de

Abstract. At the Ilmenau University of Technology's "Integrated Communication
Systems" department, a main teaching concept deals with the design of digital
control systems. Different lectures from the 1st to the 8th semester use Finite
State Machines (FSM) as a specification technique to realize different design
tasks. During undergraduate studies, the basics of Finite State Machines and their
usage within the design of digital control systems are taught. To conceptualize
more complex digital systems, as required in higher courses, it is necessary to use
powerful toolsets. One example of such a toolset is the GIFT (Graphical
Interactive Finite State Machine Toolset) system, developed by the Integrated
Communication Systems Group at the Ilmenau University of Technology. With
this toolset we want to extend our remote lab GOLDi and implement new
techniques for a web-based development system for Finite State Machines.

Keywords: Control engineering education · Web-based design tools · Remote laboratories

1 Introduction

With our hybrid online lab GOLDi (Grid of Online Lab Devices Ilmenau), described in
several papers [1–6], we support the design process of digital control systems, which
usually consists of the conceptual formulation and the design of the control algorithm
to finally achieve a validated control (see Fig. 1). For the functional description, we offer
different specification techniques by using noncommercial development tools for
various web-based control units in the remote lab:
• a Finite State Machine (FSM) based design on the basis of digital automata - executed
within a client-side FSM interpreter
• a software-oriented design in C or assembler executed on microcontrollers
• a hardware-oriented design in hardware description languages or schematic block design using FPGAs
Simulation and visual prototyping help to identify functional errors before starting
practical work on real physical systems (the electro-mechanical models) in the lab room
with one of the selected control units.


Fig. 1. GOLDi design flow

While several professional or semi-professional design tools are available for soft‐
ware and hardware oriented approaches, the FSM based design is only partially
supported. That’s why we enhanced the functionality of our GOLDi system by the GIFT
(Graphical Interactive Finite State Machine Toolset) system which will be described in
detail within this contribution. The GIFT system is an integral part of the GOLDi remote
lab infrastructure. The first conceptual ideas for this GIFT system were presented during
REV2016 [7].
In the following, we will demonstrate how students can use the GIFT system to:
• specify the given task by using Finite State Machines
• simulate their design within the GIFT system to achieve a faultless solution and
• export the generated next-state and output equations to the GOLDi remote lab infra‐
structure to test their solution under real environmental conditions with electrome‐
chanical hardware models.

2 GIFT within the GOLDi Remote Lab Infrastructure

Based on the flexible grid structure of the GOLDi system an experiment consists of two
components: on the one hand there are various control units (e.g. FSM, microcontroller,
FPGA). On the other hand, there are the electro-mechanical physical systems (e.g.


elevator, 3-axis or warehouse models). A detailed description of the whole GOLDi
architecture as well as of the different working modes can be found in [1–4].
As shown in Fig. 2, the GIFT system is part of the tool chain which is executed on the
client side within a web browser. All client-side tools are downloaded from the GOLDi
cloud [12]. Two terms are used to classify the client-side tools:
• Design Environment (GIFT, Atmel Studio, Quartus Prime)
• Execution Environment (ECP- Experimental Control Panel)

Fig. 2. GIFT within the GOLDi architecture

2.1 Design Environment

The Design Environment is used to specify the user developed design by using a spec‐
ification technique determined by the control unit. For FSM based designs, this Design
Environment is the GIFT system which provides methods for:
• general FSM administration
• input the FSM as automaton graph or transition table
• handling of transition conditions and coding of states
• simulation of single and parallel automata
• generation of the next-state (z) and output (y) equations
• export of z/y equations to the ECP of the GOLDi system
• generation of equations for D and JK flip-flops
Other examples of Design Environments are Atmel Studio [8], in the case of a
microcontroller-based control unit for a software-oriented implementation, or Quartus
Prime from Altera [9], in the case of an FPGA-based control unit, for a more
hardware-oriented design. The Design Environment is not limited to specification: it also
provides support for syntax checking of the design in the specification language as well
as automatic tools for verification and validation. An important feature is also the
possibility to generate output data (e.g. GIFT export data or Atmel *.hex and Altera *.pof
binary files), which can finally be loaded into the Execution Environment.


2.2 Execution Environment


The Execution Environment is used to execute the user design within the GOLDi architec‐
ture. This is done by using the Experiment Control Panel (ECP). The ECP supports any of
the available control units within the GOLDi architecture and is dynamically configured on
startup to fit the user’s choice in control unit/electro-mechanical physical system.

Fig. 3. I/O monitor (e.g. sensor signals) of the ECP for the 3-axis portal

By using the ECP [7], students can:


• upload the synthesized/compiled designs to the corresponding control units (e.g.
microcontroller, FPGA) in the lab room
• import designs previously exported from the GIFT and execute them in the ECP’s
integrated FSM interpreter
• watch the experiment by manipulating environmental variables inside an I/O monitor
(Fig. 3 shows an example to monitor the sensor signals of the 3-axis portal) or by
observing the control of the physical system directly via a webcam
• handle the experiment (e.g. start, stop, reset)
• use the interactive debugging features (break on sensor/actuator changes or special
conditions)
• single step processing, by pausing the execution on every sensor/actuator change
• change environmental variables if necessary


• choose an individual initial situation for the experiment by manipulating the physical
system via mouse or keypad

3 FSM Based GOLDi Design Flow

By using Finite State Machines as specification technique to realize various control tasks, the
graphical interactive FSM toolset GIFT can be used directly within the GOLDi design flow
(see Fig. 4).

Fig. 4. GIFT integrated GOLDi design process

3.1 Control Task Example

As an example, to introduce Finite State Machines for students in the first semester, we will
discuss a design task by using the electro-mechanical model “3-Axis portal” from the
GOLDi remote lab infrastructure:
“On one spindle of a 3-Axis Portal crane, a tool carriage can be moved to the right and
to the left. Limit switches provide input information on the left end position (xl) as well as the
right end position (xr) of the tool carriage (xr, xl).

The motion can be controlled via the output variables (yl, yr) between
• motion to the left (yl = 1, yr = 0),
• motion to the right (yl = 0, yr = 1) and
• stop (yl = yr = 0).
An additional input variable xs signalizes
• stop motion (xs = 0) or
• movement (xs = 1) to the left or right.
After a possible break, the movement in the original direction should be continued.”

3.2 GIFT FSM Design

In the following we demonstrate how students can specify their design based upon the given
task by using Finite State Machines, simulate their design within the GIFT system to achieve
a faultless solution step by step and can finally export the generated next-state and output
equations to the ECP within the GOLDi remote lab infrastructure.

Design of the Automaton Graph


Such kinds of tasks are solved by developing a formal description of the control algo‐
rithm on the basis of an automaton graph and the corresponding Boolean equations.


Figure 5 shows two possibilities of corresponding automaton graphs by using the graph
editor of the GIFT system:

Fig. 5. Mealy and Moore automata for the spindle control task

• a Mealy automaton graph with two states as well as


• a Moore automaton graph with four states.
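
To make the semantics of such a specification concrete, the following sketch shows how the four-state Moore variant of Fig. 5 (right) could be expressed and stepped in code. It is only an illustration of the FSM interpretation used above, not part of the GIFT or GOLDi implementation; the state names Z0-Z3 and the exact transition conditions are assumptions derived from the task description (Z0/Z2 as the two stop states, Z1 motion to the left, Z3 motion to the right).

```typescript
// Minimal Moore-automaton sketch for the spindle control task (illustrative only).
// Inputs:  xl = left end position, xr = right end position, xs = move/stop switch.
// Outputs: yl = motor left, yr = motor right (Moore: outputs depend on the state only).

type State = 'Z0' | 'Z1' | 'Z2' | 'Z3';          // Z0/Z2: stop states, Z1: move left, Z3: move right
interface Inputs { xl: boolean; xr: boolean; xs: boolean; }
interface Outputs { yl: boolean; yr: boolean; }

// Output function of the Moore automaton: one output vector per state.
const output: Record<State, Outputs> = {
  Z0: { yl: false, yr: false },   // stopped, will continue to the left
  Z1: { yl: true,  yr: false },   // moving to the left
  Z2: { yl: false, yr: false },   // stopped, will continue to the right
  Z3: { yl: false, yr: true  },   // moving to the right
};

// Next-state function: a break (xs = 0) leads to the stop state that remembers the direction.
function nextState(z: State, { xl, xr, xs }: Inputs): State {
  if (z === 'Z0') return xs ? 'Z1' : 'Z0';                  // resume motion to the left
  if (z === 'Z1') return !xs ? 'Z0' : (xl ? 'Z3' : 'Z1');   // pause, or reverse at the left limit
  if (z === 'Z2') return xs ? 'Z3' : 'Z2';                  // resume motion to the right
  return !xs ? 'Z2' : (xr ? 'Z1' : 'Z3');                   // Z3: pause, or reverse at the right limit
}

// One simulation step, e.g. driven by the sensor values of the 3-axis portal.
let z: State = 'Z0';
z = nextState(z, { xl: false, xr: false, xs: true });
console.log(z, output[z]);                                   // -> Z1 { yl: true, yr: false }
```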

Simulation of the Design


The student can proceed to the simulation process using graphical controls. That leads
to appropriate state transitions caused by the changed set of input variables. Via wave‐
form simulation (see Fig. 6) the temporal sequence of input and output variables and the
internal state variables (coding the states) of partial automata can be shown.

Fig. 6. Simulation tool of the GIFT system


3.3 Export to ECP


Most of the students are highly motivated in realizing such design tasks in a real labo‐
ratory because they can see the results of their design immediately. That’s why the GIFT
system offers the possibility to export the generated next-state and output equations
directly to the ECP of the GOLDi system (see Fig. 7).

Fig. 7. GIFT: ECP export functionality

Because the design is typically made with user-defined variables (e.g. xl, yr) the
student must adapt them to the real sensor and actuator interface notations (e.g. xr must
be replaced by x00 of the 3-Axis Portal sensor signals, which means “X-Axis: Crane
position right”). This can be done interactively for each input and output variable within
the ECP (see Fig. 8).

Fig. 8. ECP: GIFT import functionality


3.4 Design Execution


After uploading the next-state and output functions from GIFT or inserting these equa‐
tions manually to the ECP, students are able to start the lab procedure and watch the
behavior of their implemented design (see Fig. 3).
In this case, the electro-mechanical model will be controlled via the Internet “from
a distance” through the interpreter, running inside the student’s client PC. No additional
control units in the remote lab are necessary. In this case, only the input and output
signals between the virtual control and the physical system are transferred via Internet.
In case of a selected virtual physical system the real physical system is represented
by a simulation component within the ECP running on the client machine. No Internet
connectivity to the laboratory and the physical systems in the remote lab is necessary
for this kind of experiment. This mode is especially suitable for face-to-face lectures in
a classroom where each student can work with the same virtual physical system simul‐
taneously as well as in situations in which a stable network connection cannot be guar‐
anteed (e.g. during travel or while using mobile networks).

4 Further Applications of the GIFT System

In this chapter, further applications of the GIFT tool beside the standard GOLDi
development process mentioned above are explained. These applications are used
alongside our teaching models at Ilmenau University of Technology and have proven
successful in teaching in-depth knowledge of techniques to achieve flawless designs
based on Finite State Machines.

4.1 Teaching by Demonstration

Besides teaching the basics of FSMs, the GIFT system is used in undergraduate lessons
to demonstrate different, unexpected side effects which can occur when an FSM-based
specification is incomplete and/or contains contradictions. The GIFT system can
demonstrate the resulting faulty behavior of the design task and gives information on how
to avoid it.
Due to the intuitive approach to the design of automaton graphs, it is possible that students
• forget to specify all next-state conditions, which leads to an incomplete design, or
• specify inconsistent next-state conditions, which leads to a contradictory design.
In both cases it is not recommended to realize such faulty designs, because the behavior
of the implemented design is normally completely different from the desired behavior.
Figure 9 (left) gives an example of a faulty Moore automaton for the spindle design task
with contradictions in states Z1 and Z3. The GIFT system automatically indicates an
incorrect design by highlighting the faulty state transitions in red and calculates the
behavior of the resulting design based on the faulty next-state functions (Fig. 9, right).
The correct Moore automaton graph is shown in Fig. 5, right.
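
The two fault classes can be stated precisely: a state is incompletely specified if the disjunction of its outgoing transition conditions does not cover every input combination, and it contains a contradiction if two outgoing conditions can be true at the same time. The following sketch illustrates this idea by brute-force enumeration over small input vectors; it is a simplified illustration of the kind of check GIFT performs automatically, not the GIFT algorithm itself, and the example conditions are invented for demonstration.

```typescript
// Illustrative completeness/contradiction check for one FSM state (not the GIFT implementation).
// A transition condition is modelled as a predicate over the vector of input variables.
type Condition = (inputs: boolean[]) => boolean;

// Enumerate all 2^n input combinations for n input variables.
function* allInputs(n: number): Generator<boolean[]> {
  for (let v = 0; v < 1 << n; v++) {
    yield Array.from({ length: n }, (_, i) => ((v >> i) & 1) === 1);
  }
}

// Returns the input combinations that are not covered by any outgoing condition
// (incompleteness) and those covered by more than one condition (contradiction).
function checkState(conditions: Condition[], numInputs: number) {
  const uncovered: boolean[][] = [];
  const contradictory: boolean[][] = [];
  for (const inputs of allInputs(numInputs)) {
    const hits = conditions.filter(c => c(inputs)).length;
    if (hits === 0) uncovered.push(inputs);
    if (hits > 1) contradictory.push(inputs);
  }
  return { uncovered, contradictory };
}

// Example: two outgoing conditions of a single state over the inputs (xs, xl).
// Both require xs = 1, so the input combinations with xs = 0 are reported as uncovered.
const stay: Condition = ([xs, xl]) => xs && !xl;
const leave: Condition = ([xs, xl]) => xs && xl;
console.log(checkState([stay, leave], 2));
```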


The idea is to stop the motion during the "motion to left" (state Z1) or "motion to right"
(state Z3) by switching to the two "stop" states Z0 or Z2 when the xs variable is
deactivated; the motion to the left or right is continued by activating xs again. Due to the
intuitive design, students often ignore that it is only possible to change the motion
direction (between Z1 and Z3 or vice versa) or to stay in these states while xs is activated.
Ignoring this results in the mentioned contradictions. Although the resulting design can
be activated to drive to the left or right from the two "stop" states Z0 or Z2, it is never
possible to stop the motion again (see Fig. 9, right). In the worst case the motion will not
be stopped in Z3 if xs is deactivated and the right end position is reached, which could
finally damage the electro-mechanical model (or the hardware model in the face-to-face
laboratory without any further protection mechanisms).

Fig. 9. GIFT: Incorrect FSM design (left) and resulting automaton graph (right) for the given
spindle control task

Fig. 10. ECP: Termination of the design execution due to design errors


If students decide to ignore the automatic design checks of the GIFT system, export
faulty next-state and output functions to the ECP of the GOLDi system and start the
execution, the implemented protection unit will finally terminate the execution to avoid
any damage to the electro-mechanical hardware models (as shown in Fig. 10).

4.2 GIFT as Interactive Training System


Complementing the theoretical knowledge taught during lectures, students can use the
GIFT system as an interactive training system to deepen their knowledge in specification
techniques or for exam preparation. For that, students will get predefined automata
designs:
• to derive the next-state and output functions and
• to check the given automaton regarding completeness and contradictions and can
compare their solutions with the calculations of the GIFT system.

4.3 Preparation for Hands-on Lab Sessions

In preparation of the hands-on laboratories students can enter and simulate their FSM
design for the given lab task. The GIFT assisted preparation process for hands-on lab
sessions is shown in Fig. 11. In contrast to the automated GIFT integration into the
GOLDi infrastructure and the execution of the design in the ECP, for hands-on lab
sessions the students must realize their design by manually connecting integrated circuits
(e.g. AND/NAND, OR/NOR gates, D/JK flip-flops) with wires.

Fig. 11. GIFT assisted preparation for hands-on lab sessions

Fig. 12. GIFT: Export of minimized D Flip-Flop and JK Flip-Flop equations (left) for hands-on
lab session (right)


Equations for D and JK flip-flops (Fig. 12, left) as well as output equations can be
generated by the GIFT system. During the hands-on laboratory students will build up a
sequential circuit on the basis of these next-state and output functions to control simple
technical facilities. While the results achieved within the GOLDi infrastructure are self-
assessed, in hands-on lab sessions this is done by a teacher/tutor (Fig. 13, right).
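
For reference, the relation between a next-state function and the corresponding flip-flop excitation equations is the standard textbook one sketched below. It only recalls the rule on which such an export relies; the notation (z_i for the i-th state variable, z_i+ for its next-state function) is ours and is not a reproduction of the GIFT output.

```latex
% Standard excitation equations derived from a next-state function z_i^{+}
% (the kind of equation set exported by GIFT; notation chosen for this example).
\begin{align*}
  \text{D flip-flop:}  \quad & D_i = z_i^{+} \\
  \text{JK flip-flop:} \quad & J_i = z_i^{+}\big|_{z_i = 0}, \qquad
                               K_i = \overline{z_i^{+}}\big|_{z_i = 1}
\end{align*}
```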

Fig. 13. GIFT: Automaton graph for the water level control task (left) and the specification
scheme (right)

4.4 Design of Parallel Automata


Beside the design of single automata, students can use the GIFT system to design parallel
automata. In principle, there are two possibilities:
• the automatic decomposition of a single automaton into several parallel automata or
• the intuitive design of a set of single automata which finally interact as a parallel system.

Automatic Decomposition of a Single Automaton into Parallel Automata


This mode will be used mainly for teaching purposes to demonstrate different aspects
for the design of parallel automata. Based on the design of a single automaton, students
can choose the number of parallel automata and the state coding of the parallel automata.
Figure 13 gives an example for a single automaton for a water level control task.
Although GIFT detects contradictions in two states (Z2 and Z3, marked in red), it is
possible to use this design because these contradictions are part of the don’t care condi‐
tions of the water level system.
The water level control task (see visual model in Fig. 14) in detail:
“The water level control experiment consists of a tank, two pumps, four water level sensors and
a consumer represented by a drain valve at the bottom of the tank. The pumps are used to raise
the water level and the drain valve is used to consume the water in the tank. The task is to control


the pumps in a way that the water level remains between the lowest and the highest level. If only
one pump is active, the pumps should be used alternately."

Fig. 14. GIFT: Decomposition into two parallel automata with 3 and 2 states

The number of active pumps as well as the alternation is described in the specification
scheme (Fig. 13, right).
One example for decomposition into two parallel automata is shown in Fig. 14. The
first parallel automaton (Fig. 14, left) is responsible for the number of active pumps:
• Z0: both pumps are inactive
• Z1: one pump is active – depending on the actual state of the second parallel automaton
• Z2: both pumps are active
The second parallel automaton (Fig. 14, right) switches between the states Z0 and Z1
according to the specification scheme (shown in Fig. 13, right).

Fig. 15. ECP: Execution of parallel water level control design in virtual mode


This approach of decomposition is often used in lectures and seminars to discuss


special aspects with the students on the fly. To use the ECP execution for a number of
students it is necessary to switch to the “virtual mode” – shown in Fig. 15.

Intuitive Design of Parallel Automata


In special courses, the design of complex digital control systems is taught to students of
the 8th semester in the form of a so-called "project class". The students have to solve a
complex design task and document all the steps of their work in great detail. During the
development process, a collection of design variants is created, serving as case studies
and provided via the Internet. Examples of such design tasks are, amongst others, controls
for technical facilities like elevators, production cells and high storage warehouse
systems. Solutions of such controls are available, e.g. as a set of parallel automata
(designed with the GIFT system), within the GOLDi remote lab infrastructure.
Figure 16 gives an impression of the execution of a control task for the Production Cell
– realized with 12 parallel automata.

Fig. 16. ECP: Execution of a control task for the “Production Cell” with 12 parallel automata

5 Conclusion

The Integrated Communication Systems Group at the Ilmenau University of Technology
is an expert in the field of Internet-supported teaching of digital system design and is
well experienced in the area of integrated hardware and software systems. Students have
to pass hands-on examinations in a lab to complete the learning outcomes with their own
experience. For all students, hands-on experience is important to deepen their knowledge
of the topics they learned during lectures. Therefore, they can use the described GIFT


system to generate D or JK flip-flop equations to realize their schematics in the hands-on
laboratory.
One of the great advantages of the remote laboratory concept for our highly motivated
students is the possibility to test their theoretical knowledge in a real-life environment
at any time, allowing them to manage their daily schedule more individually and
efficiently. The GIFT system offers the possibility to export the generated next-state and
output equations directly to the Experimental Control Panel of the GOLDi remote lab
system, and is therefore an integral part of the GOLDi remote lab architecture as well
as of the students' learning process. With this possibility to execute their design tasks
within our GOLDi infrastructure, we want to offer the students a working environment
that is as close as possible to a real-world laboratory. Under real laboratory conditions,
disturbances can appear and lead to failures of the control algorithm that cannot be
detected under virtual lab conditions.

Acknowledgment. The authors would like to acknowledge the work of Tobias Vietzke, Andrey
Yelmanov, Lisa-Marie Schilling, Nicole Ponischil, Lennart Planz, Stephen Ahmad, Bastian
Hellweg and David Sukiennik for their work within the GIFT and GOLDi framework.
This work was supported in part by the European Commission within the program “Tempus”,
“ICo-op – Industrial Cooperation and Creative Engineering Education based on Remote
Engineering and Virtual Instrumentation”, Grant No. 530278-TEMPUS-1-2012-1-DE-TEMPUS-
JPHES [10] as well as “DesIRE - Development of Embedded System Courses with
implementation of Innovative Virtual approaches for integration of Research, Education and
Production in UA, GE, AM”, Grant No. 544091-TEMPUS-1-2013-1-BE-TEMPUS-JPCR [11].

References

1. Poliakov, M., Wuttke, H.-D., Larionova, T., Henke, K.: Automated testing physical models
in remote laboratories by control event streams. In: International Conference on Interactive
Mobile Communication, Technologies and Learning, San Diego, CA, USA, October 2016
2. Henke, K., Vietzke, T., Wuttke, H.-D., Ostendorff, S.: GOLDi – Grid of Online Lab Devices
Ilmenau. Int. J. Online Eng. (iJOE) 12(04), 11–13 (2016). Vienna, Austria, April 2016, ISSN
1861-2121
3. Henke, K., Vietzke, T., Wuttke, H.-D., Ostendorff, S.: GOLDi – Grid of Online Lab Devices
Ilmenau. In: Demonstration of Online Experimentation exp.at 2015 International Conference,
São Miguel Island, Azores, Portugal, June 2015
4. Henke, K., Wuttke, H.-D., Vietzke, T., Ostendorff, S.: Using interactive hybrid online labs
for rapid prototyping of digital systems. Int. J. Online Eng. (iJOE) 6, 57–62 (2014). Vienna,
October 2014
5. Henke, K., Ostendorff, S., Wuttke, H.-D., Vietzke, T., Lutze, C.: Fields of applications for
hybrid online labs. Int. J. Online Eng. (iJOE) 9, 20–30 (2013). Vienna, May 2013
6. Henke, K., Ostendorff, S., Vogel, S., Wuttke, H.-D.: A grid concept for reliable, flexible and
robust remote engineering laboratories. Int. J. Online Eng. (iJOE) 8, 42–49 (2012). Vienna,
December 2012
7. Henke, K., Vietzke, T., Hutschenreuter, R., Wuttke, H.-D.: The remote lab cloud “goldi-
labs.net”. In: 13th International Conference on Remote Engineering and Virtual
Instrumentation REV 2016, Madrid, February 2016
8. Atmel Corporation. http://www.atmel.com


9. Altera Corporation. http://www.altera.com


10. ICo-op project Website. http://www.ICo-op.eu
11. DesIRE project Website. http://tempus-desire.thomasmore.be
12. GOLDi-labs cloud Website. http://goldi-labs.net

A Web-Based Tool for Biomedical Signal Management

S.D. Cano-Ortiz1 (✉), R. Langmann2, Y. Martinez-Cañete3, L. Lombardia-Legra3,
F. Herrero-Betancourt3, and H. Jacques2

1 CENPIS, Universidad de Oriente, Santiago de Cuba, Cuba
scano@uo.edu.cu
2 Duesseldorf University of Applied Sciences, IFAD, Duesseldorf, Germany
R.Langmann@t-online.de
3 Department of Informatics, Universidad de Oriente, Santiago de Cuba, Cuba
{ymartinez,lienys,fatima}@uo.edu.cu

Abstract. The paper deals with the design and implementation of a web-based
platform, named WebSA 2.0, oriented to the management of biomedical signals
within a database. It arises from the need to create a space which makes it easier
to share biomedical signals from different sources that undergo digital processing
in support of biomedical research. The use of web technology for that purpose
makes it possible to enlarge the scope of the system, as well as to add services
valuable to any R&D environment. Four types of biomedical signals are
considered: the cry signal, the electroencephalogram signal (EEG), the
electrocardiogram signal (ECG) and the electrooculogram signal (EOG). The
performance of the collaborative web-based system was tested within the intranet
and on several Windows versions with satisfactory results. The WebSA 2.0
system could be useful for any research situation in which the digital processing
of different biomedical signals is involved.

Keywords: Biomedical digital signal processing · Web technology

1 Introduction

The Neurosciences and Image and Signals Processing Study Center (CENPIS) of the
Universidad de Oriente has been working on projects related to the processing of
different biomedical signals for application in the medical clinic (neonatal diagnosis,
anesthesia monitoring, device detection in ECG signals, etc.). These projects require
biomedical signal samples which must be stored on servers arranged for that purpose
to guarantee access to them. Nowadays, implementations of web sites that address this
specialized subject are rare on the World Wide Web (WWW). This is evident in the
absence of any web site dedicated to the storage of biomedical signals of different kinds
with possibilities for free access and for the exchange of experience and information
among specialists.
The CENPIS particularly works on the acquisition and analysis of biomedical
signals, such as: infant crying signal (linked to the extraction of relevant acoustic features
for determination of pathologic diseases in newborn babies), electrocardiogram signals
(ECG) and electroencephalogram signals (EEG) associated to the detection of artifacts


and robustness to noise, as well as electrooculogram signals (EOG), which have a strong
impact on the diagnosis of hereditary ataxia. Based on the signal samples provided by
specialized hospitals for each signal, or on recordings made by the center staff, several
software products have been developed to support the study and diagnosis of different
diseases. As this information has neither a collection protocol nor unified access within
the center, researchers cannot easily manage the signals and the associated information.
The situation is worse when young researchers start working, because they lack a unified
tool from which they can easily obtain the biomedical signals that are the basis of the
DSP (Digital Signal Processing) techniques carried out by the CENPIS researchers [1–3].
Moreover, this situation also applies to allied research groups that base their work on
DSP, not only in our country but in the rest of the world. So, while the work aims to
address this gap at the CENPIS, its usefulness extends beyond our institution.
The aim of this work is to develop a web platform that integrates and facilitates access
to and management of biomedical signals from various sources, as well as resource
sharing among DSP researchers around the world.

2 Materials and Methods

2.1 Useful Biomedical Signal Information

Four biomedical signals are held in the system which is used by the CENPIS researchers:
newborn cries, electroencephalograms (EEG), electrocardiograms (ECG) and ocular
movements (EOG).
Crying: the crying signal consists of a WAV file with a 12 s cry recording, with all
newborns placed in a supine position and a pinch on the calcaneus used as a consistent
pain stimulus. Associated with each signal, basic clinical information regarding the
neurophysiologic status of the baby at birth is collected from its clinical profile [2].
Electroencephalogram and electrocardiogram: these two signals are treated as one in the
system, since both are contained in one data record in which 19 channels correspond to
the electroencephalogram and one channel to the electrocardiogram, so that the overall
record has 20 channels. They are obtained from a MEDICID device1, which digitally
emits a set of five files with different extensions and structures; depending on the
extension, specific information is provided.
The file extensions generated in each record are: .PAT, .INF, .PLG, .MRK and .CDC.
.PAT: It contains clinical patient data, such as name, sex, age, etc. It is a text file.
.INF: It contains information on the characteristics of the record: cut-off frequencies of
the filters, sampling period, etc.
.PLG: It contains signals recorded in units of A/D converter.

1 Medical equipment developed by the Cuban Institute for Digital Technologies, located in
many Cuban hospitals.


.MRK: It contains states, marks and other information inserted in the record. It is an
ASCII file (with characters and 32-bit integers).
.CDC: It contains calibration values and DC level for each channel recorded. It is a
file of floating-point values (single 8087) with the following structure:
Cal (1) DC (1) Cal (2) DC (2) … Cal (n) DC (n)
where each Cal (i) DC (i) is the calibration value and DC level of the i-th channel.
In order to obtain any value in microvolts, simply perform the following operation:

μV(i,j) = (AD(i,j) · Cal(i) − DC(i)) / 10                    (1)
where:
• μV(i,j) is the value in micro volts of the i-th channel, j-th sample.
• AD(i,j) is the value in converter units of the i-th channel, j-th sample. This value is
stored in the file with extension .PLG
• Cal(i) is the calibration value of the i-th channel.
• DC(i) is the DC level of the i-th channel.
It is necessary to divide by ten since Cal(i) and DC(i) are stored in tenths of a microvolt
to avoid losing precision.
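
As an illustration of Eq. (1), the following sketch reads the calibration/DC pairs from a .CDC-style file and converts a raw A/D sample to microvolts. It is only a sketch under the assumptions stated in the comments (4-byte little-endian floats for the .CDC pairs, and a raw sample value assumed to have been read from the .PLG file); the file name is hypothetical and the code is not part of the WebSA 2.0 implementation.

```typescript
import { readFileSync } from 'fs';

// Read the per-channel calibration (Cal) and DC values from a .CDC-style file.
// Assumption: the file stores Cal(1) DC(1) ... Cal(n) DC(n) as 4-byte
// little-endian single-precision floats, following the structure described above.
function readCdc(path: string): { cal: number; dc: number }[] {
  const buf = readFileSync(path);
  const channels: { cal: number; dc: number }[] = [];
  for (let offset = 0; offset + 8 <= buf.length; offset += 8) {
    channels.push({
      cal: buf.readFloatLE(offset),      // Cal(i)
      dc: buf.readFloatLE(offset + 4),   // DC(i)
    });
  }
  return channels;
}

// Eq. (1): convert a raw A/D value of channel i, sample j, to microvolts.
function toMicrovolts(ad: number, cal: number, dc: number): number {
  return (ad * cal - dc) / 10;
}

// Usage with a hypothetical file name and a raw sample value taken from the .PLG record.
const channels = readCdc('record.CDC');
const uV = toMicrovolts(1234, channels[0].cal, channels[0].dc);
console.log(`Channel 1 sample in microvolts: ${uV}`);
```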
Ocular movement (oculogram): this signal has the same composition as the EEG and
ECG signals in terms of the number and structure of the files; only the number of channels
differs, ranging from 4 to 6 depending on the required signal recording.

2.2 Criteria for Design

The criteria for the system design were based on several factors: type of code, degree
of development, market position, usability, accessibility, download speed and
functionality. The client/server architecture was chosen, an architecture in which client
and server can act as a single entity or as separate entities, with the added possibility of
belonging to the same platform or to different platforms [4, 5]. For the implementation
of the system, Apache was used as the web server, MySQL Server for database
management and Joomla as the content management system (CMS) [2, 5–11].
Figure 1 shows the layout of the client/server architecture used in this work:

Fig. 1. System architecture

The user's web browser sends a request to the web server, which retrieves the file and
passes it to the PHP engine for processing. The PHP engine analyzes the script, opens
a connection and sends the query to the MySQL server, which processes it and returns
the result to the PHP engine. The PHP engine finishes executing the script, applying
HTML formatting to the results where appropriate, and finally

returns it to the web server. The web server returns the HTML code to the user's browser
for display.

2.3 Domain Model


The domain model has eight conceptual classes, attributes, multiplicity and relationships
among them, which provide information necessary to understand the domain in the
context of the current requirements.
In this work, the software requirements were determined through a complete description
of the behavior of the system to be developed, defining the set of use cases (functional
requirements) and determining the restrictions on the design and implementation
(non-functional requirements). At the same time, dependencies and possible relationships
with other software were also established.

2.4 Proposal of the WebSA 2.0 System

Given the problem situation of the research, the object of automation and the concepts
presented in the foregoing sections, a 3-layer distributed system based on a web interface
was implemented, a system able to manage new signals, their samples and the access to
them by users or customers (e.g. researchers from or related to the CENPIS). This system
was implemented with the Joomla CMS not only to keep the scientific community
informed about the current work of CENPIS but also to enable communication among its
members through the configuration of different services such as news, articles, forums,
blogs, private messaging, etc. With the implementation and deployment of this system, a
solution to the research problem is given in compliance with the aims of the work.

3 Results and Discussion

In this section, the performance of the web system resulting from the analysis and design
phases stated above is described. This system manages the four biomedical signals needed
by the CENPIS research staff to carry out their investigations. For the design of the
interface, all functionalities provided by the CMS were considered, using a template
designed to fix the style of the interface. In that sense, some features from the design of
the previous platform (version 1.0) are included.
The system was evaluated on the CENPIS intranet with the participation of researchers
working with the four biomedical signals described. All aspects concerning the data
management section within the corpus were successfully tested: the user was able to gain
access by simply clicking on one of the four options for the type of signal present in the
corpus. Moreover, the user also has the option to access other services such as online chat
(e.g. instant communication with the administrator or another remote user), blogging to
discuss ideas, new procedures, etc., e-mail for communication with users within the
intranet or the Internet, a repository of technical information, links to other related sites,
etc.


Finally, all the proposed functionalities were tested with satisfactory results. The tests
showed compliance with the requirements set in the design stage, such as:
Usability: the system can be used by all researchers, doctors and students interested in
analyzing the signals offered on the site (it showed consistency in its simple interface,
limited technical language, and ease of navigating the site).
Performance: quick access to the pages, thanks to a proper use of the resources available
in the client/server model and to the speed of the database queries.
Portability: the system is able to run on Microsoft Windows operating systems (it ran
successfully on Windows 98, XP, Vista, 7 and 10).
Security: different roles were defined according to the activity that corresponds to each
user in the system, with clearly marked privileges for each user role and taking into
account the vulnerability of the data provided [12].
Appearance and interface: the system is accessed through a web browser (with a simple
and explicit design that allows the user to interact with the system without needing deep
training).
Help and online documentation: the system has a help page offered to the online user,
with basic instructions on how to navigate the site (it also has a site map and a page which
provides information about the site developers).
Dependencies and relations with other software: the WebSA 2.0 platform is inserted into
a collaborative software environment that integrates different DSP-based tools developed
by CENPIS researchers2. WebSA 2.0 also fixes some problems present in the initial
version, WebSA on Cry3 [13].

4 Conclusions

In this research, a web-based system is developed that facilitates remote access to and
management of a corpus of biomedical signals of different nature (acoustic newborn cry
signal, ECG signal, EEG signal and EOG signal), linked to a research environment where
digital processing of biomedical signals is used (e.g. CENPIS). Incorporating web
technology adds useful features to disseminate, share and manage information (through
chat services, blog and e-mail). The system proved all its functions successfully on the
CENPIS intranet and for different configurations of Windows. The use of open source
(free) software, object-oriented standards and portability make it a useful tool for other
users or potential users from other scientific or academic institutions that deal with
biomedical signal processing. The next step in this work will be real-time operation on
the Web, which will allow the participation of foreign academic institutions in the
evaluation of the WebSA system.

2 There is a link to MediCry 1.0 (a MySQL database with all clinical and biomedical information
of the newborn babies whose cries were previously recorded) and to CryTrainer 1.0 (a web-based
trainer oriented to learning how to read spectrograms of the infant cry signal, with potential for
newborn diagnosis).
3 WebSA on Cry (or WebSA 1.0) was developed in January 2007 by CENPIS researchers for cry
signals only.


Acknowledgements. Part of this research was made possible thanks to the financial support
derived from the Webbasierte FuE-Plattform zur Signalanalyse (WebSA) project, in collaboration
with the University of Applied Sciences Duesseldorf, Germany, and the INAOE (Mexico).

References

1. Vázquez-Seisdedos, C.R., León, A.A.S., Neto, J.E.: A comparison of different classifiers


architectures for electrocardiogram artefacts recognition. In: Ruiz-Shulcloper, J., Sanniti di
Baja, G. (eds.) CIARP 2013, Part II. LNCS, vol. 8259, pp. 254–261. Springer, Heidelberg
(2013). Lectures Note on Computer Sciences 8259, p. 487 ff
2. Duvergel, F., Guerra, S.: WebSA on cry: tecnología web aplicada al análisis de llanto infantil.
Tesis de Diploma aspirante al título de Ingeniero en Telecomunicaciones. Universidad de
Oriente, Santiago de Cuba, Cuba (2007)
3. Vázquez-Seisdedos, C.R., Neto, J.E., Valdés-Pérez, F.E, Limao de Oliveira, R.C.:
Procesamiento y análisis del ECG ambulatorio: algunos problemas y soluciones, Revista
Ciencia en su PC: Num 1, 57–66 (2010)
4. Booch, G., Rumbaugh, J., Jacobson, I.: El lenguaje unificado de modelado. Addison Wesley,
London (2000)
5. Hansen, W.G.: Diseño y Administración de Bases de Datos. Ediciones Prentice Hall, 2da
edición (1997)
6. Silberschatz, A., Korth, H.F., Sudarshan, S.: Fundamentos de base de datos. Cuarta Edición,
McGRAW, Madrid (2002)
7. Larman, C.: UML y Patrones. Introducción al análisis y diseño orientado a objetos. Segunda
edición. Prentice Hall (1999)
8. White, S.: Manual del Usuario 1.0. x por Joomla (2006)
9. MySQL AB: MySQL 5.1 Reference Manual. Mai (2007)
10. Pressman, R.: Ingeniería de Software. Un enfoque Práctico. Quinta Edición. Ciudad de la
Habana, Cuba (2005)
11. Sæther-Bakken, S., Aulbach, A.: Manual PHP Online Posting 10 (2002)
12. Cano-Ortiz, S.D., Duvergel, F.V., Guerra, S., Bordies, O., Escobedo, D.I., Subert, A., Reyes,
C.A.: WebSA on cry: web-based technology applied to the cry analysis. In: Proceedings of
International Congress on Remote Engineering and Virtual Instrumentation REV 2008,
Duesseldorf, Germany (2008)
13. Stein, J.Y.: Digital Signal Processing: A Computer Science Perspective. Wiley, New York
(2000)

Optimization of Practical Work for Programming Courses
in the Context of Distance Education

Amadou Dahirou Gueye1 (✉), Pape Mamadou Djidiack Faye2, and Claude Lishou3

1 Alioune Diop University, Bambey, Senegal
dahirou.gueye@uadb.edu.sn
2 Virtual University, Dakar, Senegal
papedjidiack.faye@uvs.edu.sn
3 Cheikh Anta Diop University, Dakar, Senegal
clishou@ucad.sn

Abstract. To respond to the emergence of new training technologies, MOOC
designers and their platforms are increasingly concerned about learners, with
emphasis on the practical work requirements that are essential for any technical
training. Currently, practical work solutions are available as plug-ins to extend
the functionality of distance learning platforms. However, these solutions, while
integrating video, audio, chat and screen sharing features, are generic, whereas
the requirements for carrying out practical work may differ depending on the
specialty. For literary disciplines, learners just need to see and hear the teacher,
while for others, like computer science, teachers and learners need to implement
computer programs. Despite the existence of a screen sharing feature in virtual
classroom solutions, distance learning platforms do not offer the ability to
properly carry out practical work in programming courses. For the latter, it is not
only a matter of having visibility into the work of a participant but of creating an
interactive environment between the participants. This interactivity cannot be
managed with screen sharing solutions, which consume much bandwidth. Thus,
in this paper, we propose an optimized solution for practical work that integrates
easily into a distance education platform. The relevance of our approach has been
demonstrated through the implementation of a programming practical work
session led by a tutor with learners at remote workstations. Our solution not only
provides a global view of the practical work of all participants but also allows the
tutor to interact with each participant while allowing the others to monitor these
interactions and intervene as necessary. The solution also allows a participant to
compile and/or execute code in a way that is visible to the other participants.

Keywords: Distance learning · Distance learning platforms · Optimization · Practical work online · Computer programming · Compiler

1 Introduction

The global context of higher education is marked by a significant increase in student


numbers at the country level and international student flows. Major universities are
unanimously committed to the competition to capture these flows. They are more than


ever the driving force behind the so-called knowledge societies, with particular emphasis
globally on the development of STEM (Science, Technology, Engineering and
Mathematics). Science education is all about handling, experimentation and practical
work. On the other hand, distance education has grown rapidly in recent years, offering a
massive number of students from all over the world the best courses from the best
universities.
However, distance learning designers and their platforms are increasingly concerned
about learners, focusing on the need for practical work, which is indispensable in certain
types of education. Hence the obvious interest in anticipating the needs, expectations and
potential problems of learners in the design of online training, in order to meet the
challenges of distance, isolation or remoteness between the different actors [1, 2]. To this
end, many distance learning platforms exist and incorporate self-assessment and online
assessment tools [3]. This is the case of Moodle, which is used by many universities [4].
Moreover, we find that the main MOOC platforms have already taken the trouble to
provide answers to these challenges: in a few months, Coursera [5], edX [6] and
Udacity [7] have evolved towards better navigation ergonomics and, especially, have
added features to participate fully in interactions which, even in academic MOOCs, are
part of learning. On the other hand, we also see that Google and Openfire use the
WebRTC API to provide tools for communicating in real time from browsers; these tools
could be used to enable learners and teachers to communicate in real time. The authors
in [8] proposed a virtual classroom solution easily integrated into distance learning
platforms which, for mathematics, removes limits on practical work by allowing students
to enter mathematical formulas.
However, more thought should be given to extending the functionality of learning
platforms with the aim of increasing the possibilities for collaboration, focusing on
carrying out practical work in scientific disciplines. Within the field of Science,
Technology, Engineering and Mathematics (STEM), this paper mainly addresses
computer programming.
Currently, practical work solutions are available as plug-ins to extend the functionality
of distance training platforms [6, 8]. These solutions, while integrating video, audio, chat
and screen sharing features, are generic, whereas the requirements for carrying out
practical work may differ depending on the specialty. For literary disciplines, learners
just need to see and hear the teacher, while for others, like computer science, teachers and
learners need to implement computer programs. Despite the existence of a screen sharing
feature in virtual classroom solutions, distance training platforms do not offer the ability
to properly carry out programming practical work. For the latter, it is not only a matter of
having visibility into the work of a participant but of creating an interactive environment
between the participants. This interactivity cannot be managed with screen sharing
solutions, which consume much bandwidth. In this paper, we propose an optimized
solution for practical work in programming that integrates easily into a distance education
platform. The rest of the paper is organized as follows: in Sect. 2, we first present the
state of the art on Node.js, WebRTC technology and the architecture of a standard
compiler. Then, in Sects. 3 and 4, we present the integration architecture of our solution
in a training platform and some operating scenarios. Section 5 presents the results. The
conclusion summarizes the results and perspectives.


2 Related Work

First, we present the Node.js and WebRTC technologies used in our approach and the
architecture of a standard compiler.

2.1 Node.js

Node.js is one of the recent frameworks that implement the event model through the
entire stack. Developed in 2009 by Ryan Dahl, Node.js (or just Node) is a single-threaded
server-side JavaScript environment implemented in C and C++ [9]. The Node.js
architecture makes it easy to use JavaScript as an expressive, functional language for
server-side programming that is popular among developers [9]. Node.js uses the
JavaScript V8 engine developed by Google [10], a fast and powerful implementation of
JavaScript [11] that helps Node achieve top performance.
Unlike other modern environments, a Node process doesn't rely on multithreading to
support concurrent execution; it's based on an asynchronous, event-driven I/O model
[11]. Its event-driven, non-blocking I/O model makes it lightweight and efficient, ideal
for data-intensive real-time applications that run across distributed servers [12].
For better or worse, JavaScript is the world’s most popular programming language.
If you’ve done any programming for the web, it’s unavoidable. JavaScript, because of
the sheer reach of the web, has fulfilled the “write once, run anywhere” dream that Java
had back in the 1990s [13, 14]. Around the time of the Ajax revolution in 2005, Java‐
Script went from being a “toy” language to something people wrote real and significant
programs with. Some of the notable firsts were Google Maps and Gmail, but today there
are a host of web applications from Twitter to Facebook to GitHub. Since the release of
Google Chrome in late 2008, JavaScript performance has improved at an incredibly fast
rate due to heavy competition between browser vendors (Mozilla, Microsoft, Apple,
Opera, and Google). The performance of these modern JavaScript virtual machines is
literally changing the types of applications you can build on the web. A compelling,
and frankly mind-blowing, example of this is jslinux, a PC emulator running in
JavaScript where you can load a Linux kernel, interact with the terminal session, and
compile a C program, all in your browser [13, 14].
Node uses V8, the virtual machine that powers Google Chrome, for server-side
programming. V8 gives Node a huge boost in performance because it cuts out the
middleman, preferring straight compilation into native machine code over executing
byte code or using an interpreter. Because Node uses JavaScript on the server, there are
also other benefits [9, 11]. Its goal is to offer an easy and safe way to build
high-performance, scalable network applications in JavaScript. These goals are achieved
thanks to its architecture [13, 14]:
Single Threaded:
Node use a single thread to run instead of other server like Apache HTTP who spawn
a thread per request, this approach result in avoiding CPU context switching and massive
execution stacks in memory. This is also the method used by Nginx and other servers
developed to counter the C10K problem [12].


Event Loop: Written in C++ using Marc Lehmann’s libev library, the event loop
uses epoll or kqueue as a scalable event notification mechanism [13, 15].
Non-blocking I/O: Node avoids the CPU time usually lost waiting for an input
or output response (database, file system, web service), thanks to the full-featured
asynchronous I/O provided by Marc Lehmann’s libeio library [13].
These characteristics allow Node to handle a large amount of traffic by processing each
request as quickly as possible in order to free the thread for the next one.
Node has built-in support for the most important protocols such as TCP, DNS, and HTTP
(the one we focus on). A design goal of a Node application is that any function
performing I/O must use a callback; that is why no blocking methods are provided
in Node’s API [9].
The HTTP implementation offered by Node is very complete and natively supports
chunked requests and responses as well as hanging requests for comet applications. Node’s
footprint for each HTTP stream is only about 36 bytes [12, 13]. Figure 1 shows the Node.js architecture.

Fig. 1. Node.js architecture

In our solution, Node.js plays a very important role. It allows us to manage communications
between the web client and the compiler: the messages received by the
Node.js server are transformed into messages understandable by the compiler. Message
formats are therefore defined between the client and the Node.js server.
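As an illustration, the following is a minimal sketch of such a relay, assuming the 'ws' package for the WebSocket connections with the browsers, a plain TCP socket towards the compiler, and the command names defined in Sect. 3.3 (field names such as source and programId are illustrative only, not part of the paper's protocol):

// Minimal sketch of the Node.js relay between browser clients and the compiler.
const WebSocket = require('ws');
const net = require('net');

const wss = new WebSocket.Server({ port: 3000 });        // browser clients
const compiler = net.createConnection({ port: 9000 });   // compiler server (port assumed)

// Broadcast everything coming from the compiler (e.g. DISPLAY_OUTPUT, ASK_INPUT)
// to every client of the room.
compiler.on('data', (data) => {
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) client.send(data.toString());
  });
});

// Translate client chat messages into compiler commands.
wss.on('connection', (client) => {
  client.on('message', (raw) => {
    const msg = JSON.parse(raw.toString());
    if (msg.type === 'compile') {
      compiler.write(JSON.stringify({ cmd: 'ASK_COMP', source: msg.source }) + '\n');
    } else if (msg.type === 'run') {
      compiler.write(JSON.stringify({ cmd: 'ASK_EXEC', programId: msg.programId }) + '\n');
    } else if (msg.type === 'input') {
      compiler.write(JSON.stringify({ cmd: 'ANSWER_INPUT', value: msg.value }) + '\n');
    }
  });
});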

2.2 WebRTC (Web Browsers with Real-Time Communications)

WebRTC is an open source project introduced by Google in 2011 [16] which provides
real-time communications via a JavaScript API [17]. The project aims to develop a
technology allowing web browsers to support interactive point-to-point communications
and to offer synchronous data exchange [18, 19]. WebRTC is intended to give
browsers the ability to offer audio, video or written communication, file transfer, screen sharing and
remote control of computers.
The main components of the WebRTC API defined by the W3C working groups
(World Wide Web Consortium) and IETF (Internet Engineering Task Force) are [20, 21]:
MediaStream: allows a browser to access the camera and microphone;
RTCPeerConnection: Enables audio and video calls;
RTCDataChannel: allows browsers to send data in a peer-to-peer connection.


Figure 2 shows the WebRTC C++ API implemented in some browsers, as well as the Web
WebRTC API that allows web developers to integrate the services offered by WebRTC
into their applications.

Fig. 2. WebRTC architecture

In our solution the WebRTC will be used to enable audio communication between
participants.
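As an illustration, a minimal browser-side sketch of an audio-only connection is given below; the signalling object carrying the offer/answer and ICE candidates is assumed to be a WebSocket towards the Node.js server, and the handling of the remote answer is omitted for brevity:

// Minimal audio-only WebRTC sketch (browser side).
async function startAudioCall(signaling) {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

  // MediaStream: capture the microphone and add the audio track to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Send ICE candidates and the local offer to the peer through the signalling server.
  pc.onicecandidate = (e) => {
    if (e.candidate) signaling.send(JSON.stringify({ candidate: e.candidate }));
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ sdp: pc.localDescription }));

  // RTCPeerConnection: play the remote participant's audio when it arrives.
  pc.ontrack = (e) => {
    const audio = new Audio();
    audio.srcObject = e.streams[0];
    audio.play();
  };
  return pc;
}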

2.3 Standard Compiler

A compilation is usually implemented as a sequence of transformations (SL, L1), (L1, L2), ..., (Lk, TL),
where SL is the source language and TL is the target language. Each
language Li is called an intermediate language. Intermediate languages are conceptual
tools used in decomposing the task of compiling from the source language to the target
language. The design of a particular compiler determines which (if any) intermediate
language programs actually appear as concrete text or data structures during compilation [22].
Generally, the program source code is translated into machine language by a compiler
that is installed on the local machine, and the inputs/outputs are also directed locally.
For a practical work, each participant must therefore install a compiler locally.
This is a problem for distance education, where the tutor and students are not
in the same room. Even if they were in the same room, it would be difficult for the
tutor to go around to each student in order to take control of his or her work. There are
screen sharing solutions that allow a remote user to share their screen (Google, Openfire)
with other users, but these solutions consume much bandwidth. Figure 3 shows the case
of a standard compiler.


Fig. 3. Architecture with a standard compiler installed on each client

3 Presentation of the Solution

Our solution is to provide a compiler hosted on a remote server to allow multi-user access
and to redirect the inputs and outputs (I/O) to the client rather than to the computer hosting the compiler.
In other words, the proposed compiler is able to run source code received remotely
and redirect the I/O to the client. The client compiles his program remotely via the
Node.js server (Fig. 4).

Fig. 4. Architecture with a centralized compiler

3.1 Components

Client + browser: The client accesses the application through a web browser.
Node.js server and WebRTC: offer the video conferencing functionality between clients
and also allow communication between the clients and the compiler.
The compiler: it is the key element of our solution. It is composed of four entities:
1. Sockets: allow remote communication with clients.
2. The code replacer: replaces the standard input and output functions in the code with
functions that redirect the inputs and outputs to the client.


3. The code executor: this is the element in charge of running the client’s source code.
4. The database: it allows the compiler to temporarily store the last code compiled by a
client, so that the client does not have to send the code again in order to run it. This
optimizes bandwidth.
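To illustrate the role of the code replacer, here is a minimal sketch, assuming the practical work is written in C and the replacement is done textually on the server side in JavaScript; the redirection function names remote_printf and remote_scanf are hypothetical:

// Sketch of the code replacer: rewrite the standard C I/O calls into hypothetical
// redirection functions that exchange DISPLAY_OUTPUT/ASK_INPUT messages with the client.
function replaceStandardIO(cSource) {
  return cSource
    .replace(/\bprintf\s*\(/g, 'remote_printf(')   // outputs become DISPLAY_OUTPUT messages
    .replace(/\bscanf\s*\(/g, 'remote_scanf(');    // inputs become ASK_INPUT messages
}

// Example: a fragment of the client's program before compilation.
const rewritten = replaceStandardIO('scanf("%d", &n); printf("%d\\n", n * n);');
// rewritten === 'remote_scanf("%d", &n); remote_printf("%d\n", n * n);'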

3.2 Operation
Clients communicate with each other via the Node.js server, so they can interact by audio or text chat.
To communicate with the compiler, the client goes through the Node.js server. We
therefore define compilation commands between the Node.js server and the compiler. Between
the client and the Node.js server, chat messages are used to send commands. The
Node.js server is thus responsible for translating these messages into a format understandable
by the compiler and then sending them to the latter.
The compiler will compile and store the source code in a database with a unique
identifier. This identifier will be sent to it by the Node.js server during an execution
request in order to identify the program. Whenever the user makes a request to execute a program,
the Node.js server will send the identifier to the compiler and ask it to start the
program execution. During the execution of the program the inputs and outputs will be
redirected to the client. That is, if the program has to display information on the screen,
this information will be transferred to the user interface, and if the program has to wait
for a value, this value is requested from the client.
At each execution, the compiler executes the last result of the compilation of the
program, receiving as a parameter the identifier of this compilation.
Clients who share the same room will see all stages of compilation and program
execution. The Node.js server is responsible for propagating the execution state to all the other
clients.
With this solution, only the code in text format is shared in real time, instead
of capturing the entire screen of the user, which would be too heavy and would hinder
interactivity.

3.3 Communication Messages Between Entities

For the dialog between the Node.js server and the compiler there are five types of
commands:
ASK_COMP: compilation request message.
ASK_EXEC: execution request.
ASK_INPUT: request for a value to be entered.
ANSWER_INPUT: message sending a value to the compiler following an ASK_INPUT
request.
END_PROG: end of program.
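A possible JSON encoding of these commands, as exchanged between the Node.js server and the compiler, is sketched below; the exact field names (roomId, programId, value, ...) are illustrative assumptions and not part of the protocol definition above:

// Illustrative shapes for the five command messages (field names are assumptions).
const ASK_COMP     = { cmd: 'ASK_COMP',     roomId: '134', source: '/* C source code */' };
const ASK_EXEC     = { cmd: 'ASK_EXEC',     roomId: '134', programId: 'a1b2c3' };
const ASK_INPUT    = { cmd: 'ASK_INPUT',    prompt: 'Enter an integer' };
const ANSWER_INPUT = { cmd: 'ANSWER_INPUT', value: '5' };
const END_PROG     = { cmd: 'END_PROG',     programId: 'a1b2c3' };

// Each command is serialized as one line of JSON on the compiler socket, e.g.:
// compilerSocket.write(JSON.stringify(ASK_EXEC) + '\n');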


3.4 Platform Integration Architecture


To communicate with the practical work server that integrates the remote compiler,
platforms must provide a URL containing a login and a room ID. All clients who connect
with the same room ID will be in the same lab. Each training platform manages its own
room IDs (Fig. 5).

Fig. 5. Integration architecture in remote training platforms

Example URL: http://192.168.1.4:3000/?login=djidiack&room=134 connects to the workroom 134 with the login Djidiack.
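On the server side, the login and the room identifier can be read from the query string when the client connects; a minimal sketch using Node's built-in http module and the WHATWG URL API is shown below (variable names are illustrative):

// Sketch: extract the login and room identifier from the connection URL,
// e.g. http://192.168.1.4:3000/?login=djidiack&room=134
const http = require('http');

http.createServer((req, res) => {
  const params = new URL(req.url, 'http://localhost').searchParams;
  const login = params.get('login');   // "djidiack"
  const room  = params.get('room');    // "134"
  res.end('Joining room ' + room + ' as ' + login);
}).listen(3000);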

4 Some Scenarios of Operation

In order to better explain this architecture, we use operating scenarios, representing
them in the form of sequence diagrams.

4.1 Request for Compilation

1. The client sends a compilation request to the Node.js server. This request contains
the client’s source code.
2. The Node.js server is responsible for forwarding the message to the compilation
server in a format that can be understood by the server.
3. The compilation server uses the code replacement functions to replace the standard I/O
functions with new functions that will redirect the inputs and outputs to the client.
4. The server uses the code executor to verify that the code does not contain an error.
5. If there is no error, the code is stored in the database with a unique identifier, so that
the client does not have to send the code again for execution.
6. The result of the compilation is returned to the client via the server Node.js (Fig. 6).


[Sequence diagram: User, Node.js server, compiler socket server, I/O replacer, executor and checker, database — compilation request, message de-encapsulation, input/output replacement, source code check, saving of the source code, response encapsulation and sending of the response.]

Fig. 6. Request for compilation

4.2 Execution of a Program that Calculates the Square of an Integer

1. The client sends a run request that contains the identifier of the code to execute.
2. The Node.js server sends the request to the compiler.
3. The compiler connects to the database to load the code based on the identifier.
4. The compiler uses the code executor to start the program.
5. The executor executes instructions 1, 2, 3 and 4.
6. The code executor executes instruction (5). As it is an output instruction, it sends a
DISPLAY_OUTPUT message to the Node.js server. This type of message displays
information in the client browser.
7. The executor starts instruction (6). As this is an input instruction, it sends an
ASK_INPUT message to the Node.js server to let it know that it expects a value
from the client.
8. The client sends the value to the Node.js server, which in turn transfers it to the
compiler (Code Executor).
9. The executor launches instruction (7), which is a calculation instruction.
10. The executor executes instruction (9). As this is an output statement, it sends a
DISPLAY_OUTPUT message to the Node.js server. This type of message is used to
display information in the client browser.


11. To show the client the end of the program, the compiler sends an END_EXEC
message (Fig. 7).
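On the client side, the browser only has to react to these message types; a minimal sketch is given below, where socket is the WebSocket towards the Node.js server and appendToConsole/promptUser are hypothetical UI helpers:

// Sketch of the browser-side handling of the execution messages relayed by the Node.js server.
socket.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  switch (msg.cmd) {
    case 'DISPLAY_OUTPUT':            // program output: show it in the shared console
      appendToConsole(msg.text);
      break;
    case 'ASK_INPUT':                 // the program is waiting for a value from the user
      promptUser(msg.prompt).then((value) =>
        socket.send(JSON.stringify({ cmd: 'ANSWER_INPUT', value: value })));
      break;
    case 'END_EXEC':                  // end of the program
      appendToConsole('--- program finished ---');
      break;
  }
};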

[Sequence diagram: User, Node.js server, compiler socket server, executor, database — execution request, loading of the source code, execution of the instructions, DISPLAY_OUTPUT messages displayed on the browser, ASK_INPUT request for an integer answered by the user, final DISPLAY_OUTPUT and END_EXEC at the end of the program.]

Fig. 7. Execution request


5 Realization

In this part, we show the realization of a programming practical work in the C language:
a program which calculates and displays the factorial of a non-negative integer. This program is
written, compiled and executed from a browser. We have here two users present on two
different machines.
1. The user named Djidiack starts writing a part of the source code (Fig. 8).

Fig. 8. Code written by user Djidiack

Fig. 9. Code written by user Dahirou


The user Dahirou, present on another machine, continues writing the source code
started by the user Djidiack (Fig. 9).
The user Dahirou then compiles the program, which is visible to all (Fig. 10).

Fig. 10. Compilation of the program by the user Dahirou

The user Djidiack executes the program which is also visible to all (Fig. 11).

Fig. 11. Program execution

6 Conclusion

In this paper we have proposed an online practical work solution which can be integrated
into a distance learning platform for computer programming courses. Access to a practical
work session is done at a distance, via the web, and several participants can

simultaneously access the same lab session. These participants share the same
compiler hosted on a remotely accessible server.
To illustrate our solution, we chose the C programming language; the central compiler
communicates, through a protocol that we defined, with the Node.js server, which
ensures the link with the client. Thus the inputs/outputs of a program executed by the
central compiler are redirected to the client. This means that a student no longer
needs to install a compiler on his machine to participate in a practical session, because
everything goes through a centralized compiler that makes the steps of writing, compiling
and executing a program visible to all participants in the session.
Our solution enables teachers and participants to feel at ease by providing a flexible
and interactive environment. On the one hand, it allows a learner to share the source
code or the result of a computer program with other learners in a single click; on
the other hand, it allows a teacher to manage the course of a lab with the possibility of giving
or withdrawing control to one or several learners at the same time. Our solution
also enables the teacher to supervise the learners’ team work and thus save time.
In the future, we intend to implement our solution in other programming languages
and test its integration in remote laboratories.

References

1. Hikolo, A.M.: Analyse, conception, spécification et développement d’un système multi-agents
pour le soutien des activités en formation à distance. Ph.D. thesis, Université de Franche-Comté,
16 October 2003
2. Kukeneh, S.S., Shahbahrami, A., Mahdavi, M.: Personalized virtual university: applying
personalization in virtual university. In: 2011 2nd International Conference on Artificial
Intelligence, Management Science and Electronic Commerce (AIMSEC), pp. 6704–6706, 8–
10 August 2011
3. Aydin, C.C., Tirkes, G.: Open source learning management systems in e-learning and Moodle.
In: 2010 IEEE Education Engineering (EDUCON), pp. 593–600, 14–16 April 2010
4. Martín-Blas, M., Serrano-Fernández, A.: The role of new technologies in the learning process:
Moodle as a teaching tool in physics. Comput. Educ. 3, 35–44 (2009)
5. Coursera. www.coursera.org. Accessed 2016
6. Edx. www.edx.org. Accessed 2016
7. Udacity. www.udacity.com. Accessed 2016
8. Faye, P.M.D., Gueye, A.D., Lishou, C.: Proposal of a virtual classroom solution with
WebRTC integrated on a distance learning platform. In: Proceedings of the 19th International
Conference on Interactive Collaborative Learning (ICL2016), Clayton Hotel, Belfast, UK,
pp. 1217–1232, 21–23 September 2016
9. Tilkov, S., Vinoski, S.: Node.js: using javascript to build high- performance network
programs. IEEE Internet Comput. 14(6), 80–83 (2010)
10. Google Javascript V8. http://code.google.com/p/v8/
11. Ratanaworabhan, P., Livshits, B., Zorn, B.: JSMeter: comparing the behavior of JavaScript
benchmarks with real web applications. In: USENIX Conference on Web Application
Development (WebApps), June 2010
12. Iyer, G.: Node.js: Event-Driven Concurrency for Web Applications. B.Tech. (Computer
Engineering), SVNIT
13. Cantelon, M., Harter, M., Holowaychuk, T.J.: Node.js in Action (2014)


14. Teixeira, P.: Professional Node.js: Building JavaScript-Based Scalable Software (2012)
15. Howard, D.: Node.js for PHP Developers (2013)
16. Elleuch, W.: Models for multimedia conference between browsers based on WebRTC. In:
2013 IEEE 9th International Conference on Wireless and Mobile Computing, Networking
and Communications (WiMob), pp. 279–284, 7–9 October 2013
17. Zeidan, A., Lehmann, A., Trick, U.: WebRTC enabled multimedia conferencing and
collaboration solution. In: Proceedings of World Telecommunications Congress 2014, WTC
2014, pp. 1–6, 1–3 June 2014
18. Vogt, C., Werner, M.J., Schmidt, T.C.: Leveraging WebRTC for P2P content distribution in
web browsers. In: 2013 21st IEEE International Conference on Network Protocols (ICNP),
pp. 1–2, 7–10 October 2013
19. Hinow, F., Veloso, P.P., Puyelo, C., Barrett, S., Nuallain, E.O.: P2P live video streaming in
WebRTC. In: 2014 World Congress on Computer Applications and Information Systems
(WCCAIS), pp. 1–6, 17–19 January 2014
20. Sredojev, B., Samardzija, D., Posarac, D.: WebRTC technology overview and signaling
solution design and implementation. In: 2015 38th International Convention on Information
and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 1006–
1009, 25–29 May 2015
21. W3C Editor’sDraft. http://w3c.github.io/mediacapturemain/getusermedia
22. Waite, M.M., Goos, G.: Compiler construction, Karlsruhe, 22 February 1996

Enabling the Automatic Generation of User
Interfaces for Remote Laboratories

Wissam Halimi1(B) , Christophe Salzmann2 , Hagop Jamkojian1 ,


and Denis Gillet1
1
EPFL, REACT, Station 11, 1015 Lausanne, Switzerland
{wissam.halimi,hagop.jamkojian,denis.gillet}@epfl.ch
2
Automatic Control Laboratory, EPFL, Station 9, 1015 Lausanne, Switzerland
christophe.salzmann@epfl.ch

Abstract. Remote laboratories are an important component of blended


and distance science and engineering education. By definition, they pro-
vide access to a physical lab in a distant location. Many architectures
enabling remote laboratory systems exist, the most common of which are
Client-Server based. In this context, the Server interfaces the physical
setup and makes it software-accessible. The Smart Device Specifications
revisit a Client-Server architecture, with the main aim of cancelling the
dependencies which inherently exist between a Client and a Server. This
is done by describing the Server as a set of services, which are exposed
as well-defined APIs. If a remote laboratory is built following the Smart
Device Specifications, any person with programming skills can create a
personalized client application to access the lab. But in practice, teach-
ers rely on the mediated contact with a lab provider to have information
about what kind of experiment(s) the lab in question implements. Even
though there is a complete description of the available sensors and actuators
making up a lab and how they are to be accessed, it is not clear how they
are connected (their relationships). In this sense, a list of sensors and actuators
is not enough to make a guided selection of components to create
the interface to an experiment. Therefore, the aim of this work is to
support teachers in choosing the experiments and creating the respec-
tive UI on their own, in a pedagogically oriented scenario and by taking
into consideration the target online learning environment. This is done
by revisiting the Smart Device Specifications and extending them, in
addition to proposing a tool that will automatically generate the user
interface of the chosen experiment(s).

Keywords: Remote laboratories · Online learning · Cyber-physical systems · User interfaces · Personalisation

1 Introduction
Remote laboratories (RLs) are an important component of distance and blended
learning for science and engineering education. They allow learners to experi-
ment in order to validate or refute a hypothesis, accept or reject a taught sub-
ject. By definition, they provide remote access to hands-on sessions, which are

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_73


essential for the process of learning and assimilating scientific concepts [2,5].
As some Web technologies emerged and died, many architectures for remote
laboratory systems have been proposed. These architectures range from being
case-specific to more generalised. The most adopted architectures, are Client-
Server based where typically the Server interfaces the physical equipment of
a lab, and the Client provides a software application through which users can
access the lab. With the birth of the ‘separation of concerns’ paradigm enabling
Service Oriented Architectures (SOA), lab providers started building their labo-
ratories in a more modular way [8,10]. With such architectures, the access to the
remote setup is done through web services or APIs where the laboratory server
is exposed as services [4,9,13]. The main aim of adopting a Service Oriented
Architecture for RLs is to separate the tiers of the remote laboratory system.
The Smart Device Specifications for remote labs describe the Server as services
through well-defined interfaces, as proposed in [9]. This approach is motivated
by the fact that the complete separation of the Client from the Server encourages the broader
sharing of remote labs. The Smart Device Paradigm further decouples the Client and
the Server, enabling the personalization of the client applications. When
the Smart Device Specifications are adopted for an RL, the Server is exposed
as services described by APIs enabling any person with programming skills to
create user applications to connect to the labs. This is possible by “talking” to
the APIs and understanding how client applications can access the services pro-
vided by a remote setup. With the Smart Device Specifications, the development
and deployment of remote laboratories is much easier, faster, and modular for
different stakeholders, namely the lab provider and the user client developer.
In this context, invoking a service is equivalent to controlling actuators or
retrieving data from sensors making up the laboratory setup. Provided the APIs,
it is possible to personalize the client application accessing the labs by enabling
the teachers to use the RLs in different ways, according to their educational
needs, by designing their own experiments. This is the case of remote laboratories
that are configurable and offer the flexibility of conducting different experiments,
corresponding to different scientific phenomena. In this work, we refer to the
activity which allows students to freely vary the parameters on lab equipment
as an experiment, and we refer to the combination of sensors and actuators used
in an experiment as a “configuration” from a lab owner’s point of view. Since
the APIs based on the Smart Device Specifications
do not convey enough information to show how the sensors and actuators are
connected and dependent, teachers usually resort to mediated contact
with a lab provider to obtain information about what kind of experiment(s) the lab
in question implements. On another note, creating User Interfaces (UIs) is still
largely reliant on having a software developer at one’s disposal, preventing teachers without
such resources from personalizing their own applications. We recognise that it is
important in such setups to give teachers the autonomy to select and create user
interfaces for remote labs that fulfil their own pedagogical objectives, without
the need to contact a lab provider through an application developer. Therefore,
the aim of this work is to support teachers in choosing the experiments and


creating the respective UI on their own, in a pedagogically oriented scenario and


by taking into consideration the target online learning environment. This is done
by revisiting the Smart Device Specifications and extending them, in addition to
proposing a tool that will automatically generate the user interface of the chosen
experiment(s).
This paper is structured as follows: we begin by providing an overview of
existing remote lab architectures while identifying their pros and cons for the
challenges at hand. In Sect. 3 we elaborate on our extension of the Smart Device
Specifications to enable the automatic generation of UIs for configurable labs.
In Sect. 4 we present our proposed tool for UI generation, and an accompanying
example for a remote laboratory: the Mach-Zehnder Interferometer.

2 Related Work

In [7] the authors make their debut in defining Smart Devices (SDs) motivated
by the need to move away from adopting proprietary technologies for building
remote laboratories, and the need to converge towards common conventions for
designing and building remote laboratory systems. Accordingly, they re-engineer
the server side by implementing separate services for the different hardware
access which are possible for their example lab. In parallel, instead of creating
a complete web application or widget, they provide four separate ones for each
of the accessible services: a graph tool, a video feed, a control panel for the
system’s parameters, and a tool for saving the experimental data. The users of
a remote lab can choose any subset or all the provided widgets to use the lab in
a ‘metawidget’. While this effort is a move toward a personalization of the user
client, it is still proprietary for the embedding web-based environment.
Later in [8,9], the authors elaborate in more detail about the Smart Device
Paradigm and introduce the concept of LaaS (Lab as a Service). The Smart
Device Paradigm revisits the reputed Client-Server architecture for remote labs
by re-thinking the server side and equipping its components with some ‘intelli-
gence’. This is based on Thompson’s definition of smart devices, as devices which
have identity and kind, memory and status tracking, communication capabil-
ities, and more [11]. Accordingly, the Smart Device Specifications extend this
definition to support complex systems such as remote labs. Motivated by the
need to completely separate the server and client sides to further enable the
personalization of client applications, the mentioned specifications represent the
behaviour of the connected sensors and actuators as services exposed through
well-defined APIs. The services representing a sensor or actuator instance are
fully described through ‘metadata’. The ‘metadata’ provide a description of the
considered lab through the General Metadata which tells the name of the lab,
a short high-level description, a contact person, and licensing information. The
API Metadata defines the supported services by the lab, by specifying the cor-
responding sensor and actuator requests and responses. Moreover, the Smart
Device Specifications provide service descriptions for authorisation, which takes
care of user authentication. This metadata category is of no interest to this work.


It is claimed that the ‘metadata’ is enough for building user applications with-
out the need for further information from a lab provider. While this might be
true for laboratories supporting one experiment, it is not true for configurable
labs which provide the possibility of conducting many experiments with the
same connected equipment. This is due to the absence of a description of how
the sensors and actuators are connected and which configurations are possible.
The Smart Device specifications provide a description of services as independent
units.
Other frameworks for the generation of remote lab user interfaces exist, such
as the tool based on EjsS in [12]. In this work, the authors highlight
the need for user interfaces which can be well integrated in web-based
learning environments such as Moodle. Additionally, they stress the necessity to
support open web technologies and move away from Java applets which are no
longer supported by modern web browsers. While they provide a solution that
is reusable, and prevents application developers from building UIs from scratch
for each lab, this framework only supports the generation of UIs for labs which
are compatible with their implementation of the presented app builder.

3 Revisited Smart Device Specifications


The API Metadata of the Smart Device Specifications specify the communica-
tion protocol and formats for sending requests and receiving responses from a
remote laboratory. It is composed of two main sections: apis and models. The apis
describe which services are implemented and how they can be accessed, by pro-
viding information on the adopted communication protocol, the type of requests
to write and responses to receive specified by their corresponding models, the
parameters to pass to the request, and the authorization schema to implement
at the client side if applicable. The models section details the structure of the
requests, responses, and data to be applied to the actuators or sensed by the
sensors. It includes information on the unit, type, allowed ranges, range steps,
last measured values, and the value update frequency.
The apis section is based on four main API calls: getSensorMetadata, getSensorData,
getActuatorMetadata, and setActuatorData. getSensorMetadata, formatted as a
SensorMetadataRequest model, returns a list of all sensors in the
lab in a response formatted as a SensorMetadataResponse model. The sensorIds are
included in this response to allow separate calls to each sensor.
To read the data of a specific sensor, the UI issues the getSensorData request,
modeled by a SensorDataRequest, including the corresponding sensorId; as a
response, the data captured by the sensor is returned in a SensorDataResponse.
A getActuatorMetadata request, sent as an ActuatorMetadataRequest, returns a
list of all actuators in the lab: the actuatorIds in an ActuatorMetadataResponse.
actuatorIds is an array which contains the actuatorId of each actuator.
To write data to an actuator, it is sufficient to invoke the sendActuatorData
request, formatted as a SetActuatorData model, providing an actuatorId.
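For illustration, a minimal client-side sketch of these calls over a WebSocket is given below; the endpoint URL and the exact response fields are assumptions made for the example:

// Sketch: reading a sensor through the Smart Device services over a WebSocket.
const ws = new WebSocket('ws://smartdevice.example:8080');   // endpoint assumed

ws.onopen = () => {
  // SensorMetadataRequest: list all sensors and their sensorIds.
  ws.send(JSON.stringify({ method: 'getSensorMetadata' }));
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.method === 'getSensorMetadata') {
    // SensorDataRequest: read the first sensor returned in the metadata.
    ws.send(JSON.stringify({ method: 'getSensorData', sensorId: msg.sensors[0].sensorId }));
  } else if (msg.method === 'getSensorData') {
    console.log('sensor value:', msg);   // SensorDataResponse
  }
};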
With such information a basic automatic UI generator can be put in place.
The API calls in addition to the requests and responses models are provided. It


is worth mentioning that all exchanged requests and responses with the Smart
Device are JSON-encoded, further facilitating the parsing of the API calls, which
can be automatized. But what sensors to pick with which actuators? How are the
sensors and actuators working together? Are there several possible experiments
that can be conducted with the same setup, but using different sensor-actuator
configuration? All of this information is not included in the existing SD Specifica-
tions. What differentiates remote laboratories from other cyber-physical systems
is that they are built to fulfil an educational goal: conducting pre-defined experi-
ments to reflect on certain topics. With no knowledge about the interconnections
of the lab components, it is not possible to build a UI that interfaces ‘pedagog-
ically meaningful’ experiments. In this section, we extend the Smart Device
Specifications to describe the possible “configurations” or “experiments” of labs
supporting one or various experiments, further enabling the auto generation of
the user interface. We extend the detailed ‘metadata’ to add a service to the apis
which returns the configurations or experiments supported by the remote lab, in
addition to the requests and responses models. The extended SD Specifications
with the experiments service provides enough information to enable the auto-
matic generation of user interfaces without the need of the lab owner to confirm
the possibility of conducting a particular experiment. Our proposed extension is
two-fold:
1. Define the models for an Experiment, SendExperimentsRequest, SendExperimentRequest,
ExperimentsMetadataResponse, and ExperimentMetadataResponse
2. Define the new api calls: getExperiments and getExperiment

Experiment Model: An Experiment model is characterised by 2 fields common


to all models: id and properties. The id characterises the model at hand, in
this case its value is Experiment. This id field gives knowledge to the automatic
generator about the format of an Experiment JSON object for further processing.
The properties are made up of 5 sub-fields:
– experimentId: which can take any string value. The value of this field is defined
by the lab provider.
– fullName: which contains a non-formal name of the experiment. It can take
any string value.
– description: a human readable description of what the experiment is about.
This field is meant to be informative for teachers, to get a high level descrip-
tion of the experiment.
– sensors: it is an array containing a list of the sensor ids used in a partic-
ular experiment. sensorIds can have any string value. The string values of
sensorIds contained in this JSON object should be corresponding sensorIds
defined in the metadata.
– actuators: it is an array containing a list of the actuator ids used in a par-
ticular experiment. actuatorIds can have any string value. The string values
of actuatorIds contained in this JSON object should be corresponding actu-
atorIds defined in the metadata


A complete Experiment model is shown below:


"Experiment": {
"id": "Experiment",
"properties": {
"experimentId": {
"type": "string"
},
"fullName": {
"type": "string"
},
"description": {
"type": "string"
},
"sensors": {
"type": "array",
"items": {
"id": "Sensor",
"properties": {
"sensorId": {
"type": "string"
}}}
},
"actuators": {
"type": "array",
"items": {
"id": "Actuator",
"properties": {
"actuatorId": {
"type": "string"
}}}
}
}
}

getExperiments api: The getExperiments api allows the retrieval of the list of
supported experiments. The nickname of this call is “getExperiments”, which
is the value to be used in the method field when initiating a request. The summary and notes fields
give a high-level description of what this call does: it answers with a JSON object
containing the list of available experiment ids. The response of this call is formatted
as an ExperimentMetadataResponse, which will be detailed later in this
section. As can be deduced from the parameters field, the request is formatted
as a SimpleRequest defined in the original SD Specifications. The authorizations
field designates the authentication mechanisms that the remote lab uses to permit
users to access the lab; if it is empty, no authorization needs to be
done. responseMessages details the possible responses that can be received at the
requester’s end in case an ExperimentMetadataResponse cannot be received.


{"method": "Send",
"nickname": "getExperiments",
"summary": "Returns a list of possible experiments",
"notes": "Returns a JSON array with all the ids of possible experiments",
"type": "ExperimentMetadataResponse",
"parameters": [{
"name": "message",
"description": "The payload for the getExperiments service.",
"required": true,
"paramType": "message",
"type": "SimpleRequest",
"allowMultiple": false
}],
"authorizations": {},
"responseMessages": [{
"code": 402,
"message": "Too many users"},{
"code": 404,
"message": "Experiments not found"},{
"code": 405,
"message": "Method not allowed. The requested method is not allowed
by this server."},{
"code": 422,
"message": "The request body is unprocessable"
}]}

ExperimentRequest Model: To retrieve the required actuatorIds and sensorIds
for a particular experiment, an ExperimentRequest has to be sent to
the Smart Device hosting the laboratory, as shown hereafter. The ExperimentRequest
should contain the experimentId of the desired experiment. A list of
experimentIds can be retrieved with the getExperiments call.
"ExperimentRequest": {
"id": "ExperimentRequest",
"required": ["method", "experimentId"],
"properties": {
"method": {
"type": "string",
"description": "The method should be equal to the nickname of one
of the provided services."},
"experimentId": {
"type": "string"}
}
}

ExperimentMetadataResponse Model: The response to an ExperimentRequest
is an ExperimentMetadataResponse. The id of this response tells the
type of JSON object to expect at the receiving end. It is formatted so as to contain
the Experiment JSON object which defines an experiment. This should
be enough for an auto generator to create a UI corresponding to the required
request.


"ExperimentMetadataResponse": {
"id": "ExperimentMetadataResponse",
"properties": {
"method": {
"type": "string"},
"experiments": {
"type": "array",
"items": {
"$ref": "Experiment"
}}}}

In Sects. 4 and 5 we present our automatic generator of user interfaces based


on the extended Smart Device Specifications, to demonstrate their complete-
ness for our purpose. In addition to providing the teachers with a tool that
enables them to autonomously create basic UIs for remote labs, this automatic
UI generator provides the possibility of personalizing the UI for embedding in
a platform of choice: the social media platform graasp,1 or an LTI consumer
platform such as Moodle or edX2 . The proposed approach is illustrated through
a remote laboratory supporting multiple experiments: the remote Mach-Zehnder
interferometer. This remote laboratory was presented as a work in progress in [6],
where we built the lab using software templates according to the Smart Device
Specifications; in this paper we extend the implementation to support the
personalised auto-generation of the user interface with the added configurations.

4 An Example: Light Interference Experiments


4.1 The Mach-Zehnder Interferometer

As mentioned earlier, we are especially interested in reconfigurable experiments.


The Mach-Zehnder Interferometer is an example of devices which are used to
study different subjects ranging from light interference to optical telecommuni-
cation, in classical and also in quantum physics [3,6]. The Mach-Zehnder inter-
ferometer considered in this paper has a layout shown in Fig. 1. The apparatus is
composed of a monochromatic light beam, two half-mirrors, two complete mir-
rors, two beam splitters, a density filter, and a detection screen mounted with a
photo diode. With this layout, some light interference characteristics and result-
ing phenomena can be studied by repeatedly reflecting the light beam on the
mirrors and half-mirrors before its arrival on the detection screen. In the next
subsection we describe two of the possible experiments that can be conducted
with this equipment. For visualization purposes, two cameras are placed in the
lab in order to reflect the status of real environment: a camera placed in front
of the detector screen to see the resulting incident light, and an infrared camera
(because the experiments are conducted in the dark) that shows the whole setup.

1
http://graasp.eu/.
2
https://www.edx.org/.


Fig. 1. The Mach-Zehnder interferometer layout

4.2 The Experiments

The mind map in Fig. 2 shows two possible experiments that can be done with
the Mach-Zehnder interferometer, upon which we will base our explanation of
the implementation and function of the automatic UI generator in Sect. 5.
The first and second experiments are conducted in a high light intensity
setup, meaning that the density filter is not attenuating the intensity of the light
coming from the monochromatic light beam. The first experiment enables the
users to qualitatively understand light interference, by visualizing the resulting
fringes on the screen and/or the feed from the infrared camera, and by observing
the direction in which the fringes move when the mirror mounted
with a piezo actuator is manually controlled with a voltage that increases or
decreases in value. In the second experiment, the students can quantitatively
study light interference by observing the signal emitted by the photodiode as
the piezo is controlled with a triangular signal causing a translation motion.

Fig. 2. Mindmap of the Mach-Zehnder experiments


When a getExperiments api call is sent to the Smart Device hosting this
laboratory, the following response is received:
{"method": "getExperiments",
"experiments": [{
"experimentId": "qualitative",
"fullName": "Qualitative Study",
"description": "Observing light interference on the screen",
"sensors": [{"sensorId": "Video"}, {"sensorId": "VideoIR"}],
"actuators": [{"actuatorId": "laser"},{"actuatorId": "piezo"},
{"actuatorId": "bs1"},{"actuatorId": "bs2"}]
},{
"experimentId": "quantitative",
"fullName": "Quantitative Study",
"description": "Studying the signal provided by the photodiode",
"sensors": [{"sensorId": "photodiode"}],
"actuators": [{"actuatorId": "laser"},{"actuatorId": "piezo"},
{"actuatorId": "bs1"},{"actuatorId": "bs2"}]
}]}

The response shows that there are two possible experiments, with the experimentIds
“qualitative” and “quantitative”. Accordingly, the list of sensors and
actuators for each of the experiments can either be taken from this response, or
retrieved by a separate call to getExperiment while passing the corresponding
experimentId.

5 The Automatic UI Generator


5.1 Design Considerations
In most cases, a remote laboratory is part of a learning activity comprising other
educational resources such as documents, videos, etc. The learning activity is
usually hosted by a MOOC platform such as edX, or a social media platform such
as graasp. When a remote lab is used in such contexts, it is important to take the
following points into consideration to ensure the integration of the RL user client
in the platform at several levels: knowledge about user identity (awareness), the
context, and access to the storage resources of the platform.
Awareness about user identity is necessary for several purposes: authentica-
tion with the RL when required, saving and retrieving the data, and capturing
user interaction with the RL user application. It is necessary to associate this
data to the platform users for personalization purposes. Additionally, when con-
ducting an experiment a lot of data is generated. Usually, when students are
doing their experiments in physical labs, they save the data in files to be used
for processing, or take note of certain parameters. In all cases, these assets are
saved for later reference or post-processing. It is essential to provide such
facilities to the students, so that keeping and retrieving their data can be accomplished
within the platform. We take these considerations into account when
implementing the automatic UI generator as detailed next.


5.2 Implementation

The automatic UI generator is a tool that enables the creation of a fully func-
tional remote lab web client with a few clicks. The teacher needs to know the IP
address and the port number over which a Smart Device is serving the desired
remote lab. Using this information, the tool initiates a WebSocket connection
with the lab server, and subsequently calls the getExperiments service, which
returns an array describing each experimental configuration supported by the
Smart Device. As mentioned in Sect. 3, each experiment is described by: the
experimentId that uniquely identifies each experiment, the fullname and descrip-
tion of the experiment, in addition to the sensors and actuators arrays that
contain the ids of all the respective sensors and actuators used by each exper-
imental configuration. These configurations are displayed as checkboxes having
the full name and the description of the experiment as their labels. The teachers
can then select one or more of the presented possible configurations according
to their educational goals. After performing this selection, the auto generator
knows the ids of all the different sensors and actuators required for each exper-
iment, and will thus send getActuatorMetadata and getSensorMetadata requests
to the lab server in order to acquire the necessary information about each (See
Fig. 3).

Fig. 3. How the automatic UI generator interacts with the Smart Device to build the UI
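A condensed sketch of this flow is given below (browser-side JavaScript; renderExperimentCheckboxes and buildWidgets are hypothetical helpers standing in for the tool's UI logic):

// Sketch of the generator flow: connect, list the experiments, let the teacher pick,
// then fetch the metadata of the sensors and actuators involved.
async function generateUI(host, port) {
  const ws = new WebSocket('ws://' + host + ':' + port);
  await new Promise((resolve) => { ws.onopen = resolve; });

  // Helper: send one request and wait for the next response (sequential use only).
  const call = (request) => new Promise((resolve) => {
    ws.onmessage = (e) => resolve(JSON.parse(e.data));
    ws.send(JSON.stringify(request));
  });

  // 1. Retrieve the supported experimental configurations.
  const { experiments } = await call({ method: 'getExperiments' });

  // 2. Display them as checkboxes and let the teacher select one or more.
  const selected = await renderExperimentCheckboxes(experiments);

  // 3. Fetch the metadata of all sensors and actuators used by the selection.
  const sensors = await call({ method: 'getSensorMetadata' });
  const actuators = await call({ method: 'getActuatorMetadata' });

  // 4. Build one tab per selected experiment with the corresponding UI components.
  buildWidgets(selected, sensors, actuators);
}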

For actuator access, the auto generator makes use of some of the fields
obtained from the actuator metadata, in order to generate the necessary UI
components. It uses the actuatorId which uniquely identifies each actuator, to
populate the actuatorId field of the request packet which is sent to the Smart
Device whenever a user of the generated lab client alters the state of an actuator,
thus making a call to the sendActuatorData service. The auto generator also uses
the values field of the metadata, which is an array of all the measurement values


each actuator contains. Each actuator value will be represented as a separate UI


component in the generated widget. The auto generator uses the following fields
from the metadata of each value:
– name: used to differentiate among the multiple values of an actuator.
– type: used to decide what type of UI component needs to be created for each
value. For instance, a value of type boolean will be represented as a button
that can be turned on or off by the user. Moreover, a value of type float will
be represented as a numeric slider.
– rangeMinimum and rangeMaximum: used by the auto generator to specify
the boundaries of the numeric slider that is created for a value of type float.
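A sketch of this mapping is given below; ToggleButton and NumericSlider are hypothetical widget constructors, and the shape of the sendActuatorData packet is an approximation of the SetActuatorData model rather than its exact definition:

// Sketch: map one actuator value description (from the actuator metadata) to a UI component.
function componentForActuatorValue(actuatorId, value, ws) {
  // Called whenever the user changes the component; sends the new actuator data.
  const send = (newValue) => ws.send(JSON.stringify({
    method: 'sendActuatorData',
    actuatorId: actuatorId,
    values: [{ name: value.name, value: newValue }]   // packet shape is an assumption
  }));

  if (value.type === 'boolean') {
    return new ToggleButton(value.name, send);        // on/off button
  }
  if (value.type === 'float') {
    return new NumericSlider(value.name, value.rangeMinimum, value.rangeMaximum, send);
  }
  return null;                                        // other types omitted in this sketch
}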
For sensor requests, the UI generator uses the sensorId, which uniquely iden-
tifies each sensor, to populate the sensorId field of the request packet that is sent
to the Smart Device whenever the lab client makes a call to the getSensorData
service. The generator also takes into consideration the webSocketType field of
the sensor metadata to check whether a given sensor requires a text or a binary
WebSocket. In case of a binary WebSocket, the generator assumes that it is a
video feed and creates a UI component that displays the video. In the case of a
text WebSocket, the generator uses the values field of the sensor metadata and
represents each value as a separate UI component in the generated gadget. The
auto generator uses the following fields from the metadata of each value:
– name: used to differentiate among the multiple values of a sensor.
– type: used to decide what type of UI component should be created for each
sensor value. For instance, a value of type boolean will be represented as a
LED indicator. Moreover, a value of type string will be represented as a text
value.
– unit: used by the generator to append a unit symbol to the retrieved sensor
value.
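The corresponding sensor-side mapping can be sketched in the same way; LedIndicator, TextValue and VideoView are hypothetical widget constructors:

// Sketch: map one sensor (or one of its values) to a UI component, following the rules above.
function componentForSensor(sensor, value) {
  if (sensor.webSocketType === 'binary') {
    return new VideoView(sensor.sensorId);            // binary stream assumed to be a video feed
  }
  if (value.type === 'boolean') {
    return new LedIndicator(value.name);              // boolean sensor value -> LED indicator
  }
  // Other values are shown as text, with the unit symbol appended to the reading.
  return new TextValue(value.name, value.unit);
}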
Furthermore, the teacher has to choose an educational platform in which
the generated UI will be embedded (See Fig. 4). Currently, the automatic UI
generator provides UIs which can be embedded in graasp, or in an LTI consumer
platform (such as Moodle, or edX). If an LTI hosting platform is chosen as
the target platform, then the resulting lab client application will automatically
instantiate a WebSocket connection with the Smart Device whenever a user
accesses the lab client, update the UI components of the sensors upon receiving
new sensor values, handle the actuator changes performed by the user and send
the new actuator data to the Smart Device. According to the teacher’s selection
of one or more experiments, the application will contain one or more tabs. Each
tab represents a selected experimental configuration. Clicking on a tab in the
client application will result in accessing the corresponding experiment, and
displaying all the sensors and actuators associated with that setup. In this case,
the resulting UI is an html file which can be used by the teacher to embed in
the LTI consumer platform.
On the other hand, if graasp is chosen as the target educational platform, then
the generated lab client application is an OpenSocial (OS) widget which can be


Fig. 4. The landing page of the automatic UI generator showing the two available
experiment configurations for the Mach-Zehnder lab

embedded in the platform. Graasp supports OpenSocial widgets through its own
implementation of the Shindig Apache server, enabling third party applications
to access its database for user information, and for saving and retrieving files,
actions, and other platform specific data [1]. Consequently, the OS widget will
have all the aforementioned features of the LTI-targeted application, in addition
to the following (See Fig. 5):
– Action logging: the generated graasp gadget will use the ActionLogger
library3 , which provides an easy mechanism for logging the activities of the
students. Interactions with the different UI components are saved as Activity
Streams that have the actor-verb-object format. The logged activities can
later be used to perform learning analytics.

Fig. 5. The generated remote lab client application on graasp for the Mach-Zehnder
lab
3
https://github.com/go-lab/ils/wiki/ActionLogger.


– Saving experimental data: The lab application allows students to save the
actuator and sensor data that were acquired while conducting the experiment.
The data is saved in a specific format, that allows students to use it in other
applications on the platform. For example, the students can have a graphical
view of the experimental results using the Data Viewer application4 .

6 Teacher Customization

The automatic UI generator provides a basic and fully functional client applica-
tion for operating a remote lab. The UI components are very basic, and might not
be visually attractive. Using the generated code, the teachers can further person-
alize the UI appearance to fit their taste and needs. For example, a teacher in the
Gymnase de Morges in Switzerland, chose to customize the UI to be embedded
in graasp as shown in Fig. 6.
In this widget, there are two tabs to switch between two possible experiments.
In the Quantitative Study tab, there is a simulation diagram which allows stu-
dents to control the lab by clicking on the corresponding image of a component.
For example, to turn the laser beam ON/OFF it is enough to click on the box
representing the light source. The diagram also shows the placements of
the IR camera and the normal camera, allowing the student to know the
perspective of the video feeds. In this widget, the teacher chose to only display
the video coming from Camera 2 showing the fringes on the screen. Next to it is
a graphing tool that shows the signal captured by the photo diode in real-time.

Fig. 6. Example of personalized Mach-Zehnder OS widget in graasp


4
http://go-lab.gw.utwente.nl/production/dataViewer/build/dataViewerTool.xml.


Since the teacher doesn’t want the students to have to scroll, and since the sim-
ulation diagram conveys a real-time status of the lab, he decided that there are
enough UI components for the students to conduct the experiment while having
a good user experience.
Of course, the UI could have been customized otherwise to show the UI
components differently, or to resize them in a different way. For example, an
input box to control the piezo actuator could have been a replacement for the
slider control. Also, instead of only showing the feed of Camera 2, both feeds
from Camera 1 and Camera 2 could have been shown, in addition to the graphing
tool. All of this is possible by starting from the code provided by the automatic
UI generator. This alleviates the burden of establishing connections and parsing
the remote lab APIs, making it easier to personalize the appearance of the user
client according to a desired user experience.

7 Conclusion and Future Work

To describe a remote laboratory and enable the automatic generation of user


interfaces, it is not enough to solely rely on describing the services making up
laboratories. In this work we presented the extended Smart Device Specifications
to support the description of the different configurations remote laboratories pro-
vide. This extension further enables the automatic generation of user interfaces.
We also proposed a generator tool which helps teachers in autonomously creat-
ing client applications for different target platforms: graasp or an LTI-consumer
environment. In our implementation of the tool, we take into consideration the
need for full-integration with a target platform hosting the UI by providing an
integration layer already embedded in the application supporting user identifi-
cation, activity tracking, saving and retrieving experimental data.
To the best of our knowledge, this is the first automatic UI generator for
remote lab clients, which also integrates data storage for the target embedding
learning environment. To ease the adoption of the Smart Device Specifications
and to enable the targeting of other learning environments, the UI generator
application is openly shared under the CC-BY-NC5 creative commons licenses,
on this link: http://shindig2.epfl.ch/gadget/automatic gadget generator/.

Acknowledgment. This research is partially funded by the European Union in the


context of Go-Lab (grant no. 317601) project under the ICT theme of the 7th Frame-
work Programme for R&D (FP7).

References
1. Bogdanov, E., Ullrich, C., Isaksson, E., Palmér, M., Gillet, D.: From LMS to PLE:
a step forward through opensocial apps in moodle. In: International Conference on
Web-Based Learning, pp. 69–78. Springer (2012)

5
https://creativecommons.org/licenses/by-nc/2.0/.



A Practical Approach to Teaching Industry 4.0
Technologies

Tom Wanyama(&), Ishwar Singh, and Dan Centea

School of Engineering Practice and Technology, McMaster University,


Hamilton, ON, Canada
{wanyama,isingh,centeadn}@mcmaster.ca

Abstract. The School of Engineering Practice and Technology (SEPT) at
McMaster University is making a deliberate effort to train the next generation of
engineers to be ready to work in an Industry 4.0 environment. As part of this effort
we have developed equipment for teaching the technologies that support
Industry 4.0, and this paper presents two sets of such equipment. The first set is
used to teach machine-to-machine (M2M) communication, while the second is
used to teach system control and automation data access. The accessed data is
used in a multiplicity of student projects, including but not limited to SCADA
systems, system simulation and control using fuzzy logic and artificial neural
networks, cloud-based systems, and data analytics. In addition, this paper
describes how the equipment is utilized to support graduate and undergraduate
teaching through the experiential learning paradigm of laboratory-based pro-
jects. The paper also presents example student projects that have been carried
out using our equipment.

Keywords: Industry 4.0 · Teaching and learning · Industrial networks ·
Internet laboratories · Internet of Things

1 Introduction

There are pervasive disruptive forces of change around the world that require the
Canadian manufacturing sector to undergo a transformation fuelled by the
Internet of Things (IoT), big data, and artificial intelligence in order to remain sus-
tainable. This imminent transformation is generally referred to in the literature as Industry
4.0. Industry 4.0 is the fourth industrial revolution, following the previous three rev-
olutions, namely the introduction of the steam engine, electricity, and information
technology. The main objective of this revolution is to develop new business models
that tap the potential optimization in production and logistics caused by increased and
integrated industrial automation, cloud computing, global databases, networked intel-
ligent system monitoring and control, and autonomous decision-making. While the
technological enablers of Industry 4.0 are all around us, few manufacturers are truly
transforming. One of the main reasons is the general lack of trades-workers,
technicians, and engineers who are knowledgeable in Industry 4.0 concepts.
The Industry 4.0 manufacturing paradigm is heavily dependent on machine-to-machine
communication (horizontal integration at the plant level), business and manufacturing


process integration (vertical integration), and value chain integration (horizontal
integration at the services level). All this integration has to be supported by a variety of
networking technologies. Tom Wanyama, in his book titled "A Practical Approach to
System Integration", summarizes the Industry 4.0 networking technologies in Fig. 1.
The figure shows that Industry 4.0 depends on horizontal integration networking
technologies at the factory floor level, at the plant level where they create the Industrial
Internet of Things, at the enterprise level, and at the value chain level where they create
the Internet of Services. Moreover, the figure shows that Industry 4.0 requires the use of
vertical integration technologies such as Ethernet and TCP/IP, Open Platform Com-
munications (OPC), Virtual Private Networks (VPN), and cloud computing, which
enable integration of the factory floor systems with SCADA systems, as well as
business automation systems such as Manufacturing Execution Systems (MES),
Enterprise Resource Planning (ERP) systems, and other enterprise-wide manufacturing
optimization and data analytics systems [19].

Fig. 1. Summary of Industry 4.0 network technologies [19]

In the School of Engineering Practice and Technology (SEPT) at McMaster University
we have developed two courses, namely Industrial Networks and Controllers, and
Advanced System Components and Integration, that cover the theory and practice of the


networking technologies shown in Fig. 1. But this paper focuses on the practical aspect
of these courses, which is taught through laboratories and laboratory-based projects. For
this aspect, we have developed a set of laboratory infrastructure that can be used by
students to carry out laboratories and projects in the following areas:
• Machine-to-machine communication.
• Vertical integration of manufacturing systems from the factory floor to the enter-
prise and value chain level.
• Value chain integration and process optimization.
• SCADA and Human Machine Interface (HMI) development.
• Advanced system control using artificial intelligence methods such as fuzzy logic
and artificial neural networks.
Our laboratory equipment is accessible remotely through the Internet, meaning that
we use Industry 4.0 technologies to provide online hands-on learning of the same
technologies. This is important because, with the increased use of technology in every
aspect of human life, digital learning has become a very important form of teaching and
learning. The future of digital learning lies in the hybridization of courses, incor-
porating in-person components into online classes. This learning framework is gen-
erally referred to in the literature as blended learning. Research shows that blended
learning is exceptionally promising. Like purely digital learning, blended learning lacks
the time and space constraints imposed by in-person courses, and is thus much more
conducive to the expansion of learning time. But, unlike purely digital learning, blended
learning includes opportunities for reflection and interaction with peers and teachers.
Moreover, today's university students already lead blended lives. They access news,
pay bills, and search for vacation destinations online, and they communicate by email and
social media [7]. On the other hand, they go to movies, shop in malls, and visit friends
and family in person. This makes blended learning an appropriate pedagogical para-
digm for today's students. However, the need to carry out laboratory experiments
during engineering and science programs is a great challenge to the online
component of blended learning.
Although Industry 4.0 technologies were developed to increase resource opti-
mization in the manufacturing industry, they can be used to provide improved remote
access to laboratory equipment. In fact, many innovative hardware and software solu-
tions that are making massive changes to the industrial world in the context of Industry
4.0 are also being adopted to support teaching and learning. Common examples of
such innovations include the following:
• Cloud-server-based learning management systems such as Moodle that support the
offering of millions of courses to millions of students worldwide.
• Wikis, which enable cooperative text production, and different kinds of quiz-based
assessment modes that give teachers the chance to test students whenever and as
many times as they want during the semester.
Digitally supported learning brings advantages to students in terms of awareness of
the course content, as well as increased collaboration. Therefore, it is imperative that we
use Industry 4.0 technologies in teaching, especially laboratory work, as we prepare the
next generation of engineers to work in Industry 4.0 enabled environments. This gives


them the skills required to work in a semi-virtualized world, as in the
following examples: analyzing a defective machine, monitoring and optimizing the energy
consumption of multiple production sites, coming up with a logistics concept for a
virtual factory, or designing a virtual car [14].
In engineering education, computer-supported cooperative and collaborative
learning have long been established as methods which support self-driven and
work-related learning processes. Introducing online (Internet) laboratories lifts such
common learning methods to a new level, and in this paper we present laboratory
equipment that we designed to be accessible online through the Internet. The equip-
ment uses Industry 4.0 remote data access technologies, and it is used both onsite and
online. Onsite, the equipment is used to teach hardware configuration and integration,
and online, it is used to teach PLC programming, PLC automation data access, software
application integration, and Human Machine Interface (HMI) development. This paper
describes the following:
• The laboratory equipment architecture.
• The deployment of the laboratory in one of the courses in the process automation
program at McMaster University.
• The feedback we received on the educational effectiveness of the laboratory.
The rest of this paper is arranged as follows: Sect. 2 covers work in the literature that is
related to online laboratories. In Sect. 3 we present the framework of our Industry 4.0
technologies laboratory equipment, while Sect. 4 deals with the deployment of the
laboratories in a system integration course at McMaster University. Section 5 presents
the anticipated outcomes, Sect. 6 deals with the discussion, and the conclusion and
future work are covered in Sect. 7.

2 Literature Review

Laboratories play a critical role in the teaching and learning of science-based courses.
These laboratories fall under three general categories, namely hands-on, simulated,
and remote laboratories. Each laboratory category has strengths and weaknesses, and
there is no consensus in the literature on which category is the most effective [10].

2.1 General Laboratory Categories


Hands-on laboratories give students real data that shows discrepancies between theory
and practice. Such experiences cannot be produced in simulated laboratories [9].
Moreover, there are many important soft skills, such as handling laboratory equipment,
following laboratory regulations, and responding to laboratory (workplace) emergen-
cies, which can only be learned through hands-on laboratories. On the flip side,
hands-on laboratories are generally expensive and place a high demand on space and
instruction time [6].
Simulated laboratories imitate hands-on laboratories using infrastructure that is
simulated on computers. This leads to the main strength of simulated labs: reduced cost
and time required to set up and manage the laboratory infrastructure [10]. However,


some in the literature believe that excessive exposure to simulation may result in a dis-
connection between the real and virtual worlds [9].
Remote laboratories use real infrastructure that requires space and management,
just like hands-on laboratories. In remote laboratories the experimenter is geographi-
cally detached from the laboratory equipment, as opposed to being collocated with the
equipment as in hands-on laboratories [10]. It is generally agreed in the literature that
remote laboratories are becoming increasingly popular for the following
reasons:
• They can be shared among different institutions, resulting in a shared pool of real
laboratory infrastructure and hence reducing cost (note that remote laboratory
equipment can be designed to be used in both remote and hands-on modes) [15].
• They have the ability to extend the capabilities of real laboratory equipment,
making it accessible at any time and from anywhere [2, 3].
• Some studies show that remote laboratories are at least as effective as hands-on
laboratories [3, 11].
Web-based simulated laboratories and remote laboratories can be used to support
online teaching and learning. But, while the framework for supporting online course
content delivery is mature, the framework for supporting online laboratories is still
lacking [8]. Therefore, the need to carry out laboratory experiments in engineering
programs is a great challenge to the online component of blended learning. In their
paper titled “A LabVIEW-Based Remote Laboratory Experiments for Control Engi-
neering Education”, Stefanovic et al. state that: “The idea of having a remote
web-based laboratory corresponds to attempt to overcome different constraints and may
be the next step in distance learning” [17].
It is important to note that there are efforts to solve the challenge of integrating
remote laboratories into blended learning [18]. For example, in Germany, each of the
seven universities that make up the LearnNet network has to provide a remote lab to all
members [12]. In addition, an Internet-based remote-access laboratory was developed,
implemented, and piloted at Stevens Institute of Technology in 2005 [4]. In their
implementation, the experimental equipment can be used in the traditional on-site
fashion or it can be accessed remotely through the Internet. Generally, it is possible to
use a combination of available technologies and specific methods to control, configure,
and acquire data from experimental setups over the Internet [18].

2.2 Using Industry 4.0 Technologies to Support Remote Laboratories


It is generally agreed in the literature that Industry 4.0 technologies are poised to transform
manufacturing. But in the School of Engineering Practice and Technology (SEPT) at
McMaster University, we are using these technologies to transform teaching and
learning through online laboratories. Industry 4.0 is the new manufacturing paradigm
that seeks to leverage the potential optimization in production and logistics caused by
the following technologies [1, 13]:
• Modern industrial automation.
• Cloud computing and big data.


• Networking (machine-to-machine, SCADA, and business-to-business
communication).
• Additive and smart manufacturing.
• Intelligent system monitoring and control, and autonomous decision-making.
In our remote laboratory systems we use Industry 4.0 networking technologies that
support machine-to-machine communication over the Internet. These technologies have
proven reliability as well as built-in cybersecurity functionalities (Fig. 4).

3 Framework for the McMaster University iLabs

Since one of the main focuses of the School of Engineering Practice and Technology at
McMaster University is hands-on learning, the McMaster iLabs (MiLabs) are based on
a framework that supports both onsite and (remote) offsite lab access. Onsite, students
are able to configure, program and control laboratory equipment directly, while offsite
students access the laboratory equipment through Internet of Things (IoT) gateways to
remotely program and control it. Currently, the following laboratories are offered by
MiLabs:
• The first set of laboratories focuses on network design, configuration, and wiring.
These laboratories are primarily carried out onsite.
• The second set of laboratories can be offered onsite and offsite. They cover PLC
programming, with a focus on programming machine-to-machine communication,
integration of manufacturing and business automation software applications, and
PLC automation systems data access.

3.1 MiLabs Onsite Laboratory Equipment


Ethernet is ubiquitous and cost effective, uses a common physical link for multiple appli-
cations, has high speed, and is increasingly becoming deterministic. Therefore, it is
poised to become the de facto protocol for industrial networks. That is why MiLabs
focus on EtherNet IP as the plant level network [16]. The onsite set of MiLabs covers
concepts associated with the physical, data link, and network layers of the Open
System Interconnection (OSI) communication reference model. With respect to
EtherNet IP, such concepts include various network principles such as the addressing
structure, wiring requirements, and node power requirements. The main objective of
this set of labs is to teach students the process of identifying usable IP addresses as
well as assigning addresses to the network nodes. The laboratories also cover network
configuration and programming using software tools such as CX Configurator,
RsWorx, RsLogix5000, and Productivity3000. Figure 2 shows the main components of
the MiLabs onsite equipment.
The figure shows that the laboratory equipment includes an Eaton PowerXL DG1
Variable Frequency Drive (VFD), an Eaton ELC-CAENET remote I/O module, and an
Ethernet IP compliant C411 Motor Insight monitor, connected together using a Power
Xport Ethernet switch to form the laboratory Local Area Network (LAN). The MiLabs
onsite equipment also supports SmartWire configuration and programming laboratories.


Fig. 2. Ethernet IP laboratory setup

The SmartWire network integrates basic automation devices such as switches, LEDs, and
relays with complex devices such as remote I/Os and PLCs. We intend to use a
SmartWire to EtherNet IP gateway to integrate SmartWire with EtherNet IP. This will
enable moving process parameters from the basic technologies level of the IEC
automation hierarchy to the enterprise level.

3.2 MiLabs Online Laboratory Equipment


Figure 3 shows the network architecture of the MiLabs online laboratory equipment. The
equipment is designed to offer a wide variety of laboratories, ranging from PLC pro-
gramming and IED configuration to the horizontal and vertical industrial and business
systems integration required to support manufacturing under the Industry 4.0 paradigm.
The equipment depicts a plant that has a process automation system and an electrical
substation system. The process automation component of the equipment has an
Automation Direct CLICK micro PLC and a Productivity 3000 PLC, which is also used
as an Intelligent Electronic Device (IED) in some of the laboratories. The electrical
substation is automated using IEC 61850 compliant IEDs that have Modbus commu-
nication capabilities, as well as power meters that communicate using Modbus RTU.
The laboratory equipment in Fig. 3 can be configured through student labora-
tories or projects to work as follows:
• An Automation Direct CLICK micro PLC is configured to read electrical param-
eters (voltage, current, power, and energy) from a power meter through a Mod-
bus RTU connection.


[Fig. 3 diagram: the plant (university) network hosts an OPC client, a SCADA-level
engineering server with OPC server (KEPServerEX), and an IoT gateway; the process
network connects the CLICK PLC, Productivity 3000 IED-PLC, VFD and unit HMIs over
EtherNet IP and Modbus TCP; the substation network connects an IED, a power meter and
a Modbus TCP to Modbus RTU gateway (IP 192.168.1.10) serving up to eight RTU slaves
at 19200 baud.]

Fig. 3. Ethernet IP, Modbus serial, and Modbus TCP remotely accessible laboratory setup

• The Human Machine Interface connected to the substation network in Fig. 3 can
read the power parameters from the CLICK micro PLC through the Schneider
TSXETG100 Modbus RTU to Modbus TCP gateway. In this case, each register is
read separately by the HMI. Since HMIs do not support logic instructions, the
registers are read periodically, causing a great amount of traffic on the Modbus RTU
network. This causes the HMI to flag a message timeout error from time to time.
This issue is addressed by using a Productivity 3000 PLC as an IED to read the
CLICK registers through the Modbus gateway, using a Modbus TCP read
instruction (a minimal client-side sketch of such a read is given after this list).
• The Productivity 3000 PLC can be configured to communicate with the CLICK
micro PLC, the SEL IEC 61850 relay, and the power meter through the Modbus
serial network using Modbus RTU, or through the Ethernet network using Mod-
bus TCP. In addition, the PLC can communicate with the Eaton ELC-CAENET
remote I/O and the PowerXL DG1 or PowerFlex 40 VFD through EtherNet IP.
• Students configure and program the network devices using a laboratory computer
that functions as the engineering station shown in Fig. 3. The station has two net-
work cards: one connects to the process and electrical substation networks, and the
other connects to the plant (university) network.
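To make the data path concrete, the sketch below shows the kind of holding-register read
that the Productivity 3000 performs with its Modbus TCP read instruction, going through
the Modbus TCP to RTU gateway to the CLICK PLC. This is only an illustration written in
Node.JS using the built-in net module; it is not part of the MiLabs courseware, and the
gateway address, slave ID, and register addresses are assumptions.

// Hypothetical sketch: read holding registers from an RTU slave behind a
// Modbus TCP to Modbus RTU gateway. Addresses and IDs are illustrative only.
const net = require('net');

function readHoldingRegisters(host, unitId, startAddr, quantity, callback) {
  const socket = net.createConnection({ host, port: 502 }, () => {
    // Modbus TCP ADU: MBAP header (7 bytes) + PDU (function 0x03 request)
    const request = Buffer.alloc(12);
    request.writeUInt16BE(1, 0);          // transaction identifier
    request.writeUInt16BE(0, 2);          // protocol identifier (always 0)
    request.writeUInt16BE(6, 4);          // remaining byte count
    request.writeUInt8(unitId, 6);        // unit ID = RTU slave address behind the gateway
    request.writeUInt8(0x03, 7);          // function code: read holding registers
    request.writeUInt16BE(startAddr, 8);  // starting register address
    request.writeUInt16BE(quantity, 10);  // number of registers to read
    socket.write(request);
  });

  socket.once('data', (response) => {
    // Response: MBAP (7 bytes), function code, byte count, then register data
    const byteCount = response.readUInt8(8);
    const registers = [];
    for (let i = 0; i < byteCount / 2; i++) {
      registers.push(response.readUInt16BE(9 + 2 * i));
    }
    socket.end();
    callback(null, registers);
  });
  socket.once('error', (err) => callback(err));
}

// Example: read four registers (e.g. voltage, current, power, energy) from
// RTU slave 2 via the gateway at 192.168.1.10 (placeholder values).
readHoldingRegisters('192.168.1.10', 2, 0, 4, (err, regs) => {
  if (err) { console.error(err); return; }
  console.log('Register values:', regs);
});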
The laboratory equipment is accessible remotely through Industry 4.0 compliant
eWON Cosy gateways as shown in Fig. 4. The gateway creates a Virtual Private
Network (VPN) via a cloud server called Talk2M to support the laboratories as follows:


[Fig. 4 diagram: remotely located students, working individually or in groups using
screen-share software applications, reach the lab over a VPN through the university
network and firewall; an IoT gateway on the lab network connects the lab equipment
(PLC, HMI, VFD, remote I/O modules, digital scale module).]

Fig. 4. Laboratory remote access using internet of things router

• The eWON Cosy connects to the Talk2M server.
• One group member uses the eCatcher client to remotely log into his/her Talk2M
account. Thereafter the student selects the laboratory equipment he/she wants to
connect to.
• A fully secure VPN tunnel is set up between the student and the equipment. The
student can then go live with any devices connected to the eWON Cosy's LAN
ports.
• If the group members are located remotely from the student who has logged into the
Talk2M VPN, he/she shares his/her computer screen with the other group members
using the Learning Management System (LMS) screen share functionality. More-
over, students can communicate over the voice or instant messaging functionality
provided by the LMS.
The eWON Cosy uses an outbound connection across the laboratory LAN (HTTPS
port 443 or UDP 1194). This allows the eWON Cosy to be isolated from the Internet by
working with a private IP address that is not reachable from the Internet. No IT/firewall
changes are needed to establish communication [5], which is a key asset in the laboratory
setup process.

4 Deployment of MiLabs

In the fall of 2015, we offered four related laboratories and a course project using the
MiLabs equipment. The first two laboratories were done in-house to enable students to
get to know each other and develop working relationships. Thereafter, the following labora-
tories were offered online:


• Lab 3 (a) - OPC Server Configuration.


• Lab 3 (b) - OPC DataHub Applications.
The OPC laboratories deal with the integration of the process automation system
with the electricity substation automation system (horizontal integration of systems at
the plant level), and the laboratory sessions were structured as follows:
• A two-hour onsite trial was held to prepare students to do the labs remotely.
• Although students accessed the laboratory equipment remotely, the laboratories
were scheduled and done synchronously.
• Students worked individually or in groups of two or three, and in some cases the
students in a group were remotely located from each other. In those cases, one
student logged into the laboratory system and shared his/her screen with the other
students. This way the group would agree on actions to be taken, and the logged-in
student would take those actions on behalf of the group.
• Each student was required to record their screen to prove that they participated in
the laboratory and to show how the group arrived at the solution.
• Some students were allowed to work onsite, but they had to log into the laboratory
equipment through the campus network.
• All students were required to log into a chat room where help queries were posted.
Everyone was free to respond to the queries.
After the laboratories had been held, students carried out a laboratory-based project
in accordance with the pedagogical paradigm of laboratory-based projects for experi-
ential learning [20]. In the project, students were required to carry out the following
tasks:
• TASK 1: Complete the HMI of the Temperature Control and Energy Monitoring
System used in labs 3a and 3b.
1. Add more SEL relay tags to the HMI.
2. Develop the HMI using the guidelines presented in the paper, A High Perfor-
mance HMI: Better Graphics for Operations Effectiveness, by Bill Hollifield,
available at http://isawwsymposium.com/wp-content/uploads/2012/07/
WWAC2012-invited_BillHollified_HighPerformanceHMIs_paper.pdf
• TASK 2: Implement one of the following functions of the OPC DataHub software
application:
1. DDE: Provide data access to Excel, MATLAB, or a database
2. Scripting
The project was done individually, and the students accessed the laboratory
equipment remotely for a period of three weeks. It focused on vertical industrial
systems integration, in which plant floor data is moved up to the business level, where
it is integrated with data from other sources using business software applications such
as Microsoft Excel.


4.1 Sample of the Students’ Work


While a wide range of solutions were developed by students, the sample solution
presented in this paper is a good representation of the nature of all the solutions. The
student developed an HMI in accordance with the guidelines presented in the paper
titled "A High Performance HMI: Better Graphics for Operations Effectiveness" by
Bill Hollifield. Figure 5(a) is the first page of the HMI, while Fig. 5(b) is the second
page. The first page of the HMI does not have any controls or data visualization,
which is good design. However, it would have been better if the data visualization on
page two were separated from the system control inputs.
The data shown in Fig. 5 was moved to and displayed in a business application
(Microsoft Excel). This data can be integrated with any other business data to produce
actionable information. What is remarkable is the fact that students were able to do
a laboratory-based project without having to physically come to the laboratory. The
amount of time that students needed to access the equipment would have made it
impossible to carry out the project if the project had been scheduled on the laboratory
timetable. In addition, because the equipment is accessible online, 53 students were
able to do individual projects using one piece of equipment.

Fig. 5. Sample student HMI solution for the MiLabs-based course project: (a) first screen of the HMI; (b) second screen of the HMI

4.2 Student’s Experience


The online laboratories brought far more to bear than we expected. The two-hour
trial run was a learning experience. Here students learned to configure the software
applications used to access remotely located hardware. This in itself is a highly
desirable skill in industry. They also learned how to use collaboration software
applications to share their computer screens with other group members.
Students' participation in the chatroom was amazing. It felt like every student was
both a laboratory attendee and a teaching assistant. Every time a question was posted,
one of the students quickly responded to it, and my role was reduced to summarizing
and clarifying responses to queries. Students who did the laboratories onsite were very
helpful too. Whenever I gave them some support, they were eager to give support to
others facing similar challenges through the chat room.


Students were generally excited about doing the laboratories and the course project by
accessing the laboratory equipment online. Therefore, they gave us a lot of unsolicited
feedback. The following feedback is representative of the general view of the class
about MiLabs:
1. “……Performing labs 3a and 3b remotely was an overall good experience. It
allowed me to focus more on the material as I was alone in a quiet area and could
not become distracted by others. Troubleshooting was also more enjoyable as I had
to rely solely on myself. I believe working remotely and alone would also stop other
students from giving up on a problem easily as they do not have easy and instant
access to the lab instructor and are forced to use critical thinking and problem
solving skills. An area that could be improved is restricting the amount of access
students have to certain tags in order to reduce the risk of shutting down or
damaging the system. Configuring and accessing systems remotely is something
that we will use when working in all industries so being able to experience it first
hand in the lab was valuable to me……”
2. “………. I am writing to you about OPC lab for 4AS3. Since I was a part of the lab
in ETB B111 and I also went home and did the lab again, I have a more informed
perspective of how the entire lab was run. The labs were very straight forward and
were not a struggle for many of the students. Only problems that I experienced were
due to miscommunication and lack of preparation from the other students. Make
sure that the VPNs are set up and ready to go before attempting the lab especially
from home. The learning environment was very good and the chat room that was
set up really helped communication of issues. Having all of the software ready to go
before hand really speeds things up and following the procedure was not difficult.
The project idea is great and I also feel like we should spend more time on OPC
and cover more of its abilities just due to how easy the remote access was from
home. This new type of learning was a very good experience and I thoroughly
enjoyed the labs. These labs should be continued and definitely added to, the remote
access experience was very unique and I learned a lot……..”
3. “………..Referring to labs 3a and 3b, I found that these were well structured. It
allowed the experimenter to get a bit of experience of using VPN to access a
machine and checking parameters, status etc. It also allowed the individual to get
experience creating an HMI to represent the information…….”
4. “……..I know many people now that are connected to their workplace 24/7, being
able to watch systems run and make changes as needed. It is part of our future of
big data, analysis, and optimizations. Next, it gives everyone the opportunity to
work simultaneously on a system, and be able to ask and answer questions to get a
better understanding of what is happening. For students that commute long dis-
tances, it can be a huge benefit. This term it benefited me extremely as I only had the
one lab on Fridays. I didn’t need to commute from Brantford to Hamilton in order
to do the lab. It saves me time, money, and lets me sleep a little longer. Finally, it
gives us a hands-on experience of the types of software we will be using in the field,
and gives us better insight into how communication protocols relate and connect to
each other. This is incredibly helpful. After the third lab I was able to much better
understand the systems in place and I could troubleshoot most problems much more


easily and quickly. Even though industry is evolving at a rapid pace, and the
software is changing to accommodate that, updating this lab annually wouldn’t be
a problem as the majority of the software would still support legacy devices.
……..”
Towards the end of the laboratory module, students did a post-laboratory test,
followed by a laboratory-based project. The students who did the MiLabs laboratories
remotely performed slightly better than those who did the laboratories onsite, in both
the laboratory test and the project. This can be attributed to the fact that remotely
located students had far more access time to the equipment than their counterparts
who had to come to the laboratory once a week.

5 Anticipated Outcomes

As noted in the introduction, digitally supported learning brings advantages to students
in terms of awareness of the course content, as well as increased collaboration. It is
therefore imperative that we use Industry 4.0 technologies in teaching, especially
laboratory work, as we prepare the next generation of engineers to work in Industry 4.0
enabled environments and to gain the skills required for a semi-virtualized world.
At the operational level, online labs reduce the space and equipment costs of running
labs. For example, one piece of equipment can be accessed by an entire class of 40
students to carry out data access labs and value chain integration projects. Moreover, the
lab equipment is always available to students. Therefore, they can review the lab work
at any time and test "what if" scenarios they may have. Besides collapsing time and
space for students, online labs enable and encourage instructors to include demon-
strations of sophisticated laboratory experiments in their lectures. In fact, the online
labs and laboratory-based project framework forms a strong basis for integrating
experimentation into distance and electronic learning offerings.

6 Discussion

From the students' feedback, as well as our own study of the way MiLabs was
deployed in the course PROCTECH4AS3 – Advanced System Components and
Integration at McMaster University, it is clear that students appreciate online labora-
tories if they are offered within the following structure:
• Run one or two laboratories onsite to start off the class so that the students develop
working relationships and become acquainted. This improves their collaboration
during online laboratories.
• Hold a pre-online-laboratory preparation session. During this session the students
should install and test the applications that support remote access to the laboratory
equipment. In addition, they should install and test the applications that enable them to


collaborate in their groups. Finally, the major instructions of the laboratory should be
test run during this session. For example, in our laboratories the main instruction requires
students to record their screen using an application called CamStudio. This
recording provides extra proof of participation in the laboratory session.
• Run a few scheduled synchronous online laboratories, and use a simple confer-
encing mechanism such as a chatroom. This allows students to post questions and
get support from the instructor and colleagues, just as they would in onsite
laboratories.
• It is good to add a small project which students do asynchronously without the
chatroom support. This gives them the freedom to try out different things and learn
from the experience.
There were two main issues that students identified with MiLabs, namely failing to
log on during high access volume, and having to do extra work to prove that they
logged onto the system. The first issue has been addressed by developing three
new remote access laboratory stations, and the second issue has been addressed by
making the new laboratory stations capable of recording user access.

7 Conclusion and Future Work

Online laboratories have the potential to support increased access to laboratory
equipment by providing virtual knowledge spaces. These innovative virtual knowledge
spaces offer all kinds of possibilities for teaching, and for learning to work in times of
Industry 4.0. In order to use the new technologies for engineering education in a proper
way, deeper insights into reception, cognition and communication in virtual environ-
ments are necessary. Simply providing the technical infrastructure does not automat-
ically guarantee successful teaching and learning. In fact, many misunderstand online
laboratories by thinking that they are meant to simply replace in-person laboratories for
the purposes of reducing cost and scheduling constraints. On the contrary, online labora-
tories require innovative delivery paradigms, as discussed in Sect. 6.
Besides collapsing time and space for students, MiLabs enable and encourage
instructors to include demonstrations of sophisticated laboratory experiments in their
lectures. Moreover, this framework forms a strong basis for integrating experimentation
into distance and electronic learning offerings. Consequently, we hope to share MiLabs
with other institutions of learning in the future. This will increase our collaboration
with learning institutions, including other departments in the university.

References
1. Bunse, B., Kagermann, H., Wahlster, W.: Industry 4.0: Smart Manufacturing for the Future.
Germany Trade and Invest, Berlin, German, July 2014
2. Canfora, G., Daponte, P., Rapuano, S.: Remote accessible laboratory for electronic
measurement teaching. Comput. Stand. Interfaces 26(6), 489–499 (2004)


3. Cooper, M., Donnelly, A., Ferreira, J.M.: Remote controlled experiments for teaching over
the internet: a comparison of approaches developed in the PEARL project. In: Proceedings
of the ASCILITE Conference, Auckland, New Zealand (2002)
4. Del Alamo, J., Nikulin, V.: Engineering laboratory accessible via the Internet. In:
Proceedings of the 2006 ASEE Annual Conference and Exposition, Session 1526, Chicago,
USA, 18–21 June 2006
5. eWON: eWON Cosy 131 Installation Manual. https://ewon.biz/sites/default/files/ig-022-0-
en-ewon_cosy131.pdf. Accessed 15 May 2016
6. Farrington, P.A., Messeimer, S.L., Schroer, B.J.: Simulation and undergraduate engineering
education: the Technology Reinvestment Project (TRP). In: Proceedings of the 1994 Winter
Simulation Conference, Lake Buena Vista, Florida, USA (1994)
7. Glazer, F.S.: Blended Learning: Across the Disciplines, Across the Academy, Stylus
Publishing, LLC (2011)
8. Koller, D., Ng, A.: Coursera makes top college courses free online, Phys.org. http://phys.org/
news/2012-07-coursera-college-courses-free-online.html. Accessed 26 Feb 2014
9. Magin, D.J., Kanapathipillai, S.: Engineering students' understanding of the role of
experiments. Eur. J. Eng. Educ. 25(4), 351–358 (2000)
10. Nickerson, J.V.: Hands-on, simulated, and remote laboratories. ACM Comput. Surv. 38(3),
Article 7 (2006)
11. Scanlon, E., Colwel, C., Cooper, M., Paola, T.D.: Remote experiments, re-versioning and
re-thinking science learning. Comput. Educ. 43(1–2), 153–163 (2004)
12. Esche, S.K., Prasad, M.G., Chassapis, C.: A remotely accessible laboratory approach to
undergraduate education. In: Proceedings of the 2004 ASEE Annual Conference and
Exposition, Session 3220, Salt Lake City, UT, USA (2004)
13. Schuh, G., Reuter, C., Hauptvogel, A., Dölle, C.: Hypotheses for a theory of production in
the context of industrie 4.0. advances in production technology. Lect. Notes Prod. Eng. 11
(2015)
14. Schuster, K., Groß, K., Vossen, R., Richert A., Jeschke, S.: Preparing for industry 4.0 –
collaborative virtual learning environments in engineering education. In: The International
Conference on E-Learning in the Workplace, New York, NY, USA, 10th–12th June 2015
15. Sell, R.: Remote laboratory portal for robotic and embedded system experiments. Int.
J. Online Eng. 9(8), pp. 23–26 (2013)
16. Singh, I., Al-Mutawaly, N., Wanyama, T.: Teaching network technologies that support
industry 4.0. In: Proceedings of the Canadian Engineering Education Association
Conference, Hamilton – Canada, June 2015
17. Stefanovic, M., Cvijetkovic, V., Matijevic, M., Simic, V.: A labview-based remote
laboratory experiments for control engineering education. Comput. Appl. Eng. Educ. 19(3),
538–549 (2009)
18. Tumkor, S., Esche, S.K., Chassapis, C.: Design of remote laboratory experiments using
LabVIEW web services. In: Proceedings of the International Mechanical Engineering
Congress and Exposition, Education and Globalization; General Topics, vol. 5 (2012)
19. Wanyama, T.: A practical approach to industrial systems integration, industry 4.0 and
industrial internet of things: case of manufacturing, energy, building, environment and
business data integration using ethernet and OPC technologies, Hamilton, Ontario-Canada,
September (2016)
20. Wanyama, T., Singh, I.: A training demonstration for experiential learning in OPC based
process automation data access. In: Proceedings of the Canadian Engineering Education
Association Conference, June 2013, Montreal, QC, Canada

Design of WEB Laboratory for Programming
and Use of an FPGA Device

Nikola Jović ✉ and Milan Matijević



Faculty of Engineering, University of Kragujevac, Kragujevac, Serbia


nikolajovic@live.com

Abstract. This paper covers the design and implementation of a web-based
laboratory for programming and use of an FPGA device. This web-based labo-
ratory will be used for remote programming and control of an FPGA device
designed for use in the course "Introduction to Design and Control of Integrated
Circuits for Communication, Sensors and Actuators" at the Faculty of Engineering,
University of Kragujevac. Because of the limited laboratory resources, both human
and technical, needed for teaching FPGA programming at the Faculty of Engineering,
University of Kragujevac, a web-based laboratory is a viable solution to the problem.
This approach was proven successful in the past, namely in the course "Measurement
and Control" at the Faculty of Engineering, University of Kragujevac, where 190
students successfully completed four laboratory exercises via the web-based laboratory
on four experimental setups. The experimental setup consists of a Digilent Nexys2
FPGA development board connected to the USB port for programming, and an Arduino
Leonardo development board with Firmata firmware for controlling the physical inputs
of the FPGA board. The software for this web-based laboratory was written using the
MEAN stack. The outcomes of this work are a full implementation of a web-based
laboratory for teaching programming and control of an FPGA device, with all the
documentation needed for students to successfully pass the course "Introduction to
Design and Control of Integrated Circuits for Communication, Sensors and Actuators"
at the Faculty of Engineering, University of Kragujevac.

Keywords: Web-based remote laboratory · FPGA programming

1 Introduction

Web-based remote laboratories consist of a physical laboratory model and meas-
urement and control apparatus that can be remotely controlled over an Internet
connection via an Internet browser. A brief history of web-based remote laboratories is
given in [1], and positive influences of web-based remote laboratories are described in [2].
Developing countries could benefit from web-based remote laboratories by increasing
teaching capacities without downgrading the quality of lectures [3]. Some other benefits are
the opportunity for lecturers to demonstrate real physical experiments during lectures [4],
higher availability of laboratory capacities to students in contrast to fixed terms,
higher availability of laboratory models to students with special needs, and availability


of laboratory models to teachers and students at those faculties that do not have the neces-
sary equipment to perform experiments. The safety of students is greatly improved during
the execution of risky experiments (high voltage, temperature, chemicals, etc.). The ratio
of equipment cost to the number of users is also favorable [1–5]. Conditions that a web-based
remote laboratory should satisfy are:
• Availability of the laboratory model 24/7
• The client side should be written using HTML5 and JavaScript, without third-party soft-
ware (Adobe Flash, Java)
• Minimal administration (reservation-free, access is granted to the first connected
client)
• Maximal security (safety of hardware, difficult to hack)
• The system should return to the initial state after execution of an experiment
• Experiment duration should be short (a few minutes)
The outcomes of this work are:
• Enabling web-based laboratory lectures for programming and control of an FPGA
• Providing an environment for testing and executing a student's design on real hardware
via an Internet connection
• An easy-to-use graphical interface
• Contributing to the development of lecture material in the Serbian language based on the
MIT 6.111 "Introductory Digital Systems Laboratory" course
• Contributing to the general software solution for web-based remote laboratories, which
is in the development stage within the Faculty of Engineering, University of Kragujevac
The first version of the web-based remote laboratory software was tested at the Faculty of
Engineering, University of Kragujevac during the 2015/2016 winter semester within the
course "Measurement and Control". A hundred and ninety students were able to
complete four laboratory exercises via an Internet connection on the 2–4 physical models
available at the same time [6].
Use of a web-based remote laboratory helped in solving several problems:
• Limited resources regarding space, teaching staff and laboratory equipment
• Lectures were done with respect to the existing resources, but students were provided
with access to the laboratory resources via an Internet connection anytime they wanted
• Students had an opportunity to do individual laboratory exercises despite the "20 students
per group" limitation imposed for the given level of study (BSc) within the University of
Kragujevac
The software solution for the web-based remote laboratory possesses an aggregator, which
enables adding an unlimited number of new laboratory models to the existing content of
the web-based remote laboratory [7]. A similar solution for programming and control of an
FPGA is given in [8], so the idea is not new. Several drawbacks of the solution given in [8]
are expensive equipment and a Windows-only server side. One of the aims of this work is
to overcome these problems by making a lightweight and portable server side that can
work on a broad spectrum of devices (PC, Raspberry Pi, BeagleBone, etc.). The laboratory


models used in this work are based on the Digilent Nexys2-500k FPGA development plat-
form, which was chosen because of its low price and large number of digital inputs and outputs.

2 Hardware Aspect of Web-Based Laboratory Model for Programming and Control of an FPGA Development Platform

The laboratory model for programming and control of an FPGA development platform is
based on the Digilent Nexys2-500k FPGA development platform. The aim of the laboratory
is remote programming of this FPGA development platform and monitoring of the results via a

Fig. 1. Structural diagram of web-based laboratory model. (1) Client (2) Internet (3) Server
(4) Arduino Leonardo for controlling digital inputs of the FPGA development board (5) USB
JTAG programmer for FPGA development board (6) FPGA development board (7) LCD
screen connected to the FPGA (8) Web cameras for real-time video streaming

Fig. 2. Laboratory model for programming and control of an FPGA device. (1) FPGA
development board, (2) Resistor-based voltage divider circuit (3) Arduino Leonardo (4) USB
JTAG programmer (5) VGA output from FPGA board


real-time video stream. Once programmed, the user can control up to 12 inputs to the FPGA
development platform. The inputs to the FPGA development platform are generated with an
Arduino Leonardo development board and/or a Raspberry Pi single-board computer
(Fig. 1).
The laboratory model incorporates an Arduino Leonardo development platform with
resistor-based voltage dividers for controlling the inputs of the FPGA development platform
(Fig. 2).
There are two web cameras for monitoring the outputs of the FPGA development plat-
form and recording images from the LCD screen connected to the FPGA development
board.
Features of the used FPGA development platform are [9]:
• Xilinx Spartan 3E FPGA circuit (Fig. 3, 1)
• Built-in USB JTAG programmer (Fig. 3, 2)
• 16 MB of Micron PSDRAM and 16 MB of Intel StrataFlash ROM memory (Fig. 3, 3)
• Xilinx Platform Flash for non-volatile configuration storage (Fig. 3, 4)
• 50 MHz crystal oscillator (Fig. 3, 5)
• 60 general purpose input/output pins (3.3 V) (Fig. 3, 6)
• Eight LEDs (Fig. 3, 7), four eight-segment displays (Fig. 3, 8)
• Four pushbuttons (Fig. 3, 9) and eight switches (Fig. 3, 10)
• PS/2 (Fig. 3, 11), VGA (Fig. 3, 12) and RS-232 connectors (Fig. 3, 13)
• Power over USB, external adapter or battery (Fig. 3, 14)
In order to control the physical inputs of the FPGA development board over an Internet
connection, it is necessary to make an interface from the FPGA board to the server, which

Fig. 3. Digilent Nexys2 FPGA development platform


will pass digital signals to the inputs of the FPGA development board. For passing the
course successfully, twelve digital inputs are sufficient. This is achieved by sending
digital signals from the Arduino Leonardo to the Digilent Nexys2 development board.
Because of the logic level mismatch between the Arduino Leonardo (5 V) and the
Digilent Nexys2 FPGA development board (3.3 V), precautions must be taken in order
to prevent damage to the FPGA development board. This is achieved using a
resistor-based voltage divider circuit (Fig. 4).
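As a worked example of such a divider (the paper does not state the resistor values, so the
ones below are assumptions for illustration), a series resistor R1 from the Arduino output
and a resistor R2 to ground, with the FPGA input taken from their junction, scale the 5 V
signal as

V_{out} = V_{in} \cdot R_2/(R_1 + R_2), e.g. 5 V \cdot 2 kΩ/(1 kΩ + 2 kΩ) ≈ 3.33 V,

which is close to the 3.3 V logic level expected by the Nexys2 inputs.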

Fig. 4. Resistor based voltage divider circuit for interfacing Arduino Leonardo and Digilent
Nexys2 FPGA development board

The Digilent Nexys2 FPGA development platform has four PMod connectors for
connecting various peripherals to it. Each of the four PMod connectors has eight general-
purpose input/output pins. These connectors are used for the connection with the Arduino

Fig. 5. Schematic of the connection between Arduino Leonardo and Digilent Nexys2


Leonardo development board. The digital outputs from the Arduino Leonardo (D1–D12) are
connected to the digital inputs on the PMod connectors of the Digilent Nexys2 FPGA
development board (JA1–JA8 on the first PMod connector, and JB1–JB4 on the second one).
The schematic is given in Fig. 5.
This configuration has its own limitations, in that the user is in control of only
twelve inputs to the FPGA development platform, but the number of available inputs is
sufficient to successfully pass the laboratory exercises based on the material given in [10].

3 Software Solution and Services of Web-Based Remote Laboratory

The software for the web-based remote laboratory is written in the JavaScript programming
language using Node.JS for the back end. It consists of two parts: a central server, which
acts as an aggregator for all available laboratory models, and micro servers, each of which
serves a particular laboratory model. Clients connect to the central server of the web-based
remote laboratory, where they can choose one of the available laboratory models to
conduct experiments on. The central server takes information from the database, applies
styling to it and presents it to the client in the form of an HTML document. When the user
goes to the link of a particular laboratory model, the central server takes the experiment
page from the micro server,

Fig. 6. Architecture of the web-based remote laboratories within Faculty of Engineering, University of Kragujevac


and embeds it into its own page. A diagram of the whole web-based remote laboratory
architecture is presented in Fig. 6.

3.1 Central Server


The central server of the web-based remote laboratory (the aggregator of laboratory models
within the web-based remote laboratory) is written using the MEAN stack, which consists of
a Node.JS server side, the Express.JS server framework, a MongoDB NoSQL database and
Angular.JS for the client side. The central server communicates with the database, which
contains information about every laboratory model integrated into the web-based remote
laboratory, as well as information about users. The central server exposes a RESTful API
for communication between the client and server sides, so only content that has changed is
updated instead of the whole web page. This API has all the functions which can be
executed on the server (a minimal route sketch is given after the list below):
• Creating new user account
• Changing user account
• Adding new laboratory exercises
• Adding descriptions for laboratory exercises
• Listing of all available laboratory models
• Listing of all users
• Adding or editing laboratory models
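The following is a minimal sketch of how one such API route could look on the central
server, assuming Express.JS and the Mongoose ODM; the route path, model name and
fields are illustrative assumptions, not the exact identifiers of the actual implementation.

// Hypothetical RESTful route: list all available laboratory models from MongoDB.
const express = require('express');
const mongoose = require('mongoose');

// Minimal model; see the fuller schema sketch in Sect. 3.1 for more fields.
const LabModel = mongoose.model('LabModel', new mongoose.Schema({
  name: String,
  short: String,
  description: String,
  thumb: String,
  dateCreated: { type: Date, default: Date.now }
}));

const app = express();

// GET /api/labs -> data for the aggregator index page; only the changed
// content is sent to the client, not a whole new page.
app.get('/api/labs', (req, res) => {
  LabModel.find({}, 'name short description thumb dateCreated')
    .then((labs) => res.json(labs))
    .catch((err) => res.status(500).json({ error: err.message }));
});

mongoose.connect('mongodb://localhost/weblab')
  .then(() => app.listen(3000, () => console.log('Central server on port 3000')));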
When a user creates an account, he/she receives a confirmation e-mail. If the account is not
confirmed within a 24-hour period, it is deleted from the database. The database has two
collections. Information about the laboratory models is stored in the first collection (a sketch
of the corresponding schema is given after the list). The fields in this collection are:
• _id: unique identification number
• name: name of the laboratory model
• short: short name, without spaces, 12 characters max.
• setupDescript: description of the laboratory model, HTML page
• assignments: possible assignments, HTML page
• MaPbackground: description of the theoretical background and characteristics of the
laboratory model, HTML page
• CaMapparatus: description of the equipment used for measurement and acquisition
for the given laboratory model, HTML page
• experimentStructure: description of the structure of the experiment, HTML page
• dateCreated: time and date of integration of the laboratory model into the web-based
remote laboratory
• thumb: thumbnail photo of the laboratory model for the main page of the aggregator
• description: short description of the laboratory model for the main page of the
aggregator
• skeleton: list of skeleton and example codes for the particular laboratory model, HTML
page
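The first collection maps naturally onto a Mongoose schema; the sketch below mirrors the
fields listed above, with the field types being inferred assumptions, since the paper lists
only names and meanings.

// Hypothetical Mongoose schema for the laboratory-model collection.
// Field names follow the list above; types are inferred, not specified in the paper.
const mongoose = require('mongoose');

const labModelSchema = new mongoose.Schema({
  name: String,                              // name of the laboratory model
  short: { type: String, maxlength: 12 },    // short name, no spaces
  setupDescript: String,                     // description of the model (HTML)
  assignments: String,                       // possible assignments (HTML)
  MaPbackground: String,                     // theoretical background (HTML)
  CaMapparatus: String,                      // measurement/acquisition equipment (HTML)
  experimentStructure: String,               // structure of the experiment (HTML)
  dateCreated: { type: Date, default: Date.now },
  thumb: String,                             // thumbnail for the aggregator main page
  description: String,                       // short description for the main page
  skeleton: String                           // skeleton and example codes (HTML)
});
// _id is added automatically by MongoDB/Mongoose.

module.exports = mongoose.model('LabModel', labModelSchema);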


The second collection contains information about the users of the web-based remote
laboratory. The fields in this collection are:
• password: the user's password, hashed with SHA-256
• email: the user's e-mail address
• name: first and last name of the user
• institution: institution the user comes from
• id: student ID number
• labs: list of all laboratory models accessed by the user, with time and date of access
• role: stores information about the role of the user (administrator, student, moderator)
The client page uses HTML5, Angular.JS and jQuery technologies. The Twitter Bootstrap
library is used for styling. Depending on the route, the user is served with various views. A
list of the views and their descriptions is given below:
• formaLogin: log-in and registration form
• infoView: view of the contents for a particular laboratory model; the data is collected
through the API from the database entry for the model of interest
• labAdd: graphical interface for adding a laboratory model; it has an embedded
HTML WYSIWYG editor, as well as the possibility of uploading various multimedia
content

Fig. 7. Index page of the aggregator web page. Laboratory models are shown on pictures with
corresponding descriptions.


• labList: shows all available laboratory models within the web-based remote laboratory;
the default view when the user accesses the central server. A thumbnail and a short
description are shown for each laboratory model
• navbar: navigation panel
• webLabView: web page of a laboratory model. This is the page where the user can
conduct an experiment. It uses an iFrame to embed content from the micro server.
Each view has its own controller with all the logic contained within, achieving the Model-
View-Controller philosophy. The index page of the aggregator is shown in Fig. 7.
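As an illustration of this Model-View-Controller split, a controller for the labList view
might look as follows; this is an AngularJS 1.x style sketch, and the module, controller and
endpoint names are assumptions rather than the actual identifiers of the implementation.

// Hypothetical AngularJS controller for the labList view: it fetches the list of
// laboratory models from the central server's RESTful API and exposes it to the view.
angular.module('webLabApp', [])
  .controller('LabListCtrl', ['$scope', '$http', function ($scope, $http) {
    $scope.labs = [];
    $scope.error = null;

    // GET the laboratory models; the view renders the thumbnail and
    // short description for each model.
    $http.get('/api/labs')
      .then(function (response) {
        $scope.labs = response.data;
      })
      .catch(function () {
        $scope.error = 'Could not load laboratory models';
      });
  }]);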

3.2 Micro Server Architecture


The micro server is developed in the JavaScript programming language using the Node.JS
framework. If the laboratory model is not busy at the moment of access, an HTML page with
controls and two real-time video streams is served to the connected client. The first video
stream shows the FPGA development board, and the second one streams the information
shown on the LCD screen (Fig. 8).

Fig. 8. Graphical interface of the client side for the remote experimentation on the FPGA
development board. (1) Real-time video stream of the FPGA development platform (2) Real-time
video stream of the LCD monitor connected to the FPGA development platform (3) Field for
uploading user design (4) Eight switches (5) Four pushbuttons

The web-based remote laboratory is designed so that the user can upload his or her own
program for controlling a laboratory model and monitor the outputs of the model in real time.
That program must not have any harmful effect on the laboratory model. For that reason, and in
order to abstract the low-level implementation away from the user, the program is scanned for
security threats and then embedded into a larger main program which calls subroutines from the
user's program. Real-time control is achieved with the help of the Socket.IO communication
library. In this particular case, instead of a program for controlling the laboratory model, the
user uploads a compiled Verilog design in the form of a bitstream file.


Once the upload finishes, the server starts the Digilent Adept program, which downloads the
user's design onto the FPGA development board. After that, the user can control the FPGA
development board with the four pushbuttons and eight switches available on the web page of the
laboratory model, and monitor the output through the real-time video streams. As previously
described, the interface between the micro server and the FPGA development board is an Arduino
Leonardo. It is programmed with the Firmata protocol and communicates with the Node.JS server
through the Johnny-Five library, which enables control of the Arduino's GPIO ports over USB
from the PC [11]. The micro server is developed as a finite state machine, in which state
transitions are triggered with the EventEmitter library for Node.JS. The state transition
diagram is shown in Fig. 9.

Fig. 9. State transition diagram of the web-based remote laboratory micro server

During startup of the micro server, communication is established between the Arduino and the
PC via the Johnny-Five library. After that, the web server is started and the laboratory model
becomes available online. When a user accesses the laboratory model, a timer is started which
grants the user a limited time to perform an experiment (15 min in this case). During that time,
other users cannot access this laboratory model. A minimal sketch of this state logic is given
below.
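The sketch below shows, under stated assumptions, how such an EventEmitter-driven state machine could look. The 15-minute session limit follows the description above, while the state names, event names and overall structure are assumptions for illustration only.

// Hypothetical sketch of the micro server's finite state machine driven by
// Node.JS EventEmitter events; state and event names are illustrative.
const { EventEmitter } = require("events");

const fsm = new EventEmitter();
let state = "INIT";
let sessionTimer = null;

fsm.on("arduino-ready", () => {      // Johnny-Five board connected to the PC
  state = "IDLE";                    // web server started, model available online
});

fsm.on("user-connected", () => {
  if (state !== "IDLE") return;      // model busy: further users are rejected
  state = "BUSY";
  // Each user gets a limited time slot (15 minutes in this case).
  sessionTimer = setTimeout(() => fsm.emit("session-expired"), 15 * 60 * 1000);
});

fsm.on("session-expired", () => {
  clearTimeout(sessionTimer);
  state = "IDLE";                    // laboratory model becomes available again
});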
The client side is written with HTML5, CSS3 and jQuery. The controller for client-laboratory
model communication is written in JavaScript using the jQuery library. There are two open
communication lines between server and client via Socket.IO: the first informs the client of the
availability of that particular model, and the second provides real-time control of the digital
inputs on the Digilent Nexys2 FPGA development board [7]. How these two lines could be combined
with the Johnny-Five pin control on the micro server is sketched below.
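The following is a minimal sketch of the micro server side, assuming Socket.IO for the two communication lines and Johnny-Five for the Arduino Leonardo interface. The event names, pin numbers and port are assumptions and not the project's actual code.

// Hypothetical micro server sketch: Socket.IO for real-time control and
// availability, Johnny-Five for the Arduino Leonardo GPIO interface.
const five = require("johnny-five");
const http = require("http").createServer();
const io = require("socket.io")(http);

const board = new five.Board();            // Arduino Leonardo running Firmata

board.on("ready", () => {
  // Digital outputs wired to the FPGA board's switch and pushbutton inputs
  // (pin numbers are illustrative).
  const switches = [2, 3, 4, 5, 6, 7, 8, 9].map(pin => new five.Pin(pin));
  const buttons  = [10, 11, 12, 13].map(pin => new five.Pin(pin));

  io.on("connection", socket => {
    // Line 1: inform the client whether the model is currently available.
    socket.emit("availability", { busy: false });

    // Line 2: real-time control of the digital inputs on the Nexys2 board.
    socket.on("set-switch", ({ index, value }) => {
      value ? switches[index].high() : switches[index].low();
    });
    socket.on("press-button", ({ index }) => {
      buttons[index].high();
      setTimeout(() => buttons[index].low(), 100);   // short pulse
    });
  });

  http.listen(3000);                        // port is an assumption
});

On the browser side, the jQuery controller would emit the same event names when the user toggles a switch or presses a button on the laboratory model's web page.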

4 Learning Material for Programming and Use of an FPGA Device

One of the aims of this work is to enable the realization of the teaching methodology from [10]
for users who do not have access to organized laboratory resources


(prior knowledge, software, organized laboratory exercises, lecture materials, etc.). As in [10],
the student is required to complete eight homework assignments and five laboratory exercises,
which is not possible without programming and testing designs on a real FPGA device. All
laboratory exercises, homework assignments and lectures are therefore based on [10], but adapted
for the web-based remote laboratory. The homework assignments, laboratory exercises and a short
Verilog course are provided on the web-based remote laboratory web page [12]. The planned
laboratory exercises are:
• Implementation of a 74LS163-like binary counter, with output on LEDs
• 4-bit to 7-segment hexadecimal decoder and 7-segment display driver, connecting the
previously made binary counter so that the current count is shown on the 7-segment display
• Finite state machine based alarm
• Pong game (VGA display)
The relevant web pages provide problems and solutions for the laboratory exercises, so the
web-based remote laboratory also serves as teaching material. Table 1 gives the week-by-week
dynamics of the course.

Table 1. Dynamics of the course given by the weeks

Week | Material                          | Homework/laboratory exercise | Deadline
1    | Lecture 1-4                       | Homework 1-4, Lab. 1         | 7 days
2    | Lab. 2 instructions               | Lab. 2                       | 7 days
3    | Lecture 5-6, Lab. 3 instructions  | Homework 5, Lab. 3A          | 7 days, FSM diagram
4    | Lecture 7-8                       | Homework 6, Lab. 3B          | 7 days
5    | Lecture 9-11, Lab. 4 instructions | Homework 7, Lab. 4           | 7 days
6    | Lecture 12-14                     | Homework 8                   | 7 days
7-12 | /                                 | Final projects               | 4-6 weeks

5 Conclusions

This work presents a web-based remote laboratory for programming and control of an FPGA device,
designed for use within the course “Introduction to Design and Control of Integrated Circuits
for Communication, Sensors and Actuators” at the Faculty of Engineering, University of
Kragujevac. Guided by prior experience with web-based remote laboratories for the course
“Measurement and Control” at the same faculty, the expected results are:
• Deeper student understanding of programming an FPGA device
• Smaller financial overhead for laboratory equipment, compared to traditional laboratory
exercises
• Easier tracking of students' progress during the course
• Availability of laboratory equipment to students 24/7


The outcome of this work is a full implementation of a web-based laboratory for teaching the
programming and control of an FPGA device, with all the documentation needed for students to
successfully pass the course “Introduction to Design and Control of Integrated Circuits for
Communication, Sensors and Actuators” at the Faculty of Engineering, University of Kragujevac.
As the number of students enrolled in this course is small (<20), two web-based experimental
setups will be sufficient. One setup consists of a Digilent Nexys2 FPGA development board, a PC
and an Arduino Leonardo board for controlling the inputs of the FPGA device. The second setup is
cheaper, using the same FPGA device and a Raspberry Pi single-board computer, which programs the
FPGA device via the board's integrated USB-to-JTAG interface and controls it via its
general-purpose input/output pins. Students will be able to control up to twelve inputs of the
FPGA device (eight switches and four pushbuttons), which is sufficient for every task given in
the laboratory exercises. Real-time camera streams will enable students to monitor the eight
LEDs and four seven-segment displays integrated into the board, as well as a VGA monitor
connected to the board. For every exercise, skeleton codes and hints will be available to
students. In addition to overcoming limited resources, doing the exercises from the comfort of
their home will be more pleasant for students.

Acknowledgment. Work on this paper was partly funded by the SCOPES project
IZ74Z0_160454/1 “Enabling Web-based Remote Laboratory Community and Infrastructure” of
Swiss National Science Foundation.

References

1. Heradio, R., de la Torre, L., Galan, D., Cabrerizo, F.J., Herrera-Viedma, E., Dormido, S.:
Virtual and remote labs in education: a bibliometric analysis. Comput. Educ. 97, 14–38 (2016)
2. Farrokhnia, M.R., Esmailpour, A.: A study on the impact of real, virtual and comprehensive
experimenting on students’ conceptual understanding of DC electric circuits and their skills
in undergraduate electricity laboratory. Procedia Soc. Behav. Sci. 2(2), 5474–5482 (2010)
3. Garcia-Guzman, J., Villa-López, F.H., Silva-Del-Rosario, F.H., Ramirez-Ramirez, A.,
Enriquez, J.V., Alvarez-Sanchez, E.J.: Virtual environment for remote access and automation
of an AC motor in a web-based laboratory. Procedia Technol. 3, 224–234 (2012)
4. Abdulwahed, M., Nagy, Z.K.: Systematic evaluation of the use of remote and virtual
laboratories in engineering education. In: 21st European Symposium on Computer Aided
Process engineering – ESCAPE, vol. 21 (2011)
5. Matijević, M.S., Cvjetković, V.M., Filipović, V.Ž., Jović, N.D.: Basic concepts of automation
and mechatronics with LEGO mindstorms NXT. Tehnika 69(4), 653–660 (2014)
6. Mitrović, R., Jović, N., Cvjetković, V., Nedeljković, M., Matijević, M.: Internet mediated
laboratories in engineering education. In: Proceedings of XXII Conference on Development
Trends: “New Technologies in Teaching”, TREND 2016 (2016)
7. Jović, N.: Design of web-based remote laboratory for teaching purposes of programming and
use of an FPGA device. Master’s thesis, Faculty of Engineering, University of Kragujevac,
October 2016
8. Tsai, J.: Design and implementation of an online laboratory for introductory digital systems.
Master’s thesis, Massachusetts Institute of Technology, August 2005


9. Digilent, Inc.: Digilent Nexys2 Reference Manual (2016). https://reference.digilentinc.com/nexys/nexys2/refmanual
10. MIT 6.111: Introductory Digital Systems, Fall 2010. http://web.mit.edu/6.111/www/f2010/
11. Milan, M., Cvjetkovic, V.: Overview of architectures with Arduino boards as building blocks
for data acquisition and control systems. In: 2016 13th International Conference on Remote
Engineering and Virtual Instrumentation (REV), pp. 56–63. IEEE (2016)
12. Web-based remote laboratories aggregator web site (2016). http://cpa.fin.kg.ac.rs

Remote Triggered Software Defined Radio
Using GNU Radio

Jasveer Singh T. Jethra1, Pavneet Singh1, and Kunal Bidkar2 (✉)

1 Remotelabs.in, Pune, India
jasveer@remotelabs.in, pavs94@gmail.com
2 Department of Computer Engineering, Sinhgad Institute of Technology and Science, Pune, India
kunalbidkar13@gmail.com

Abstract. Software-defined radio (SDR) refers to wireless communication in which both the
transmitter and the receiver operate on signals defined by computer software: a single system
can behave as a transceiver and be configured into various radio systems with the help of data
flow graphs. Software Defined Radio is most commonly configured with the popular open-source
library GNU Radio. With the increasing need for laboratories and for multiple devices, colleges
and universities are unable to meet the demands of their students, which limits the students'
ability to perform experiments. Likewise, Software Defined Radio devices are high-cost equipment
that not every college can afford. This increasing demand for Software Defined Radio technology
calls for an optimal solution to the problem. The solution involves bringing Software Defined
Radio and the GNU Radio library to a remote platform using Cloud Computing, Virtual Network
Computing and web technologies, wherein the user can create data flow graphs and run experiments
on the hardware connected to the servers from anywhere across the globe. The benefits of this
Remote Software Defined Radio platform are that it provides hardware access to every user
without the need to purchase the equipment and offers a remote learning experience. It also
means that colleges can get access to readily available hardware in no time, without setting up
an actual laboratory.

Keywords: Remote laboratory · Software defined radio · GNU-radio · Virtual network computing

1 Introduction

In a typical everyday laboratory there is an apparatus or electronic equipment, a computer, and
students performing experiments on these devices. Usually, not much interactive learning goes on
in these laboratories: the students only perform the experiments in the prescribed way and
leave. There is little scope for a student to redo or re-learn these experiments, because the
equipment is not readily available to them very often. Moreover, the student-to-equipment ratio
is high, resulting in poor practical skills of the learner.


2 Remote Laboratories

2.1 Definition

Remote laboratories are laboratories that provide access to hardware present at a physical
location in such a way that the laboratory can be operated from any location other than that of
the device itself [1]. This means that the user need not be physically present to perform
experiments on the hardware.

2.2 Advantages

The advantages of remote laboratories are as follows:


• The user does not have to purchase any equipment.
• S/He can simply connect to the equipment with an internet connection and a web
browser.
• Moreover, the user gets an interactive and user-friendly learning experience.
• He/she need not be present physically in the laboratory.
• There are no time constraints.

2.3 Remote Software Defined Radio


A remote laboratory for Software Defined Radio is a way of accessing the hardware connected to a
computer system; the connection to this system is made with the help of a remote connection
protocol known as Virtual Network Computing (VNC) [3]. VNC provides remote access to and control
of a system, usually called the VNC server. The system from which the server is accessed is
known as the viewer. At all times there is bidirectional communication through a frame buffer
between the server and the viewer, allowing the user to make changes to files on the server
through the Graphical User Interface (GUI) displayed on the viewer.

3 Remote System Architecture

The architecture, given in Fig. 1, provides details of the Remote Triggered Software Defined
Radio system.
• The laboratory computer is installed with the VNC server. This computer runs the GNU Radio
software, which is the software accessed by the remote user.
• The cloud server is configured with Apache Tomcat, and Guacamole is installed in the Tomcat
environment.
• The front-end web application is the interface through which the end user accesses the VNC
connection.
• The client simply has to access the web application from an HTML5 browser to view the
contents of the VNC session.
• The client is able to upload a data flow graph from the web interface.
• The client is able to run real-time experiments on the remote system.


Fig. 1. Remote SDR – system architecture

4 How It Works

There are three main components that need to be configured to ensure that the remote connection
to the apparatus is successful. They are:
• To configure the Laboratory Server with VNC.
• To configure the Cloud Server.


• To configure the Client Application which connects to the Laboratory Server via a
Web Browser.

4.1 Configuring the VNC Server


Before the connection to the laboratory server or computer is made, it needs to be set up so
that bidirectional communication between the laboratory server, the cloud server and the client
can be established. In order to set up the laboratory server or computer, the following must be
done:
• The laboratory computer must be assigned a static Internet Protocol (IP) address so that
there is a permanent address at which to access it. If there is more than one computer
providing access to the hardware, these computers can be connected with the help of
subnetting.
• For a laboratory computer running the Ubuntu operating system (a Linux distribution), the
computer is updated with the latest version of the VNC server software. Software such as
TigerVNC and TightVNC usually performs well in Virtual Network Computing environments and is
highly recommended.
• The VNC server is used to configure the connection and completely define the content that
the end user sees when accessing this computer. The basic architecture of VNC in this context
is shown below (Fig. 2):

Fig. 2. VNC architecture

• The configuration file of the VNC server determines what is shown to the end user. For
example, if the GNOME desktop of Ubuntu should be displayed, the configuration file contains
the runtime executable of GNOME, such as gnome-session; if an application should be started
when the connection is set up, the VNC server

can specify that application in the configuration file so that it is launched at connection
setup.
This laboratory computer runs the Software Defined Radio application, GNU Radio, which the end
user accesses with the help of VNC.

4.2 Configuring the Cloud Server


Once VNC Server is setup, the Cloud Server is configured. The main application of the
cloud server is to direct traffic that is transported across the different connections in
VNC. The Cloud Server acts as a medium to direct the client that is connecting to the
laboratory computer.
It is installed with the LAMP (Linux, Apache, MySQL and PHP) Stack and is
configured to run Apache Tomcat Applications. The system is installed with CentOS 7
and Apache Web Server is configured for the web to run Hyper Text Transfer Protocol
(HTTP) applications. Apache Tomcat helps in running large scale web applications that
run on Java Servlets, Java Server Pages and Web Sockets. Tomcat runs on top of the
Apache Web Server. Database connectivity is done with MySQL and PHP.
The server is also configured with an HTML5 clientless remote desktop gateway called Guacamole,
which is used to connect to the laboratory server. Users connect to the Guacamole server from
their browser [6]. The web application on the Guacamole server can interpret only the Guacamole
protocol. The proxy to which the protocol connects is called guacd; it interprets the contents
of the Guacamole protocol and connects to any number of remote desktop servers on behalf of the
user. guacd is a daemon process that runs in the background alongside the Guacamole application
and listens for TCP connections. The web application cannot interpret the remote desktop
connection directly; hence, guacd forms the core of the Guacamole architecture.
The front-end application is based on the HTML5 canvas element, implemented in popular browsers
such as Firefox and Chrome. Hence, the prerequisite for running Guacamole is that the browser
running the web application supports HTML5; browsers such as Internet Explorer do not support
this functionality. A minimal sketch of such a client page is given below.
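As an illustration only, a browser-side page built on the guacamole-common-js library could look roughly like the sketch below. The tunnel endpoint and element id are assumptions, and method names may differ slightly between Guacamole versions.

// Hypothetical browser-side sketch using guacamole-common-js; the canvas-based
// display is attached to the page and keyboard/mouse input is forwarded.
var tunnel = new Guacamole.HTTPTunnel("tunnel");      // endpoint name is assumed
var client = new Guacamole.Client(tunnel);

// Attach the client's display (rendered on an HTML5 canvas) to the page.
document.getElementById("display").appendChild(client.getDisplay().getElement());

// Forward local keyboard and mouse events to the remote VNC session.
var keyboard = new Guacamole.Keyboard(document);
keyboard.onkeydown = function (keysym) { client.sendKeyEvent(1, keysym); };
keyboard.onkeyup   = function (keysym) { client.sendKeyEvent(0, keysym); };

var mouse = new Guacamole.Mouse(client.getDisplay().getElement());
mouse.onmousedown = mouse.onmouseup = mouse.onmousemove = function (state) {
  client.sendMouseState(state);
};

client.connect();                                     // start the session
window.onunload = function () { client.disconnect(); };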
Guacamole, deployed on the cloud server, interacts with the VNC server present in the
laboratory. The interaction between the client and the VNC server is mediated by the Guacamole
protocol: guacd translates the VNC session into Guacamole protocol instructions, which the HTML5
client renders. The architectural components of the cloud server are shown below (Fig. 3):


Fig. 3. Cloud server architecture

The cloud server must be adequately provisioned to handle the VNC connection traffic and to
serve both the laboratory server and the client systems.

4.3 Configuring Guacamole

Guacamole does not by itself configure or spawn VNC connections to the laboratory computer. The
connection parameters must be set up by an administrator who manages connections to the VNC
server.
A new connection is created in Guacamole with the following parameters [5]:
• Hostname/IP address of the laboratory server.
• Port number on which the VNC server is listening; VNC displays usually map to port 5900 +
display number, starting from 5901 for the first remote display. For example, if the VNC
server is serving display number 2, the port number is 5902.


• The autoretry parameter gives the number of connection attempts made automatically before
an error is reported.
VNC sessions also require the following to authenticate the sessions:
• Username of the laboratory remote computer.
• Password of the laboratory remote computer.
The client cannot request a particular display size from the VNC server and therefore depends on
the server to provide a suitable screen size. However, if the client system has little
bandwidth, the server can reduce the color depth; Guacamole can automatically detect 256-color
images. The following settings must be configured to ensure a suitable display:
• Color depth specifies the color in bits per pixel. This parameter is optional, and the color
depth can be 8, 16, 24 or 32 bits.
• Cursor controls the pointer on the VNC display; if set to remote, the mouse pointer is
rendered remotely. The motion of a remote cursor may be slow, as it depends on the bandwidth
available to the client system.
• Read-only ensures that, if the connection is read-only, no input is accepted from the user,
and the user only sees what other users of the same desktop are doing. This parameter is
optional as well.
To ensure that the user can upload files to the remote system and execute his or her program,
the Secure File Transfer Protocol (SFTP) is enabled on the connection, so that the user is able
to upload to the laboratory system within a VNC connection. SFTP also has parameters that need
to be set up for the upload functionality to work.
They are as follows:
• sftp-hostname is the hostname or IP address of the server hosting the SFTP server.
• The port number is 22 by default, which is the standard port for SSH, the protocol over
which SFTP runs.
• sftp-username is required to authenticate the connection to the server.
• sftp-password is also required for authentication.
• sftp-directory is the default directory on the laboratory server into which the files will
be uploaded.
Since VNC is very versatile, there are many more options that can be set depending on the
functionality required of the system. In the case of the Remote Software Defined Radio, the
parameters above are sufficient to ensure that the system works as expected.

4.4 GNU Radio


The application that will be displayed on the VNC Screen which the users can operate
and create dataflow graphs for their system is GNU Radio.
GNU Radio is a free and open source software that is used to implement software
defined radio by providing blocks of signal processing elements [4]. These elements put


together, form different software-defined radio elements and signal processing operations. GNU
Radio can be connected to RF hardware to run experiments in real time. The external hardware
provided covers a wide range from 400 MHz to 4 GHz. The hardware configures itself based on the
data flow graphs provided by the software, GNU Radio, hence the name Software Defined Radio.
Internal hardware components need not be changed or reconfigured, because the components are
controlled entirely by the software (Fig. 4).

Fig. 4. GNU radio companion

This allows the construction of radios in which the actual waveforms transmitted and received
are defined by software. Since SDRs require a lot of digital signal processing, the computer
system executing the application must be powerful.
GNU Radio blocks are written in Python, and a graphical user interface is used to connect these
signal blocks. Hence, SDR used with GNU Radio is a great option for writing any kind of digital
signal processing application.
The laboratory system that is made accessible is a modern, high-specification system capable of
handling any configuration of the SDR; it is preloaded with the GNU Radio Companion software.
The laboratory system is the access point for the Remote SDR.

5 Results

The Remote Triggered Software Defined Radio connects the hardware to the web application, which
is accessed via a domain. The results of the system are as follows:


Figure 5 shows a GSM receiver program run on the Remote Software Defined Radio system. The
output is displayed in the browser, enabling anytime, anywhere access to the SDR system.

Fig. 5. Accessing remote SDR from web browser to show GSM receiver.

6 Conclusion

Given the cost that goes into implementing a Software Defined Radio system, connecting the SDR
hardware to a remote system is highly beneficial in providing on-demand access to it.

References

1. Jethra, J.S.T., Patkar, S.B., Datta, S.: Remote triggered FPGA based automated system. IEEE,
February 2014
2. Sareen, P.: Cloud computing: types, architecture, applications, concerns, virtualization and
role of IT governance in cloud. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 3(3), March 2013
3. tej Koganti, K., Patnala, E., Narasingu, S.S., Chaitanya, J.N.: Virtualization technology in
cloud computing environment. Int. J. Emerg. Technol. Adv. Eng. 3(3), March 2013
4. Selva, A.F.B., Reis, A.L.G., Lenzi, K.G., Meloni, L.G.P., Barbin, S.E.: Introduction to the
software-defined radio approach. IEEE Lat. Am. Trans. 10(1), January 2012
5. Configuring Guacamole Protocol Web Documentation. https://guacamole.incubator.apache.org/doc/gug/configuring-guacamole.html
6. The guacd server and Guacamole Protocol Web Documentation. https://guacamole.incubator.apache.org/doc/gug/guacamole-architecture.html#guacamole-protocol-architecture

Open Educational Resources

MOOC in a School Environment: ODL Project

Olga Dziabenko1 (✉) and Eleftheria Tsourlidaki2

1 Deusto Foundation, Bilbao, Spain
olga.dziabenko@deusto.es
2 Ellinogermaniki Agogi, Pallini, Greece
eleftheria@ea.gr

Abstract. Unlike schools 15 years ago, contemporary schools use many ICT tools in their classes,
e.g. computers, tablets and smartphones, accompanied by open educational software, OER and apps.
Teachers have gradually turned to more student-centred approaches such as inquiry-based,
game-based, project-based and flipped learning, and learning-by-teaching, to name but a few.
Personalized teaching and learning supported by all these approaches helps schools offer more
effective and efficient education. Although Massive Open Online Courses (MOOCs) have proved to
be helpful in university and adult education, until now they have not been deployed in school
education. The ‘Open Discovery of STEM Laboratories’ (ODL) project exploits this potential and
opens up MOOCs for schools. In this paper, we discuss the first results of this implementation.

Keywords: STEM · School education · mMOOC · Virtual and remote laboratories

1 Introduction

The fast-changing global economy acts like an engine that generates the demands of the
skills that school, college and university graduates are expected to have in order to be
competitive and have a capacity to drive innovation. Therefore, the future prosperity
and social stability depend on the optimal use of our human capital. About 70 million
Europeans [1] lack sufficient reading, writing and numeracy skills, and 40% of the EU
population lack a sufficient level of digital skills. This potentially is one of the main
sources of unemployment, poverty and social exclusion. On the other hand, 40% of
European employers have difficulty in finding people with the right skills to foster
growth and innovation [2]. At the same time, numerically high-qualified young people
work in job positions that do not match their talents and knowledge.
Based on the research “The Survey of Adult Skills”, in 2016 the European Commis‐
sion has adopted a new and comprehensive skills Agenda for Europe to improve the
teaching and recognition of skills - from basic to higher skills, as well as transversal and
civic skills - and ultimately to boost employability. As we see, digital literacy, and
therefore education in applied sciences, engineering, and technologies, is one of the
keys to contributing to the European Commission's first political priority, “A New Boost
for Jobs, Growth and Investment”. It is the responsibility of all education players - schools,


universities and policy-makers to ensure that no-one is left behind and that Europe
nurtures the high-end skills that drive innovation and competitiveness.
The Open Discovery of STEM Laboratories (ODL) project [3] was created in order to introduce the
use of MOOCs into school curricula, in conjunction with the STEM laboratories available online.
ODL offers school teachers a methodology for building micro-MOOCs for their students. Exploring
the MOOC idea in a school context, the consortium determined that, given the content load,
activities and time required, it would be beneficial to divide a MOOC course into several small
learning lessons - micro-MOOCs (activities of 20-40 min in the classroom). A course usually
consists of several lectures, and each lecture in micro-MOOC format includes laboratory work,
theoretical and practical content, assessment and discussion. The suggested structure makes it
easy to embed micro-MOOCs in classroom time and, furthermore, reduces the time required for
creating learning materials.
In this paper we present the first outputs of our project. The main aim is to introduce the
benefits of, and lessons learned from, the integration of micro-MOOCs - an innovative approach
for deploying STEM labs in schools. During the project, the team has created multidisciplinary
MOOCs and trains teachers to design and implement the MOOC approach in their schools. We discuss
the format of MOOCs proposed for application in schools, how to introduce online labs in a MOOC
environment, and how to organize individual and collaborative learning using this instrument. In
order to create our MOOCs, the ODL project partners use the open edX platform [4], on which a
MOOC space was created for the project.
In Sect. 2 we describe the ODL project - its partners, aims and objectives, as well as its
structure. Section 3 illustrates the inquiry-based learning approach used for introducing online
laboratories. An example micro-MOOC is presented in Sect. 4. Section 5 summarizes our
conclusions and introduces possible future work.

2 Open Discovery of STEM Laboratories

The ODL project aims to foster teacher collaboration in creating innovative STEM
school curricula by open discovery of remote and virtual laboratories and their application in
education. The consortium offers schools a micro-MOOC methodology for
transforming separate education materials into coherent lessons. Micro-MOOCs
preserve the principles of open teacher collaboration in STEM curriculum development.
It is planned that teachers will work together on creating micro-MOOCs that will be
united under the theme of one MOOC. In this case diverse national practices will be
applied.
For this purposes the project proposes a MOOC methodology to be used on different
subjects of school curricula and it offers an MOOC platform designed to meet teachers’
needs. The project aspires to train at least 300 school teachers to develop micro-MOOCs
for STEM lessons. By end of the project at least fifty-five micro-MOOCs that include
the use of remote and virtual laboratories will be available on the platform. Students and
teachers from EU school communities will have access openly to all the learning mate‐
rials.


The proposed methodology will help educators find and organize digital learning resources while
designing and delivering personalized instruction in a school environment. Our training will
help teachers to find and evaluate content; collect and organize OERs and remote and virtual
laboratories according to the curricula; build STEM micro-MOOCs; manage lesson plans, content
and student activities - laboratory and practice work; and engage students through
student-centered learning and personalized feedback.
The project focuses on teachers, curriculum designers and administrators, strengthening their
profile by supporting them to deliver high-quality teaching practices and to
adopt new methods and tools. In particular, the project will extend teachers' knowledge
and skills and support new teachers so that they have all necessary competences right
from the start.
The project is designed to enhance digitalization of learning, teaching, and training
by improving accessibility to high quality learning through micro-MOOC, use of OER,
and teacher and school collaboration in modernization of STEM school curricula.

3 Inquiry-Based Learning

Inquiry-based learning is a contemporary educational strategy that aims at constructing
scientific knowledge [5]. In this approach students engage in methods and practices that
simulate how scientists work. During an inquiry activity, students discover connections between
phenomena and practice formulating hypotheses and testing them by conducting experiments and/or
making observations [6]. Inquiry-based learning fosters active participation of the learners and
gives them the opportunity to discover new knowledge. In this framework the learning process is
organized around experiments that aim to explore the relations between dependent and independent
variables. It is important to mention that students explore knowledge that is new to them, but
not novel to science in general.
In the last decade inquiry-based learning has been gaining popularity in science curricula
and has turned out to be a powerful teaching instrument. Modern technological
developments allow the use of inquiry processes with online learning environments and
digital tools that can improve learning outcomes. Educational instructors organize
inquiry-based learning into inquiry phases that together form an inquiry cycle. There
are numerous versions of inquiry cycles proposed that can be found throughout the
literature.
For example, de Jong et al. suggested five distinct general inquiry phases: Orientation,
Conceptualization, Investigation, Conclusion, and Discussion [7, 11]. Some of these phases are
divided into sub-phases. In particular, the Conceptualization phase is divided into two
sub-phases, Questioning and Hypothesis Generation; the Investigation phase is divided into three
sub-phases, Exploration, Experimentation and Data Interpretation; and the Discussion phase is
divided into two sub-phases, Reflection and Communication.
In the ODL project we offer school teachers a simple one: the 5E structure
(ENGAGE, EXPLORE, EXPLAIN, EXTEND, EVALUATE) suggested by Bybee [8].


On the ‘Engage’ stage the teachers aim to capture the students’ imagination and moti‐
vation. Here students get the first introduction to the topic and understand the learning
environment and tools that are used to build the inquiry curiosity. The ‘Explore’ stage
allows to develop students’ critical thinking and to help them explore new things on the
subjects at hand. The ‘Explain’ stage requires from students to explain the involved
phenomena using scientifically correct arguments. At this stage students start to create
a model, discuss the data collected with their peers and the teacher and begin to commu‐
nicate what they have learned. ‘Extend’ is the stage in which students expand their
knowledge on the concept(s) they have studied, make connections to other related
concepts, and apply their understanding to the real world. Finally, through discussion
and disputes students make analyses and evaluate the knowledge they acquired during
the activity.
Depending on the teachers’ and/or students’ needs three scenarios or pedagogical
frameworks are suggested in the project:
• Traditional approach or Confirmation inquiry [9]
• Structured or guided inquiry approach [9]
• Elicited or Open inquiry approach [10].

4 “Light Pollution”: An Example of Micro-MOOC

In order to fit the “rhythm” of a school lesson, we suggest a micro-MOOC structure of between
one and three didactic hours (45 min each), using an inquiry learning cycle and an online STEM
laboratory. The suggested structure makes it easy to carry out micro-MOOC activities during
classroom time.

Fig. 1. Animation - intro to the theme


The key outcome of the ODL project is a collection of micro-MOOCs available for
school teachers. These micro-MOOCs will support the engagement of schools in innovative
approaches to blended education in everyday practice. The Light Pollution micro-MOOC [13] is one
of the micro-MOOCs in our collection.
Every micro-MOOC begins by providing information on the activity (see Fig. 1).
For example, “Light pollution is a global problem that affects us all. In this micro-
MOOC you will have the opportunity to learn more about light pollution and its impact
on the planet”. In addition, requirements that allow the efficient performance of the tasks
in the micro-MOOC are presented. The traditional approach with 5E structure is used
to build this inquiry-based scenario (see Fig. 2).

Fig. 2. 5E structure of inquiry-based scenario

At the introduction stage, the research questions and tasks spark the students' interest in the
problem of light pollution in cities. The introductory videos and animations help to highlight
the problem briefly and in an attractive visual way. Several video presentations, discussions
and multiple-choice self-assessments are the main tools that keep students involved. In this
scenario the STEM online labs used are an interactive map of sky glow and a light pollution
simulator, which assist students in exploring the phenomenon.
This example introduces the basic requirements of the well-designed micro-MOOC,
namely:
– affective engagement of the students;
– a harmonized learning process for students with different knowledge and interests;
– generating curiosity and leading to questions;
– a cognitive conflict;
– scientific investigation and explanation within the competence of the students
involved;
– creating scientific knowledge;


– requiring the students to use inquiry skills to explain the phenomena involved;
– a limited time of use (1-2 lessons for the presentation and application of remote/virtual
labs).

5 Conclusion

Although the ODL project is still at an early stage, it is clear that teachers are interested in
such an approach. They see the micro-MOOC as a tool to open the horizon of STEM subjects to
their students, to embed the use of online labs - which they could never use otherwise - in the
framework of their curriculum, and to broaden their collaboration with colleagues on
multidisciplinary aspects. In this paper we presented the preliminary outputs obtained from the
implementation of the micro-MOOC approach in secondary school classes. Unlike traditional MOOCs,
which can last several months, the ODL micro-MOOCs are adjusted to meet the needs of in-class
activities and last from 20 min to a few class hours. An inquiry-based cycle based on the 5E
stages (ENGAGE, EXPLORE, EXPLAIN, EXTEND, EVALUATE) was introduced. The reader can try the
micro-MOOCs available on the ODL portal [12]. One of them is Light Pollution, which explains the
influence of light on ecosystems and humans and the risks caused by this influence. By gaining
an understanding of light pollution, students can be encouraged to search for solutions that
reduce the negative impact of light pollution on the environment.
In the near future, the project plans to organize a set of workshops which will offer
a discussion for designing new learning materials and will give valuable feedback on
the impact of the proposed methodology to school education. The consortium is planning
to create at least 50 micro-MOOCs offered to schools. The study aims to evaluate the impact of
the proposed methodology on students' knowledge and on increasing their interest in STEM. The
analytics system incorporated in the edX platform will provide us with
the necessary independent data and results.
The results of our trials will be published on the project website
(http://opendiscoverylabs.eu/) and on the Facebook group discussion wall. The study results could be
helpful for secondary school sector representatives, education instructors, parents and
policy makers to respond to current and future education needs.

Acknowledgement. This work was partially funded by the European Union in the context of
the ODL project (Project Number: 2015-1-ES01-KA201-016090) under the ERASMUS+
programme. This paper does not represent the opinion of the European Union, and the European
Union is not responsible for any use that might be made of its content.
We want to thank all ODL partners who contributed to the discussion of the ideas of MOOCs and
inquiry-based learning in the secondary school sector in support of the work performed.

References

1. The Survey of Adult Skills (PIAAC): Implications for education and training policies in
Europe. European Commission (2013)


2. Eurofound, 3rd European Company Survey


3. ODL website. http://opendiscoverylabs.eu/
4. Open edX. https://open.edx.org/
5. Pedaste, M., Mäeots, M., Leijen, Ä., Sarapuu, S.: Improving students’ inquiry skills through
reflection and self-regulation scaffolds. Technol. Instr. Cogn. Learn. 9, 81–95 (2012)
6. Keselman, A.: Supporting inquiry learning by promoting normative understanding of
multivariable causality. J. Res. Sci. Teach. 40, 898–921 (2003). http://dx.doi.org/10.1002/tea.10115
7. Pedaste, M., Mäeots, M., Siiman, L.A., de Jong, T., van Riesen, S.A.N., Kamp, E.T.,
Manoli, C.C., Zacharia, Z.C., Tsourlidaki, E.: Phases of inquiry-based learning:
definitions and the inquiry cycle. Educ. Res. Rev. 14, 47–61 (2015). http://www.sciencedirect.com/science/article/pii/S1747938X15000068
8. Bybee, R.W.: Scientific inquiry, student learning, and the science curriculum. In: Bybee, R.W.
(ed.) Learning Science and the Science of Learning, pp. 25–35 (2002). http://wolfweb.unr.edu/homepage/louisl/Bybee%20learning%20cycle.pdf
9. Banchi, H., Bell, R.: The many levels of inquiry. Sci. Children 46(2), 26–29 (2008)
10. Persano Adorno, D., Pizzolato, N.: An inquiry-based approach to the Franck-Hertz
experiment. Società Italiana di Fisica (3) (2015). doi:10.1393/ncc/i2015-15109-y
11. de Jong, T., Linn, M.C., Zacharia, Z.C.: Physical and virtual laboratories in science and
engineering education. Science 340, 305–308 (2013). http://www.sciencemag.org/
12. ODL MOOC space. http://moocspace.odl.deusto.es/
13. Light Pollution, micro-MOOC. http://moocspace.odl.deusto.es/courses/Ellinogermaniki_Agogi/EA101/2016/about

Survey and Analysis of the Application of Massive
Open Online Courses (MOOCs) in the Engineering
Education in China
Based on a Survey of XuetangX, the World’s Largest
MOOC Platform in the Chinese Language

Yu Long (✉), Man Zhang, and Weifeng Qiao

Tsinghua University, Beijing, China
032225@163.com

Abstract. At present, many Chinese colleges and universities have reformed or are reforming
their teaching model by using MOOCs. In the cultivation of engineering talent, how widely are
MOOCs applied, and are there any difficulties or puzzles encountered in this process? Based on a
survey and analysis of XuetangX, China's largest MOOC platform, this paper introduces the
present situation of the application of MOOCs in China's engineering education. In the meantime,
based on the relevant survey, this paper analyzes the problems encountered in the popularization
and application of MOOCs in China. It also gives some suggestions on the further development and
application of MOOCs in engineering education in China.

Keywords: Engineering education · MOOCs · China

1 Introduction

The rapid development of massive open online courses (MOOCs) has broadened the
application scope of online education and transformed classroom teaching from
“teaching centeredness” to “learning centeredness”. This shift offers a new way for
learners to acquire knowledge. At the same time, according to the research by many
researchers, integrating MOOCs in blended courses contributes to improved motivation
and school performance of students, especially to the cultivation of engineering capacity
of engineering students. At present, many Chinese colleges and universities have
reformed or are reforming their teaching model through the use of MOOCs. During the
cultivation of engineering talents, how well are MOOCs applied and, are there any
difficulties or puzzles encountered in this process?
Based on a survey and analysis of XuetangX (xuetangx.com), China’s largest MOOC
platform, this paper aims to introduce to domestic and foreign scholars the application
of MOOCs in the engineering education in China. In the meantime, based on the relevant


survey, this paper analyzes the problems encountered in the popularization and application of
MOOCs in China. It also gives some suggestions on the further development and application of
MOOCs in engineering education in China.

2 Significance and Value of MOOCs in China

As a new learning and teaching method, MOOCs, by virtue of their large scale and openness, are
impacting the global education landscape. The emergence of MOOCs makes it easier to share
educational resources, realize educational informatization, and pursue self-directed learning
and lifelong learning. Since 2013, MOOCs have mushroomed in China. The wide application of MOOCs
stands to create an entirely new and fairer educational model. Teaching and learning can take
place anywhere at any time, thanks to the Internet-enabled course and lecture videos, embedded
course tests and assessments, and student-teacher interactions provided by MOOCs. This is a case
in which educational resources are truly shared.
In addition, educational informatization ranks among China’s national strategic
layouts. In his letter of congratulation to the International Congress on ICT in Education
(held on May 23, 2015), Chinese president Xi Jinping said that China would advance
educational reforms and innovations, structure a network-based, digital, personalized,
and lifelong educational system, and build China into a learning society in which
everyone learns anywhere at any time [1]. While presiding over an executive meeting of the State
Council on November 15, 2014, Chinese premier Li Keqiang stressed that China
would foster new business formats and new industries, among which online education
is prioritized. The Chinese Ministry of Education (MOE) is also boosting MOOCs in
terms of policy, funding, platform construction, etc. [2].
A large number of scholars have made empirical studies on the relationship
between online learning behavior and learning effect in different environments, and
it is found that the online behavior of learners has an important influence on the
learning effect [3–7]. As shown by the result of the relevant survey [8], the blended
learning integrating MOOC has not only motivated students, but more importantly
has improved students’ ability to learn independently. According to the study, most
students believe that the blended learning offered by MOOC gives them a lot of
freedom as they can flexibly arrange their learning time and develop their own effi‐
cient learning plan. Many students think that the discussion group of MOOC makes
it possible for them to discuss questions in learning with learners from around the
globe so that they can experience the collision of ideas under different thinking
models. As such, the study holds that, the MOOC-based blended teaching model
plays a certain role in improving the teaching quality of higher education, and in
cultivating students’ capacity in collaboration and self-directed learning.


3 Application of MOOCs in Engineering Colleges and Universities of China

This part is based on a data survey of XuetangX, the world’s largest MOOC platform
in the Chinese language. It aims to provide a look at the application of MOOCs in the
engineering education in China.

3.1 Profile of XuetangX

On October 10, 2013, Tsinghua University launched XuetangX, the first-ever MOOC
platform in the Chinese language. Following the foundation of the MOE Research
Center for Online Education in April 2014, XuetangX became the platform for research
exchange and result application of the Center. After three years’ efforts, XuetangX has
emerged as one of MOOC platforms in the Chinese language with the largest course
scales and most active learners. By August 31, 2016, XuetangX had recorded 6.39
million of online sign-ups for the courses, and 4.21 million of registered users coming
from 137 countries and regions across the world. As of that time, XuetangX had opened
875 courses, including 250 from domestic colleges, universities, and institutions. Also,
it had introduced 4 courses from Stanford University, 31 from edx.org, and linked to
575 courses from edx.org. By then, XuetangX had built SPOC (Small Private Online
Course) platforms for 184 schools and institutions, opening 3350 SPOC courses. On
March 1, 2016, guokr.com released the 2015 Global MOOC Ranking List after survey
and statistics involving 14103 MOOC learners. According to the list, the top three
MOOC platforms worldwide with the most high-quality courses were Coursera.org,
edx.org, and XuetangX [9]. According to the statistical report by Xuetangx, “The
construction, operation and application of MOOCs by XuetangX (August 2016)”, the
course construction, the distribution of courses according to disciplines and the person-
time of sign-up and completion of XuetangX MOOCs are as follows [10].
First, XuetangX brings together high-quality domestic and foreign online course
resources, through independent development and introduction from foreign countries.
In addition, XuetangX has managed to cover the specialties of university degree courses and is
developing toward a systematic course offering.
Figure 1 is the distribution of course resources of the 285 opened courses whose
courseware resources are on XuetangX (as of August 2016). As shown by the figure, of
the courses opened at XuetangX, 76.18% are from domestic colleges and universities,
2.62% are from domestic enterprises, 1.31% are from domestic non-business institutions,
and 19.9% are from abroad (including courses from some foreign institutions, colleges
and universities, in addition to those from prominent universities in the United States
and Australia, among others).
Second, Fig. 2 shows the distribution, according to discipline, of the 285 opened courses whose
courseware resources are on XuetangX (as of August 2016). Engineering courses
take the biggest share, accounting for 38.22% of the total courses; science courses come
second, making up 17.8% of the total; economics courses come third, representing
10.21% of the total.


Fig. 1. Distribution of course resources of XuetangX

Fig. 2. Distribution of XuetangX courses according to disciplines

Third, in terms of sign-up and completion, as of August 31, 2016, XuetangX had recorded 6.39
million online sign-ups and 4.21 million registered users, with the average number of sign-ups
per user reaching 1.5. Figure 3 shows the distribution of sign-ups across the various courses.
Of the total sign-ups, 25% are concentrated in economics and management courses, 23% in
computing courses, and 20% in language courses; these three types of courses are the top three
at XuetangX by sign-up. The courses have been run for 973 rounds, with 71,375 person-times
completing the courses and a completion rate of 1.15%.


Fig. 3. Distribution of sign-ups of XuetangX for various courses

3.2 Application of XuetangX in Tsinghua University and Engineering Education

XuetangX has 45,008 users from Tsinghua University, with 243,595 sign-ups and 15,538
completions, and it has imported 706 courses from the University. According to a survey by
XuetangX, the percentage of students who had not attended blended courses dropped from 68% in
the fall of 2015 to 38% in the spring of 2016, which means that more and more students have
become involved in blended learning. The percentage of students who had not completed online
courses also dropped, from 48% in the fall of 2015 to 24% in the spring of 2016. This implies
that online courses have already become part of students' learning activities and that online
learning is known to a growing number of students [11].
Additionally, a total of 52 MOOC-based blended course offerings (38 distinct courses) have been
opened, covering all levels from basic to professional and all disciplines, including

Fig. 4. Distribution of types of blended courses in Tsinghua University


science, engineering, arts, and medicine. There are 14 engineering blended courses, including
Electrotechnics, Circuit Principle, Cloud Computing and Software Engineering. Foundation courses
in engineering account for 18% of the total blended courses (as shown in Fig. 4), and the
percentage of engineering courses is 42% (as shown in Fig. 5).

Fig. 5. Distribution of disciplines of blended courses in Tsinghua University

189 MOOCs provided by Tsinghua University can be found on the website of


XuetangX [12], which include 24 courses in Computing, 8 in Electronic Engineering,
2 in Environment & Earth, 14 in Design and 55 in other engineering specialties. Among
these engineering courses, the most popular courses include “The Design of C++
Programming Language”, “Chinese Architectural History”, “Data Structure”, “Circuit
Principle”, “Java Program Design”, “Fundamental of the Big Data Systems”, “Auto‐
mobile Theory”, “Operating System”, etc. The sign-up person-time of each popular

Fig. 6. Sign-up person-time of the top 8 popular engineering courses


engineering course reached 10,000–30,000 per round (as shown in Fig. 6), while completions
exceeded 600 (as shown in Fig. 7).

Fig. 7. The completion person-time of the top 8 popular engineering courses

4 Problems in the Application of MOOCs to Engineering Education

Through interviews with teachers who use MOOCs for engineering education, we have identified
several problems in the application of MOOCs to engineering education.

4.1 MOOC Resources Are Used in Only a Single Way


We have discovered in our survey that some vocational technology colleges, in particular, have
introduced MOOC resources with great interest, but teachers have divided opinions over the use
of such resources; some even use them passively. According to some teachers, since the college
has introduced MOOC resources (all of which are given by excellent teachers, and many of which
are world-class course resources), they simply take advantage of these resources as they are,
ignoring the college's own positioning for talent cultivation. Meanwhile, some teachers use
these resources in only one way: instead of giving lessons themselves, they simply assign
students to watch the corresponding course videos and complete the tests on the platform, giving
little thought to how to make effective use of these MOOC resources.

4.2 The Supporting Incentive Mechanism of the University Has Yet to Be Established

Yu Xinjie, an associate professor in the Department of Electrical Engineering of Tsinghua
University, was the first university teacher in China to open a MOOC. The course Circuit
Principle that he offers is highly popular with students at home and abroad; as of October 10,
2016, its total sign-ups had reached 96,489. In an interview with Tsinghua University News, Mr.
Yu said: “The making of a MOOC is a drudgery. For example, I need to figure out how to explain
my ideas fluently in front of the camera, and how to turn a 90-minute class into little stories
that each last
zamfira@unitbv.ro
Survey and Analysis of the Application of MOOCs 847

only 5 min or so. Not only the opening, developing, changing, and concluding, but also
each and every detail are crucial, posing the greatest challenge to me [13]”. Therefore,
in addition to a lot of funds, it requires teachers to invest a lot of time and energy to
make good MOOCs.
Apart from the teachers recording MOOC videos, interviews with teachers who use
MOOCs for teaching also revealed that, traditionally, the university calculates teachers’
workload predominantly on the basis of teaching hours. In addition to the teaching hours
scheduled by the university, teachers have to spend a lot of time answering students’
questions on the Internet, yet such input does not count toward their workload. Before
using MOOCs, most teachers only needed to impart to students the knowledge well
known to them and answer their questions once a week. After the introduction of MOOCs,
teachers’ workload doubled or tripled, yet there is no corresponding supporting incentive
mechanism for the calculation of workload, rewarding measures, etc.

5 Suggestions and Opinions for Improving the Application of MOOCs to Engineering Education

5.1 Suggestions for Colleges and Universities Using MOOCs

The Application of MOOCs to Engineering Education Should Be Correctly Understood
Despite the MOOC boom among Chinese colleges and universities, some
people are still unclear about the application of MOOCs to engineering education. For
all its innovations in teaching model and technical method, the MOOC, in essence, remains
an imitation of the actual classroom. As an assistant platform for education and teaching,
the MOOC contributes to improved teaching efficiency, but it is far from a panacea for
education. True, MOOCs may facilitate knowledge acquisition, yet they have limitations
in the cultivation of the ability to apply knowledge, most notably in the application of
knowledge to the hands-on engineering education sector. In other words, to equip future
engineers with comprehensive quality, MOOCs cannot completely replace college
education; in particular, they play a limited role in reinforcing engineers’ occupational
experiences and practices.

Assemble an Excellent Faculty


Science and engineering courses cover a large amount of varied content. It is therefore
imperative that colleges and universities using MOOCs assemble an outstanding faculty
to collaborate in managing, operating, and maintaining MOOC teaching, so that they can
motivate students and improve their performance by making reasonable use of the
available information resources to design rational, scientifically sound teaching methods.

Establish an Effective Guarantee Mechanism


Teachers and their teams need to put in a lot of energy to record MOOC videos. Such
energy cannot be simply calculated by teaching hours; an effective incentive guarantee
mechanism needs to be established to count the efforts of teachers in a more reasonable


way. At the same time, colleges and universities should optimize their hardware and
software facilities and make them compatible with each other. For instance, the form of
desks and chairs in the classroom can be changed to provide students with a cozy learning
environment. Tsinghua University has set an example in this regard: it has equipped its
teaching buildings with roundtable classrooms and “transformers”-like desks so as to
facilitate in-class discussions.

5.2 Suggestions for Engineering Teachers


Keep Improving Their Own Qualities
The immense popularity of MOOCs has increased competition among different coun‐
tries in higher education. On the one hand, top-level global teaching resources are readily
available to learners; on the other, the horizontal comparison presented by MOOCs can
also help teachers identify some of their shortcomings in education and teaching. This,
in turn, dictates that college teachers must keep learning and improving their capabilities
and qualities in knowledge and teaching, explore MOOC-based teaching models, and
enhance teaching results, so that they become more competent at cultivating engineering
talents for the 21st century.

Conduct Effective Teaching Design


As practicality and engineering are the most fundamental characteristics of engineering
activities, engineering education should involve more practical education. In addition,
in engineering course teaching, the training of practical abilities is the core of engineer
cultivation, and learning by doing is a salient feature and essential link in the cultivation
of engineering talents. In light of this, efforts should be made to innovate the
design of MOOC teaching in such a way as to make MOOC an important channel for
imparting knowledge and, at the same time, to allow MOOC to strengthen offline, face-
to-face class interactions between the teacher and students and among students them‐
selves, and to showcase and arrange enough hands-on scenarios. Only in this way can
practical teaching be intensified so as to improve students’ ability to combine theory
with practice and thus solve engineering problems.

Be Clear About the Positioning of Talent Cultivation


Exposed to diversified industrial needs, colleges and universities should cultivate
various kinds of engineers at all levels that meet particular social needs. Not all MOOCs
are suitable for the cultivation of all engineering talents; not all engineering talents need
unified MOOC resources. Teachers from engineering colleges and universities with
industry characteristics need to consider how to combine high-quality MOOC resources
with their industries, while teachers from vocational technology colleges need to
contemplate how to break excellent MOOC resources into small chunks and simplify
them to make them more understandable and acceptable to students, so as to meet the
multi-level, all-around needs for engineering and technical talents.
Given this, MOOC needs to provide engineering students with more and richer educa‐
tional experiences before it can equip engineers with diversified, multi-level qualities.


5.3 Suggestions for XuetangX


Enrich MOOCs by Famous Engineering Teachers
At present, China has built the globe’s largest higher engineering educational system.
So far, China has 41,236 engineering education programs, covering 41.5% of total
programs for undergraduates and junior college students. The number of current engi‐
neering students is 10.4 million, accounting for 38.1% of total current students at colleges
and universities [14]. Prestigious engineering education teachers abound in China, a
nation that has cultivated the most engineering talents in the world. As such, efforts
should be made to leverage the role of these well-known teachers.
Meanwhile, the institutions where these renowned teachers work should establish
effective mechanisms to ensure and promote the development of engineering MOOCs.

Strengthen Multi-language Subtitle Translation of Excellent Engineering MOOCs


According to a survey in 2016, the number of followed foreign-language MOOCs with
Chinese subtitles is 10 times that of those without Chinese subtitles. The 2015 Online
Learning General Survey also found that 82% of Chinese learners said they could not
keep up with a MOOC without Chinese subtitles [9]. Multi-language subtitle translation
is all the more urgent if the engineering MOOCs of Tsinghua University and other
Chinese colleges and universities are to be popularized around the world, and if the
many developing countries covered by the Belt and Road Initiative are to benefit
educationally.

6 Conclusion

In conclusion, based on the aforesaid surveys and analysis, we hold that Chinese
colleges and universities should be rational in the face of the MOOC boom. They are
advised to draw on advanced foreign experience and on the practical outcomes of
teaching reform in Chinese engineering education, and, on this basis, to explore a MOOC
system with Chinese characteristics while making the best use of MOOCs to assist
traditional engineering education. Only in this way can Chinese colleges and universities
grasp the initiative to cultivate top-notch global engineering talents.

Acknowledgment. We thank XuetangX for providing the data about MOOCs, and especially
thank Chairman Nie Fenghua and Associate Curriculum Director Shi Xuelin for their kind help.

References

1. Xi Jinping’s letter of congratulation to the International Congress on ICT in Education, 23
May 2015. http://news.xinhuanet.com/politics/2015-05/23/c_1115383959.htm
2. Li Keqiang chaired a State Council executive meeting, 15 November 2014. http://
politics.people.com.cn/n/2014/1115/c1024-26031983.html


3. Kizilcec, R., Piech, C., Schneider, E.: Deconstructing disengagement: analyzing learner
subpopulations in massive open online courses. In: Proceedings of the Third International
Conference on Learning Analytics and Knowledge, LAK 2013, Leuven, Belgium (2013)
4. Zong, Y., Sun, H., Zhang, H., Zheng, Q., Chen, L.: A logistic regression analysis of learning
behaviors and learning outcomes in MOOCs. Distance Educ. China (5), 14–22 (2016)
5. Jiang, L., Han, X., Cheng, J.: Analysis of the characteristics and learning effects of MOOCS
learners. China Educ. Technol. (11), 54–59 (2013)
6. Hong, M., Liu, M., Yang, J.: A study of the correlation between learning behavior and learning
effect. Coll. Engl. 5(2), 145–148 (2008). Academic Edition
7. Lv, Y., Yi, Y., Deng, C., S.Y.: The influence of network behavior on college students’
academic performance and mental health. Chin. J. Sch. Health 25(2), 250–251 (2004)
8. Mou, Z., Dong, B.: Exploration of blended learning mode based on MOOC. Modern Educ.
Technol. 24(5), 73–80 (2014)
9. MOOC Institute released the 2015 global MOOC rankings. http://www.jiemodui.com/N/
42596.html
10. Data comes from the statistical report by xuetangx.com, “The construction, operation and
application of MOOCs by XuetangX,” August 2016
11. Data comes from the survey report of application of MOOCs in Tsinghua University by
xuetangx.com
12. MOOCs provided by Tsinghua University on xuetangx.com. https://www.xuetangx.com/
courses?org=29
13. Xinjie, Y.: “Circuit Principle”—the first global MOOC by Tsinghua. http://www.tsinghua.edu.cn/
publish/news/4205/2015/20150324091655301219541/20150324091655301219541_.html
14. “Several Thoughts on the Reform and Development of Engineering Education in China”, a
report made by the Chinese Vice Minister of Education Lin Huiqing at the 2015 International
Forum on Engineering Education

Conversion of a Software Engineering
Technology Program to an Online Format:
A Work in Progress and Lessons Learned

Jeff Fortuna(&), Michael D. Justason, and Ishwar Singh

McMaster University, Hamilton, ON, Canada


fortunjj@mcmaster.ca

Abstract. Institutions may have multiple reasons for converting courses and
programs to an online format. The W Booth School of Engineering Practice and
Technology in McMaster University’s Faculty of Engineering has recently begun
the implementation of an online Software Engineering Technology Program as
part of the School’s Degree Completion Programs (the final 2 years of a
3-year-plus-2-year degree). The intent of this conversion is to attract students
from an unlimited geographical area. While the online conversion is ongoing,
there are a number of important observations worth sharing. This paper provides
an overview of the motivation, challenges, and opportunities related to the
conversion of an existing Software Engineering Technology curriculum to a fully
online format. The purpose of this study is to highlight student feedback that
rejects the notion of a ‘flipped-classroom’ in favor of a more traditional
delivery-model (simply converted to an online format). This study also outlines a
suggested implementation model for the conversion of a curriculum to an online
format, with specific suggestions for the increased use of digital media, inter-
active resources, and synchronous online collaboration. Observations regarding
the development of supplementary course material and the resources required to
develop these materials are also provided. Recommendations from this study will
include: a suggested format for online delivery of engineering/technical courses,
suggestions regarding student assessment, a suggested timeline for implemen-
tation, suggested resources (technical support, etc.), suggested technology that
provides the greatest ease-of-use for both instructors and students, suggested
supplementary course materials, and a word about cost.

Keywords: Online · Flipped-classroom · Degree completion · Program conversion

1 Introduction

The W Booth School of Engineering Practice and Technology at McMaster University
in Hamilton, Ontario, Canada is an educational unit with a strong emphasis on
student-centered learning. The school is an amalgamation of a collection of seven
undergraduate Engineering Technology programs and a number of specialized graduate
programs with a focus on engineering practice. The undergraduate student body
numbers about 1300 students.

© Springer International Publishing AG 2018


M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_79

Of the seven undergraduate programs, four are “degree completion pro-
grams” (DCP). These are designed for graduates of post-secondary institutions called
“colleges of applied arts and technology”, or colleges for short (the college concept was
introduced in the province of Ontario in the mid-1960s; college programs are
designed to be applied in nature and offer a diploma upon graduation). McMaster
University’s degree completion programs offer a four-year Bachelor of Technology
degree upon the completion of 24 courses above and beyond the completion of a
three-year college diploma in a related field. The degree completion programs are
designed to be more conceptual and theoretical in nature, to complement the applied
nature of the college diploma. They are also designed to be completed in a part-time or
full-time capacity. Students are often working in industry and completing the program
simultaneously. Generally, more than 50% of the students in the program are working
either full-time or part-time. In 2006 a Computing and Information Technology (re-
named in 2013 to Software Engineering Technology) program was included as a
DCP. The program contains 17 technical courses – similar in nature to Software
Engineering programs. These include courses such as Data Structures and Algorithms,
Software Requirements and Specification, Software Design, Software Architecture, etc.
It also contains 7 management courses, covering areas such as Engineering Economics,
Management Principles, The Management of Technical Projects, etc. The program was
originally introduced as a fully face-to-face offering, with no intention to offer courses
online. This was simply due to the fact that the other three DCPs were also being
offered in the traditional modality.
In this paper, we will present the motivation for converting a face-to-face program
in Software Engineering Technology to one that is fully online. We will then describe
the challenges we faced in designing the program and how we addressed them. Fol-
lowing the challenges, we will detail the current implementation of the program,
including the technology, resources, and assessment approaches we used. Next, we will
present the results of a small survey designed to measure the efficacy of our imple-
mentation. Lastly, we will conclude with key findings from our work.

2 Motivation for an Online Software Engineering Technology Program

Of the four degree completion programs, the software program has lagged in enroll-
ment since its inception in 2006. In a study that our department conducted in 2012, we
summarized the total number of graduates by program stream from colleges in Ontario
that feed students into our programs and the results are summarized in Fig. 1.
While the number of potential students available for our software program was
roughly equal to our manufacturing program, the actual enrollment for software was
roughly half that of manufacturing.
In 2013 we undertook a study to attempt to identify the reasons for low enrollment
in the software program specifically. We arrived at a list of a number of possible
factors that were addressed in ways that are not relevant to this paper. However, one
factor - the geographic constraints of our location in the Hamilton area - was deemed to
be very important. A great many jobs and companies in the computing field are located


Fig. 1.

in the Greater Toronto Area (GTA). Toronto is Canada’s largest city and is a major
business center. Toronto is about 50 miles from Hamilton and the commute is generally
about an hour each way – at the best of times. This commute would be difficult for
students living and working in the GTA. This was directly supported by data from our
study which indicated that the majority of students (55% in 2012) in our software
program came from one local college in Hamilton. As a result, we felt that the single
most important initiative that we could undertake to increase enrollment would be to
make the program accessible to students living and working anywhere in the province.
The software program was deemed to be particularly amenable to an online
delivery modality for a number of reasons:
1. The program historically used computer-based labs with open source implemen-
tations. These can easily be made accessible to students either through local or
cloud-based installations.
2. The students in the program are already “computer savvy” and are very familiar
with the use of technology for communication purposes.
3. The students see computer-based learning as an extension of the same learning
modality they routinely use at the technology companies that employ them.

3 Program Design Challenges

There were a number of challenges in designing and developing a fully online program
in a part-time, degree completion context. We had anticipated these challenges a priori
and attempted to address as many as possible at design time. Unfortunately, for one of
the challenges, we did not provide an adequate solution before the course was run
online for the first time. Herein we will describe the challenges we faced and the
approaches that we used to ameliorate problems.
Our first design challenge was that all students are in various stages of completion
of the program. Therefore, there is no single “date” where all students that signed up for
a “face-to-face” program (before the transition to online) will have finished their
studies. Entry into the program is continuous over three semesters – Fall, Winter and
Summer. In order to address the challenge of not having cohorts in our program, we


opted to gradually phase-in the deployment of online courses with primarily technical
content (SFWR TECH courses). This was effective for three reasons. First, a number of
courses in the management (GEN TECH courses) part of our program had been online
for a few years. Therefore, most students currently enrolled in the program had already
been exposed to these courses which eased their transition to a fully online program.
Second, as we gradually rolled out SFWR TECH online courses over a period of about
two years (the first technical courses were developed in 2015 and all courses are
scheduled to run online after the fall of 2017), we obtained as much feedback as
possible from students regarding our design choices. A sample of that feedback will be
shown in the results section of this work. In any case, by receiving copious amounts of
feedback on an ongoing basis, we could make better choices for upcoming courses.
Third, we advertised that the program was moving in an online direction during all of
our advertising and recruiting activities, so that students were aware of the type of
program that they were signing up for.
A second challenge was that students have a wide range of opinions about what
constitutes a positive educational experience [1]. This is the challenge of managing
student expectations. In order to manage expectations, we attempted to reassure stu-
dents that the courses would be of high quality. We showed the students evidence that
considerable time and effort had gone into the design and deployment of our online
courses. While this was not a guarantee that students would find the efforts satisfactory,
we found that it was critical to reassure all those involved (particularly those that were
skeptical of an online modality – see below) that we were doing our best to deliver a
quality educational product.
There is a wide range of options for the design and delivery of an online course
[2], which created a third design challenge. Unfortunately, in meeting this challenge,
our initial efforts were somewhat off-base. In our initial investigation of pedagogical
approaches to online courses, it seemed obvious that a flipped-classroom approach is
particularly well suited to online courses. The flipped-classroom relies heavily on
supplemental material – often in the form of instructional video – that is meant to be
viewed asynchronously by the student. This is precisely what an online environment
fosters. In fact, the flipped-classroom has been strongly advocated for in a
face-to-face environment as well [3]. Additionally, the management courses that were
already online were making extensive use of this approach, with apparent success.
However, as soon as we presented two technical courses – Data Structures and
Algorithms and Computer Security - online using this pedagogical design, a large body
of the students rejected the approach. In a meeting with the program chair early in the
semester of first offering of the aforementioned courses, student concerns were sum-
marized as:
• Lack of “instruction”.
• Missing the enjoyment of my lectures.
• Video lectures not equivalent to live lectures.
• Balance between lecture time and tutorial time.
• Flipped classroom model is contentious.
• Volume of work outside of the classroom.


Very quickly, a decision needed to be made about whether or not we should
continue with the flipped-classroom approach. It was decided that the student concerns
were NOT focused on the online delivery model per se. The issue, clearly, was one of
pedagogy. We felt that it was not appropriate to hinge the success of our new online
courses on the merits of the flipped-classroom pedagogical model. Since we were
confident that the online program was the most direct way to grow the program, for the
reasons mentioned in the previous section, we simply decided to shift the courses back
to a more traditional model. This amounted to simply delivering the same LIVE lecture
that would have been given in the face-to-face setting via video meeting software. This
live lecture included the same basic methodology that has been traditionally used in
classrooms – lecture slides (with the additional benefit of easy annotation) and a virtual
whiteboard. More details about the implementation will be given in the next section.
Another challenge for the design of an online program was that a bias may exist
against online learning with respect to the efficacy of courses implemented using an
online modality [4, 5]. We observed the same bias amongst faculty members in the
department. Two comments from faculty members when discussing the merits of
online courses are particularly memorable. One commented that online learning was “a
fad” and that once we realize the folly of the approach, we will return to more tradi-
tional “brick and mortar” classrooms. The other comment was focused on the perceived
“critical” role of facial expressions (both from the students and the instructor) in
learning. While there certainly is some evidence [6] that facial expressions may play a
role in interpreting the comprehension level of the students, it is not at all clear how
important that factor is.
In any case, with respect to student biases, we found that, generally, Software
Engineering Technology students were relatively receptive to an online approach a
priori, with the important caveat mentioned above: a lecture should be presented LIVE.
This will be illustrated by our survey results in a following section. Our students rou-
tinely advised us that while there are some things that they feel that they are “missing” in
the online environment (largely centered around interaction with other students – as will
also be illustrated by our survey results), on the whole, given their busy life/work
schedules, the convenience of an online course far outweighs any disadvantages.
Lastly, it should be noted that converting an entire program of 24 courses into one
that uses a completely different modality requires a substantial expense of human
capital and, potentially, the purchase of expensive technology. The majority of this cost
was labor. About a fifth of the cost of development was faculty release time for the
development of materials by the subject matter experts. Two fifths of the cost was for
multimedia production and the remainder was for equipment and other faculty training.
In McMaster University’s case, we had access to a multimedia development group
which was already well-equipped for creating content. Therefore, additional funding
was not required for the purchase of video/audio recording and editing equipment
although, as will be discussed in the next section, some additional equipment was
purchased specifically for the development of our online courses. This included a
smartboard equipped classroom for live lecture delivery and a lightboard for producing
effective whiteboard instructional examples.


4 Program Design Methodology

Each online course in the program was designed according to the following template
given to each course designer:
Guidelines for Developing On-Line Modules/Course
1. List the Main Learning Outcome and sub learning outcomes or sub topics
2. Provide Learning Materials (Asynchronous Mode) - videos, reading material,
powerpoint, interactive learning material
3. Provide Support Material (two types: one for reinforcing prerequisite material and
additional relevant material) – links, powerpoint, papers, other work
4. Provide Self-Assessment Tools & Exercises
5. Provide Interaction, Engagement and Discussion Activities (Synchronous Mode)
6. Provide Learning Outcome Assessment Tools (must be linked with the learning
outcomes) – tests, quizzes, assignments, group work, presentations, discussions
7. Provide Feedback on the learning outcome and module – instructor and student

Guideline for the Midterm & Final Exam


The students taking the online course are required to write midterm and final exams at
the McMaster University campus. Those students who have to travel more than 100 km or
have special needs can arrange for the midterm and final examinations to be written at one
of a network of approved exam invigilation centers.
Note that the course design is not significantly different from a typical face-to-face
offering of the course. The School of Engineering Practice had already standardized the
dissemination of all course materials through McMaster’s learning management sys-
tem. The only major difference is that all instructors are encouraged to provide
recordings of all lecture sessions as provided from our web lecture software (currently
WebEx Training Sessions). Interestingly, we have found that attendance for our online
lectures is very high – close to 100% – without the need for special “incentives”, despite
the fact that all lectures are recorded. This supports the fact that our students are happy
with the online format when they have the ability to interact with the instructor in a live
setting.
Some asynchronous materials were developed specifically for our online courses,
but we did not spend too much time and money developing entire video lectures
because, of course, all of our lectures are delivered live and available for review. Video
development was reserved for tutorials showing worked examples such as narrated
Excel documents or narrated lightboard [7] presentations.
One final note regarding our synchronous lectures – almost universally, students
were not interested in interacting with the instructor using voice communication. They
simply wished to use chat to ask questions as the lecture progressed. This assertion is
supported by the survey results shown in the next section. Our instructors are generally
satisfied with this arrangement, since they now have a record of all questions asked and
they can answer them (using voice) at their convenience, without interrupting the flow
of the lecture.


5 Significant Results from a Recent Survey of Students

Herein we will examine selected results from an ethics approved recent survey where
we received 28 responses from students that took online courses during the calendar
year 2016. In some cases a student may have completed the survey multiple times as
the survey was designed for responses for each course taken.
Significant overall findings include:
• 67.86% of responses were from students employed full-time. 7.14% were employed
part-time. The remainder were not employed.
• 60.71% of responses were from students that would have to travel more than 12
miles to attend classes at McMaster. 10.71% would have to travel more than 48
miles to attend classes.
• 48.15% of responses were from students in the Software Engineering Technology
program. The remainder of the responses were from other DCP students taking our
online management courses.
• When asked which method of synchronous delivery respondents prefer (and they
could check all that applied), 85% of the responses favored Audio and Lecture
Slides (with annotation) and 74.07% preferred Audio and a Virtual Whiteboard.
29.63% preferred Audio and Video of the Instructor Speaking (webcam).
• When asked which type of interaction was preferred, 78.57% of responses were
from students that preferred to use chat. 10.71% preferred to use voice. The
remainder preferred not to interact.
• 46.43% of responses indicated that students had less interaction with classmates in
the online course. 35.75% indicated that they had the same or more interaction. The
remainder indicated that they did not interact at all.
• When asked how satisfied students were of the live component of the online course,
78.57% of the responses were from students that were either very satisfied or
satisfied.
• 67.86% of responses indicated that students found the live portion of the class most
effective.
• 64.29% of responses indicated that students were either very satisfied or somewhat
satisfied with the course as an online offering.
• 53.57% of the responses indicated that students preferred to take the course online
rather than face-to-face.

6 Conclusions

In this paper we have described the motivation for an online Software Engineering
Technology program and some of the challenges we were presented with during the
design of the program. We then detailed our implementation – based on a model that
roughly emulates the traditional face-to-face lecture through the use of live online
lectures with the addition of videos that provide supplemental tutorial support. We
found that students roundly rejected the notion of a “flipped-classroom” because they
deemed actual live instruction time in the form of a lecture to be very valuable.


Students felt that it was more efficient to be directly “instructed” rather than to be
forced to spend too much time conducting independent research. This makes sense
given the time constraints of students that are also working in industry in a full-time or
part time capacity.
The survey results directly support our contention that our focus on the live online
lecture as a replacement for the traditional face-to-face in class experience has been
quite well received, as the vast majority of our students were satisfied with our online
courses and a majority expressed a preference for taking the course online over a
face-to-face delivery even though about 40% of the responses indicated they would
have had to travel less than 12 miles to attend the class. Therefore, we feel that the
design decisions that we made and the implementation (although ongoing) have met
the needs of our student body that must maintain a work/life balance while furthering
their education in the fast-paced field of software design and development.

References
1. Pashler, H., McDaniel, M., Rohrer, D., Bjork, R.: Learning styles: concepts and evidence.
Psycholog. Sci. Public Interest 9(3), 105–119 (2009)
2. Duncan, H.E., Young, S.: Online pedagogy and practice: challenges and strategies. The
Researcher 22(1), 17–32 (2009)
3. Bishop, J.L., Verleger, M.A.: The flipped classroom: a survey of the research. In: ASEE
National Conference Proceedings, Atlanta, GA, vol. 30, no. 9, June 2013
4. Kelly, H.F., Ponton, M.K., Rovai, A.P.: A comparison of student evaluations of teaching
between online and face-to-face courses. Internet High. Educ. 10(2), 89–101 (2007)
5. Redpath, L.: Confronting the bias against on-line learning in management education. Acad.
Manag. Learn. Educ. 11(1), 125–140 (2012)
6. Sathik, M.M., Sofia, G.J.: Effect of facial expressions on student’s comprehension recognition
in virtual educational environments. SpringerPlus. 2, 455 (2013)
7. Gressman, P.: Engaging students through technology symposium 2015 presentation by Phillip
Gressman, 01 October 2015. http://repository.upenn.edu/showcase_videos/97

Increasing the Value of Remote Laboratory
Federations Through an Open Sharing
Platform: LabsLand

Pablo Orduña1(B), Luis Rodriguez-Gil1, Javier Garcia-Zubia2,
Ignacio Angulo1, Unai Hernandez1, and Esteban Azcuenaga3
1 DeustoTech - Deusto Institute of Technology, Bilbao, Spain
{pablo.orduna,luis.rodriguezgil,ignacio.angulo,unai.hernandez}@deusto.es
2 Faculty of Engineering, University of Deusto, Bilbao, Spain
zubia@deusto.es
3 LabsLand, Bilbao, Spain
esteban@labsland.com

Abstract. A remote laboratory is a software and hardware tool that
enables students to access real equipment located somewhere else
through the Internet. This equipment is usually located in universities,
schools or research centers. During the last couple of decades, different
initiatives have emerged dealing with the development and management
of remote laboratories, their integration in learning management systems
or their sharing. This last point is particularly relevant, since remote
labs are a clear example of excess capacity: as they are usually used
only some hours a day, some weeks a year, they could be shared among
institutions to reduce costs or to increase the offer of experiential learn-
ing. However, despite this fact, the overall impact of these laboratories
is fairly limited beyond the scope of the host institution or the scope
(and duration) of projects in which the host institution is involved. The
focus of this contribution is to outline a set of potential reasons for this
fact, and solutions that are being developed to tackle them. After over
10 years working in the area, the WebLab-Deusto research group has
started a spin-off focused on this topic, called LabsLand. A key factor
of this spin-off is to provide a platform similar to other sharing econ-
omy marketplaces, aiming to provide features commonly ignored in the
remote laboratories literature such as trust, accurate reliability or dif-
ferent pricing schemes for different scenarios; as well as the laboratories
that are being initially provided.

1 Introduction
An Educational Remote Laboratory is a software and hardware solution that
enables students to access real equipment located in their institution, as if they
were in a hands-on-lab session, using a standard web browser. The laboratories
are typically deployed in universities or research centers.
A key factor of remote laboratories is that once they are available through the
Internet their usage can be scaled up and used by students of other institutions.

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_80


Thus, two or more institutions can share different equipment to reduce costs by
requiring less duplicated equipment: it is typically only used in certain hours of
the day and in certain days of the year. Furthermore, this empowers a sharing
economy where multiple providers provide access to their laboratories to each
other, freely or not.
In the literature there is a wide range of remote laboratories in many fields
(e.g., robotics, electronics, physics, chemistry). Software frameworks have been
developed to make the development of remote laboratories more affordable (e.g.,
Remote Laboratory Management Systems such as WebLab-Deusto1 [30], iLab
Shared Architecture2 , RemLabNet3 [37] or Labshare Sahara4 [22]) and tools
(e.g., gateway4labs5 [32]) to provide integrations with educational tools (such as
Moodle, Sakai or other LMS, both through ad hoc solutions and through stan-
dards such as IMS LTI) or repositories linking remote and virtual laboratories
(such as Go-Lab [13,21], LiLa [34] or iLabCentral).
However, while the number of remote laboratory initiatives is high, the over-
all impact of these laboratories is fairly limited beyond the scope of the host
institution or the scope (and duration) of projects in which the host institution
is involved. There are cases where the laboratories are regularly used by other
institutions, but these are still exceptions and remote laboratories are not yet
widely used. This is not the case for virtual laboratories (simulations), where
the maintenance costs and work required once developed tend to be low.
In the literature there are studies that identify key elements for this problem:
lack of a technical framework, pedagogic framework or proper strategy. Some
business initiatives have been created focusing also on the sustainability (such
as Labicom6 [39] or RemoteLabs.in), but also with a limited reported impact.
The focus of this contribution is to outline a novel initiative addressing this
scaling problem. After over 10 years working in the area, our research group
has started a spin-off called LabsLand7, focused on this topic. This contribution
outlines the key component developed by this company: a portal that acts as a
repository of remote laboratories supporting multiple providers (relying on exist-
ing interoperability efforts), but which provides a quality assurance mechanism
(not available at this moment in any of the repositories found in the literature),
and based on simple contracts (supporting both free sharing and paid sharing)
that aims to provide reliability to the final users and sustainability for the lab-
oratory providers.

1 http://weblab.deusto.es.
2 http://ilab.mit.edu.
3 http://www.remlabnet.eu.
4 https://remotelabs.eng.uts.edu.au.
5 http://gateway4labs.readthedocs.org.
6 http://labicom.net.
7 https://labsland.com.


Fig. 1. Robot laboratory [9]. At the left, the mobile robot itself. At the right, the user
interface once the program has been submitted.

2 Current Solutions for Sharing Remote Laboratories


This section introduces the concepts of remote laboratories, Remote Laboratory
Management Systems (RLMS), remote laboratory federations and portals for
sharing remote laboratories.

2.1 Remote Laboratories


A remote laboratory is a software and hardware tool that allows students to
remotely access real equipment located in the university. Users access this equip-
ment as if they were in a traditional hands-on-lab session, but through the Inter-
net. To show a clear example, Fig. 1 shows a mobile low cost robot laboratory
described in [9]. Students learn to program a Microchip PIC microcontroller, and
they write the code at home, compile it with the proper tools, and then submit
the binary file to a real robot through the Internet. Then, students can see how
the robot performs with their program through the Internet (e.g., if it follows
the black line according to the submitted program, etc.) in a real environment.
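
As a rough illustration of this workflow from the student's side, the following minimal sketch uploads an already compiled binary to a remote-laboratory HTTP endpoint. The URL, file name and response format are hypothetical placeholders introduced here for illustration; they are not the actual WebLab-Deusto interface.

import pathlib
import urllib.request

LAB_SUBMIT_URL = "https://rlms.example.org/robot-lab/submit"  # hypothetical endpoint

# The student compiles the program at home with the proper tools and obtains a binary.
binary = pathlib.Path("line_follower.hex").read_bytes()

request = urllib.request.Request(
    LAB_SUBMIT_URL,
    data=binary,
    headers={"Content-Type": "application/octet-stream"},
    method="POST",
)
with urllib.request.urlopen(request, timeout=30) as response:
    # A real laboratory would answer with, e.g., a session identifier and the
    # webcam URL where the robot can be watched following the black line.
    print("Laboratory response:", response.read().decode())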
In this line, there are many examples and classifications in the literature
[14,16]. Indeed, remote laboratories were born nearly two decades ago
[1,4,20], and since then they have been adopted in multiple fields: chemistry [5,6],
physics [7,12], electronics [17,27], robotics [35,40], acoustics [41], and even nuclear
reactors [18].

2.2 Remote Laboratory Management Systems


Every remote laboratory manages at least a subset of the following features:
authentication, authorization, scheduling users to ensure exclusive accesses -
typically through a queue or calendar-based booking, user tracking and admin-
istration tools. These features are common to most remote laboratories, and are
actually independent of the particular remote laboratory settings. For example,


an authentication and queuing system is valid both for an electronics laboratory
and for a chemistry laboratory.
For this reason, Remote Laboratory Management Systems (RLMSs) arose.
These systems (e.g., MIT iLabs8 , WebLab-Deusto9 or Labshare Sahara10 ) pro-
vide development toolkits for developing new remote laboratories, as well as man-
agement tools and common services (authentication, authorization, scheduling
mechanisms). The key idea is that by adding a feature to a RLMS (e.g., support-
ing LDAP, a Learning Analytics panels [29] or similar cross-laboratory features),
all the laboratories which are managed with that RLMS will support this feature
automatically.
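
To make the scheduling feature more concrete, the sketch below shows the kind of queue-based, exclusive-access scheduler an RLMS typically offers. The class and method names are hypothetical and do not correspond to any particular RLMS; this is a minimal sketch, not an actual implementation.

from collections import deque
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LabQueue:
    """Hypothetical queue-based scheduler granting exclusive access to one laboratory."""
    lab_name: str
    waiting: deque = field(default_factory=deque)
    current_user: Optional[str] = None

    def request_access(self, user: str) -> int:
        """Enqueue a user; return 0 if the laboratory is granted immediately,
        otherwise the position in the waiting queue."""
        if self.current_user is None and not self.waiting:
            self.current_user = user
            return 0
        self.waiting.append(user)
        return len(self.waiting)

    def release(self) -> Optional[str]:
        """Free the laboratory and promote the next waiting user, if any."""
        self.current_user = self.waiting.popleft() if self.waiting else None
        return self.current_user

queue = LabQueue("electronics-lab")
print(queue.request_access("alice"))  # 0: alice holds the laboratory
print(queue.request_access("bob"))    # 1: bob waits in the queue
queue.release()                       # alice finishes; bob is promoted
print(queue.current_user)             # 'bob'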

2.3 Federating Remote Laboratories

As previously stated in the introduction, a key factor of remote laboratories is
that once the laboratory is available on the Internet, it can also be shared with
other institutions.
To do this, there are three general approaches:

– Leave the laboratories completely open, so whoever wants to use them can use
them. This may reduce the chances of providing proper Learning Analytics or
supporting proper accountability mechanisms, in addition to avoiding priori-
ties among students coming from different institutions, leading to a tradeoff
between accessibility and advanced features [32].
– Share accounts between the different RLMS: if University A wants to use
laboratories of University B, then someone in University A will provide a list
of usernames to University B and students will go to this institution using
credentials in University B. Ideally, some federated authentication could be
used to avoid providing credentials in different domains (such as Shibboleth,
OAuth or similar), but it is not typically the case.
– Federate laboratories: if a RLMS supports federation, then if installed in
two different institutions (e.g., University A and University B ), students of
University A will go to the RLMS of University A and they will transparently
use laboratories in University B, working on an institution-to-institution basis
(so University B does not need to know the list of students of University A
and simply rely on an existing agreement with that university).

From the items in this list, the most advanced mechanism is the federation
of remote laboratories through proper protocols oriented to market-like situa-
tions. These federation protocols have been used for fostering interoperability
between RLMS [30]. These interoperable bridges between different systems can
be enhanced if properties such as transitivity or federated load balance are pro-
vided [28].
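
As a hedged sketch of the institution-to-institution idea behind the third approach, the code below uses hypothetical names and a deliberately simplified credential check (it is not the actual WebLab-Deusto federation protocol): the consumer RLMS authenticates its own student locally and then requests a reservation from the provider RLMS under a pre-existing agreement, so the provider never needs the consumer's user list.

from dataclasses import dataclass

@dataclass
class FederationAgreement:
    """Hypothetical agreement between two institutions covering one laboratory."""
    consumer: str        # e.g. "University A"
    provider: str        # e.g. "University B"
    laboratory: str      # e.g. "electronics-lab"
    shared_secret: str   # credential agreed institution-to-institution

class ProviderRLMS:
    """Provider side: validates only the institutional credential, never individual students."""
    def __init__(self, agreements):
        self._agreements = {(a.consumer, a.laboratory): a for a in agreements}

    def reserve(self, consumer: str, laboratory: str, secret: str) -> bool:
        agreement = self._agreements.get((consumer, laboratory))
        return agreement is not None and agreement.shared_secret == secret

class ConsumerRLMS:
    """Consumer side: authenticates its own students locally and forwards reservations."""
    def __init__(self, name: str, agreement: FederationAgreement, provider: ProviderRLMS):
        self.name, self.agreement, self.provider = name, agreement, provider
        self.local_students = {"alice", "bob"}   # managed locally, never shared

    def access_remote_lab(self, student: str) -> bool:
        if student not in self.local_students:   # local authentication step
            return False
        return self.provider.reserve(self.name, self.agreement.laboratory,
                                     self.agreement.shared_secret)

agreement = FederationAgreement("University A", "University B", "electronics-lab", "s3cret")
provider = ProviderRLMS([agreement])
consumer = ConsumerRLMS("University A", agreement, provider)
print(consumer.access_remote_lab("alice"))  # True: reserved institution-to-institution
print(consumer.access_remote_lab("eve"))    # False: not a student of the consumer institution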

8 http://ilab.mit.edu.
9 http://weblab.deusto.es.
10 http://github.com/saharalabs.


2.4 Remote Laboratory Portals

In the literature there are different portal solutions that provide listings of virtual
and remote laboratories. In [8], 13 repositories are analyzed, 6 of which involve
remote laboratories (the rest are virtual laboratories - simulations), and one
additional repository is presented (golabz). Most of these repositories provide a por-
tal with more or less features, including: social features (e.g., rating resources,
adding comments, tags), materials (users’ materials, students’ materials, teach-
ers’ materials) or supportive apps. In particular, the Go-Lab portal (golabz11 )
provides all these features, providing support for both remote and virtual labora-
tories, a pedagogic framework, tools for sharing and reusing pedagogic contents
and tools for publishing results.

3 Access Economy Platforms

The last years have seen the rise of the access economy platforms, very often also
known as sharing economy platforms. Those companies rely on a business model
where their services are traded on the basis of access, as opposed to ownership.
This relatively novel term arose as a correction to the sharing economy one
to emphasize the fact that many of the major players do not actually share
anything. Instead, they provide the means for a provider to temporarily lend
something to a consumer.
A representative example is Airbnb12 . Airbnb is an online apartment and
room rental platform. It differentiates two types of user roles: hosts—who provide
the room or house—and guests—who pay for it. The platform provides significant
value to both. To hosts, they provide the means to get their property known,
to manage their payments easily, to schedule stays, to review potential guests,
etc. To guests, they provide the means to find rooms or whole apartments and
rent them easily, compare them in a competitive environment, be able to assess
the quality and security of the host through the rating and host identification
system, etc.
Another interesting example is BlaBlaCar13. It is a car ride-sharing platform
that enables drivers and passengers to organize shared rides. It is particularly popular
in Spain. Whenever BlaBlaCar drivers are going to drive to a particular place,
they may post the details in the platform, along with the number of available
seats and the amount they charge. Then, those interested can join them. It is
noteworthy that, unlike Airbnb, the focus of Blablacar is not for the drivers to
earn money, but, simply, to let them recover part of the expenses of the trip.
Thus, they actually limit the maximum amount to charge depending on the trip.
While in this contribution we refer to this phenomenon as an access econ-
omy or a sharing economy [24,25] (its most popular term nowadays), this phe-
nomenon is not new and has received different names in the literature [2].
11 http://www.golabz.eu.
12 https://www.airbnb.com.
13 http://blablacar.com.


Regardless of the term used, they all agree that a key feature of all these plat-
forms is delivering verifiable trust. Airbnb co-founder Joe Gebbia [10] points out
trust as a key feature for Airbnb since both guests and hosts need to trust each
other to run the service, explaining how the Airbnb reputation system does not
change much with few opinions but after a threshold of around 10 (good) opin-
ions, the chances of one trusting that person increase considerably. Indeed, trust
is analyzed in studies [43] which compare the results of the reputation system of
Airbnb with hotels opinions in TripAdvisor (where the average punctuation is
lower than Airbnb), pointing out some potential reasons (for example, individual entre-
preneurs offering rooms on Airbnb may be more selective about which guests to
accept so as to avoid a bad review, or may reset a listing to a fresh property
page to avoid showing past bad reviews).

4 Discussion
As mentioned in Sect. 2.4 Remote laboratory portals, some repositories (such
as golabz) for virtual and remote laboratories do provide social features (e.g.,
rating resources, adding comments, tags), materials (users’ materials, students’
materials, teachers’ materials), as well as supporting applications, a pedagogic
framework or tutoring platforms [3].
However, none of the existing portals provides trust mechanisms other than
user-based ratings, and typically those portals supporting user-based ratings do
not have many ratings and offer little context about who provided the rating,
how much that person has used the tool, etc.
This might be a minor issue in certain environments, where trust in the
tool is not so critical, such as when the tool is reliable (e.g., it is a simulation)
and it is always freely available. However, in the field of remote laboratories,
sustainability remains an open problem [23]. An important advantage is that
two institutions can share real laboratories, real equipment, reducing the costs
if these costs are somehow shared. However, regardless of the federation protocols
built (see Sect. 2.3), it is not possible to engage different institutions in using
each others’ laboratories if there is no reliable mechanism to trust the reliability
of the laboratories and the ability of the host institution to fix the problems if
they appear.
However, there is usually no way for the existing remote laboratory portals
to know whether the laboratories are running or not, how often failures happen
(e.g., what was the percentage of uptime during the last 9 months?) or how
long it usually takes the laboratory owners to fix them. The portals are
additionally unable to track the usage of the tools by third parties (they can do
so only for their own resources) and to publicly display this information
in a way useful for teachers (e.g., which laboratories are more popular in the
repository?). Furthermore, the existing portals do not embrace the ability to
manage the usage of the laboratories or to handle potential payments.
We consider that this is a key factor behind the limited spread of remote
laboratories among different institutions (as compared with their wide usage in


the host institutions [36,42]). No RLMS and no portal provide the ability to
manage payments properly (not necessarily by paying per access, but through
fixed rates or virtual mechanisms where the more someone uses your labs, the
more you can use labs of other providers), and even if they did, no portal has
the ability to provide real data on how reliable and trustworthy a particular
laboratory is. Without such information, the trust relation completely relies on
direct relationship between the provider and the consumer (where the consumer
must trust on the provider because they know each other or other reputation
system).
With such a portal, and with the technologies, pedagogic frameworks and
tools already demonstrated in the literature, we consider that it will be possible
to engage different providers and different consumers to use remote laboratories.
Only if this adoption happens will it be possible to foresee a sustainable and
maintainable model for a distributed network of remote laboratories.

5 LabsLand Portal
LabsLand is a spin-off of the WebLab-Deusto14 research group. As part of
WebLab-Deusto, the team has worked on:
– A set of remote laboratories, including physics, electronics or biology.
– An Open Source Remote Laboratory Management System called WebLab-
Deusto for the development of remote laboratories. This RLMS is used in
a number of universities in different countries (Spain15 , Slovakia16 , Brazil17 ,
Serbia [26] or Georgia).
– A federation model and protocol for sharing laboratories in a market-like
decentralized environment [28].
– A set of tools for interoperability with other RLMS: both ad hoc [30,33] and
through a collaboratively developed and Open Source system called gate-
way4labs [31,32].
Now, as part of LabsLand, and building on this experience while maintaining
the tools mentioned above as Open Source, the spin-off has been created to deal
with the pitfalls pointed out in this paper. The spin-off, in addition
to exploring business models around remote laboratories, provides a centralized
portal (see Fig. 2) that will act as a technology-agnostic marketplace for remote
laboratory consumers and providers. The portal does not rely only on its own
technologies (such as WebLab-Deusto), but also supports external providers. To
this end, interoperability efforts have been placed in gateway4labs18 to support
external remote laboratory providers, including the iLab Shared Architecture,
RemLabNet, UNR-FCEIA or repositories including remote laboratories such
as ViSH.
14 http://weblab.deusto.es.
15 http://weblab.ieec.uned.es.
16 http://weblab.chtf.stuba.sk.
17 http://weblabduino.pucsp.br/weblab/.
18 https://github.com/gateway4labs/.


Fig. 2. LabsLand portal. It will be publicly available by the beginning of 2017.

The portal tries to guarantee trust from the very beginning, by regularly and
automatically checking the existing resources (with several different types of
automatic checks) so as to provide teachers with trustworthy information on which
laboratories are reliable and to what degree, and by enforcing policies that require
remote laboratory providers to be clear about when each laboratory is going
to be available. The portal will not penalize a remote laboratory provider
for not working 24/7: it will just require remote laboratory providers to define
in which time ranges the laboratory should be available and penalize those not
being online during the defined time. In addition to this, public opinions (only
from those having used the platform) and all the D.R.E.A.M.S. framework values
are considered in its design.
The LabsLand portal emphasizes certain technical characteristics and fea-
tures to provide and guarantee high quality and standards.

5.1 HTTPS Communications


Experience has shown that the schools, universities and other institutions in
which remote laboratories are normally used tend to be environments that are
particularly vulnerable to security issues. Untrusted networks and not particu-
larly security-aware users are relatively common in this environment. Consider-
ing that some personal information (e.g., mail addresses, names, marks) or even


payments may be involved, it is of utmost importance to provide security at the
network level by at least encrypting communications through the HTTPS pro-
tocol. Thus, communications cannot be intercepted or modified even
on insecure networks such as open school WiFi networks.
In line with this, the LabsLand portal and related systems rely on the Cert-
bot19 developed by the Electronic Frontier Foundation to obtain and automat-
ically renew the certificates for its domains, thus ensuring that endpoints such
as https://labsland.com or https://login.labsland.com are protected.
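
Certbot itself takes care of issuing and renewing the certificates. As a complementary, hedged illustration (a generic check written for this paper, not LabsLand's actual tooling), the following sketch opens a TLS connection to an endpoint, which also verifies the certificate chain and hostname, and reports how many days remain before the certificate expires.

import socket
import ssl
import time

def days_until_cert_expiry(hostname: str, port: int = 443) -> float:
    """Open a TLS connection (verifying chain and hostname) and return how
    many days remain before the server certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400.0

for endpoint in ("labsland.com", "login.labsland.com"):
    print(endpoint, round(days_until_cert_expiry(endpoint), 1), "days until expiry")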

5.2 OAuth and SAML


A portal such as the LabsLand portal needs to integrate with schools, universities
and other institutions which rely on very heterogeneous systems. The integration
needs to be, for them, as seamless and straightforward as possible. This will often
involve providing a Single Sign-On mechanism, by supporting whatever identity
provider or protocol they use for that purpose.
Some of the most remarkable protocols for which the LabsLand portal has
developed support are OAuth and SAML. OAuth [19] is a protocol used by iden-
tity providers such as Google, Facebook or LinkedIn. By supporting OAuth, with
only minor differences, the LabsLand portal supports authentication through all
of the aforementioned providers. SAML [11] is also a standard protocol for the
same purpose, which is also used by several providers, and supported.
Through those capabilities, a particular institution that relies, for example,
on Google Apps (as is the case for many schools and universities), can easily be
integrated into LabsLand, without needing to explicitly create or store passwords
for their students, or to provide them access. LabsLand can simply check the
identity of their users through their own service (in this case, Google).
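The sketch below illustrates the standard OAuth 2.0 authorization-code flow that this kind of integration relies on, here using the requests-oauthlib library and Google's public endpoints as an example; the client identifiers, redirect URI and callback URL are placeholders, and this is not the portal's actual implementation.

```python
from requests_oauthlib import OAuth2Session

# All identifiers below are placeholders for illustration.
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"
REDIRECT_URI = "https://portal.example.org/oauth/callback"

AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"
TOKEN_URL = "https://oauth2.googleapis.com/token"
USERINFO_URL = "https://openidconnect.googleapis.com/v1/userinfo"

def start_login():
    """Step 1: build the URL of the identity provider's consent page."""
    oauth = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI,
                          scope=["openid", "email", "profile"])
    authorization_url, state = oauth.authorization_url(AUTH_URL)
    return oauth, authorization_url, state

def finish_login(oauth, callback_url):
    """Step 2: exchange the authorization code returned in the callback URL
    for a token, then ask the provider for the user's identity."""
    oauth.fetch_token(TOKEN_URL, client_secret=CLIENT_SECRET,
                      authorization_response=callback_url)
    return oauth.get(USERINFO_URL).json()

# Usage: redirect the browser to `authorization_url`; when it comes back to
# REDIRECT_URI, pass the full callback URL to finish_login() and read the
# returned profile (e.g. its "email" field) to identify the user.
```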

5.3 Google Classroom and LMS


Related to the above, a common use case nowadays is for teachers to rely on
Google Classroom for their homework, tasks, and activities. The LabsLand por-
tal has dedicated significant efforts to guarantee a proper integration with this
service. From the users’ perspective, the process is straightforward. If the whole
school needs to access the experiments, no particular steps need to be taken. If
only a specific class needs to access the experiments, then the teachers provide
LabsLand with the Google Classroom class identifier; nothing else is required.
Internally, Google Classroom exposes an API that can be accessed through
OAuth. Once LabsLand has used the OAuth mechanism to authenticate the
students and to verify the school they belong to, it uses a LabsLand Google
App and that API to verify that the specific user also belongs to the specific
class.
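As a sketch of that membership check, the snippet below queries the Google Classroom REST API (courses.students.get) through the google-api-python-client library; the credential handling is simplified and the course and user identifiers are placeholders, so it should be read as an illustration of the idea rather than as LabsLand's implementation.

```python
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

def is_student_in_class(credentials, course_id: str, user_id: str) -> bool:
    """Return True if the given user is enrolled as a student in the given
    Google Classroom course; `credentials` are previously obtained OAuth
    credentials with an appropriate Classroom rosters scope."""
    service = build("classroom", "v1", credentials=credentials)
    try:
        # courses.students.get returns the Student resource if the user is
        # enrolled, and fails with HTTP 404 otherwise.
        service.courses().students().get(
            courseId=course_id, userId=user_id
        ).execute()
        return True
    except HttpError as error:
        if error.resp.status == 404:
            return False
        raise

# Usage (placeholders): allowed = is_student_in_class(creds, "1234567890", "me")
```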
Though we have emphasized Google Classroom due to its novelty and ubiq-
uity, it is noteworthy that other LMS such as Moodle are also supported.
19 https://certbot.eff.org/.


5.4 Quality Control

Teachers often need to trust that the remote laboratories they intend to use will
indeed work. Traditionally, in the remote laboratory environment, there have
been difficulties in that respect. Maintaining a remote laboratory takes effort
and resources, and, due to their nature and mechanical components, remote
laboratories are often prone to failures.
LabsLand has designed and developed a system to automatically run distributed
tests against deployed laboratories, both internal (LabsLand-provided) ones and
external ones. This way, it will be possible to obtain reliability and quality data
that will let potential laboratory users make conscious and informed decisions
about which labs they rely on, and to what extent.
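A minimal sketch of such an automated check is given below: a probe periodically requests a status endpoint of each laboratory and records the outcome so that reliability statistics can be derived later. The endpoint URLs and the idea of a plain HTTP health check are assumptions for illustration; the real system presumably uses richer, laboratory-specific tests.

```python
import time
from datetime import datetime, timezone

import requests

LABS = {
    # Placeholder health-check endpoints for the labs to be monitored.
    "arduino-robot": "https://labs.example.org/arduino/status",
    "bifi-pendulum": "https://labs.example.org/pendulum/status",
}

def check_lab(url: str, timeout: float = 10.0) -> bool:
    """Return True if the lab answers with HTTP 200 within the timeout."""
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

def run_checks(results: list) -> None:
    """Run one round of checks and append (timestamp, lab, ok) tuples."""
    now = datetime.now(timezone.utc)
    for name, url in LABS.items():
        results.append((now, name, check_lab(url)))

if __name__ == "__main__":
    history = []
    for _ in range(3):  # three rounds, e.g. scheduled every few minutes
        run_checks(history)
        time.sleep(5)
    for ts, name, ok in history:
        print(ts.isoformat(), name, "online" if ok else "offline")
```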

6 Base Laboratory Offer

As described in previous sections of this paper, the value of an access economy
(or sharing economy) platform as a whole depends on the number of consumers
and providers that participate in the network. The higher the total number,
the greater the value for each individual one. Thus, in order to be able to offer
significant value from the start, LabsLand has designed an initial laboratory
offer that is intended to cover a relatively wide audience.
Some of those laboratories are developed and maintained by LabsLand itself,
while others are made available through agreements with specific universities.

6.1 The LabsLand Arduino Robot Laboratory

One of the most interesting of these laboratories is the Arduino Robot Lab-
oratory, which is designed, developed and maintained by LabsLand. This lab
is oriented to robotics students of all ages, or even to young programming stu-
dents. The equipment is a line-following robot controlled with an Arduino. It has
several sensors, including those to follow the line, several proximity sensors, etc.
The students can program the robot (by programming the Arduino), and
then see how their program behaves in the real robot through a webcam. There
are also buttons, a serial monitor and other devices available, so the student can
actually interact with the robot in real-time.
Because, as explained, it is intended to be useful for a wide audience, there are
two different programming interfaces available. First, young students (or those
who simply are not particularly inclined to Arduino C-like programming) can use
Blockly20. Blockly is an open-source, Scratch-like21 visual programming language
created by Google. The blocks that the LabsLand Arduino Robot Laboratory
provides match the robot’s library quite closely, so even through the blocks it is
possible to obtain a relatively low-level understanding of the robotics involved.

20 https://developers.google.com/blockly/.
21 https://scratch.mit.edu/.


Fig. 3. Arduino Robot Laboratory.

Second, those with some programming knowledge can program the robot directly
in the Arduino C-like language, through a simplified online IDE.
Figure 3 shows the actual robot to the left, and the interaction user interface
of the Arduino robot laboratory to the right.

6.2 BIFI Kinematic Laboratories

Physics is particularly important in the curriculum of young students. Traditionally,
schools have had difficulty providing practical and experimental experiences
to reinforce their theoretical contents. Teachers that have been contacted
have expressed different reasons why this may be the case. Among other
reasons, it may be because schools have limited access to equipment. Also, pro-
gramming and executing laboratory experiences requires significant time and
effort, especially for the teachers. They need to reserve laboratory time, thor-
oughly program the experience, check and prepare the equipment, ensure that
the students know the safety guidelines and respect them and the equipment, etc.
Remote laboratories provide a solution for these issues. Schools and other
institutions can share the equipment so there is no purchasing cost, and the
experience is always already set up. No time is lost on bureaucracy, travel
or setting up. Safety is guaranteed for both the students and the equipment.
With this goal, LabsLand has purchased a set of Kinematics laboratories
developed by BIFI22 . These laboratories let students experiment with free-fall,
pendulums and springs.
Figure 4 shows the BIFI laboratories. Although they might seem to be a single
laboratory, they are actually three distinct ones. The free-fall one is on the left,
the pendulum is in the middle, and the spring is on the right. The black device in
the front is a single camera, which is shared by the three laboratories.

6.3 The Relle Laboratories


The Relle23 platform provides a number of high-quality laboratories created
and maintained by the UFSC (Universidade Federal de Santa Catarina), from

22 http://bifi.es.
23 http://relle.ufsc.br/.


Fig. 4. BIFI laboratories equipment. Source: [15].

Fig. 5. Relle AC electrical panel. Source: [38].

Brazil. Certain Relle labs will be offered through the LabsLand portal. This
serves two main goals. Firstly, it adds direct value to the portal users through
a wider selection of useful, quality labs. Secondly, it demonstrates the federated
and inclusive nature of the LabsLand portal by integrating from the beginning
laboratories from different providers and different frameworks.
Among others, some notable Relle laboratories are the DC Electrical
Panel and the AC Electrical Panel laboratories, which provide access to physical
boards that let students carry out different basic electrical calculations on
basic electronic components. Other interesting examples are the Optical Laboratory,
which lets students experiment with optical lenses, and the Microscope,
which lets students examine different samples with a microscope in real time.
Figure 5 shows the Relle AC Electrical Panel, which lets students predict
the voltage and current values for several circuits and compare their
predictions with reality.


7 Conclusions
The daily usage of remote laboratories has been reported in the literature. How-
ever, while the number of remote laboratory initiatives is high, the overall impact
of these laboratories is fairly limited beyond the scope of the host institution or
the scope (and duration) of projects in which the host institution is involved.
In this contribution, existing efforts have been described from a technical
perspective, and sharing economy platforms have been analyzed, identifying
factors such as trust as key to their success. Thus, the contribution suggests
that this factor must be addressed by any portal attempting to encourage the
adoption of remote laboratories beyond the scope of the host institution and
related projects or direct relationships.
Finally, the contribution presents the LabsLand portal (part of a spin-off
of the WebLab-Deusto research group), which attempts to provide these features
so as to foster the adoption of remote laboratories and thus achieve their
sustainability and maintainability. The contribution describes the features
and design philosophy of this portal, which will be available in early 2017.

References
1. Aktan, B., Bohus, C., Crowl, L., Shor, M.: Distance learning applied to control
engineering laboratories. IEEE Trans. Educ. 39(3), 320–326 (1996)
2. Böckmann, M.: The shared economy: it is time to start caring about sharing; value
creating factors in the shared economy. In: 1st IBA BT Conference, vol. 1 (2013)
3. Cao, Y., Tsourlidaki, E., Edlin-White, R., Dikke, D., Faltin, N., Sotiriou, S., Gillet,
D.: STEM teachers community building through a social tutoring platform. In:
Advances in Web-Based Learning, ICWL 2015, pp. 238–244. Springer (2015)
4. Bohus, C., Crowl, L.A., Aktan, B., Shor, M.H.: Running control engineering exper-
iments over the Internet. IFAC Proc. Vol. 29(1), 2919–2927 (1996)
5. Cedazo, R., Sanchez, F., Sebastian, J., Martı́nez, A., Pinazo, A., Barros, B., Read,
T.: Ciclope chemical: a remote laboratory to control a spectrograph. In: Advances
in Control Education, ACE (2006)
6. Coble, A., Smallbone, A., Bhave, A., Watson, R., Braumann, A., Kraft, M.: Deliver-
ing authentic experiences for engineering students and professionals through e-labs.
In: 2010 IEEE Education Engineering (EDUCON), pp. 1085–1090. IEEE (2010)
7. Del Alamo, J., Brooks, L., McLean, C., Hardison, J., Mishuris, G., Chang, V.,
Hui, L.: The MIT microelectronics weblab: a web-enabled remote laboratory for
microelectronic device characterization. In: World Congress on Networked Learning
in a Global Environment, Berlin, Germany (2002)
8. Dikke, D., Tsourlidaki, E., Zervas, P., Cao, Y., Faltin, N., Sotiriou, S., Sampson,
D.G.: Golabz: towards a federation of online labs for inquiry-based science educa-
tion at school. In: 6th International Conference on Education and New Learning
Technologies (EDULEARN) (2014)
9. Dziabenko, O., Garcı́a-Zubia, J., Angulo, I.: Time to play with a microcontroller
managed mobile bot. In: 2012 IEEE Global Engineering Education Conference
(EDUCON), pp. 1–5. IEEE (2012)
10. Gebbia, J.: How Airbnb designs for trust. In: TED Talks, February 2016
11. Geer, D.: Taking steps to secure web services. Computer 36(10), 14–16 (2003)


12. Gillet, D., Latchman, H., Salzmann, C., Crisalle, O.: Hands-on laboratory experi-
ments in flexible and distance learning. J. Eng. Educ. 90(2), 187–191 (2001)
13. Gillet, D., de Jong, T., Sotirou, S., Salzmann, C.: Personalised learning spaces
and federated online labs for STEM education at school. In: 2013 IEEE Global
Engineering Education Conference (EDUCON), pp. 769–773. IEEE (2013)
14. Gomes, L., Bogosyan, S.: Current trends in remote laboratories. IEEE Trans. Ind.
Electron. 56(12), 4744–4756 (2009)
15. Gordillo Méndez, A., Barra Arias, E., Quemada Vives, J.: Enhancing k-12 science
education through a multi-device web tool to facilitate content integration and
e-infrastructure access. In: 7th International Technology, Education and Develop-
ment Conference (INTED2013), 04–05 March 2013, Valencia, Spain, pp. 5432–5440
(2013)
16. Gravier, C., Fayolle, J., Bayard, B., Ates, M., Lardon, J.: State of the art
about remote laboratories paradigms-foundations of ongoing mutations. iJOE 4(1)
(2008)
17. Gustavsson, I., Zackrisson, J., Håkansson, L., Claesson, I., Lagö, T.: The VISIR
project–an open source software initiative for distributed online laboratories. In:
Proceedings of the REV 2007 Conference, Porto, Portugal (2007)
18. Hardison, J., DeLong, K., Bailey, P., Harward, V.: Deploying interactive remote
labs using the ilab shared architecture. In: 38th Annual Conference on Frontiers
in Education Conference, FIE 2008, p. S2A–1. IEEE (2008)
19. Hardt, D.: The OAuth 2.0 authorization framework (2012)
20. Henry, J.: Running laboratory experiments via the world wide web. In: ASEE
Annual Conference (1996)
21. de Jong, T., Linn, M.C., Zacharia, Z.C.: Physical and virtual laboratories in science
and engineering education. Science 340(6130), 305–308 (2013)
22. Lowe, D., Machet, T., Kostulski, T.: UTS Remote Labs, Labshare, and the Sahara
architecture. Using Remote Labs in Education: Two Little Ducks in Remote Exper-
imentation, p. 403 (2012)
23. Lowe, D., de la Villefromoy, M., Jona, K., Yeoh, L.: Remote laboratories: uncover-
ing the true costs. In: 2012 9th International Conference on Remote Engineering
and Virtual Instrumentation (REV), pp. 1–6. IEEE (2012)
24. Martı́nez-Polo, J., Martı́nez-Sánchez, J.T., Vivó, J.M.N.: Participation and sharing
economy: The Spanish case of #compartirmola. In: Entrepreneurship, Business and
Economics, vol. 1, pp. 15–22. Springer (2016)
25. Matzner, M., Chasin, F., von Hoffen, M., Plenter, F., et al.: Designing a peer-to-
peer sharing service as fuel for the development of the electric vehicle charging
infrastructure. In: 2016 49th Hawaii International Conference on System Sciences
(HICSS), pp. 1587–1595. IEEE (2016)
26. Milošević, M., Milošević, D., Dimopoulos, C., Katzis, K.: Security challenges
in delivery of remote experiments. XXII Skup TRENDOVI RAZVOJA: NOVE
TEHNOLOGIJE U NASTAVI, Zlatibor, 16–19 Feb 2016
27. Nedic, Z., Machotka, J., Nafalski, A.: Remote laboratory netlab for effective inter-
action with real equipment over the internet. In: 2008 Conference on Human Sys-
tem Interactions, pp. 846–851. IEEE (2008)
28. Orduña, P.: Transitive and scalable federation model for remote laboratories. Ph.D.
thesis, Universidad de Deusto, Bilbao, Spain, May 2013. http://paginaspersonales.
deusto.es/porduna/phd/
29. Orduña, P., Almeida, A., Ros, S., López-de Ipiña, D., Garcı́a-Zubia, J.: Leveraging
non-explicit social communities for learning analytics in mobile remote laborato-
ries. J. Univers. Comput. Sci. 20(15), 2043–2053 (2014)


30. Orduña, P., Bailey, P., DeLong, K., López-de Ipiña, D., Garcı́a-Zubia, J.: Towards
federated interoperable bridges for sharing educational remote laboratories. Com-
put. Hum. Behav. 30, 389–395 (2014). http://www.sciencedirect.com/science/
article/pii/S0747563213001416
31. Orduña, P., Botero Uribe, S., Hock Isaza, N., Sancristobal, E., Emaldi, M., Pes-
quera Martin, A., DeLong, K., Bailey, P., López-de Ipiña, D., Castro, M., Garcı́a-
Zubia, J.: Generic integration of remote laboratories in learning and content man-
agement systems through federation protocols. In: 2013 IEEE Frontiers in Educa-
tion Conference, Oklahoma City, OK, USA, pp. 1372–1378, October 2013
32. Orduña, P., Garbi Zutin, D., Govaerts, S., Lequerica Zorrozua, I., Bailey, P.H.,
Sancristobal, E., Salzmann, C., Rodriguez-Gil, L., DeLong, K., Gillet, D., et al.:
An extensible architecture for the integration of remote and virtual laboratories in
public learning tools. Tecnologias del Aprendizaje, IEEE Revista Iberoamericana
de 10(4), 223–233 (2015)
33. Orduña, P., Lerro, F., Bailey, P., Marchisio, S., DeLong, K., Perreta, E., Dzi-
abenko, O., Angulo, I., López-de Ipiña, D., Garcia-Zubia, J.: Exploring complex
remote laboratory ecosystems through interoperable federation chains. In: 2013
IEEE Education Engineering (EDUCON). IEEE (2013)
34. Richter, T., Boehringer, D., Jeschke, S.: LiLa: a European project on networked
experiments. Autom. Commun. Cybernet. Sci. Eng. 2009(2010), 307–317 (2011)
35. Safaric, R., Truntič, M., Hercog, D., Pačnik, G.: Control and robotics remote lab-
oratory for engineering education. Int. J. Online Eng. (iJOE) 1(1) (2005)
36. Santana, I., Ferre, M., Izaguirre, E., Aracil, R., Hernandez, L.: Remote laboratories
for education and research purposes in automatic control systems. IEEE Trans. Ind.
Inf. 9(1), 547–556 (2013)
37. Schauer, F., Krbecek, M., Beno, P., Gerza, M., Palka, L., Spilakov, P., Tkac, L.:
RemLabNet III – federated remote laboratory management system for university and
secondary schools. In: 2016 13th International Conference on Remote Engineering
and Virtual Instrumentation (REV), pp. 238–241. IEEE (2016)
38. Simão, J.P.S., et al.: Relle: Sistema de gerenciamento de experimentos remotos
(2016)
39. Titov, I.: Labicom.net-the on–line laboratories platform. In: 2013 IEEE Global
Engineering Education Conference (EDUCON), pp. 1137–1140. IEEE (2013)
40. Torres, F., Candelas, F., Puente, S., Pomares, J., Gil, P., Ortiz, F.: Experiences
with virtual environment and remote laboratory for teaching and learning robotics
at the University of Alicante. Int. J. Eng. Educ. 22(4), 766–776 (2006)
41. Zappatore, M., Longo, A., Bochicchio, M.A.: Enabling MOOL in acoustics by
mobile crowd-sensing paradigm. In: 2016 IEEE Global Engineering Education Con-
ference (EDUCON), pp. 733–740. IEEE (2016)
42. Zappatore, M., Longo, A., Bochicchio, M.A., Zappatore, D., Morrone, A.A., De
Mitri, G.: Mobile crowd sensing-based noise monitoring as a way to improve learn-
ing quality on acoustics. In: 2015 International Conference on Interactive Mobile
Communication Technologies and Learning (IMCL), pp. 96–100. IEEE (2015)
43. Zervas, G., Proserpio, D., Byers, J.: A first look at online reputation on Airbnb,
where every stay is above average, 23 January 2015

Standardization Layers for Remote Laboratories
as Services and Open Educational Resources

Wissam Halimi1(B), Christophe Salzmann2, Denis Gillet1,
and Hamadou Saliah-Hassane3
1 EPFL, REACT, Station 11, 1015 Lausanne, Switzerland
{wissam.halimi,denis.gillet}@epfl.ch
2 Automatic Control Laboratory, EPFL, Station 9, 1015 Lausanne, Switzerland
christophe.salzmann@epfl.ch
3 TELUQ, Université du Québec, Montréal, Canada
hamadou.saliah-hassane@teluq.ca

Abstract. Delivering education and educational resources has evolved
from class-centered settings towards distributed, cloud-based models.
This is mainly the consequence of publicly available educational resources
such as documents, videos, and web applications. At the same time,
emerging technologies in information and communication are enabling
the development and deployment of remote laboratories on the Web.
Today, these freely and openly available educational interactive media
are known as Open Education Resources (OERs). Learning management
systems, MOOC platforms, and educational social media platforms pro-
vide a medium for teachers to create their teaching activities around
OERs in a structured way. To enjoy an effective and productive learn-
ing experience, it is necessary for the educational resources to be fully
integrated in the hosting platform. While most platforms have a ready-
to-embed infrastructure for certain types of OERs, they are not ready
to host remote laboratories in an integrated fashion. In this paper, we
define the necessary integration layers for remote labs in online learn-
ing environments. The work is validated by two implementations with
different target platforms.

Keywords: Lab as a service · Remote laboratories · Online learning ·
Open educational resources · Standardization

1 Introduction
Offering hands-on sessions is one of the main requirements for implementing
STEM education (Science, Technology, Engineering and Mathematics) [3,8]. By
conducting laboratory work, pedagogical objectives such as learning by doing,
applying theory to practice, learning to manipulate the physical environment and
understanding its flaws and limitations can be attained. Remote experimentation
is one way to attain that goal. Broadly speaking, a remote lab is a real physical
lab which is accessible at a distance through computer networks. More specifically,



it is a collection of sensors and actuators, configured to conduct a meaningful
scientific experiment, which can be accessed through a user application over
the Internet. In parallel, learning environments are evolving. Nowadays, there
is a shift from classroom based educational settings to distance, blended and
other learning modalities which do not constrain the learner in space and time.
This is mainly facilitated by the availability of educational resources and web-
based educational platforms [10]. UNESCO defines Open Educational Resources
(OERs) as being “any type of educational materials that are in the public domain
or introduced with an open license. The nature of these open materials means that
anyone can legally and freely copy, use, adapt and re-share them. OERs range
from textbooks to curricula, syllabi, lecture notes, assignments, tests, projects,
audio, video and animation” [15]. With the wide online availability of OERs
from different sources (Google, educational repositories such as Golabz1 , OER
Commons2 and others), teachers are encouraged to gather relevant resources
in a structured format to teach a certain topic or carry out a learning activity
such as a lesson in a MOOC (Massive Open Online Course) or an ILS (Inquiry
Learning Space), as will be detailed in Sect. 3.2.
Generally, an online experiment is conducted separately from pedagogical
contexts (lessons). Furthermore, web-based learning environments are not pre-
prepared to fully integrate remote laboratories. We recognise that for a remote
laboratory to be fully integrated in a platform, it should be able to: (i) retrieve
information regarding the context, (ii) provide action logging, and (iii) save
and retrieve data. These requirements are essential for an effective educational
experience because the user’s identity is necessary to be able to link generated
experimental data and actions to a specific person in a given context for later
awareness and reflection. Moreover, as in physical hands-on sessions, students
will be generating data (such as measurements), and these data can be exploited
by various tools, such as visualization and archiving tools.
Existing remote lab solutions are in the form of standalone applications or
web applications. The basic solution for integration in learning environments is
to wrap the remote laboratory web interface in an HTML iFrame. This poses
a number of challenges to attaining the integration goal and leads to ad-hoc
solutions. The aim of this work is to standardize the design and implementation
of pedagogically designed remote laboratories regardless of the target embedding
platform.
The paper is structured as follows: in Sect. 2 we present the proposed stan-
dardization layers. In Sects. 2.1 and 2.2 we present how a remote lab is respec-
tively standardized as an LaaS (Lab as a Service), and how it is later personalised
as an OER. In Sect. 3 we present two use cases of the proposed framework, and
we conclude in Sect. 4.

1 http://www.golabz.eu/.
2 https://www.oercommons.org/.


2 Standardization Layers
Remote laboratories are highly interactive open educational resources. The main
goal of remote laboratories is to make available a real physical lab setup at
distance. To use a remote lab, users act on the system by sending commands,
consequently the lab responds by changing its physical state, and returning its
sensor values. Accordingly, remote labs are a wealthy source of different types
of data. Through the use of the remote lab, the students produce two types of
data: interaction data related to their actions and inputs on the user interface,
and experimental data which are the parameters they applied on the system and
the collected results as a consequence. It is unarguably important to make sense
out of the activity traces and experimental data which are generated. Therefore,
in addition to having both types of data, it is necessary to dispose of the context
information. The context is composed of the user identity, the course or the
pedagogical scenario in which the resources are utilised and any other details
which will contribute to a better perspective of the educational activity.

Fig. 1. Standardization layers for remote labs integration

Figure 1 shows our proposed standardization layers for integrating remote
labs in web-based learning environments. This architecture is based on the
concept of separation of concerns, where the system is composed of interconnected
yet independent components, which communicate through defined interfaces. To
this end, our architecture comprises three layers, detailed below.
The first layer (Fig. 1), encompassing the physical equipment of a remote lab,
is abstracted as a set of software services. This is based on the Smart Device Par-
adigm that represents a remote lab as a set of services exposed on the Internet
through a well-defined API. The Smart Device paradigm enables the indepen-
dence between the two tiers of the traditional Client-Server architecture adopted
for remote labs [11]. Furthermore, it also enables the personalization of the User
Interface (referred to as UI and detailed later) in Layer 2. The API provides a
set of routines to read and write data from and to the remote lab respectively.
It accepts requests for data retrieval from the sensors reflecting the state of the


lab, in addition to configuration data that puts the lab in a certain operational
mode, if supported. Moreover, there are other requests that the UI can send, for
example writing data requests on actuators for controlling the lab. At this level
we assume that the lab is capable of accepting requests and sending responses
about the sensors and the actuators. The lab as a service is a self-contained layer
that is operational regardless of a hosting platform. The information provided
by this layer is available to any platform trying to interact with the lab. The
concept of LaaS is detailed in Sect. 2.1.
Next, the remote lab is personalized as an Open Educational Lab (OEL) by
the development of a UI integrating the pedagogical elements required by the
context, augmented with the necessary functionalities to ensure
proper communication with the hosting platform and the interfacing with the
remote lab. This is done by calling adequate services of the LaaS. The concept
of an OEL extending the notion of an OER is detailed in Sect. 2.2.
Last, in Layer 3, the OEL is integrated in a hosting platform while ensuring
the propagation of contextual information, user activity traces, as well as data
related to the experimentation itself.

2.1 Lab as a Service


LaaS is a term derived from the XaaS series of terms, where “X” means everything
and “aaS” refers to “as a Service”. In this paradigm, the assumption is
that everything “X” is offered as a service over the Internet rather than in a
physical space. It is a notion derived from Service Oriented Computing, where
software is made available as a set of services, hence hiding the internals
and exposing the program only through a well-described API [4]. “LaaS” refers
to Laboratory as a Service, where a laboratory is abstracted and made remotely
available through the Internet as a software service.
The “Smart Device Paradigm”, which aims at separating the two main components
that constitute a remote lab architecture (the Client Application and
the Lab Server), is one way of implementing LaaS. The Client Application is the
software interface provided to a user in order to manipulate a remote experiment.
The Lab Server responds to the client requests by executing the commands on
the physical setup it is controlling. In order to separate the Client Application
and the Lab Server, the Lab Server side is equipped with some “intelligence”
and follows Service Oriented Computing principles to expose the lab side as
software services and hence decoupling the architecture [11,14]. A remote lab
standardized as Smart Device provides two types of services: internal and exter-
nal. The external services are the software interface representing the hardware
behaviour of the sensors and actuators that make up a lab. They are the end-
points that allow an outside communication with the physical lab, namely from
a client application. They are described through a well-defined and documented
API (the metadata). The metadata is formatted in JSON making it human read-
able and machine parsable. The metadata describing the external services can
be used to automatically generate user interfaces [7]. The internal services are
suggested mechanisms for lab providers to implement in order to help manage


and protect their labs. They are not accessible to the Client Application. Further
details on LaaS and the Smart Device paradigm are in [12].
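To give a feel for how a client application consumes such a service-oriented lab, the sketch below reads the JSON metadata, queries a sensor and writes to an actuator of a hypothetical LaaS over HTTP; the endpoint names and payloads are assumptions for illustration (the Smart Device specification itself defines its own service names and also supports other transports such as WebSockets).

```python
import json

import requests

BASE_URL = "https://lab.example.org/laas"  # placeholder LaaS endpoint

def get_metadata() -> dict:
    """Fetch the JSON metadata describing the external services of the lab
    (its sensors, actuators and how to call them)."""
    return requests.get(f"{BASE_URL}/metadata", timeout=10).json()

def read_sensor(sensor_id: str) -> dict:
    """Read the current value of one sensor exposed by the LaaS."""
    return requests.get(f"{BASE_URL}/sensors/{sensor_id}", timeout=10).json()

def write_actuator(actuator_id: str, value) -> dict:
    """Send a write request to one actuator, e.g. to change a setpoint."""
    response = requests.post(
        f"{BASE_URL}/actuators/{actuator_id}",
        data=json.dumps({"value": value}),
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
    return response.json()

if __name__ == "__main__":
    # These calls will only succeed against a real lab endpoint; the host
    # above is a placeholder.
    metadata = get_metadata()
    print("Available sensors:", [s["id"] for s in metadata.get("sensors", [])])
    print("Photodiode reading:", read_sensor("photodiode"))
    print("Setpoint ack:", write_actuator("piezo", 0.5))
```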

2.2 Open Educational Labs


Remote laboratories are highly interactive educational resources, where the
user's actions have an effect on the system and generate data belonging to two
categories: interaction data resulting from the use of the UI, and experimental
data which are the data sent to the actuators of the remote lab and received
from the sensors of the remote lab. Accordingly, collecting data which can be
linked back to the user is important for many goals: generated data from the
interaction with UI components is valuable for studying interaction patterns, and
experimental data are needed by the learners to check their results and possibly
use them in other tools. In order to support the full integration of remote labs
in target platforms, there is a need to specify the requirements and accordingly
develop remote labs as Open Educational Labs. Hence, in this work we consider
that access management, activity tracking, and data storage schemes for OELs
should be defined to guarantee a full integration of the remote lab in a hosting
platform.

Access Management. In this work, we are interested in the case where a
remote laboratory is part of a complete educational activity (i.e. a lesson). Given
that learners connect to the learning activity through an online learning
environment, they usually already have a user identity for authentication with
the platform. Referring to Fig. 1, we consider
facing module, which in most cases will be a third-party application. To prevent
the creation of multiple identities belonging to the same user, it is necessary to
propagate the user identity from the platform, to the OEL (Single-Sign On),
to the remote lab implemented as a LaaS. More specifically, when learners are
conducting their educational activity they should have a unique identity that
persists throughout the different sessions and the standardization layers. This
guarantees the consistency of reflecting the contexts, saving activity traces, and
collecting experimental data. In our proposal, a user authenticates with the plat-
form to get access to the OEL, the OEL authenticates with the LaaS to get access
to lab. In Sect. 3 we provide two examples of remote labs integrated in a LTI
consuming platform–edX, and a social media platform–graasp.

User Activity Tracking. With the surge of activity tracking in educational
settings, today referred to as learning analytics, it is necessary to track
learners' actions. The saved information is considered very valuable for many pur-
poses. Using learning analytics, learner success can be predicted, experimental
behavior can be mined and understood, and adaptation and personalization
can be attained [13]. Many authors use learning analytics to understand the
behavior of the learner and use it as a feedback for other tools in the platform,


such as recommendations [5]. When a remote laboratory is first abstracted as a
LaaS, and then as an OEL to be integrated in a learning platform, it is clear that
there are several sources of activity traces. At the platform level, the log-in and
log-out times would indicate, for example, how much time a learner spends in the
lesson. At the LaaS level, keeping records of the different exchanged requests
and responses with the UI can help in bringing meaningful insight into lab usage.
Precisely, the experimental parameters can be used to extract usage patterns for
a certain experiment and hence understand how students are using the lab
when studying a certain concept. In our approach, since we want to support a
consistent identity of a user throughout the layers, we believe the most adequate
solution is to have a common repository for actions coming from the different
layers, where a user’s actions are identified by the identity coming from the host-
ing platform. The common repository could be proprietary to the platform or
external to it, as will be shown in Sect. 3.
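The snippet below sketches what pushing an action into such a common repository could look like: each layer reports its events tagged with the identity propagated from the hosting platform. The repository endpoint and the payload fields are illustrative assumptions, loosely inspired by ActivityStreams-style action logging.

```python
from datetime import datetime, timezone

import requests

TRACKING_ENDPOINT = "https://repository.example.org/activities"  # placeholder

def log_action(user_id, layer, verb, obj, extra=None):
    """Record one user action, tagged with the layer (platform, OEL or LaaS)
    that produced it, so that traces from all layers share the same identity."""
    payload = {
        "actor": user_id,                       # identity from the hosting platform
        "layer": layer,                         # "platform", "oel" or "laas"
        "verb": verb,                           # e.g. "started", "submitted"
        "object": obj,                          # e.g. the experiment or UI widget
        "published": datetime.now(timezone.utc).isoformat(),
        "extra": extra or {},
    }
    requests.post(TRACKING_ENDPOINT, json=payload, timeout=10)

if __name__ == "__main__":
    # Example (placeholder endpoint): the OEL logs that a learner submitted
    # experimental parameters.
    log_action("student-42", "oel", "submitted", "pendulum-experiment",
               {"length_cm": 35, "initial_angle_deg": 10})
```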

Data Storage and Retrieval Mechanisms. When conducting an experiment,
students generate results by applying parameters to the process of
the given lab. Just like in physical hands-on sessions, the data need to be
collected and archived for future use, such as for graphing or tabulating results.
Moreover, considering that the remote lab is embedded in a learning platform
in the context of a lesson, additional tools consuming the data could be added
for pedagogical purposes, as will be shown in Sect. 3.2. To improve user experience,
it is necessary to specify mechanisms for data saving and retrieval. As for
the repository of learning activity traces, the database keeping the experimental
data can be specific to the platform or external. In both cases, the consistent
user identity propagated through the standardization layers should be used as an
identifier for the data.
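Under the same assumption of a hypothetical external repository, experimental results could be stored and retrieved keyed by the propagated user identity, for example as follows.

```python
import requests

DATA_ENDPOINT = "https://repository.example.org/experimental-data"  # placeholder

def save_results(user_id: str, experiment_id: str, results: dict) -> None:
    """Archive one set of experimental results under the propagated identity."""
    requests.post(DATA_ENDPOINT, json={
        "user": user_id,
        "experiment": experiment_id,
        "results": results,
    }, timeout=10)

def load_results(user_id: str, experiment_id: str) -> list:
    """Retrieve all result sets of a user for a given experiment, e.g. so that
    a graphing tool can display them later."""
    response = requests.get(DATA_ENDPOINT, params={
        "user": user_id, "experiment": experiment_id,
    }, timeout=10)
    return response.json()

if __name__ == "__main__":
    # Example (placeholder endpoint): a learner saves a free-fall measurement
    # and retrieves it later.
    save_results("student-42", "free-fall", {"height_m": 1.2, "time_s": 0.495})
    print(load_results("student-42", "free-fall"))
```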

2.3 Integration in a Hosting Platform

Once a laboratory setup is abstracted from the physical world as a set of services
based on the Smart Device paradigm, it is ready to be personalised for use. This
is done by building an application which invokes the API calls of the LaaS to
gain access to it. At this stage, the remote laboratory can be exploited without
any context (i.e. without being part of a lesson). But if chosen to be used in
a pedagogical scenario, augmenting the LaaS with a UI (as a front-end), and
with user identity management, activity tracking, and experimental data man-
agement (as back-end) turns it into an OEL ready to be integrated in a hosting
platform. Needless to say, the interfaces used with the hosting platform for
the mentioned requirements will be specific and cannot be standardized. This is
the level where the user credentials are managed, the activity traces are
consumed, and the experimental data is saved. In the next section, we present two
implementations of two different remote labs integrated in two online learning
environments with different infrastructures.


3 Use Cases
In this section, we present two examples of remote laboratories developed and
integrated in learning environments as per the proposed guidelines. We will first
present the example of a control system lab which is integrated in edX3 . Then
we will detail the Mach-Zehnder interferometer remote lab which is integrated
in an educational social media platform: graasp4 .

3.1 MOOLs for MOOCs

The lab we are considering for this example is a control systems lab designed
and implemented to service a large number of users. It is integrated as part of
a control systems course, designed and deployed on a local copy of edX5 for
EPFL. The complete infrastructure of the lab is made of multiple replicas of the
same lab setup serviced on the Internet by Smart Devices, an HTML UI to be
integrated in edX, a .cgi interface for LTI authentication, database and other
services, and an edX server.

Integration in edX Using Existing Standards. LTI (Learning Tools
Interoperability) is a specification developed with the principal goal of standardizing
the integration of rich, third-party learning applications with educational envi-
ronments such as learning management systems, portals, learning object repos-
itories, or others. When talking about LTI, the learning applications are called
Tools (hosted by Tool Providers–TP) and the integrating platforms are called
Tool Consumers (TC). The main outcome of implementing the LTI specification
is enabling the seamless integration of remotely hosted third-party content in a
given online learning platform, while communicating the user identity and con-
text to the tool, without any ad-hoc solutions [1]. In our context, a Tool is the
OEL, and the Tool Consumer is the integrating learning platform edX.
The LTI implementation for this lab contains the following required parameters:
lti_message_type, lti_version, and resource_link_id, in addition to the following
recommended parameters: user_id, roles, and context_id. More information
on the meaning of these parameters can be found in [1].
Given that the course structure was customized to group each learning activity
(lesson) with its corresponding resources, there was a need for other personalisation
parameters. Hence the LTI specification was extended to include the
following fields (a sketch of such an extended launch is given after this list):

– experimental parameters: allows commands to be invoked on the lab once the
LTI module has authenticated with the remote lab.
– experiment identifier: since a single page, characterized by a context_id and
containing a lesson, can have multiple tabs in which there are remote lab UIs,
there is a need to identify each tab for activity tracking and data archiving.
– experiment duration: the teacher can set the duration of the experiment through
edX in the LTI parameters. This will limit the amount of time students can
spend doing an experiment if others are waiting for their turn.

3 http://www.edx.org.
4 http://www.graasp.eu.
5 http://www.edx.epfl.ch.
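For illustration, the sketch below shows the shape of such an extended LTI launch as the tool provider might receive it, together with a minimal validation of the required fields; the values and the names of the custom fields are illustrative, and a real LTI 1.x launch would also carry the OAuth signature parameters that the provider must verify.

```python
# Illustrative LTI launch payload as the tool provider would receive it in the
# POST body (the OAuth 1.0 signature fields are omitted for brevity).
launch = {
    # Required LTI parameters
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "control-systems-lesson-3",
    # Recommended parameters
    "user_id": "edx-user-42",
    "roles": "Learner",
    "context_id": "course-v1:EPFL+ControlSystems+2017",
    # Extension fields described above (the key names here are illustrative)
    "custom_experimental_parameters": '{"kp": 1.2, "ki": 0.4}',
    "custom_experiment_identifier": "tab-2",
    "custom_experiment_duration": "300",
}

REQUIRED = ("lti_message_type", "lti_version", "resource_link_id")

def validate_launch(params: dict) -> None:
    """Reject launches that miss any of the required LTI parameters."""
    missing = [name for name in REQUIRED if name not in params]
    if missing:
        raise ValueError(f"Invalid LTI launch, missing: {', '.join(missing)}")

validate_launch(launch)
print("Launch accepted for user", launch["user_id"],
      "with duration", launch["custom_experiment_duration"], "seconds")
```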

At the time of the integration of the remote lab, edX did not implement an LTI
version which supports saving and retrieving data to the platform. Hence, a cgi
interface was put in place between edX and the external tools. The cgi interface
validates the LTI-encoded request containing the edX user ID and other
context-related information. Once the request is validated, the LTI module
content is integrated as an iFrame in the edX page (Fig. 2).

Fig. 2. OEL integration in edX

3.2 The Mach-Zehnder Interferometer

This lab example is an interferometer to study light interference at high school
level. It is implemented in order to be integrated in an Inquiry Learning Space
(ILS) which introduces the phenomenon of light interference. In the context of
the Go-Lab project, an ILS is a tool embedding a pedagogical structure and
resources to complete an inquiry learning activity [6]. Concretely, the learning
activity is divided into five basic phases, through which students learn about
science the way scientists do. Teachers build their ILS in Graasp by embedding different
educational resources, including remote labs [9].
The physical lab is made out of many sensors (e.g. a photodiode) and actuators
(e.g. a piezo controller) configured to experiment with several properties of
light diffraction and interference; the details are not of interest for the
purpose of this paper (for further information refer to [7]). The lab is abstracted
as a set of services and exposed through an API. A Smart Device implemented
on an embedded computer (myRIO6) interfaces the hardware and handles user
client requests and responses.

6 http://www.ni.com/myrio/.


Integration in Graasp with an Ad-Hoc Solution. Graasp, the hosting
platform, provides a mechanism for integrating third-party applications, enabling
them to use its proprietary services (context information, user identity, activity
tracking, saving and retrieving data). This is done by putting in place an
OpenSocial container which acts as a proxy between Graasp's API and third-party
applications [2], in addition to the ILS library7 which takes care of ILS-specific
mechanisms. Figure 3 shows the integration of the OEL in Graasp, within a
learning activity. Accordingly, an OpenSocial application which provides a UI to
control and observe the lab is implemented and integrated in Graasp. In addi-
tion to communicating with the Smart Device, the OpenSocial application is
aware of the user identity through the People API and saves associated activity
traces (through the ActivityStreams API) and experimental data (through the
Documents API).

Fig. 3. OEL integration in Graasp

When in an ILS, students start sequentially with the Orientation and
Conceptualization phases, which respectively introduce the subject and ask the
students to hypothesize about it. Usually the practical work to validate or refute
the hypothesis is done in Investigation. The implemented OEL saves the interaction
traces with the UI in formats compatible with the platform8, to be used later by
other apps. The experimental data is also saved in adequate formats to be used by
other tools, such as the Data Viewer9 tool. In the Conclusion phase the students
draw conclusions about their experimental results and hypotheses, and then in
Discussion they can share their findings with their instructor and peers.

4 Conclusion and Future Work


In this paper we presented our standardization architecture for integrating
remote labs in online learning environments. Our approach is driven by the
need to support the development and deployment of remote labs which are eas-
ily integrated in different educational platforms. At a first level, a remote lab is
7 https://github.com/go-lab/ils.
8 https://github.com/go-lab/ils/wiki/ActionLogger.
9 http://www.golabz.eu/apps/data-viewer.


abstracted as a set of web services accessible through APIs. This allows the
personalisation of the remote lab with a UI augmented with the full-integration
requirements (context awareness, activity tracking, and experimental data
storage), so that it becomes an Open Educational Lab. The resulting OEL is
expected to interoperate with the different services and other Open Educational
Resources used in a learning scenario, hence providing the online learner with a
good experience. We later present two remote laboratories integrated in two
different online platforms, with pedagogically sound resources and interaction
features. The control system lab is integrated in an LTI-consuming platform.
The lab is integrated as an OEL by implementing the communication with the
LTI container of the platform (single sign-on for both platform and lab) and
extending its properties to support further user needs (data saving and retrieval).
The other example is an interferometer integrated in an educational social media
platform which supports the requirements for integration. The lab is integrated
as an OEL by using the services provided by Graasp for user identity, activity
tracking, and data storage and retrieval. It is worth mentioning that the proposed
solution for edX could also be reused with little effort in other environments
supporting LTI, such as Moodle10.

Acknowledgment. This research was partially funded by the European Union in
the context of the Go-Lab project (grant no. 317601) under the ICT theme of the 7th
Framework Programme for R&D (FP7). This paper is one of the contributions of the
task carried out in the framework of the IEEE-SA P1876™ Working Group on “Standard
for Networked Smart Learning Objects for Online Laboratories”.

References
1. Learning Tools Interoperability - Implementation Guide (2016). https://www.
imsglobal.org/specs/ltiv1p2/implementation-guide Accessed 15 Jan 2017
2. Bogdanov, E., Limpens, F., Li, N., El Helou, S., Salzmann, C., Gillet, D.: A social
media platform in higher education. In: Global Engineering Education Conference
(EDUCON), 2012 IEEE, pp. 1–8. IEEE (2012)
3. Dasarathy, B., Sullivan, K., Schmidt, D.C., Fisher, D.H., Porter, A.: The past,
present, and future of MOOCs and their relevance to software engineering. In: FOSE
2014 Proceedings of the on Future of Software Engineering, pp. 212–224 (2014)
4. Duan, Y., Fu, G., Zhou, N., Sun, X., Narendra, N.C., Hu, B.: Everything as a
service (XaaS) on the cloud: origins, current and future trends. In: 2015 IEEE 8th
International Conference on Cloud Computing, pp. 621–628. IEEE (2015)
5. Duval, E.: Attention please!: Learning analytics for visualization and recommen-
dation. In: Proceedings of the 1st International Conference on Learning Analytics
and Knowledge, pp. 9–17. ACM (2011)
6. Gillet, D., De Jong, T., Sotirou, S., Salzmann, C.: Personalised learning spaces
and federated online labs for stem education at school. In: Global Engineering
Education Conference (EDUCON), 2013 IEEE, pp. 769–773. IEEE (2013)

10 https://ieee-a.imeetcentral.com/1876public/.


7. Halimi, W., Jamkojian, H., Salzmann, C., Gillet, D.: Enabling the automatic gen-
eration of user interfaces for remote laboratories. In: 2017 14th International Con-
ference on Remote Engineering and Virtual Instrumentation (rev). IEEE (2017)
8. Lowe, D.: MOOLs: massive open online laboratories: an analysis of scale and feasi-
bility. In: 2014 11th International Conference on Remote Engineering and Virtual
Instrumentation (REV), pp. 1–6 (2014)
9. Rodrı́guez-Triana, M.J., Govaerts, S., Halimi, W., Holzer, A., Salzmann, C.,
Vozniuk, A., de Jong, T., Sotirou, S., Gillet, D.: Rich open educational resources
for personal and inquiry learning: agile creation, sharing and reuse in educational
social media platforms. In: 2014 International Conference on Web and Open Access
to Learning (ICWOAL), pp. 1–6. IEEE (2014)
10. Saliah-Hassane, H., Reuzeau, A.: Mobile open online laboratories: a way towards
connectionist massive online laboratories with x-API (c-MOOLS). In: 2014 IEEE
Frontiers in Education Conference (FIE) Proceedings, pp. 1–7. IEEE (2014)
11. Salzmann, C., Govaerts, S., Halimi, W., Gillet, D.: The smart device specification
for remote labs. In: 2015 12th international conference on Remote engineering and
virtual instrumentation, pp. 199–208. IEEE (2015)
12. Salzmann, C., Halimi, W., Gillet, D., Govaerts, S.: Deploying large scale online labs
with smart devices. In: Cyber-Physical Laboratories in Engineering and Science
Education (under review) (2017)
13. Siemens, G., Baker, R.S.J.d.: Learning analytics and educational data mining:
towards communication and collaboration. In: Proceedings of the 2nd International
Conference on Learning Analytics and Knowledge, pp. 252–254. ACM (2012)
14. Tawfik, M., Salzmann, C., Gillet, D., Lowe, D., Saliah-Hassane, H., Sancristobal,
E., Castro, M.: Laboratory as a service (LaaS): a model for developing and imple-
menting remote laboratories as modular components. In: 2014 11th International
Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 11–20.
IEEE (2014)
15. UNESCO: What are Open Educational Resources (OER) (2016). http://
www.unesco.org/new/en/communication-and-information/access-to-knowledge/
open-educational-resources/what-are-open-educational-resources-oers/. Accessed
30 Dec 2016

Present and Future Trends Including
Social and Educational Aspects

Innovative Didactic Laboratories and School Dropouts
A Case Study

Carole Salis(✉), Marie Florence Wilson, Fabrizio Murgia, and Stefano Leone Monni

CRS4 – Centre for Advanced Studies, Research and Development in Sardinia, Pula, Italy
{Calis,marieflorence.wilson,fmurgia,stefano.monni}@crs4.it

Abstract. The Innovative Didactic Laboratories (IDLs) are part of a 3-year
comprehensive program named Iscol@, the objective of which is to address
school dropouts in the Sardinian Region. The Local Authority’s strategy tackles
the problem from different viewpoints, including the opportunity for pupils to
participate in Extracurricular Innovative Didactic Laboratories. This paper
describes the philosophy behind IDLs and their characteristics, and reports on the
data collected through monitoring the first year of activities (academic year
2015–2016).

Keywords: Innovative organizational and educational concepts · Characteristics
of technology-enhanced learning activities · School dropout prevention · Remote
laboratories

1 Introduction

Although Sardinia is the 4th Italian Region in terms of high quality education, it also
has the worst record of early school leavers with a rate of 24% [1]. The country’s average
rate is 17.6%. The European target to be reached by 2020 is 10% [2]; Italy’s is 16% [3].
The “Tutti a Iscol@” project draws inspiration from the “Diritti a Scuola” project of the
Apulia Region (Italy), which obtained the Inclusive Growth RegioStar Award in 2015
[4], but the originality of the Sardinian project is the attention paid to the introduction
of technology in the Innovative Didactic Laboratories (IDLs) used in extracurricular
activities (ECA) [5]. The IDLs were developed based on pupils' interest in technology.
Emphasis was placed on teamwork, cooperative learning, learning by doing, and social
learning. The project gives great importance to the interaction of schools, local techno‐
logical economic operators (SMEs, Cultural Associations, University Departments) and
the Institutions (RAS: the Autonomous Region of Sardinia, SR: the Regional R&TD
Agency and CRS4, a Multidisciplinary Research Centre).

2 Philosophy Behind the IDLs

Among the causes of school dropouts we find disengagement towards school,
absenteeism, lack of interest in learning, and reduced levels of active participation. The IDLs want



pupils to live a positive experience at school, and develop a positive image of staying
at school. The underlying pedagogical approaches are constructivism and experiential
learning, pragmatic approaches in which learners actively construct their own knowl‐
edge from experience. Since learning requires to think, discuss and to confront others,
attention is also paid to learning by thinking, cooperative learning and problem
solving [6].
The teacher-centred approach can be ineffective in involving pupils in school activities.
A smart use of ICT can have a positive impact on pupils' motivation, turn them into
active learners, and have a positive effect on interaction with peers [7]. The educational
world recognizes the potential of using digital technologies for reducing school dropout
and increasing education levels. Since ICT tools are used in all spheres of life,
the European Economic and Social Committee (EESC) believes that a digital approach
within education systems can help to improve the quality of education provided to the
community, particularly if used with common sense [8].
Over recent years, ICT has been one of the most powerful job creation tools. Digital
skills have become a must. This is why schools and technological enterprises should
collaborate to train students who are familiar with the skills required by the digital
economy. Since the Sardinian productive fabric has many technological start-ups and
SMEs, we felt it was important to involve them in this program, as this seems to be a
significant international trend. In this paper, they will be referred to as economic operators or
economic actors. See Fig. 1.

Fig. 1. Triangulation between the actors of the project

The interaction of schools, economic actors and Institutions is crucial, as is the
choice to act through extracurricular activities. Pupils engaged, under supervision, in
enriched learning experiences have fewer behaviour problems, build self-esteem, develop
a positive attitude towards school, and are less likely to drop out [9]. The project requires
teachers to be present during the Lab activities, as a reference point for pupils and to
ensure transfer of technology. Funding for purchasing ICT tools is available to schools,
to enable them to carry on activities beyond the life of the Labs.


3 Methodology

Seven technological areas, promising for their educational potential and technological
interest, were selected (see Table 1). The economic actors were invited to conceive
original scenarios for all school levels to be used in the ECA. Scenarios are pupil-centred
technological activities developed based on the pedagogical and technological guidelines
given by the Educational Technology team of CRS4. Prior to being included in
the online catalogue for schools to choose from, all scenarios were assessed by a panel
of experts. Ninety original proposals, conceived by 64 economic actors, passed the
selection. Since some scenarios could be duplicated, schools had a total of 173 Labs to
choose from. Schools indicated their preferences and an algorithm made the match,
giving higher priority to schools with a higher ranking on the list issued by the Region's
Department of Education. Ranking was based on dropout rates and PISA tests in
Maths and Italian language and literature.
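The matching step can be pictured with a small sketch like the one below, in which schools, ordered by the Region's ranking, greedily receive their highest preference among the Lab scenarios that still have free slots; the data and the greedy strategy are illustrative assumptions, as the actual algorithm is not detailed in this paper.

```python
def match_schools_to_labs(schools, lab_capacity):
    """Greedy assignment: schools are processed in ranking order (highest
    priority first) and receive their first preferred lab with remaining slots.

    schools: list of dicts with "name", "rank" (lower = higher priority)
             and "preferences" (ordered list of lab names).
    lab_capacity: dict mapping lab name -> number of available slots.
    """
    assignments = {}
    remaining = dict(lab_capacity)
    for school in sorted(schools, key=lambda s: s["rank"]):
        for lab in school["preferences"]:
            if remaining.get(lab, 0) > 0:
                assignments[school["name"]] = lab
                remaining[lab] -= 1
                break
    return assignments

# Illustrative data only.
schools = [
    {"name": "School A", "rank": 2, "preferences": ["AR", "Coding"]},
    {"name": "School B", "rank": 1, "preferences": ["AR", "IoT"]},
    {"name": "School C", "rank": 3, "preferences": ["AR", "QR-NFC"]},
]
labs = {"AR": 1, "IoT": 1, "Coding": 1, "QR-NFC": 1}

print(match_schools_to_labs(schools, labs))
# {'School B': 'AR', 'School A': 'Coding', 'School C': 'QR-NFC'}
```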

Table 1. The 7 technological areas chosen


No. | Technology used | Pedagogical impact and bibliographical references
1 | Augmented Reality (AR) | Learning spatial structures, contextual elements, language association; long-term memory retention; improved collaboration and motivation [10, 11]
2 | Internet of Things (IoT) | Involvement in experiential learning [12, 13]
3 | QR-NFC technologies | To favour mobile/outdoor learning which is student-centred, active and collaborative [14]
4 | Treasure hunt augmented | To improve spatial skills and extend learning activities beyond the walls of the classroom [15]
5 | Knowledge Management (KM) & culture | To discover and promote one's cultural surroundings, learn how to share data & knowledge and put it to use [16]
6 | Coding | To develop problem solving abilities [17]
7 | 3D Food printing | Learning by doing approach [18]

Pupils participating in the IDLs were chosen based on their special educational needs,
which are not necessarily linked to having a handicap, such as lack of motivation, poor
school attendance, behaviour difficulties, or poor support from their families [5].
The most successful technological areas were those using Augmented Reality
(33.1%), followed by mobile learning with QR codes and NFC technologies (19.7%).
The least successful was that of Treasure Hunt Augmented (4.7%).

4 Anticipated Outcomes

Through the IDLs the school offer will be enlarged, its ICT-based tools will be
modernized, and schools and representatives of the local economic market will interact.
Pupils should gradually realize that the technologies used in the IDLs are useful for
understanding how new technologies work, and they will become able to apply critical thinking to the main


functions of the innovative tools offered by the market. At the end of the 3-year
experience, the identified good practices will be transferred to curricular activities.

5 Monitoring Activities

We monitored the impact of the IDLs on participating students by checking their
attendance, satisfaction and intention to participate in further similar activities. Face-to-face
interviews were carried out in 22 schools (~25% of schools) selected to cover all 7
technological areas, all stages of education, and the whole Sardinian territory. In each
school we interviewed 3 pupils, 1 teacher and 1 technological tutor (representative of
the economic actors who conducted Lab activities). Participation to the online, self-
administered questionnaire was done on a voluntary basis. 53/98 participating schools
replied to the questionnaire (618 pupils, 53 teachers, 42 technological tutors).

5.1 Results of the Face-to-Face Interview

Pupils' feedback: interviewed pupils (56.1% male, 43.9% female) reported having
enjoyed the IDLs: they had fun, and found the activities interesting and potentially
useful for their future. Most appreciated socialising with peers and the hands-on approach.
• Appreciation and participation: 100%
• They enjoyed teamwork: >92%
• Felt they improved in solving/resolving problems: >80%
• Reported to have learned new concepts: >92%
• Reported to have learned to plan/design/address: >86%
• Would repeat the experience: >95%
• Would suggest that others participate in similar activities: 100%
Teachers gave positive feedback on the quality and relevance of the IDLs. They
recognized the technological tutors' competence in managing the Labs. They noted that,
at times, students who were absent during curricular activities attended the Labs.
A significant number of schools used the opportunity to update their ICT tools, and all
claimed to be willing to participate in similar new activities (see Tables 2 and 3).

Table 2. Teachers' feedback on the quality of the Labs and on the competences of the technological tutors

Relevance and quality of the Labs: Excellent 21/22, Medium 0/22, Poor 0/22, No answer 1/22
Competences of the technological tutors: High 22/22, Poor 0/22


Table 3. Teachers' feedback on pupils' participation and on the purchase of ICT tools

Any increase in pupils' participation in Lab activities with respect to curricular activities? Yes 21/22, No 1/22
Did your institute purchase ICT tools? Yes 17/22, No 5/22

Tutors reported that pupils were most interested in the technology used in the activities
and that they generally displayed proper behaviour (see Table 4).

Table 4. Tutors' feedback on pupils' behaviour during Lab activities

Group behaviour during activities towards: | Negative | Fair | Positive
Teachers | 0/22 | 1/22 | 21/22
Tutors | 0/22 | 0/22 | 22/22
Peers | 0/22 | 6/22 | 16/22
Were pupils proactive? Yes 19/22, No 3/22

5.2 Results of Online Self-administered Questionnaire

Pupils: More than 50% claim that, prior to the ECA, they had heard about the technology
used in the Lab they attended but had not had the opportunity to use it. More than one
answer was accepted, as pupils could find one activity to be both interesting and easy to
understand, or fun and useful, etc. (see Fig. 2). The feedback is positive, confirming
the results of the face-to-face interviews.

Fig. 2. Students feedback on Lab activities

Tutors: 42/94 tutors filled out the questionnaire. We investigated their evaluation of
the competences of both the teachers (see Table 5) and the pupils (see Table 6). Tutors
reported that the presence of teachers during the ECA was an asset, and that all pupils
managed to acquire basic skills in the technology used in the Labs. Despite the pupils'
varied educational needs, tutors managed to catch their attention and to establish a good
communication and information flow with them.

Table 5. Tutors' evaluation of the teachers' approach

Aspect investigated | Positive | Neutral | Negative
Teachers' approach during Lab activities | 76.2% | 19% | 4.8%

Table 6. Tutors' evaluation of the pupils

Aspect investigated | Good | Fair | Little | None
Technological knowledge gained by pupils | 33.3% | 57.1% | 2.4% | 7.1%
Communication between tutors and pupils | 40.5% | 40.5% | 9.5% | 9.5%
Increase in students' attention | 31.0% | 52.4% | 9.5% | 7.1%

Teachers: the most represented school stage in the sample is high school (21/53
teachers), followed by elementary school (17/53) and, last, junior high school
(15/53). This data underlines teachers' interest in introducing technology in education
at an early age. We investigated the teachers' evaluation of the competences of the
technological tutors in their interaction with pupils (see Table 7). The feedback is
positive, as the guidelines given for the conception of the Lab activities, from both the
pedagogical and the technological points of view, were respected. We also enquired about
teachers' expectations (see Fig. 3).

Table 7. Teachers' evaluation of the technological tutors

Aspect investigated | Always | Sometimes | Never
Respect of pedagogical guidelines | 71.7% | 20.7% | 7.6%
Respect of technological guidelines | 71.7% | 17% | 11.4%
Enough time for pupils' questions | 86.8% | 3.8% | 9.4%

Fig. 3. Teachers expectation on Lab activities (more than 1 answer possible)


6 Conclusions and Future Work

The IDLs are part of a comprehensive strategy to reduce school dropout among Sardinian
pupils. Based on the partially positive results, which lead us to believe that the
proposed activities met the needs of all involved parties, the 2nd edition was enriched
with 4 additional areas: programming humanoid robots, exploring one's environment with
the help of drones, drawing circuits with conductive ink, and fabbing (laser cutting,
3D printing, …) for problem solving.
We are also planning to give the economic actors the opportunity to develop Labs that
can be carried out remotely, mainly for the humanoid robot Laboratories, where schools
and pupils can benefit from the advantages of a remote Lab in terms of costs, the absence
of time and place restrictions, and equipment sharing, and where they can share not only
the equipment but also the ideas, experiences and pedagogical scenarios developed by
fellow students and by other institutions.

Acknowledgment. The authors gratefully acknowledge the "Servizio Istruzione of Direzione
Generale della Pubblica Istruzione of Assessorato della Pubblica Istruzione, Beni Culturali,
Informazione, Spettacolo e Sport of RAS" and "Sardegna Ricerche".

References

1. Save the children: Liberare i bambini dalla povertà educativa: a che punto siamo?, May 2016.
https://www.savethechildren.it/sites/default/files/files/uploads/pubblicazioni/liberare-i-
bambini-dalla-poverta-educativa-che-punto-siamo.pdf
2. European Commission: Directorate-General for Communication: Europe 2020: A strategy
for smart, sustainable and inclusive growth. Communication from the Commission,
Bruxelles. http://ec.europa.eu/europe2020/europe-2020-in-a-nutshell/index_en.htm
3. Indire. http://www.indire.it/2016/03/25/dispersione-scolastica-in-italia-abbandono-precoce-
scende-al-15/
4. RegioStars Awards 2015 Category 3: Inclusive Growth - Diritti a scuola (Puglia, IT) (2015).
http://ec.europa.eu/regional_policy/en/projects/italy/tackling-school-drop-out-rates-and-
improving-results
5. Salis, C., et al.: First monitoring results of the “Tutti a Iscol@” Project - a Technology Based
Intervention to Keep Difficult-to-Motivate Pupils in the School Mainstream, system. In:
Proceedings of E-learn: World Conference on E-learning in Corporate, Government,
Healthcare, and Higher Education, pp. 726–731 (2016)
6. Jonassen, D.H.: Learning to Solve Problems: A Handbook for Designing Problem-Solving
Learning Environments. Routledge, New York (2010)
7. Passey, D., Rogers, C.: The Motivational Effect of ICT on Pupils, Research Report 523, with
Joan Machell and Gilly McHugh Department of Educational Research Lancaster University
(2004). http://portaldoprofessor.mec.gov.br/storage/materiais/0000012854.pdf
8. EESC: OPINION of the European Economic and Social Committee on the Communication
from the Commission to the European Parliament, the Council, the European Economic and
Social Committee and the Committee of the Regions on Opening up Education: Innovative
teaching and learning for all through new Technologies and Open Educational Resources
COM, 654 final - GU C451/26 del 16.12.14 (2013)


9. Massoni, E.: Positive effects of extra curricular activities on students. ESSAI 9 (2011). Article
27. http://dc.cod.edu/essai/vol9/iss1/27
10. Radu, I.: Why should my students use AR? A comparative review of the educational impacts
of augmented-reality. In: Proceedings of IEEE International Symposium on Mixed and
Augmented Reality (ISMAR), pp. 313–314 (2012)
11. Radu, I.: Augmented reality in education: a meta-review and cross-media analysis. Pers.
Ubiquitous Comput. 18(6), 1–11 (2012). doi:10.1007/s00779-013-0747-7
12. Watson, C., Ogle, J.: The pedagogy of things: emerging models of experiential learning. Bull.
IEEE Techn. Committee Learn. Technol. 15(1), 3–6 (2013)
13. Salis, C., Murgia, F., Wilson, M.F., Mameli, A.: IoT-DESIR: a case study on a cooperative
learning experiment in Sardinia. In: Proceedings of International Conference on Interactive
Collaborative Learning (ICL), pp. 785–792 (2015)
14. Pegrum, M.: Mobile Learning: What Is It and What Are Its Possibilities? Teaching and Digital
Technologies: Big Issues and Critical Questions, p. 142 (2015)
15. Kohen-Vacs, D., Ronen, M., Cohen, S.: Mobile treasure hunt games for outdoor learning.
Bull. IEEE Tech. Committee Learn. Technol. 14(4), 24–26 (2012)
16. Lai, C., Salis, C., Murgia, F., Atzori, F., Wilson, M.F.: ANDASA, a web platform for
enhancing network of knowledge and innovation. In: Proceedings of 19th International
Conference on Computer Supported Cooperative Work in Design (CSCWD), pp. 36–41.
IEEE (2015)
17. Fee, S.B., Holland-Minkley, A.M.: Teaching computer science through problems, not
solutions. Comput. Sci. Educ. 20(2), 129–144 (2010)
18. Canessa, E., Fonda, C., Zennaro, M.: Low-Cost 3D Printing For Science, Education and
Sustainable Development, free ICTP eBook (2013). ISBN 92-95003-48-9. http://
sdu.ictp.it/3D/book.html

Intellectual Flexible Platform for Smart Beacons

Galyna Tabunshchyk1(✉) and Dirk Van Merode2

1 Zaporizhzhya National Technical University, Zaporizhia, Ukraine
galina.tabunshchik@gmail.com
2 Thomas More Mechelen-Antwerpen, Antwerp, Belgium
dirk.vanmerode@thomasmore.be

Abstract. The Smart Beacon System is a system of Bluetooth Low Energy
devices and a back-end database with a dedicated content management system (CMS).
It provides a low-entry, easy-to-use solution for all creative people to dedicate
specific information to whatever object they desire, be it paintings, statues,
shopping windows, garbage bins, … The Internet of Everything for
Everyone. A solution for dynamic map storage, for usage in a mobile
application, was investigated and an implementation suggested. The idea of smart
interfaces, which display content based on the user's preferences, is also
implemented.

Keywords: iBeacon · BLE4.0 · iOS · Android · Intellectual content · Path detection

1 Introduction

There is always a need to deliver information to the different visitors, teachers or
students of a university campus and of the city these campuses reside in, both for day-to-
day use and for specific events [1]. This can be very helpful when one wants to provide
additional information about different objects. In the university's museum this would
help to add multimedia content and background information about the pieces on display.
For tourism it is useful for the highlights of the village or city where the university is
located; on the university campus it shows visitors the way and directs them to points of
interest, such as interesting university labs, exhibitions or simply the canteen.
The development of an attractive, user-friendly mobile application with a dedicated
back-end server and a content management system (CMS) is extremely relevant in this
view, as it allows bringing an effective solution to the market at limited cost and with
few entry difficulties [2, 3]. The app user is the client of information related to a
certain beacon at a certain location, and our solution allows the user to get this
information in an attractive way on his or her smartphone through a dedicated application.
The app itself fetches the information from the server, related to the unique identifier
(UUID) the beacon broadcasts on a regular basis. On this server the information is added
and edited by the beacon owners through the developed CMS. The users can decide on groups
of beacons which are allowed to display their information. The total system,
CMS-server-app, is what we refer to as the Smart Beacon System.
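
As a rough illustration of the app-side flow described above, the fetch of beacon-related content could be sketched as follows; the endpoint URL, JSON field names and group-filtering rule are our assumptions for illustration, not the actual CMS API.

```python
# Minimal sketch of the app-side fetch: the beacon broadcasts a UUID and the app asks
# the back-end for the content attached to it. Endpoint and field names are assumed.
import json
from urllib.request import urlopen

SERVER = "https://cms.example.org/api/beacons"  # hypothetical endpoint


def fetch_beacon_content(uuid, allowed_groups):
    """Return the text/picture/link record for a broadcast UUID, or None if the
    beacon belongs to a group the user has not allowed."""
    with urlopen(f"{SERVER}/{uuid}") as resp:
        record = json.load(resp)
    if record.get("group") not in allowed_groups:
        return None
    return {"text": record.get("text"),
            "image": record.get("image"),
            "link": record.get("link")}
```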


After the implementation of the Smart Campus solution [4] and the analysis of user
feedback, our task was to add functionality to the Smart Beacon System that allows adding
navigation functions, media content and an intellectual interface.
Improvement of the system and research were conducted in four domains:
1. The CMS should support developing maps and storing them in various ways.
2. It should be possible to attach a diversity of media content to one beacon.
3. The mobile application should provide a search option to find the optimal path to
the selected beacon location.
4. The mobile application should provide an intellectual interface, which allows
selecting information based on user preferences.

2 Navigation System and Path Detection

One of the tasks after the implementation of the Smart Beacon System was the development
of a navigation system that can be used for indoor locations. The idea is to find the
way from one beacon to the others, for an interactive tour around the campus or to guide
visitors to their specific location of interest. To provide navigation, a map of the
building must first be provided or developed; the next step is to show the appropriate
path to another beacon location. This is why the newly developed solution consists of
two parts: a map editor and path detection.

Fig. 1. Map editor
The map editor allows creating a map of a floor. You can use a background picture
of a known area or develop the map from scratch with the easy-to-use editor. Next,
beacons can be located at the necessary places (Fig. 1). Figure 1 shows an edited
floorplan, with beacons placed in specific rooms on this floor.
Four editors were created and tested for efficiency. The architectures of the built
editors, with their initial rendering/re-rendering times, are listed below:
• React + Redux + HTML table [7] (5300–6400 ms/1000–1500 ms);
• React + Redux + SVG element [8] (5100–6200 ms/1000–1400 ms);
• React + Redux + Konva.js [9] (700–900 ms/15–20 ms);
• React + Redux + Pixi.js [10] (300–500 ms/5–15 ms).
All these options were tested on the response time between the request for the map and
its display in the app. In these tests, only the last solution, "React + Redux + Pixi.js",
achieved the required delay of 5–15 ms for feedback.
The maps belonging to a range of beacons are stored as a picture and as a [100, 100]
array, which is used by the mobile application for path-detection tasks.
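
The paper does not specify which path-finding method is used; as a minimal sketch, a breadth-first search over the stored 100 × 100 array could realise the path-detection step. The cell encoding (0 for a walkable cell, 1 for a wall) and the grid coordinates of the beacons are our assumptions.

```python
# Sketch of path detection on the stored [100, 100] map array.
# Assumption: 0 marks a walkable cell, 1 marks a wall; start/goal are the grid cells
# of the detected beacon and of the beacon selected by the user.
from collections import deque


def shortest_path(grid, start, goal):
    """Breadth-first search returning the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    if goal not in prev:
        return None                      # no walkable path between the two beacons
    path, cell = [], goal
    while cell is not None:              # walk back from goal to start
        path.append(cell)
        cell = prev[cell]
    return path[::-1]
```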
When a beacon is detected in the mobile application, the client-user gets the infor‐
mation belonging to this beacon. He is notified with a buzz or a ping. This information
consists of a text, a picture and a link, which can house any additional multimedia infor‐
mation which might be needed (Fig. 2).

Fig. 2. App display

After receiving a buzz from a certain beacon, the user can ask to show it on the map
(Fig. 3). This map gives the user a sense of where he or she is located and where other
beacons can be found; the other beacons are shown in a different colour on this map. The
user can select one of them and ask for directions to it.

Fig. 3. Path detection

3 Smart Interfaces

The next improvement was aimed at making the user experience better. The goal is to keep
the app user interested in what a beacon has to say. A good way of doing this is to change
the displayed information over time, to avoid beacon boredom. In a later development
the displayed information would become dependent on external parameters. The number
of beacon registrations by a certain client-user is obviously a possible parameter; this
way, the user gets different content each time he passes a certain location, so certain
locations could have much to say. Another parameter could be the date. This would be very
handy for providing "story-telling marketing" to the user: it could give the user the
desire to pass the location on each day of the story week. Another application is the
week's menu displayed at the cafeteria, which is also dependent on the date; in extension,
the beacon could say something different each day of the year. Other parameters could be
user-specific, such as gender or frequented locations, although privacy considerations
apply here. A last set of parameters could come from online resources such as the weather.
And, of course, a combination of the above-mentioned parameters could determine the
eventual information displayed. A woman who passes the cafeteria for the 5th time on a
cold day maybe cares for a cup of soup (Fig. 4).
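
A minimal sketch of such a parameter-driven rule set is given below; the thresholds, messages and parameter names (visit count, temperature, date) are invented for illustration, and the actual, eventually AI-driven selection logic is left to the future work described later in the paper.

```python
# Illustrative content selection for one beacon, driven by the parameters the text
# mentions: number of previous registrations, date and weather. All thresholds and
# messages are invented for the example; user-specific parameters are omitted here.
import datetime


def pick_content(visits, temperature_c, today=None):
    today = today or datetime.date.today()
    if today.weekday() == 0:                    # Monday: show the weekly cafeteria menu
        return "This week's menu: soup, pasta, salad."
    if temperature_c < 5 and visits >= 5:       # cold day, frequent visitor
        return "Cold outside - how about a cup of soup?"
    if visits == 0:                             # first registration of this beacon
        return "Welcome! This is the campus cafeteria."
    return f"Good to see you again (visit no. {visits + 1})."


print(pick_content(visits=5, temperature_c=2))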
If users do get bored by one beacon, they can put a beacon in a blacklist, which
prevents it from displaying any other information. Of course this operation can be
undone.
Fig. 4. Multi-content

Another feature that augments the intelligence of the presentation is the "favourites"
option. The user can decide to "like" a beacon by pushing the heart symbol. This puts the
UUID of the beacon in a local database on the smartphone and adds the possibility of
receiving updated information even when the user is not in range of the liked beacon
(Fig. 5).

Fig. 5. Favourites

The options for multi-content, maps, blacklisting and liking of beacons are implemented.
The different contents of a beacon can be selected from within the application; the same
goes for finding the next beacon within range.
Future work will consist of automating the process of displaying certain information
to certain users, adding artificial intelligence. The liked beacons should also be stored
in a local database. Other improvements could be to allow adding and automatically
displaying multimedia content in the CMS and app. For now the app starts multimedia
content automatically if the link points to this type of content. A last feature would be
to allow adding and editing beacons in the CMS through an additional application.

4 Conclusions

The suggested platform allows different application domains depending on its preferred
functions. It can be used as a simple advertisement application, which differs from
existing applications through its adjustable interface, with multi-content attached to a
BLE beacon based on the user's preferences. The CMS contains an editor for adding maps,
which allows saving them in a graphical view for users and changing them dynamically, so
that they can easily be used by the mobile application. The Android mobile application
also contains optimal path suggestions for indoor navigation. The CMS also provides tools
for managing the different multi-content beacon input that is used by the mobile
application.
Future work is to make the changing of the multi-content intelligent, to store liked
beacons on the user's phone, and to allow CMS manipulation through the mobile application.

Acknowledgment. This work is a result of the common efforts of the EmSys Group of Thomas
More Mechelen-Antwerpen and the Software Tools Department of Zaporizhzhya National Technical
University within the framework of the European Tempus project 544091-TEMPUS-1-2013-1-
BE-TEMPUS-JPCR "Development of Embedded System Courses with implementation of
Innovative Virtual approaches for integration of Research, Education and Production in UA, GE,
AM" [DesIRE] [11].

References

1. Tabunshchyk, G.: Flexible technologies for smart campus. In: Van Merode, D., Tabunshchyk,
G., Patrakhalko, K., Goncharov, Y. (eds.) Proceedings of XIII International Conference on
Remote Engineering and Virtual Instrumentation (REV 2016), Madrid, Spain, 24–26
February 2016, pp. 58–62. UNED (2016)
2. Tabunshchyk, G., Van Merode, D., Goncharov, Y., Patrakhalko, K.: Smart-campus
infrastructure development based on BLE4.0. J. Electrotechn. Comput. Syst. 18(94), 17–20
(2015)
3. Van Merode, D., Tabunshchyk, G., Goncharov, Y., Patrakhalko, K., Staroverov, V.:
Interactive university platform. In: Modern Problems and Achievements in Radio
Engineering, Telecommunications and Information Technologies: Proceedings of the VIII
International Scientific and Practical Conference (21–23 September 2016, Zaporizhzhya).
ZNTU, Zaporizhzhya (2016). 344 p
4. Van Merode, D., Tabunshchyck, G.: Multipurpose smart beacon solution. In: Proceedings of
International Symposium on Ambient Intelligence and Embedded Systems (2017)
5. Cochran, D.: Twitter Bootstrap Web Development. Packt Publishing, Birmingham
(2012). 68 pages
6. Osmani, A.: Learning JavaScript Design Patterns. O’Reilly, Sebastopol (2012). 188 pages
7. Redux website. http://redux.js.org/
8. SVG introduction. https://www.w3.org/TR/SVG/intro.html
9. Konva.js - 2d html5 canvas library for desktop and mobile applications. https://
konvajs.github.io/
10. The HTML5 Creation Engine. http://www.pixijs.com/
11. Desire project Website. http://www.tempus-desire.eu/

An Approach for Implementation of Artificial Intelligence
in Automatic Network Management and Analysis

Avishek Datta(✉), Aashi Rastogi, Oindrila Ray Barman, Reynold D'Mello, and Omar Abuzaghleh

Department of Computer Science, University of Bridgeport, Bridgeport, CT 06604, USA
adatta@my.bridgeport.edu, oabuzagh@bridgeport.edu

Abstract. The fast development of the Internet and the huge number of gadgets
connected to it have brought the challenge of further computerization and automation
in network administration and architecture. It is becoming increasingly difficult for
businesses to afford downtime in their Internet systems, and it is also very important
to make configuration management and error reduction easy [1]. Some of the tasks
involved in the management of network configuration include maintaining configuration
files, building a standard for all device maintenance, repair, replacement and upgrades,
issuing proper rollback commands and keeping a proper backup archive [2].
Certain assumptions made about networks are, in fact, wrong assumptions, and they
lead to complicated and very time-consuming structures for managing the overloaded
content. Automated networks, by contrast, control the data or packets across
heterogeneous networks, providing better response times for communication services [3].
Automatic management control permits efficient use of link and node capacities.
A second point of interest is adaptable control when load conditions change quickly
during periods of overload and hardware failure. Automatic network management has been
developed and improved to maintain flow control, use memory properly and improve
routing capacity [6].

Keywords: Automatic network · Network management · Network analysis · Artificial intelligence

1 Introduction

According to Ren and Li, “A Network Management System (NMS) refers to a collection
of applications that enable network components to be monitored and controlled” [7].
In some cases it becomes very difficult to manage and control a particular network
when it grows very large. In these cases an Automated Network Management (ANM) system
is required. This system helps in handling and managing complex Internet systems. The
ANM system provides tools for dealing with diverse network entities and can help in
reducing maintenance and operation costs [7].


The features of ANM defined in the ISO standards community include Fault
Management, Configuration Management, Accounting, Performance Analysis, Security
and Resource Management/User Directory. The goals include managing diverse
network entities, assisting the user/operator in analysing the network, providing a
user-friendly interface to the operator, minimizing Internet traffic, allowing
progressive growth of the network and preventing faults from occurring in the
networks [7].

2 Related Works

Automated Network Management Systems have been a subject of extensive research and
previous work. A recent approach in automatic network management is systems based on
Semantic Web Services, using OWL-S as well as OWL+SWRL, integrated with policy-based
network management [8].
A definitive long-term objective is to create an infrastructure that enables the
network to become self-configuring, so that necessary system-wide re-configuration is
initiated by changes within the system itself and not by the coordinated actions of
teams of human administrators [5].
An automatic traffic management control system has recently been implemented
in networks with Stored Program Control (SPC) exchanges. The implementation of traffic
control systems allows one to use link and node capacities efficiently. A second
advantage is dynamic control when peak conditions change rapidly during periods of
overload and equipment failure [3].
There is a huge amount of work related to policy-based management. Because of space
restrictions, we can only provide a general outline of the differences between our work
and that of others. Efforts based on policy execution often display a confusion between
goals and means: the policy is defined in terms of configuration parameters of network
elements, such as firewall rules. Such rules are not the policy itself but a single
implementation of an intent [9]. The phrase 'security policy' should convey the intent,
but it is frequently taken to mean the mechanisms that execute and enforce that intent;
we turn this on its head by having the security policy convey the intent and nothing
more. An essential step in policy specification is the generation of routing filters
from a Lisp-like specification language in a logical framework [9].
A large subset of policy-based management research is focused on the linguistic
issues of policy, where policy is synonymous with rule sets. A rule-based solution is
often too simplistic, because network phenomena are highly correlated and the
implications of a remote change have to be derived from the composition of the entire
remaining configuration that forms the context. For example, a particular Telnet session
may not be explicitly forbidden by the policy, yet it might allow access to an application
on that machine that is forbidden by the policy [5].
Also proposed is a procedure to equip the system with automatic and adaptive
management capabilities. Learned pattern knowledge cooperates with domain knowledge to
perform adaptive management tasks: prediction, diagnosis and control. This pattern
learning captures the fundamental network patterns and refines the pre-specified domain
knowledge.


The development of the MIB and CMIP is also discussed. One fundamental issue in
management information and systems is to provide a window through which management
applications can access the global management information. The answer proposed is to
build views by logic programming on the distributed physical MIBs; accesses to the views
are translated into CMIP operations on the MIBs [10].

3 Proposed Model

The management and analysis of any kind of network management system is very
difficult. This is primarily due to the fact that a lot of decision making and rerouting
is required in order to efficiently reroute and send any data packet towards its
destination. Digital logistics is one of the most prominent fields of research in today's
world, where the data flow is in terabytes at every instant of time.
Looking from a digital network point of view, when looking at Production Planning
and Control assignments from a market/logistics-oriented view, the primary task is
the dynamic coordination of the available resources [11]. This means that rerouting of
the data from the routers, based on the decisions made by the tools, is very important
and time consuming. It is our opinion that this is generally due to the excessive flow
of data and to the fact that the individual nodes are overflowing and overloaded with the
data flowing through them. Logically, distributing the data throughout the network would
free up the nodes and therefore even out the load over a particular region.
Another issue which we found to be very compelling is the use of tools, and the
manual configuration of these tools, in order to perform dynamic control of data.
Configuring each node to control data dynamically is a tedious job for any network
monitor/administrator. In this case, he would have to be personally in charge of
configuration and maintenance every time something happens to that particular node, or he
would have to program certain protocols/subroutines into the nodes each time there is
any kind of incident, such as a power surge, power failure, etc. Such a system is very
difficult to manage and administer because of the volume of connected devices/nodes.
Considering the above-mentioned problems, we have come up with our own design for
the Automated Network Management System. Our system involves the presence of an
Artificial Intelligence System in order to control the routing efficiently [12]. In
our system, we use the tools of the dynamic control systems already present within the
nodes to learn the flow patterns, and then use the learning and adaptive nature of the
artificial intelligence system to find out the nature of the data flow and also to control
the propagation space of the entire dataset. We also use the concept of Field Programmable
Gate Arrays (FPGAs) in order to divide the data into small arrays, which would make each
piece small enough to be transmitted in real time and without any kind of delays or lags.
This, in turn, can be effectively transmitted to the network space upon which the
Artificial Intelligence System is able to work. The advantage of using the FPGA concept
at the data level instead of at the signalling level is that the data can be easily
manipulated, whereas it is difficult to divide the existing data signals.
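
The "FPGA-like" division of the payload amounts to fragmenting it into small, individually routable pieces that can be reassembled at the destination; a plain sketch of this idea is given below, where the fragment size and header fields are our assumptions rather than part of the proposed system.

```python
# Sketch of splitting a payload into small, individually addressable fragments so that
# each piece can be routed independently and reassembled at the destination.
# Fragment size and header layout (seq/total) are assumptions for illustration.
def fragment(payload: bytes, size: int = 64):
    total = (len(payload) + size - 1) // size
    return [{"seq": i, "total": total, "data": payload[i * size:(i + 1) * size]}
            for i in range(total)]


def reassemble(fragments):
    # Fragments may arrive out of order over different paths; sort by sequence number.
    ordered = sorted(fragments, key=lambda f: f["seq"])
    return b"".join(f["data"] for f in ordered)


msg = b"example payload " * 20
assert reassemble(fragment(msg)) == msg
```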


It may be argued that loading an Artificial Intelligence framework within a node
might make the system too heavy to operate and that the operating cost would increase
manyfold. However, it should be noted that, once the framework is trained in handling the
data and the traffic and the weights of the system are balanced, a decision can be taken
by the system in almost real time [13].
Artificial Intelligence is the intelligent behaviour or decision-making capability
shown by a machine, software or inanimate object. In this system, the device or
software acts as an intelligent agent which perceives its environment to maximize its
chances of success. The term was coined by John McCarthy in 1955 [14]. In the
networking scope, Artificial Intelligence (AI) can be used to make intelligent decisions
in rerouting and controlling either the flow of data or the nodes and components involved
in the system.
In our proposed system, we use the AI to control the flow of the data by using
the traffic flow patterns and the control of the nodes. Our system uses the entire
bandwidth and the entire network-space to exchange data between two devices (source and
destination). We divide the data into finite FPGA-like fragments. Then, using regular
socket programming, we dump the entire data into the network space. The network is made
up of different controlling nodes/routers/servers. The Artificial Intelligence System
programmed within these nodes controls and redirects the entire data from the
network-space to the destination hub and to the subsequent device.
Therefore, a system running multiple instances of the Artificial Intelligence System
will process the entire data in a much shorter time than a system which would just use
one particular route to transmit its data.
The Artificial Intelligence System Architecture in this system has three principal
components: The Input Data Processing and Data Packeting, the Flow Learning and
Flow Control Decision, and the Rerouting of Data Packets towards the final node. This
system would effectively cut down the entire propagation time and therefore would lead
to much more data control in the sphere of Internet. The basic architecture of our system
is as follows (Fig. 1):

Fig. 1. Basic architecture of our proposed system

zamfira@unitbv.ro
An Approach for Implementation of Artificial Intelligence 905

This picture essentially describes the basic architecture of our proposed system. As
mentioned above in the system description, the source transmits the data to the network
cloud. The cloud (Internet-space) contains several AI components in its various
switching devices, which are trained on the different data sets and traffic flow patterns.
The AI modules make propagation decisions based on the existing traffic patterns
in the different regions of the network-space. The different packets, which have been
divided into FPGA-like fragments, are sent into the network cloud and are then re-routed
according to the decisions of the AI algorithm.

3.1 Training Artificial Intelligence System


Any Artificial Intelligence System needs to be trained initially in order for it to
produce intelligent results based on its 'learning' or training. In our proposed system,
we are talking about real-time decision making for the flow of the traffic. Our proposed
system has to be trained initially to take decisions based on the existing traffic flow
patterns. After that, the system adapts to the traffic patterns with the regular flow of
the data packets. Due to the small size of the FPGA-like fragments, not much time is
required for iterating over and processing each data packet. The AI essentially trains
on the propagation time of each packet from the source to the destination. In its
learning algorithm, it considers itself the sender and the destination port the receiver.
It quantifies the result of sending the data through the various nodes and ultimately
decides which node or path to send the data packet through. While it is true that a
network-space might have a huge number of devices connected to it, each via its own
unique connection, it is also true that any node has 'n' neighbouring nodes at any given
time, which is a finite quantity. This makes the calculations of the AI system much
easier and therefore easily manageable through distributed networking. For the purpose
of proposing a model of AI for the system, we can use the 'KNN classifier' [15] to
classify the traffic data. However, there is always a margin of error (~0.0003–0.00005%)
where the KNN might go wrong and might send the data packet through a wrong path.
The KNN classifier is one of the most widely used classifiers in image processing
and big data processing because of its compatibility with multi-neural architectures.
This means that the KNN is capable of performing even on large data sets, which is
highly required in any real-time system like Internet network traffic analysis.
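
As one concrete, purely illustrative reading of this idea, a k-nearest-neighbours model could be trained on past per-path load observations labelled with the path that actually delivered a packet fastest; the features, training data and library choice (scikit-learn) below are assumptions, not part of the proposed system.

```python
# Illustrative KNN path selection in the spirit of Sect. 3.1: the node learns, from past
# observations of per-path load, which outgoing path gave the shortest propagation time,
# then predicts the path for the current load vector. Data and features are invented.
from sklearn.neighbors import KNeighborsClassifier

# Each sample: observed load (0..1) on the three candidate paths at send time.
X_train = [[0.9, 0.2, 0.4],
           [0.1, 0.8, 0.5],
           [0.3, 0.4, 0.9],
           [0.7, 0.1, 0.2]]
# Label: index of the path that actually delivered the packet fastest.
y_train = [1, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

current_load = [[0.8, 0.3, 0.6]]
print("forward packet on path", model.predict(current_load)[0])
```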
The AI system has to undergo periodic learning phases in order to update its
knowledge. Since the learning/training of any AI system takes a long and tedious time,
we suggest the development of an Adaptive Artificial Intelligence System in which the
entire system does not have to undergo periodic re-learning; instead, it only learns
from the new data collected by the data-analysis module.
The system is able to detect when the traffic through a particular node is light and
can perform this operation during those periods. As an alternative, a process of
interrupted learning can also be followed. In this process, the Adaptive Artificial
Intelligence System (AAIS) can initiate the learning/training process at any instant
of low traffic. However, if the sensors sense an increase in traffic volume, the learning
process can be interrupted by a master controller for a small period of time until the
traffic subsides below a threshold; by then, the system will have collected some more
data from the ongoing network traffic, which is also used to train the AAIS further.
This makes sure that data is always collected irrespective of the
learning/training status of the AAIS.
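
One way to read this interrupted-learning policy is as a simple control loop that trains on buffered traffic samples only while the measured load stays below a threshold; the threshold value and the placeholder callbacks in the sketch below are our assumptions.

```python
# Sketch of the interrupted-learning policy described above: train on buffered traffic
# samples only while the measured load stays below a threshold, while data collection
# continues regardless. traffic_load() and train_one_batch() are placeholders for
# node-specific code; the threshold is assumed.

LOAD_THRESHOLD = 0.6   # assumed: pause training above 60% link utilisation


def adaptive_training_step(buffer, traffic_load, train_one_batch):
    """Run at every control tick; returns True if a batch was trained."""
    if traffic_load() >= LOAD_THRESHOLD or not buffer:
        return False                       # interrupted: only data collection continues
    train_one_batch(buffer.pop(0))         # resume training on the oldest buffered batch
    return True


if __name__ == "__main__":
    buf = [["sample batch 1"], ["sample batch 2"]]
    print(adaptive_training_step(buf, lambda: 0.3, lambda b: print("training on", b)))
```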

3.2 Testing and Analysis of the Active Traffic Data Through Artificial Intelligence
System
The Artificial Intelligence System present in the nodes within the Network
Cloud (Internet space), once trained, must be able to perform real-time analysis of the
actively flowing traffic. The data packets, once dumped by the source into the
Network Cloud, pass through the AI nodes, which calculate the instantaneous data
traffic load patterns within their regional or immediate network. Each node is then able
to detect the path with the minimum data traffic at that instant. The AI then
reroutes that fragment of the data through that path. This process is followed until the
packet reaches its destination node. The advantage of having a lightweight but
computationally capable classifier like the KNN is that the AI can implement 'n'
hidden layers, which successfully parallelizes the job of analyzing the traffic pattern
at a given instant.
The instantaneous data traffic analysis by the nodes allows the AI module to instantly
decide the path to be used by the data packet for the next leg of its journey. Since
data traffic will not be high at every single node, and because the entire bandwidth is
utilized, a much faster network can be achieved for the purpose of data exchange.
Another reason for the speed of this system is the breaking down of the data into packets
of small size, in the manner of the FPGA-like fragments described above.
Once the data is classified by the KNN classifier, the AI module has the decision
about the path to be used by the packet for its travel to the next node. Once the
decision is made and confirmed, the current node locks down the chosen node for the data
transfer and dispatches the data into it for its travel to the next node. The AI module
of the previous node then performs its analysis for the transfer of the next data packet.
Essentially, the entire analysis system of the AI undergoes a refresh operation before
preparing for the next data packet. The subsequent data packet undergoes the same
operations for its travel.

4 Adaptive Artificial Intelligence System (AAIS)

Artificial Intelligence, as mentioned above, is the intelligent behaviour or
decision-making capability shown by a machine, software or inanimate object. This AI
system is configured or programmed such that it can analyze the traffic flowing through a
particular node in order to assist in the decision-making process for the routing of the
data packets, so that the entire bandwidth of the network is used instead of one
particular region.


For the sake of experimentation, we consider the system to contain 100 layers of
neurons, with each layer containing 50 neurons. For the purpose of this system we avoid
the presence of hidden layers (although they could be implemented as future work). Since
this system is designed to answer which path to choose, we have to have the same number
of outputs as the number of inputs in the first layer.
According to our proposed model, the learning is done by the 'K-Nearest-Neighbours'
(KNN) algorithm, which limits the number of comparisons, tests and learning steps to be
done. This process essentially computes the nodes which are hypothetically close to the
ideal solution/result. This way, the paths which are not close to the ideal
solution/result are not iterated over or computed for that particular node. This does not
mean that a distant node is eliminated permanently from the system; it only means that,
for that particular iteration/problem, the distant node/path is not valid, as it carries
more traffic than the rest of the possible candidates. As mentioned in the previous
section, this process runs according to an adaptive or interruptive algorithm, which
accommodates both the analysis of new data, along with the collection of new data, and
the learning/training of the system with the collected data. The output contains the
path to be chosen for the transmission of data in binary format, with the path to be
used for transmission marked as '1' and the others as '0'.
The decision-making capability of the system works in tandem with its learning
counterpart. The testing is performed on one single packet of data at a time.
Each packet of data is essentially an FPGA-like fragment of the original complete data
which was dumped into the Internet space by the source. Once trained, the system subjects
each data packet to the algorithm for testing purposes and comes up with the most
appropriate path for the data packet at that instant, while its learning algorithm keeps
learning from each and every outcome of its decisions so that it can make better
decisions in the future.

5 Result

The proposed system is completely theoretical at this point as such a network is not yet
available for testing the proposed model. However, based on the previous work by
various authors, we can say that the proposed system is perfectly capable of working
the way we propose.
Instead of using one particular path to transmit the data from one node to another, it
is better to distribute the data throughout the network and then converge at the
destination node. This not only saves time but also increases the efficiency of the
entire network. This process also implicitly controls and monitors the traffic throughout
the network.

6 Conclusion

The above proposed system introduces a new method of effective data exchange, giving
improved speed of propagation thereby reducing propagation time. Proper planning for


the regular capacity of each node must be kept in mind while designing the system. This
system removes the need for the administrator to manually write protocols/subroutines
to deal with every kind of data transaction. While the role of the network administrator
is reduced, it is not, however, dissolved. The administrator will still be needed to
service, troubleshoot and debug the system, as well as to conduct update operations on
the AI, without which the system will not be able to identify future styles of packets.
If such a system is truly developed, it would certainly automate the entire process of
network management and thereby really define Automated Network Management and Analysis.
Future work in this system can be done in the process of improving the data rate by
the use of frames instead of individual data packets. However, if the frames have packets
going to different destinations, it will create a problem. Another method of optimizing
this procedure would be to parallelize this operation thereby reducing learning costs and
time. A third approach to this problem is to have a Master-Slave configuration where
the major learning and training is done in the Master node while the Slave nodes will
only conduct testing services.
Therefore, to conclude, this paper postulates a method of automating the entire process
of Automatic Network Management System with the help of an Adaptive Artificial Intel‐
ligence System which reroutes the data packets received by it towards the destination
nodes without the intervention of a human entity or a protocol every single time.

References1

1. Qu, Z., Deng, J., Keeney, J., van der Meer, S., Wang, X., McArdle, C.: Pattern mining model
for automatic network monitoring in heterogeneous wireless communication networks. In:
25th IET Irish Signals and Systems Conference 2014 and 2014 China-Ireland International
Conference on Information and Communications Technologies, ISSC 2014/CIICT 2014.
IET, pp. 286–291, June 2014
2. Klie, T., Gebhard, F., Fischer, S.: Towards automatic composition of network management
web services. In: 10th IFIP/IEEE International Symposium on Integrated Network
Management, 2007. IM 2007, pp. 769–772. IEEE, May 2007
3. Filipiak, J.: Analysis of automatic network management controls. IEEE Trans. Commun.
39(12), 1776–1786 (1991)
4. Wang, Y.L., Yuan, A.S., Wu, Q.: Automatic event driven system for network management.
In: 2012 14th International Conference on Advanced Communication Technology (ICACT),
pp. 838–843. IEEE, February 2012
5. Burns, J., Cheng, A., Gurung, P., Rajagopalan, S., Rao, P., Rosenbluth, D., Martin Jr., D.M.:
Automatic management of network security policy. In: Proceedings of DARPA Information
Survivability Conference and Exposition II, 2001, DISCEX 2001, vol. 2, pp. 12–26. IEEE
(2001)
6. Wong, S.Y., Kochen, M.: Automatic network analysis with a digital computation system.
Trans. Am. Inst. Elect. Eng. Part I Commun. Electron. 75(2), 172–176 (1956)

1
(A paper of this length is never possible without the help of the contributing authors. It was an
honor to read your ideas and your effort. This is the list of papers which helped us to reach our
goal of making this idea a success).


7. Feridun, M., Leib, M., Nodine, M., Ong, J.: ANM: automated network management. IEEE
Netw. 2, 2 (1988)
8. Xu, H., Xiao, D.: Applying semantic web services to automate network management. In: 2nd
IEEE Conference on Industrial Electronics and Applications, ICIEA 2007, pp. 461–466.
IEEE, May 2007
9. King, I.P.: An automatic reordering scheme for simultaneous equations derived from network
systems. Int. J. Numer. Meth. Eng. 2(4), 523–533 (1970)
10. Lin, Y.D., Gerla, M.: A framework for learning and inference in network management. In:
Global Telecommunications Conference, Conference Record, GLOBECOM 1992.
Communication for Global Users. IEEE, pp. 560–564. IEEE, December 1992
11. Nyhuis, F., Pereira Filho, N.A.: Methods and tools for dynamic capacity planning and control.
Gestão Produção 9(3), 245–260 (2002)
12. Bai, H.: A Survey on Artificial Intelligence for Network Routing Problems
13. Ng, A.: Sparse autoencoder. CS294A Lecture notes, vol. 72, pp. 1–19 (2011)
14. McCarthy, J.: Artificial intelligence, logic and formalizing common sense. In: Thomason,
R.H. (ed.) Philosophical Logic and Artificial Intelligence, pp. 161–190. Springer, Netherlands
(1989)
15. Acuña, E., Rodriguez, C.: The treatment of missing values and its effect on classifier accuracy.
In: Banks, D., McMorris, F.R., Arabie, P., Gaul, W. (eds.) Classification, Clustering, and Data
Mining Applications. Studies in Classification, Data Analysis, and Knowledge Organisation,
pp. 639–647. Springer, Heidelberg (2004)

Investigation of Music and Colours Influences
on the Levels of Emotion and Concentration

Doru Ursuţiu1(✉), Cornel Samoilă2, Stela Drăgulin3, and Fulvia Anca Constantin4

1 AOSR Academy, Transylvania University of Braşov, Braşov, Romania
udoru@unitbv.ro
2 ASTR Academy, Transylvania University of Braşov, Braşov, Romania
csam@unitbv.ro
3 ARA, Transylvania University of Braşov, Braşov, Romania
dragulin@unitbv.ro
4 Department of Music, Transylvania University of Braşov, Braşov, Romania
fulvia.constantin@gmail.com

Abstract. Experimental evidence has already demonstrated that music and colours
have an influence on emotion, attention and concentration. Due to rapid
technical progress, new devices are available to further evaluate brain
signals, to map them and to compare them with reference values. Therefore, starting
from the fact that music and colours not only express emotion but also produce
emotions at different levels, depending on the psychological and mental health of
the user, we investigate with NeuroSky technology, precisely with the MindWave
headset, the amplitude and spectral components of the low-frequency brainwave signal,
translated into a relationship between music, colour and the outcome of mental activity.
The aim of this paper is to investigate the influence of music and colour on brain
activity, mainly the increase in the concentration level, noting its help to students
enrolled in distance-learning programs. A LabVIEW application is developed in order to
better monitor and perform the spectral analyses and statistical calculations of the
music- and/or colour-induced brain waves.

Keywords: Attention · Emotion · Frequency · LabVIEW · Music · MindWave headset

1 Introduction

From 1875, when the existence of electrical brain currents was first discovered by
Richard Caton, up to now, when the MindWave headset is used to estimate brain activity
in terms of eSense meter values, there have been years of research and studies.
A short overview notes that, in 1924, Hans Berger was the first to record the electrical
brain activity and the changes from relaxation to alertness; he called the recording an
EEG (electroencephalogram). Ten years later, Adrian and Matthews described human brain
waves. In time, EEG technology came to be used in numerous medical sciences, mainly after
Grey Walter demonstrated the utility of electrodes to record brain activity (in 1964).
Another important step was the discovery by Vidal, in 1973, of a system determining the
eye-gaze direction to a certain point. Since then, neuroscience research has developed an
increasing understanding of brain activity.
The newly developed technology is used to improve brain activity, to improve
attention/concentration or to induce relaxation. A few examples are given below. Miller
described EEG as the most common type of electrophysiological indicator used
for workload studies, indicating various states or activity levels (Miller 2001).
Robolledo-Mendez showed that the levels of attention are associated with brain activity,
mainly with the performance of a learner (Robolledo-Mendez 2008). Peters, based on an
analysis of user behaviour, correlated visual with cognitive attention (Peters et al. 2009).
Crowley used psychological computer-based tests to measure meditation and attention
levels, explaining the importance of stress in the measurements (Crowley et al. 2010).
Furthermore, it was investigated how behavioural suggestions increase the level of
relaxation (Moslow et al. 2011), how the driver's brain signals indicate the level of
drowsiness/arousal using EEG (Khalilardali et al. 2012), and how such signals improve
players' control (Gudmundsdottir 2011). A group of researchers from Mexico and Chile
worked on an experiment to determine whether a low-cost BCI (Brain Computer Interface)
device is able to measure the level of concentration of a programmer during his or her
tasks (Gonzalez et al. 2015). Girase and Deshmukh found that, through a communication
system, a person could use device commands to accomplish certain intents; they stated,
for example, that a wheelchair can be totally controlled by human thinking (Girase and
Deshmukh 2016).
Over time, several recording and measuring devices have been developed, with results in
investigating the levels of emotion, attention and concentration. For example,
(i) BioRadio1, a wearable medical device with programmable recording and transmission
of different combinations of signals; (ii) EEG Crystal and Crystal-Sleep2, medical devices
still in testing and accreditation that record the heart rate during sleep; (iii) MindWave3,
a professional set used to measure the levels of concentration and relaxation; and (iv) Muse4,
a brain-sensing headband that elevates the meditation practice. No doubt, these devices
connected to computers have become a major working tool for professionals. They are
available on the market, easy to use and low-cost. Through BCIs (Brain Computer
Interfaces) we are capable of classifying the levels of attention and concentration, or
relaxation and meditation, of individuals during diverse activities.

2 What Is MindWave and How It Works?

The NeuroSky MindWave (Fig. 1) is a device used for "monitoring electrical signals
generated by neural activity in the brain" (Robbins and Stonehill 2014). It measures the
raw signals and the EEG power spectrum, i.e. data regarding the user's brain waves, and
the eSense meters for attention (concentration) and meditation (relaxation). The device
consists of an adjustable headband, sensor tip, ear clip, flexible ear arm, battery area,
power switch, sensor arm and the ThinkGear chipset inside. The interface of the MindWave
headset as it appears on the screen can be seen below (Fig. 2).

1 https://glneurotech.com/bioradio/physiological-signal-monitoring/wireless-eeg-research-analysis-teaching/
2 https://clevemed.com/
3 http://store.neurosky.com/pages/mindwave
4 http://www.choosemuse.com/

Fig. 1. MindWave mobile
Fig. 2. The interface of the MindWave headset

MindWave software has numerous applications (described by the producer),
including: Meditation Journal (attention, meditation and brainwave data), Speed Math
(teaching arithmetic skills), BlinkZone (blink detection), Schulte (monitoring attention
levels), SpadeA (optimizing reaction time and pattern recognition), MindHunter
(attention training), Man.Up (a meditation brain game), MindtyAnt (ability to
concentrate), and Jack's Adventure (a problem-solving game). Other applications, such as
MDT (Developer Tools), MRT (Research Tools) and Visualiser 2.0, are available online.
Easy to work with, the MindWave headset communicates digital values over Bluetooth to a
laptop, where the electrical activity of the brain can be seen as a dynamically evolving
EEG. With the eSense meter values (an algorithm for characterizing mental states) we
obtain reports of the levels of attention and meditation. Thus, with respect to
attention, the eSense meter indicates the intensity of mental focus (with values ranging
from 0 to 100) that occurs during intense concentration and directed mental activity.
For meditation, which indicates the level of relaxation, a value from 40 to 60 is
considered "neutral", a value from 60 to 80 "slightly elevated", and values from 80 to
100 "elevated". The data received and processed is reported in terms of frequencies.
Frequency is defined as the number of waves oscillating in a determined place in a
specific amount of time, usually a second.
The levels of attention and meditation, used as the user's level of concentration or
relaxation, are obtained by comparing the mean scores obtained while completing a task
(studying, listening to music, and so on) against the scores obtained while the user is
engaged in rest periods. This is a synthetic measurement that gives evidence of the level
of workload. The EEG band frequencies, their characteristics and the matching colours
are shown in Table 1 as they are described in the program.


Table 1. Frequency ranges of EEG signal

Brainwave frequency type | Frequency range | Characteristics of mental stage | Associated color
Delta | 0–4 Hz | Stage of deep sleep, when there is no focus, the person is totally absent, unconscious | Red
Theta | 4–8 Hz | Deep relaxation, internal focus, meditation, intuition, access to unconscious material such as imagery, fantasy, dreaming | Orange
Low Alpha | 8–10 Hz | Wakeful relaxation, conscious, awareness without attention or concentration, good mood, calmness | Yellow
High Alpha | 10–12 Hz | Increased self-awareness and focus, learning of new information and performing | Green
Low Beta | 12–18 Hz | Active thinking, active attention, focus towards problem solving, judgment and decision making | Light blue
High Beta | 18–30 Hz | Engaged in mental activity, also alertness and agitation | Dark blue (navy)
Low Gamma | 30–50 Hz | Cognitive processing, senses, intelligence, compassion, self-control | Violet
High Gamma | 50–70 Hz | Cognitive tasks: memory, hearing, reading and speaking | Bright purple
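
As an illustration of how the raw signal relates to these bands, the band power can be estimated from the FFT of a short window of raw EEG samples using the band limits of Table 1; the sampling rate and window length below are assumptions, and NeuroSky's proprietary eSense attention/meditation algorithm is not reproduced here.

```python
# Illustrative band-power estimate from a window of raw EEG samples, using the band
# limits of Table 1. Sampling rate (512 Hz) and window length (1 s) are assumptions.
import numpy as np

BANDS = {"delta": (0, 4), "theta": (4, 8), "low_alpha": (8, 10),
         "high_alpha": (10, 12), "low_beta": (12, 18), "high_beta": (18, 30),
         "low_gamma": (30, 50), "high_gamma": (50, 70)}


def band_powers(raw, fs=512.0):
    """raw: 1-D array of EEG samples; fs: sampling rate in Hz (assumed)."""
    spectrum = np.abs(np.fft.rfft(raw * np.hanning(len(raw)))) ** 2
    freqs = np.fft.rfftfreq(len(raw), d=1.0 / fs)
    return {name: float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in BANDS.items()}


# Example: a synthetic 1-second window dominated by 10 Hz (high alpha) activity.
t = np.arange(0, 1, 1 / 512.0)
window = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
powers = band_powers(window)
print(max(powers, key=powers.get))
```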

3 The Approach and the Outcomes of the Experiment

The research focuses on improving the level of attention and concentration of students enrolled in distance education programmes. In order to obtain the best outcome we use external factors such as music and colours, noting the weak and the strong points of the approach and the ways of dealing with the weak ones. The study is structured as follows:
• Preliminary analyses to establish that the MindWave device can be used for measuring the levels of emotion and concentration.
• Identifying the data, both the data used in the measurements and the data about the individuals who are the subjects of the research.
• Measuring the results and developing the LabVIEW model.
• Drawing conclusions from the results of the experiment.
The experiment started from an understanding of the brainwaves and of the methods of manipulating them according to the needs. Dr. Jeffrey Fannin has explained the benefits of increasing brainwave activity, the unhealthy ways of increasing gamma brainwaves and the problems associated with increasing them excessively.
The analysed data are the result of several sessions of music listening involving ten persons of different ages, backgrounds and health statuses. In a working environment proper for mental activity, but also relaxing and free of environmental stressors, the subjects/users listened to music of diverse genres, from classical to rock and from dance music to music for listening, both instrumental and vocal, knowing that the brain is organized to handle aspects of music such as melody, harmony, rhythm and timbre in its perceptual and cognitive processes. The music (ten songs) had been chosen carefully so that it was not limited to pieces preferred by the user and listened to on a daily basis. The same sample of music was used on a sample group, and the experiment was repeated twice. The results encouraged us to proceed to the actual experiment.
The measurements were done carefully, one at a time. Each song was allotted 3 to 4 min. The user sat down, working and/or listening. During the music therapy session the user showed an increase in the level of concentration or, on the contrary, in the level of relaxation, depending on the music genre he listened to. Once the song ended, the brainwave frequency type describing the range of activity, resulting from the eSense calculation (Girase and Deshmukh 2016), appeared on the screen. The results were saved for further comparison. Note that it is the mental activity that is measured, not the physical one. Psychological factors such as mood or tiredness are taken into consideration.
A first comparison between users showed a variety of data. The interpretation of the results focused on the outcome of each audition. For example, listening to the beginning of the first movement of Beethoven’s Symphony No. 5 produced diverse responses, while De Lucía’s “Concierto de Aranjuez” and Loreena McKennitt’s “Tango to Evora” showed consistency, with the same response for all listeners.
In Table 2 we give a few examples of the results, noting that we chose the results that were most similar among listeners.

Table 2. Examples of results


Played song | Person I | Person II | Person III
Beethoven: Symphony No. 5, I | Theta | Low Alpha | Low Alpha
Ravel: Bolero | Theta | Delta | Delta
Orff: Carmina Burana | Low Alpha | Theta | Theta
Aerosmith: Crazy | Low Beta | Theta | Theta
C. Cruz: La vida es un carnaval | High Alpha | Low Alpha | Low Alpha
Shostakovich: Waltz No. 2 | High Alpha | Delta | Delta
McKennitt: Tango to Evora | Theta | Theta | Theta
Enya: Only Time | Low Alpha | Low Beta | Low Beta
Fanfare “Ten Poles”: Shukar | High Gamma | Low Alpha | Low Alpha
De Lucía: Concierto de Aranjuez | High Gamma | High Gamma | High Gamma

In the second experiment the listeners used BT headphones, and the third time we repeated the same auditions, but with a coloured panel lighting the room. The results showed that while orange and yellow do nothing to change the brain activity, and blue and purple show a slight change in the direction of relaxation, bright green produces an improvement of about 20% in attention for most subjects. This translates into a higher level of concentration; therefore, we obtained higher work and educational activity simply by placing a coloured panel nearby, a fact useful for students’ productivity and efficiency when studying from home. With respect to the use of headphones, we observe a slight change in the level of concentration.

zamfira@unitbv.ro
Investigation of Music and Colours Influences 915

Table 3 presents the measurements of the same person in all three situations while reading in the room: listening to music on speakers, listening to music on headphones, and listening to music while exposed to the external factor “bright colour”. We mention that there were pauses between the experiments, during which the subjects were exposed to natural light. We also kept the same list of songs, played in the same order. It can be seen that the brainwave frequency is changed by the use of headphones, and likewise by exposure to the bright green colour.

Table 3. Example of frequency ranges variation to one listener in all three situations
Person VII | Person VII with headphones | Person VII (headphones + colour)
Yellow = Low Alpha | Orange = Theta | Yellow = Low Alpha
Red = Delta | Red = Delta | Orange = Theta
Orange = Theta | Orange = Theta | Orange = Theta
Orange = Theta | Orange = Theta | Yellow = Low Alpha
Yellow = Low Alpha | Bright purple = High Gamma | Orange = Theta
Red = Delta | Light Blue = Low Beta | Orange = Theta
Orange = Theta | Green = High Alpha | Yellow = Low Alpha
Light Blue = Low Beta | Green = High Alpha | Orange = Theta
Yellow = Low Alpha | Green = High Alpha | Light Blue = Low Beta
Bright purple = High Gamma | Orange = Theta | Orange = Theta

The experiment was done in the University’s lab, and although the lab has acoustic isolation, it is not free of noise. Thus, before repeating the experiment with headphones and light, we were interested in the listeners’ reaction to noise.
Table 4 shows the associations between a certain noise and the type of frequency and associated colour. We briefly explain the difference between pink noise and white noise, both mentioned in Table 4. Pink noise (with a 1/f spectrum) is considered to be the most relaxing and occurs widely in nature (e.g. the sound of the sea). White noise, on the other hand, consists of frequencies distributed uniformly across the spectrum and is used to increase sensitivity to regular surrounding sounds or to mask background noise.

Table 4. The association between the noise and the type of frequency
Noise | Type of frequency and associated colour
1/f noise | Green = High Alpha
1/f noise + green light | Bright purple = High Gamma
White noise | Yellow = Low Alpha
White noise + green light | Yellow = Low Alpha
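To make the distinction concrete, the two noise types can be synthesised numerically. The NumPy sketch below is our own illustration (not the signals actually played in the lab): it shapes a white spectrum by 1/sqrt(f), so that the resulting power falls off as 1/f, which is the defining property of pink noise.

import numpy as np

def white_noise(n_samples, seed=0):
    """White noise: uniform power across all frequencies."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(n_samples)

def pink_noise(n_samples, seed=0):
    """Approximate 1/f (pink) noise by shaping a white spectrum."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                      # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)               # amplitude ~ 1/sqrt(f), power ~ 1/f
    signal = np.fft.irfft(spectrum, n_samples)
    return signal / np.max(np.abs(signal))   # normalise the amplitude

white = white_noise(44100)   # one second of noise at a 44.1 kHz sampling rate
pink = pink_noise(44100)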
Laboratory Virtual Instrument Engineering Workbench (LabVIEW) is a system-design platform and development environment developed by National Instruments that integrates the creation of user interfaces (Panel) into the development cycle. LabVIEW programs/subroutines are termed virtual instruments (VIs). Each VI has three components: a block diagram (Diagram), a front panel (Panel), and a connector panel. Moreover, LabVIEW was developed not only to bring together data acquisition, analysis and logical operations, but also to make clear how the gathered data are being modified, and it is offered for a variety of operating systems. Because of its concurrent dataflow language, multiple tasks can easily be programmed in parallel, which is of great benefit for test sequencing and data recording. We used the application to gather data from different sensors (followed by statistical calculations) and to customize the analysis of the recorded signals.
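LabVIEW expresses this acquisition/recording parallelism graphically on the block diagram. As a rough textual analogue only (our own Python sketch, not the VI used in this work), one thread can acquire simulated readings while a second thread records and summarises them:

import queue
import random
import threading
import time

readings = queue.Queue()

def acquire(n_samples):
    """Producer: stand-in for a headset driver pushing attention values."""
    for _ in range(n_samples):
        readings.put(random.randint(0, 100))   # simulated eSense attention value
        time.sleep(0.01)
    readings.put(None)                          # sentinel: acquisition finished

def record():
    """Consumer: log every value and keep a running mean."""
    total = count = 0
    while (value := readings.get()) is not None:
        total += value
        count += 1
    print(f"Recorded {count} samples, mean attention {total / count:.1f}")

producer = threading.Thread(target=acquire, args=(100,))
consumer = threading.Thread(target=record)
producer.start(); consumer.start()
producer.join(); consumer.join()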
Therefore, in the presented work, based on the literature references (see footnotes 5 and 6), we developed a dedicated LabVIEW interface able to analyse the BrainWave signals of the MindWave Mobile device for selected people exposed to different conditions of music immersion: selected types of musical compositions, different types of noise and special active music-therapy sounds (composed music).
The LabVIEW application Panel and Diagram can be seen in Fig. 3. It presents instantaneous statistics of the BrainWave signals (on the BrainWave monitor) and final statistics, after a selected listening time (on BrainWave statistics), for the current selection of signals (music, noise, etc.).

Fig. 3. “Panel” and “Diagram” of LabVIEW application

5 MindWave LabVIEW driver: http://forums.ni.com/t5/NI-Labs-Toolkits/NeuroSky-LabVIEW-Driver/ta-p/3520085.
6 EEG – Electroencephalography with LabVIEW and MindWave Mobile: http://cerescontrols.com/projects/eeg-electroencephalography-with-labview-and-mindwave-mobile.


This preliminary LabVIEW application will be the basis of a future multimedia development able to select and start the music, and to record and analyse the effects of music therapy. We also intend to add special lighting conditions (controlled from the application) to selected pieces of music, and to record and improve students’ capacity to learn at home based on e-learning technologies (courses and remote and virtual laboratories).
The main advantage offered by this LabVIEW application is that it facilitates the fast addition of new types of calculations, statistics and graphical presentations of signals and results. This flexibility is necessary when investigating a new field, such as music and colour therapy in connection with increased learning capacity. In our case we plot the mean BrainWave band values (Delta, Theta, Low Alpha, etc.; see Table 1) as percentages relative to the highest level (taken as 100%) and, at the same time, the maximum and mean values of the attention and meditation indices. All these parameters were calculated and plotted with the intention of observing and monitoring the effects of music therapy.
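The same post-processing can be written down compactly. Assuming each recorded sample is a dictionary of band powers plus the attention and meditation indices (an assumed record layout, not the format used by the actual LabVIEW VI), the following sketch reproduces the “percent of the highest band” values and the maximum/mean indices described above:

from statistics import mean

def summarise_session(samples):
    """samples: non-empty list of dicts with band powers and eSense indices."""
    bands = ["Delta", "Theta", "Low Alpha", "High Alpha",
             "Low Beta", "High Beta", "Low Gamma", "High Gamma"]
    band_means = {b: mean(s[b] for s in samples) for b in bands}
    highest = max(band_means.values())
    band_percent = {b: 100.0 * v / highest for b, v in band_means.items()}
    attention = [s["attention"] for s in samples]
    meditation = [s["meditation"] for s in samples]
    return {
        "band_percent": band_percent,                          # highest band = 100%
        "attention": {"mean": mean(attention), "max": max(attention)},
        "meditation": {"mean": mean(meditation), "max": max(meditation)},
    }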

4 Conclusion

The anticipated results describe ranges of activity for each person in comparison to one another for the same stimulus, and the response of each person to different stimuli (in our case music). The identified range of frequencies relates to particular mental states and varies from delta to high gamma.
The report of the MindWave headset indicates, through its eSense algorithms, the wearer’s mental state, together with information about the brainwave frequency bands derived from the EEG power spectrum. The influence of colour is visible; therefore, based on the MindWave device measurements and the LabVIEW analysis, we conclude that the brain signals of the users vary and could be controlled, being a function of the user’s health, taste in music and/or perception of colours, the specific stimuli that determine emotions.
The level of attention (concentration) can be increased for the benefit of intellectual activity, and we think this will be a great addition to distance learning education (eLearning programmes).
Although our research provides evidence of positive results regarding the effects of music and colours on the level of concentration and on emotions, we continue the research towards stronger findings. It will also be necessary to improve the developed LabVIEW application, namely to integrate it into a multimedia platform and to perform a deeper statistical analysis together with a larger number of recorded signals from various other types of sensors.

References
Crowley, K., Sliney, A., Pitt, I., Murphy, D.: Evaluating a brain computer interface to categorize
human emotional response. In: The 10th IEEE International Conference in Advances
Learning Technologies (2010)


Dhali, S.: A Study of Brainwave eSensing Activity, Overleaf Papers (2015)


Fannin, J.L.: Understanding your brainwaves, White paper
Graimann, B., Allison, B., Pfurtscheller, G.: Brain-computer interfaces: a gentle introduction. In:
Graimann, B., Allison, B., Pfurtscheller, G. (eds.) Brain-Computer Interfaces, pp. 1–27.
Springer, Berlin (2010)
Girase, P.D., Deshmukh, M.P.: MindWave device wheelchair control. Int. J. Sci. Res. (2016)
Gonzalez, V.M., Robles, R., Gongora, G., Medina, S.: Measuring Concentration While
Programming with Low-Cost BCI Devices: Differences Between Debugging and Creativity.
Lecture Notes in Computer Science, vol. 9183, pp. 605–615 (2015)
Gudmundsdottir, K.: Improving players control over the neurosky brain computer interface,
School of Computer Science, Reykjavik University (2011)
Khalilardali, S., Chavarriaga, R., Gheorghe, L.A., Millan, J.: Detection of anticipatory brain
potential during car driving (2012). PubMed, NCBI Resources
Miller, S.: Literature Review – Workload Measures. University of Iowa (2001)
Mostow, J., Nelson, C.K.: Toward exploiting EEG input in a reading tutor. In: Proceedings of the
15th International Conference on Artificial Intelligence in Education, AIED 2011 (2011)
Neurosky Mindwave User Guide, October 2016
Peters, C., Asteriadis, S., Rebolledo-Mendez, G. (eds.): 10th International Workshop on Image
Analysis for Multimedia Interactive Services. Institute of Electrical and Electronics
Engineering (2009)
Robbins, R., Stonehill, M.: Investigating the Neurosky MindWave EEG Headset. Published
project Report PPR726, Transport Research Laboratory (2014)
Robolledo-Mendez, G., Freitas, S.: Attention modelling using inputs from a Brain Computer
Interface and user-generated data in second life. Paper presented at the 10th International
Conference on Multimodal Interfaces (ICMI) (2008)
Salabun, W.: Processing and Spectral Analysis of the Raw EEG Signal from the MindWave.
West Pomeranian University of Technology, Szczecin (2014)
Shih, Y.N., Huang, R.H., Chiang, H.Y.: Background music: effects on attention performance.
Work 42(4), 7–8 (2012)
Vidal, J.J.: Towards direct brain–computer communication. Annu. Rev Biophys. Bioeng. 2, 157–
180 (1973)
Vidal, J.J.: Real-time detection of brain events in EEG. IEEE Proc. 65, 633–664 (1977)

Framework for the Development of a Cyber-Physical
Systems Learning Centre

Dan Centea (✉), Ishwar Singh, and Mo Elbestawi
McMaster University, Hamilton, ON, Canada
{centeadn,isingh,elbestaw}@mcmaster.ca

Abstract. The paper presents a framework for the development of a Cyber-Physical Systems Learning Centre that focuses on implementing Industry 4.0 concepts for teaching, training, and research. The Centre includes a series of specialized learning labs that allow the development of the various technical skills needed for production from the concept phase to the final product. The Learning Centre is expected to complement students’ qualifications and abilities by providing new technical skills that emphasize the inherent multidisciplinary nature of smart systems and advanced manufacturing. The Centre will also be a modern training facility for specialists from industry who are interested in the advantages of implementing Industry 4.0 concepts in their facilities.

Keywords: Industry 4.0 · Cyber-Physical Systems · Learning factory · Cyber-Physical Learning Centre

1 Introduction

The current trends in industry include the implementation of cyber-physical production systems and Industry 4.0 concepts to produce components for smart systems. The level of implementation of these concepts differs between companies. Teaching current students about these concepts and training current employees on implementing them are challenging tasks. The paper describes a framework used in the development of a Learning Centre that will teach these concepts with a strong experiential learning approach. The students and trainees are expected to use a combination of specialised labs and a learning factory that will demonstrate the implementation of cyber-physical systems, will use Industry 4.0 concepts, and will allow the development of simple smart components. The paper presents the development of this Learning Centre.
A brief description of some of the relevant research in cyber-physical systems, Industry 4.0, smart systems, and learning factories is presented below in Sect. 1. The goals of the Industry 4.0 vision and a suggested implementation approach are presented in Sect. 2. Section 3 describes the implementation of the cyber-physical concepts in the proposed Learning Centre. The learning facilities included in the Learning Centre are presented in Sect. 4. Sections 5 and 6 describe the potential benefits and the impact of the Learning Centre on undergraduate programs.



1.1 Cyber-Physical Systems


A significant number of large companies operate several businesses in several locations. This global approach to development, production and storage facilities needs access to complex networks that connect these facilities. Furthermore, the cooperation between businesses imposes the use of global networks that incorporate smart production and storage facilities in the form of Cyber-Physical Systems.
Cyber-Physical Systems (CPS) “are systems of collaborating computation entities
which are in intensive connection with the surrounding physical world and its ongoing
processes providing and using, at the same time, data-accessing and data-processing
services available on the internet” [1]. These systems are able to autonomously exchange
information, and are capable of controlling each other independently. Such systems are
also used in modern smart factories to improve production, engineering, and supply
chain management.
Thiede et al. show that, for production systems, the implementation of elements from
cyber-physical systems technology leads to cyber-physical production systems. Such a
system consists of a physical component, a virtual component, and the employee. The
connection between the physical and cyber world is accomplished by sensors and actua‐
tors that allow data acquisition from physical to cyber elements and a feedback from
cyber to physical elements [2].
CPS involves several layers of digitization. The production and assembly machinery
are expected to be equipped with many industrial sensors. Berger et al. define the cyber-
physical sensor system and characterize the specifications of the sensor system. They
present three examples of cyber-physical sensor systems and show their capabilities for
modern manufacturing [3].

1.2 Industry 4.0

Recent concepts such as the Internet of Things (IoT), cloud-based manufacturing and
smart manufacturing address a vision of digitally enabled production commonly
subsumed by the concept of Industry 4.0 [4]. In Industry 4.0, dynamic engineering
processes enable fast changes to production and respond flexibly to disruptions and
failures. In addition, Industry 4.0 is expected to address challenges such as resource and
energy efficiency.
Although many technologies included in the Industry 4.0 concept, like information
and communication technology, are available for use in industry, the employees are
generally not prepared for a successful implementation of Industry 4.0. Prinz et al.
demonstrate that learning factories can make a substantial contribution toward the
understanding of Industry 4.0 [5]. They present a variety of learning modules that enable
participants to transfer learned knowledge directly to their own workplace.
A key element of Industry 4.0 is Additive Manufacturing (AM) which “has evolved
over the past three decades and has been nothing less than extraordinary. AM has expe‐
rienced double digit growth for 18 of the past 27 years … to a market that was worth
over $4 billion in 2014. The AM market is expected to grow to more than $21 billion
by 2020” [6].


1.3 Smart Systems


The evolution of smart systems and progress in advanced manufacturing holds signifi‐
cant potential and promise for positive economic impact and long-term growth. The
heart of the advanced manufacturing paradigm shift is the digitization of manufacturing
processes. A key aspect of this digitization is the application of modern information and
communication technologies involving mobile devices, additive manufacturing, CPS,
smart sensors, the IoT platform, augmented reality and wearables, cloud computing,
location detection technologies, multilevel customer and supply chain interaction, big
data analytics and machine learning, advanced human-machine interface (HMI), authen‐
tication and fraud detection as foundations of the Industry 4.0 vision. Two other tech‐
nologies that will have a significant impact on manufacturing efficiencies are the new
generation of Automated Guided Vehicles (AGVs) and collaborative robots. An ABI
Research study predicts the collaborative robotics market will surge to $1 billion by
2020, populating industry with more than 40,000 collaborative robots [7].

1.4 Learning Factory

Wank et al. noted that the complexity and effort for developing, implementing and
managing production systems that implement new technological trends will increase
continuously. They observed that many companies in the mechanical engineering and
plant engineering field “view Industry 4.0 with caution and skepticism. Therefore, it is
crucial that the benefits of these developments are demonstrated and evaluated. This
situation causes an urgent demand for research and learning facilities to offer new work‐
shops, trainings and other events to target the specific needs and production environ‐
ments.” The authors present the project “Effiziente Fabrik 4.0”, started in 2014, that
involved 12 company partners and two research institutes. The results of the project were
the design and implementation of Industry 4.0 concepts in the process-learning factory
“CIP” at TU Darmstadt [8].
In 1994 the U.S. National Science Foundation awarded a grant to develop a “learning
factory”. This term, initially used in this grant, referred to interdisciplinary hands-
on senior engineering design projects with strong links and interactions with industry.
A partnership of Universities in the USA collaborated in the development of practice-
based curriculum able to provide an improved educational experience [9].
Fulfilling individual customer demands with affordable products requires flexible
and adaptable production processes [10]. These forms of production control and flexible
manufacturing increase the complexity of production systems. Current automation solu‐
tions cannot face these challenges. To meet these challenges and prepare future engi‐
neers for related issues, several universities have developed learning factories that deal
with Cyber-Physical Production Systems. Gräßler et al. [10] list some of the learning
factories developed in Germany and briefly describe the most advanced ones.
Industry needs graduates who have enough flexibility to become ideal employees. According to Schreiber et al., a learning factory is a learning environment that can provide this flexibility. Students can gain practical experience of industrial reality in an experimental environment, and are trained to successfully handle unanticipated complexity. They are encouraged to experiment and learn from their mistakes without penalty. This helps them to keep their curiosity and flexibility [11].
Abele et al. observed that the use of learning factories has recently increased, partic‐
ularly in Europe, and has taken many forms of facilities varying in size and sophistication
aiming to enhance the learning experience of trainees in one or more areas of knowledge
[12]. However, the term learning factory covers a variety of learning environments.
Although no learning factory resembles another nor are they used in the same way,
several of the implementations developed in the last 10 years are generally used for
research, education, and training of industrial employees.
A typical learning factory includes a series of learning instruments used for training
students and people from industry. A question that often arises is the selection of the
processes and instruments that provide the best training for various stakeholders. Plorin
et al. define a conceptual framework termed “advanced Learning Factory (aLF)”. They
identify the major modules, the module configuration, major interaction modes and
transfer mechanisms in a generic learning factory. The aLF-framework provides the
processes necessary to define and run the training instruments installed in their learning
factory according to the demands. After delivering their training several times, they have
found that hands-on interactions foster a memorable learning process [13].
Tisch et al. consider that a “learning factory must be based on a didactic-technolog‐
ical approach, which supports the development of self-organized acting”. They propose
a Learning Factory Curriculum Guide that “offers a systematic approach to design
action-oriented, competency-based Learning Factories” [14].
A learning factory does not have to be a unique lab. The learning factory developed
in Germany at TU Braunschweig [2] includes three parts: Research Lab, Experience
Lab and Education Lab. The Research Lab is mainly focused on research and industrial
projects, the Education Lab focuses on occupational training, while the Experience Lab
is utilized as a learning and training area for university students and for professional training
and education.

2 Industry 4.0 Goals and Implementation

The achievement of Industry 4.0 requires the implementation of three key features [1]:
• Development of inter-company value chains and networks through horizontal inte‐
gration
• Digital end-to-end engineering across the entire value chain of both the product and
the associated manufacturing system
• Development, implementation and vertical integration of flexible and reconfigurable
manufacturing systems
The CPS development presented in this paper includes these three key features as
fundamental elements of an Industry 4.0 implementation, as shown in the centre of
Fig. 1. This implementation is achieved through several foundation technologies
presented on the circle in Fig. 1.


Fig. 1. Industry 4.0 foundation technologies for the Learning Centre

The foundational technologies are included in educational program curricula or are covered in various workshops. The list of the technologies presented in Fig. 1 that are delivered in workshop format includes design and manufacture of Printed Circuit Boards (PCBs), electronic circuit simulations, app development, collaborative robot demonstrations, IoT model development, and 3D printing. The other foundational technologies presented in Fig. 1 require either lecturing or the use of online learning resources.

3 SEPT Implementation of Cyber-Physical Systems

The W Booth School of Engineering Practice and Technology (SEPT) is an educational unit in the Faculty of Engineering at McMaster University that delivers its programs with a strong emphasis on student-centered learning. The school includes several undergraduate programs with strong emphasis on engineering technologies and a number of specialized graduate programs with a focus on engineering practice.
The success of the undergraduate programs offered within SEPT relies on the School’s ability to be responsive to industry needs and to the changing educational landscape. As increasingly more university programs recognize the need to integrate experiential education into the curriculum, the SEPT undergraduate programs must also adapt to ensure that they are targeting niche educational markets and industry sectors. These programs have the flexibility to modify, enhance, and add new curricula to address the changing needs of industry and employers as outlined above for achieving the goals of Industry 4.0.
This paper presents the development of a Cyber-Physical Systems Learning Centre,
the first of its kind at a Canadian university. This Centre complements the students’
qualifications and abilities by providing new technical skills that emphasize the inherent
multidisciplinary nature of smart systems and advanced manufacturing through the
development of a Learning Factory for education and training.
The purpose of the SEPT Cyber-Physical Systems Learning Centre is to address three major educational components: teaching, training, and research. The teaching component focuses on undergraduate studies; the training component focuses on demonstrating the Industry 4.0 concepts to industry; the research component is addressed through graduate studies and summer research internships. These components are covered in a series of specialised labs that include:
• Learning factory - production line that implements Industry 4.0 concepts and research
facility for additive manufacturing
• Design labs
• Simulation and analysis labs
• Prototyping labs
• Specialized Cyber-Physical Systems applications labs that will include components
of smart production, smart vehicles, smart energy, smart connectivity, smart home
and city, and smart health (see Fig. 2).

Fig. 2. SEPT CPS Learning Model


The fundamental approach implemented in the learning factory is the digitization of a production line. While producing a physical object, a series of sensors will collect information from the production line modules, transmit the information to cloud-based servers using various types of communication networks, and use controllers and actuators to automatically control other modules of the production line. The systems will perform computations (data analytics) and provide information to the user for monitoring and control. This approach is shown in Fig. 2.
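One lightweight way to picture this sense-transmit-control loop is a publish/subscribe protocol such as MQTT. The sketch below is purely illustrative: the broker address, topic names and temperature threshold are invented, and the paho-mqtt client is used only as an example rather than as equipment selected for the Centre.

import json
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.local"              # hypothetical cloud/edge broker
SENSOR_TOPIC = "factory/cell1/temperature"   # invented topic layout

def publish_reading(client, value_c):
    """Sensor side: push one reading to the broker as a JSON payload."""
    payload = json.dumps({"ts": time.time(), "value_c": value_c})
    client.publish(SENSOR_TOPIC, payload, qos=1)

def on_message(client, userdata, msg):
    """Controller side: react to readings and actuate when out of range."""
    reading = json.loads(msg.payload)
    if reading["value_c"] > 80.0:            # invented threshold
        client.publish("factory/cell1/actuator", "coolant_on", qos=1)

client = mqtt.Client()        # paho-mqtt 1.x style; 2.x also requires a CallbackAPIVersion argument
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(SENSOR_TOPIC, qos=1)
client.loop_start()
publish_reading(client, 82.5)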

4 Learning Facilities

The undergraduate SEPT programs include substantial experiential learning components. The majority of technical courses have a lab component that varies between one and three hours per week. Experiments are carried out in specialized teaching laboratories. The existing facilities will be included in the proposed Centre. Grouping them together will give the facilities a defined purpose: aligning some of the experiments with modern applications related to various smart systems. The goal is to redefine, to a limited extent, an existing independent teaching and learning lab with a specialized lab that is part of a larger unit. There is, however, the need to update some of the equipment used in various labs.
The core of the CPS Learning Centre is the Learning Factory. It will be used for undergraduate courses related to manufacturing processes and for capstone design courses for various programs. The main reason for assembling the learning factory together with specialised CPS labs is to develop small smart systems in all the specialization fields of the SEPT undergraduate programs. Furthermore, the CPS Learning Centre will demonstrate both metal additive manufacturing concepts and new manufacturing, assembly and storage approaches related to Industry 4.0 to students and people from industry.
Choosing an assembly that can be manufactured and assembled in the learning
factory and whose production is developed with Industry 4.0 concepts is not an easy
task. Each university that has a learning factory has chosen an assembly that proves its implementation of the Industry 4.0 concepts. The assemblies planned to be manufactured in the early stages of the CPS Learning Centre development include a linear actuator, a component of a steering mechanism (see Fig. 3), and a solenoid valve assembly (see Fig. 4).

Fig. 3. Steering mechanism components
Fig. 4. Components of a solenoid


The learning factory is being designed and equipped to be a model of a flexible manufacturing facility. It will enable the students to design, prototype and manufacture similar products, assemble them, perform quality control tests and package these products. The Learning Factory will also be used for group projects related to the development of small AGVs, electric vehicles, micro-satellites, and any other industrial customer-designed products.
A conceptual structure of the learning factory components is given in Fig. 5. The process of making a product (see examples in the upper left of Fig. 5) starts with a concept proposed either by a group of students or by a professor, and is followed by the development of requirements and specifications. Software design tools (PLM, CAD/CAE, Simulink, etc.) are to be used for prototype and 3D model development and simulation. The management of the whole process will require the use of software components and products for Enterprise Resource Planning (ERP), Manufacturing Execution Systems (MES), Manufacturing Operations Management (MOM), and Product Lifecycle Management (PLM). All of these tools and services will be interconnected using web software tools. IoT devices, such as sensors and cameras (see the list on the upper right side of Fig. 5), will be used for monitoring various processes in the learning factory. Some of the equipment that will be needed is given on the lower side of Fig. 5. Robots will be used for assembly and for pick-and-place tasks. Microcontrollers, PCs, and/or Programmable Logic Controllers (PLCs) will be used for the control of various manufacturing cells and for communications. The triangle in the figure shows the traditional control pyramid in a factory and indicates that the processes are controlled through sensors, controllers, actuators and software components by the MES and ERP.

Fig. 5. A conceptual structure of learning factory components


A crucial feature of the learning factory will be the use of tools based on web technologies such as WebSockets and HTML5 for system integration and remote access to physical resources, design tools, data for data analytics, alerts and alarms, and so on. The IoT data and components are accessible using the internet and mobile devices.
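As a minimal sketch of such remote access (assuming the third-party Python websockets package; the port, cell name and simulated values are invented for illustration), a server can push live machine status to a browser dashboard, which would connect from the client side with new WebSocket("ws://host:8765"):

import asyncio
import json
import random

import websockets

async def stream_cell_status(websocket):
    """Push a JSON status frame once per second until the client disconnects."""
    while True:
        frame = {
            "cell": "assembly-1",                                   # invented cell name
            "spindle_temp_c": round(20 + random.random() * 5, 2),   # simulated sensor
            "parts_completed": random.randint(0, 500),              # simulated counter
        }
        await websocket.send(json.dumps(frame))
        await asyncio.sleep(1.0)

async def main():
    async with websockets.serve(stream_cell_status, "0.0.0.0", 8765):
        await asyncio.Future()   # run until the process is stopped

if __name__ == "__main__":
    asyncio.run(main())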
A physical layout of the learning factory is given in Fig. 6. The major components are: a metal 3D printer, a 3D printer for plastics, a 5-axis CNC machine, collaborative robots, AGVs, various specialised stations that include post-processing, joining, marking/labeling and assembly, and a packaging station.

Fig. 6. A physical layout of the learning factory

SEPT offers seven undergraduate programs. The CPS Learning Centre will offer
each program the possibility to develop applications related to smart systems, as follows:
• Automation: IoT, smart factory, building automation, electronics
• Automotive: vehicle to vehicle technology (V2V), vehicle to everything technology
(V2X), electric vehicles, autonomous and connected vehicles
• Biotechnology: bioinformatics for health
• Manufacturing: additive manufacturing, smart factory
• Civil & Infrastructure: smart transportation, smart buildings and structures
• Software: networking and infrastructure, Big Data, Data Analytics, security, web
programming
• Energy: alternative energy, smart grid
The Learning Centre will include the following facilities and resources: Learning Factory; Mechatronics Lab; Robotics Lab; Power Systems and Networking Lab; Automation and IoT Lab; Software Development Lab; Design Lab; Smart Vehicles Lab; and Manufacturing Lab.
The Learning Centre will be supported by a series of resources. They will provide
students with a skill set that will prepare them for future jobs. Some of these resources
are listed in Fig. 7.


Fig. 7. Resources in support of the Learning Centre

5 Benefits for Undergraduate Programs

It is expected that all SEPT undergraduate programs will include in their curriculum the
design and development of smart devices and system components that can be manu‐
factured in the Cyber-Physical Systems Learning Centre. Every year, each program will
add new curriculum elements to continue to bolster the stated educational goals.
The Centre will enrich the program by adding smart systems components to existing
laboratory experiments. There will be no increase in the number of units or teaching
hours per week for any program. However, grouping the labs into one entity will allow
a much easier integration of the facilities for the intended outcome. For instance, Auto‐
motive students currently use mechanical/automotive labs to build their automotive-
related applications. Meanwhile, most of the capstone project designs include electronic
equipment integration. Labs that are currently used only by Automotive students (with
significant mechanical emphasis) or by the Automation students (with significant auto‐
mation and electronic emphasis) will be used for both Automotive and Automation
students to build the mechanical and the electronic components of their projects.
The CPS Learning Centre will allow graduates to be more marketable and have
employable skills related to multiple engineering disciplines, and will relate directly to
today’s industry needs. The process of implementing Industry 4.0 concepts in the work‐
place is complex and unclear. The graduates will be attuned to these obstacles and
opportunities related to Industry 4.0 and will be sought after by managers in industry.


6 Impact on the Undergraduate Curriculum

The CPS Learning Centre will enrich the undergraduate program by replacing some
existing laboratory experiments with developments of smart systems. Each undergrad‐
uate program is expected to develop smart applications related to their specialization in
the learning factory (e.g. smart vehicle for the Automotive program, smart home and
smart transportation for the Civil program, smart health for the Biotechnology program,
smart manufacturing for the Manufacturing program, and so on).

7 Summary

This paper presents the framework for developing a CPS Learning Centre used to teach
undergraduate and graduate students and to train specialists from industry in Industry
4.0 concepts. It is also expected to be a hands-on training facility that will allow small,
medium and large companies to be competitive in the current market by using modern
manufacturing approaches that implement various levels of digitization and cyber-
physical systems.
The Learning Centre is expected to allow the development of applications that
include elements of smart production, smart vehicles, smart energy, smart connectivity,
smart home and city, and smart health.

References

1. Monostori, L.: Cyber-physical production systems: roots, expectations and R&D challenges.
Procedia CIRP 17, 9–13 (2014)
2. Thiede, S., Juraschek, M., Herrmann, C.: Implementing cyber-physical production systems
in learning factories. Procedia CIRP 54, 7–12 (2016)
3. Berger, C., Hees, A., Braunreuther, S., Reinhart, G.: Characterization of cyber-physical sensor
systems. Procedia CIRP 41, 638–643 (2016)
4. Erol, S., Jäger, A., Hold, P., Ott, K., Sihn, W.: Tangible industry 4.0: a scenario-based
approach to learning for the future of production. Procedia CIRP 54, 13–18 (2016)
5. Prinz, C., Morlock, F., Freith, S., Kreggenfeld, N., Kreimeier, D., Kuhlenkötter, B.: Learning
factory modules for smart factories in industrie 4.0. Procedia CIRP 54, 113–118 (2016)
6. Thompson, M.K., Moroni, G., Vaneker, T., Fadel, G., Campbell, R.I., Gibson, I., Bernard,
A., Schulz, J., Graf, P., Ahuja, B., Martina, F.: Design for additive manufacturing: trends,
opportunities, considerations, and constraints. CIRP Ann. Manufact. Technol. 65, 737–760
(2016)
7. Collaborative Robotics: State of the Market/State of the Art, ABIresearch (2015). https://
www.abiresearch.com/market-research/product/1022012-collaborative-robotics-state-of-
the-market/ABIresearch. Accessed 11 Oct 2016
8. Wank, A., Adolph, S., Anokhin, O., Arndt, A., Anderl, R., Metternich, J.: Using a learning
factory approach to transfer industrie 4.0 approaches to small- and medium-sized enterprises.
Procedia CIRP 54, 89–94 (2016)

zamfira@unitbv.ro
930 D. Centea et al.

9. Hadlock, H., Wells, S., Hall, J., Clifford, L., Winowich, N., Burns, J.: From practice to
entrepreneurship: rethinking the learning factory approach. In: Proceedings of the 2008 IAJC-
IJME International Conference, Paper 081, ENT P 401 (2008)
10. Gräßler, I., Pöhler, A., Pottebaum, J.: Creation of a learning factory for cyber physical
production systems. Procedia CIRP 54, 107–112 (2016)
11. Schreiber, S., Funke, L., Trachta, K.: BERTHA - a flexible learning factory for manual
assembly. Procedia CIRP 54, 119–123 (2016)
12. Abele, E., Metternich, J., Tisch, M., Chryssolouris, G., Sihn, W., ElMaraghy, H., Hummel,
V., Ranz, F.: Learning Factories for research, education, and training. Procedia CIRP 32, 1–
6 (2015)
13. Plorin, D., Jentsch, D., Hopf, H., Mueller, E.: Advanced Learning Factory (aLF) – method,
implementation and evaluation. Procedia CIRP 32, 13–18 (2015)
14. Tisch, M., Hertle, C., Cachay, J., Abele, E., Metternich, J., Tenberg, R.: A systematic approach
on developing action-oriented, competency-based learning factories. Procedia CIRP 7, 580–
585 (2013)

Applications and Experiences

The Use of eLearning in Medical Education
and Healthcare Practice – A Review Study

Blanka Klimova (✉)
University of Hradec Kralove, Rokitanskeho 62, Hradec Kralove, Czech Republic
blanka.klimova@uhk.cz

Abstract. Nowadays, information and communication technologies (ICT) influence all spheres of human life, including learning. Thanks to ICT, traditional learning approaches such as teacher-centred learning, mass instruction, one pace for all, using only textbooks and learning in classrooms, have radically changed. Students’ learning started to be supported electronically in the form of eLearning. The aim of this article is to explore the use of eLearning in medical education and healthcare practice and discuss its advantages and disadvantages in order to help deliver better care for patients and populations. The methods include a literature search of available sources describing this issue in the acknowledged databases Web of Science, Scopus, ScienceDirect, and MEDLINE, and a comparison and evaluation of the findings of the selected studies on the research topic. The findings of this review study indicate that eLearning is an important tool for medical education and healthcare practice in terms of the dissemination of knowledge, understanding particular health issues, continuous education, and training of busy healthcare professionals.

Keywords: eLearning · Medical education · Healthcare · Review

1 Introduction

At present, information and communication technologies (ICT) are commonly used in all different human activities. They have influenced all spheres of human life, including learning. With the help of ICT, traditional learning approaches such as teacher-centred learning, mass instruction, one pace for all, using only textbooks and learning only in classrooms, have radically changed. Students’ learning began to be supported electronically in the form of eLearning. The learning approaches have become learner centred, flexible in the sense of being accessible from anywhere and at any time, collaborative, and interactive [1].
Currently, eLearning is defined as the use of new multimedia technologies and the Internet to improve the quality of learning by facilitating access to resources and services as well as remote exchanges and collaboration [2]. However, it is now predominantly used in so-called blended learning – a combination of traditional, face-to-face teaching and eLearning. This has become a well-established methodology in recent years [3–5], since it provides greater accommodation for learners and teachers of diverse backgrounds and tries to meet their immediate needs. In addition, it can be
accessed at any place (provided that learners have access to the Internet), at any time, and at one’s own pace. It also seems more cost-effective and supports distance education [6]. No wonder that it has found its place in medical education and healthcare practice.
There are other eLearning modalities which are also used in medical education and healthcare [7]: asynchronous/synchronous audio or video; chat/video conferences; computer-aided learning; computer-based testing; educational online games; electronic problem-based e-learning; electronic portfolios; online collaboration; online discussion forums; repositories and hypertext; virtual laboratories; and virtual learning environments. L’Engle, Raney and D’Adamo [8] suggest that eLearning may improve health service delivery and the reach of health promotion activities all over the world, but especially in developing countries. In fact, the main reasons for using eLearning in healthcare are to train a geographically dispersed workforce, lower costs and achieve higher learning retention. Moreover, 95% of the respondents said that they exploited custom-designed online modules and 80% of the respondents reported that they used eLearning courses as part of a blended learning program [9].
The aim of this study is to examine the use of eLearning in medical education and
healthcare practice and discuss its advantages and disadvantages to help deliver better
care for patients and populations.

2 Methods

The methodology of this study is based on the study by Moher, Liberati, Tetzlaff and Altman [10]. The main method was a systematic review whose goal was to identify research studies on the basis of the key words in four databases: Web of Science, ScienceDirect, Scopus, and MEDLINE. The review was performed for the period from 2013 to October 2016 for the following key words: elearning AND medical education AND healthcare practice. Most of the studies were found in ScienceDirect – 338 studies. In the Web of Science only 3 studies were detected, in Scopus 6 studies, and in MEDLINE 5 studies were identified. Thus, altogether 352 publications were detected in the databases. The titles of all studies were then checked, and duplicates removed, in order to discover whether they focused on the research topic; 94 studies remained for further analysis. After that, the author checked the content of the abstracts to determine whether the study examined the research topic. 31 studies/articles were selected for full-text analysis; the findings of 17 research studies were then used in the manuscript, for the comparison of findings in the Discussion as well as in the Introduction, and only 14 studies could then be used for the detailed analysis of the research topic.
The study was included if it matched the corresponding period, i.e., from 2013 up
to October 2016; if it included medical students, doctors, nurses, or other healthcare
practitioners; if it focused on the use of eLearning; and if the study was written in
English. The selection period starts with the year of 2013 since several reviews (e.g.,
[11, 12]) were published on this topic before this period.


3 Results

This review study altogether comprises 14 original research studies. Twelve were descriptive studies, one was a prospective educational study and only one was a randomized trial. Table 1 below describes the studies identified on the basis of the review, together with their main findings and limitations. The table comprises seven studies examining the use of eLearning in medical education, six studies exploring its use in healthcare practice and one study investigating the cost effectiveness of eLearning. The studies are arranged in alphabetical order of their first author.

Table 1. Overview of the selected original research studies on the use of eLearning in medical education and healthcare practice

Study | Findings | Limitations
Blake and Gartshore [13], descriptive study | Healthcare professionals, healthcare educators and pre-registered healthcare students had a positive attitude towards online learning tools, which appeared to be engaging and improved their knowledge of important public issues in the workplace | A relatively small sample of the overall population to whom the online learning was offered; geographically limited study
Chong et al. [14], descriptive study | Organizational support seems to be important to promote accessibility of information and communications technology facilities for Malaysian nurses to motivate their involvement in e-learning | A geographically limited study
Corner et al. [15], prospective study | eLearning appears to be an effective mode of delivering education in a large geographical area on the consistent and reliable use of the Chelsea Critical Care Physical Assessment (CPAx) functional assessment tool. In addition, eLearning modules may have utility as evaluation tools | A few methodological limitations
de Lazzari et al. [16], descriptive study | The implementation of a software simulator into the e-learning environment means opportunities for accelerated learning; lower costs in comparison with in vivo experiments; no medical-related accidents; and increased attention | More in-depth evaluation from the participants; insufficient participants’ confidence with the simulator
Elmore [17], descriptive study | eLearning seems to be an effective tool with a minimal investment | A geographically limited study; a small number of participants
Lahti et al. [18] | eLearning helps to transfer adopted knowledge successfully into practice | A small sample of respondents; a short time span
Munch-Harrach et al. [19], descriptive study | Eight podcasts were produced on the eLearning platform at little expense. They also contributed to the external presentation of the faculty | A lack of in-depth evaluation
Murphy et al. [20], descriptive study | The eLearning platform helped healthcare professionals to improve the provision of nutrition and lifestyle advice for cancer survivors | More in-depth evaluation from the participants; maintenance of the sustainability of the current eLearning platform
Polly et al. [21], descriptive study | An eLearning intervention using virtual laboratories has a big potential for the improvement of students’ diagnostic skills; it is close to reality; and there is a high level of interactivity and feedback | A lack of objective assessment of diagnostic skills
Reid et al. [22], descriptive study | Medical students may face three key obstacles when using eLearning: injustice, passivity and feeling lost at sea | A geographically limited study; only a qualitative survey
Rider et al. [23], descriptive study | eLearning as a tool in online trainings that model key prevention strategies may play an important role in translating policy into improved outcomes | A lack of in-depth evaluation
Sissine et al. [24], descriptive study | When using a blended eLearning approach, there were significant cost savings (67%) in comparison with a traditional didactic method | Different infrastructure between countries; insufficient access to the Internet in developing countries; cost inputs may vary in a different setting
Thorne et al. [25], descriptive study | The eALS (Advanced Life Support) course is equivalent to traditional, face-to-face learning. Moreover, it increases candidate autonomy, is cost effective, decreases the instructors’ burden and standardizes the course material | A geographically limited study
van de Steeg et al. [26], randomized trial | An eLearning course on delirium aimed at nursing staff had a positive influence on improved delirium care provided by nurses; it also decreased the number of older patients diagnosed with delirium and broadened nurses’ knowledge of delirium | A potential delay in the intervention effect

Source: authors’ own processing


4 Discussion

The findings in Table 1 show that eLearning significantly contributes to the faster
dissemination of knowledge into practice [13–16, 18, 21, 23, 26]. Furthermore, it
enhances inter-professional care, [13, 15, 19, 23, 26] interactivity and autonomy of
students’ learning, [13, 14, 21, 25] cost-effectiveness, [16, 17, 25] and may also serve
as a good evaluation tool [15]. However, as the study by Reid et al. [22] states, students
may face three key obstacles when using eLearning. These include injustice (a sense of
resentment: the idea that they were somehow being ‘done out of’ the education that
they deserved), passivity (students may experience a lack of control – that they were
‘passive recipients’ of eLearning material), and lost at sea feeling (unfamiliarity with
the eLearning approach).
Nevertheless, the majority of the findings from Table 1 indicate that eLearning is a significant tool both for medical education, for which it has been in use since the 1990s, and for healthcare practice. George et al. [27] claim that eLearning can be equivalent, and possibly superior, to traditional learning for healthcare professionals’ education. eLearning courses might also improve healthcare practice in terms of better communication between healthcare team members, enhanced quality of care, and better outcomes for patients [23].
Although there is an increasing trend to use eLearning in different branches of medical education [23, 28], the findings show the prevalence of the use of eLearning particularly in studies on nursing education and the nursing profession [14, 20, 26]. Furthermore, the research studies emphasize the importance of eLearning for the dissemination of knowledge [14, 15, 18, 23], understanding particular health issues [13, 29], continuous education [14], and training of busy healthcare professionals who wish to access educational programs to maintain or extend their knowledge in response to service needs [23, 30].
The most critical drawback of the reviewed studies is that only one study was a randomized controlled trial; the other studies were descriptive, which might result in overestimated effects of eLearning and biases in these publications and have a negative impact on the validity of these research studies. Therefore, future research should concentrate on randomized controlled studies with a longer time span to verify the efficacy of eLearning and compare it against standard teaching.

5 Conclusion

Overall, the findings of this review study indicate that eLearning seems to be an
effective mode for medical education and healthcare practice. However, as Walsh [31]
suggests, eLearning technologies should be used purposefully and wisely in order to
help deliver better care for patients and populations.

Acknowledgments. This review study is supported by SPEV project 2017, Faculty of Infor-
matics and Management, University of Hradec Kralove, Czech Republic. The author thanks the
SPEV students, especially Josef Toman for his help with the collection of the data.


References
1. Klimova, B., Poulova, P.: Learning technologies and their impact on an educational process
in the Czech Republic. In: Proceedings of the International Conference on Computer Science
and Information Engineering (CSIE 2015), pp. 429–434. Destech Publications, Lancaster
(2015)
2. eLearning Action Plan (2001). http://www.aic.lv/bolona/Bologna/contrib/EU/e-learn_
ACPL.pdf
3. Klimova, B.: Developing ESP study materials for engineering students. In: Proceedings of
2015 IEEE Global Engineering Education Conference (EDUCON 2015), pp. 52–57. Tallinn
University of Technology, Tallinn (2015)
4. Klimova, B., Poulova, P.: Forms of instructions and students’ preferences – a comparative
study. In: Proceedings of the 7th International Conference (ICHL 2014), pp. 220–231.
Springer, Heidelberg (2014)
5. Frydrychova Klimova, B.: Blended learning. In: Mendez Vilas, A., et al. (eds.) Research,
Reflections and Innovations in Integrating ICT in Education, pp. 705–708. FORMATEX,
Spain (2009)
6. Graham, C.R., Allen, S., Ure, D.: Benefits and challenges of blended learning environments.
In: Khosrow-Pour, M. (ed.) Encyclopedia of Information Science and Technology I-V. Idea
Group Inc., Hershey (2003)
7. Jawaid, M., Aly, S.M.: E-learning modalities in the current era of medical education in
Pakistan. Pak. J. Med. Sci. 30(5), 1156–1158 (2014)
8. L’Engle, K., Raney, L., D’Adamo, M.: mHealth resources to strengthen health programs.
Glob. Health Sci. Pract. 2(1), 130–131 (2014)
9. Nine Lanterns. http://elearninginfographics.com/elearning-in-healthcare-infographic-2/
10. Moher, D., Liberati, A., Tetzlaff, J., Altman, D.G.: The PRISMA Group. Preferred reporting
items for systematic review and meta-analysis: the PRISMA statement. PLoS Med. 6(6),
e1000097 (2009)
11. Fahlman, D.: Educational leadership for e-learning in the healthcare workplace. IRRODL 13
(2), 236–246 (2012)
12. Frehywot, S., Vovides, Y., Talib, Z., Mikhail, N., Ross, H., Wohltjen, H., et al.: E-learning
in medical education in resource constrained low- and middle-income countries. Hum. Res.
Health 11, 4 (2013)
13. Blake, H., Gartshore, E.: Workplace wellness using online learning tools in a healthcare
setting. Nurse Educ. Pract. 20, 70–75 (2016)
14. Chong, M.C., Francis, K., Cooper, S., Abdillah, K.S., Hmwe, T., Sohod, S.: Access to,
interest in and attitude toward e-learning for continuous education among Malaysian nurses.
Nurse Educ. Today 36, 370–374 (2016)
15. Corner, E.J., Handy, J.M., Brett, S.J.: eLearning to facilitate the education and implemen-
tation of the Chelsea critical care physical assessment: a novel measure of function in critical
illness. BMJ Open 6(4), e010614 (2016)
16. de Lazzari, C., Genuini, I., Pisanelli, D.M., D’Ambrosi, A., Fedele, F.: Interactive simulator
for e_learning environments: a teaching software for healthcare professionals. BioMed. Eng.
OnLine 13, 172 (2014)
17. Elmore, J.M.: CEVL e-learning teaches GUMS method to score hypospadias preoperatively
and predict postoperative outcomes. J. Pediatr. Urol. 11(5), 234–238 (2015)
18. Lahti, M., Kontio, R., Pitkanen, A., Valimaki, M.: Knowledge transfer from an e-learning
course to clinical practice. Nurse Educ. Today 34(5), 842–847 (2014)


19. Munch-Harrach, D., Kothe, C., Hampe, W.: Audio podcasts in practical courses in
biochemistry-cost-efficient e-learning in a well-proven format from radio broadcasting.
GMS Z Med. Ausbild. 30(4), doc 44 (2013)
20. Murphy, J., Worswick, L., Pulman, A., Ford, G., Jeffery, J.: Translating research into
practice: evaluation of an e-learning resource for health care professionals to provide
nutrition advice and support for cancer survivors. Nurse Educ. Today 35(1), 271–276 (2015)
21. Polly, P., Marcus, N., Maguire, D., Belinson, Z., Velan, G.M.: Evaluation of an adaptive
virtual laboratory environment using Western Blotting for diagnosis of disease. BMC
Med. Educ. 14, 222 (2014)
22. Reid, H.J., Thomson, C., McGlade, K.J.: Content and discontent: a qualitative exploration of
obstacles to eLearning engagement in medical students. BMC Med. Educ. 16, 188 (2016)
23. Rider, B.B., Lier, S.C., Johnson, T.K., Hu, D.J.: Interactive web-based learning: translating
health policy into improved diabetes care. Am. J. Prev. Med. 50(1), 122–128 (2016)
24. Sissine, M., Segan, R., Taylor, M., Jefferson, B., Borrelli, A., Koehler, M., et al.: Cost
comparison model: blended eLearning versus traditional training of community health
worker. Online J. Public Health Inform. 6(3), e196 (2014)
25. Thorne, C.J., Lockey, A.S., Bullock, I., Hampshire, S., Bugum-Ali, S., Perkins, G.D.:
E-learning in advanced life support – an evaluation by the Resuscitation Council (UK).
Resuscitation 90, 79–84 (2015)
26. van de Steeg, L., Ijkema, R., Langelaan, M., Wagner, C.: Can an e-learning course improve
nursing care for older people at risk of delirium: a stepped wedge cluster randomized trial.
BMC Geriatr. 14, 69 (2014)
27. George, P.P., Papachristou, N., Belisarion, J.M., Wang, W., Wark, P.A., Cotic, Z., et al.:
Online eLearning for undergraduates in health professions: a systematic review of the impact
on knowledge, skills, attitudes and satisfaction. J. Glob. Health 4(1), 010406 (2014)
28. Jayakumur, N., Brunckhorst, O., Dasgupta, P., Khan, M.S., Ahmed, K.: eLearning in
surgical education: a systematic review. J. Surg. Educ. 72(6), 1145–1157 (2015)
29. Kelly, C., Reid, E., Lohan, M., Alderdice, F., Spence, D.: Creating an eLearning resource to
improve knowledge and understanding of pregnancy in the context of HIV infection. Int.
J. Environ. Res. Public Health 11(10), 10504–10517 (2014)
30. Delf, P.: Designing effective eLearning for healthcare professionals. Radiography 19(4),
315–320 (2013)
31. Walsh, K.: The future of e-learning in healthcare professional education: some possible
directions. Ann. Ist. Super. Sanita. 50(4), 309–310 (2014)

Efficiency and Prospects of Webinars
as a Method of Interactive Communication
in the Humanities

Natalya Nikolaevna Petrova1, Lyudmila Pavlovna Sidorenko2,
Svetlana Germanovna Absalyamova3,
and Rustem Lukmanovich Sakhapov4(&)

1 Kazan National Research Technical University, Kazan, Russia
natnic18@yandex.ru
2 Plekhanov Russian University of Economics, Moscow, Russia
milabelokon@yandex.ru
3 Kazan Federal University, Kazan, Russia
s.absalyamova@yandex.ru
4 Kazan State University of Architecture and Engineering, Kazan, Russia
rustem@sakhapov.ru

Abstract. Improving the effectiveness of the communicative interaction
between the participants in the learning process is an important and necessary
task of our time. The article classifies methods that make it possible to optimize
the forms of active interactive learning and offers models of technologies aimed
at creating a so-called positive learning process, from the teacher’s decision
making to the implementation of the process.
The learning process inevitably involves a certain gap between real
professional activity and academic activity. The article analyzes the possibilities
of the webinar as a method that allows this opposition to be removed to some
extent. The authors develop principles for saturating the content of teaching with
the professional skills necessary for future specialists.
The synthetic nature of the psycho-pedagogical methods and tools of
webinars calls for an integrated methodology that includes elements of pedagogy,
philosophy, psychology, ethics, physiology, applied research, etc. The use of this
methodology creates the need for a multidimensional measurement of the
effectiveness of the educational process. The novelty of the method lies in the
fact that geometric connotation modeling of the problem is used on the basis of
philosophical, ethical and psychological tools.
For the first time, the article presents a chain that reveals the process of
erroneous actions resulting from a lack of competence, an inability to present the
material, and the influence of the environment.
The authors reveal the mechanisms of possible erroneous actions which can
lead to negative moments in learning, analyse their causes and make
recommendations for their elimination.
The practice of holding webinars has identified a number of specific
difficulties of distance learning in on-line mode. The article analyzes specific
problems arising in the course of webinars and gives practical recommendations
for overcoming them.


The article also develops guidance on how to apply this form of learning in the
educational environment of the university and in virtual mobility, and how to improve
the professional level of teachers and of workers in other sectors at the regional level.

Keywords: Online learning · Webinar · Communicative interaction · Features of
training in the form of a webinar · Learning efficiency

1 Introduction

A requirement of our time is to increase the efficiency of the communicative interaction
between the teacher and the student. The successful acquisition of knowledge and the
development of sustainable skills in using it increase greatly if information and
computer technologies are used in addition to traditional teaching methods. Online
training greatly expands the functions of traditional classes and creates new perspectives
in the methodology of the educational process. The webinar has become an especially
popular online training method. It is an important tool for the internationalization of
education through the development of students’ virtual mobility.

2 Materials and Methods

In 2012, Moscow Economics and Statistics University (MESI – MGUESI) conducted
numerous webinars in the Humanities, mainly lectures and seminars; for example,
webinars on rhetoric were delivered. KNRTU – KAI (Kazan) hosts webinars on
philosophy and other social sciences. Nowadays webinars are actively practiced at
Kazan Federal University, for both humanitarian and technical disciplines. Extensive
experience in conducting webinars has been accumulated at Kazan State University of
Architecture and Engineering. Studies on webinars and their use in improving access to
education were also examined [1, 6]. Thus, the experience of webinars in several major
Russian universities has made it possible to reveal the problems of their realization and
to identify ways of improving the methods of their organization. During the research,
great attention was focused on virtual mobility as a form of academic mobility.

3 Webinar as a Method of On-line Learning

The term webinar comes from the English words web and seminar. Despite the name,
the software designed for organizing webinars can be used for other forms of the
educational process.
Aristotle pointed out that the impact of the speaker on the listener has a specific
purpose: the formation of a certain ideal aimed at the prosperity of the state and the
development of personality. The learning process ultimately has the same goal.
Every teacher has a certain social mandate from society: to educate students in
accordance with the highest standards of education and to give them as much knowledge
and as many skills as possible so that they can benefit other people and themselves.


Realization of this goal involves a certain pattern of technology adoption decisions by
the teacher (Fig. 1):

Fig. 1. A certain pattern of technology adoption decisions by teacher

Since a teacher can be viewed in two ways, as a wise mentor and as a person with a
wealth of experience and knowledge who transmits information “by inheritance” to
his disciples, this scheme is divided into two components.
The first component of the scheme (Fig. 2):

Fig. 2. Positive process of education

This phenomenon can be called a positive process of education.


The second component of the scheme (Fig. 3):

Fig. 3. Positive learning process

This phenomenon can be called a positive learning process.


The technology of decision-making may vary depending on the teacher and his
specific properties: knowledge of the subject taught, the ability to present the material,
his moral qualities, and exposure to third parties, e.g. parents, students or employees of
the institution. In such cases the process of pedagogical influence on the pupil or
student is impaired: the cycle of decision-making technology is consciously or
subconsciously transformed, and the actions become inadequate for achieving the goals.


The first component of the scheme then takes the following form (Fig. 4):

Fig. 4. Negative process of education

This phenomenon can be called a negative process of education.


The second component of the scheme (Fig. 5):

Fig. 5. Negative learning process

This phenomenon can be called a negative learning process.


When the real goals are replaced by others that correspond neither to state interests
nor to the interests of the students themselves, the opposite effect appears: students do
not acquire the necessary knowledge, a proper education or, ultimately, full inclusion
in society.
Unlike traditional forms of training, webinars help to avoid possible mistakes caused
by wrong technology decisions.
A webinar can be a lecture, when the teacher opens a topic, the students watch the
video lecture and then make their comments. Another option is the webinar as an
interactive training session: students can make their reports, ask each other and the
teacher questions, and debate. In that case the webinar represents a workshop held over
a computer network.
Online resources are a form of remote interactive training, and computer-based
training is firmly established in higher education. Today’s students value their time and
have far more possibilities to find the right material for a discipline on their own than
students of previous years had. Webinars can take the form of lecture courses, and in
this case the teacher carries the main burden. The teacher has the opportunity to
demonstrate materials (slides, websites, text documents, the computer desktop) and to
draw and write formulas on a virtual “blackboard”. Webinars are a form of training that
makes it possible to increase student activity significantly more than a traditional
lecture does. Webinar participants can watch, listen and ask questions in a different
form. Students are one-on-one with the teacher but at the same time away from him,
which makes communication more relaxed.


The creative potential of this kind of communication, especially for students, is
obvious. Webinars are almost always recorded, and thus it is possible to view the
material at any time and even multiple times. It is therefore a most effective tool for
helping students to perceive, reproduce and comprehend the right material. The
webinar also offers great prospects for state policy on the education of persons with
disabilities.
There are different variants of the communication process between the lecturer and
remote students: in the first, the lecturer has both a live audience and a virtual audience;
in the second, the lecturer has only a virtual audience and is able to see it; in the third,
the lecturer has a virtual audience that is not visible [9].
Organization of a webinar can face several difficulties:
1. Compared with a traditional lesson, the teacher finds it harder to keep all students in
sight, to notice their reactions and to answer questions in time.
2. The success of the webinar largely depends on how well all the participants are
technically equipped and how well they command information and computer
technologies.
The accessibility of webinars as an online method of communication is not in doubt.
The success of webinars largely depends on how well the teacher is able to interest the
students; therefore, online learning increases the demands on the teacher. The
effectiveness of a webinar is determined by the following most important qualities of
the instructor conducting it:
First, the teacher must possess a high theoretical level of training and deep
knowledge of the subject.
Second, professional experience in delivering lectures is very important. The
effectiveness of such training depends not only on the methodological and scientific
erudition of the teacher; no less important is the ability to arouse strong interest and
activity among the students during the discussion of problems. Therefore, the teacher
should be quick-witted and able to change webinar tactics rapidly. It is important not
only to articulate a law or clarify a term, but also to notice the students’ reactions,
answer questions, etc.
Thirdly, for the success of the webinar, the teacher’s good mood and sense of
confidence are very important. The teacher’s speech must be well delivered, because a
good speaker always enjoys great success with the audience. After the lesson, the
lecturer may watch the recording of his performance, see himself and analyze his every
word and gesture.
Because of its unique features, the Internet creates a comfortable environment that
complements the inner and outer world of the individual. It can act as a personal space
for experimentation. Participation in a webinar opens up new opportunities for students
but, at the same time, imposes new requirements and raises the level of responsibility.
Many students express a desire to participate in scientific work using various forms of
the Internet environment, but their opportunities differ from one student to another.
They are mainly used to memorizing or doing work by analogy, and thinking for
themselves is difficult. Therefore, they can be offered temporary consultations as a
preparatory stage for participating in webinars. As a rule, students are eager for such
advice. Students like to create something; they are committed to creativity. It is very
important that the teacher develop in the students the ability to think independently. It

will be a good platform for their future scientific work. In recent years, webinars have
become more common in creating virtual scientific laboratories and inter-university
creative groups, and in conducting on-line conferences, student research forums and
other forms of inter-university scientific cooperation among students [5].
Both the preparation for a webinar and the follow-up work require serious effort
from teachers and students, but this will certainly give positive results. In the light of
the ideas of humanistic psychology and pedagogy, webinars open new possibilities of
interaction between teacher and students. They help to build good relations of
cooperation, in which an open and friendly attitude to each other encourages openness,
a desire to communicate in the process of learning, independent thinking and reflection.
Direct “live” communication of the lecturer with the audience is hardly likely to
disappear. But webinars, in our opinion, will occupy an important place among
promising educational technologies because of their simplicity, the time they save and
the great opportunities they offer for developing students’ creative abilities [2, 3]. In the
future, the diverse forms of organizing webinars will be further extended and improved.
Webinars are developing strongly in network education as a form of functioning of
network education platforms [4, 10]. They improve access to educational services and
promote the development of virtual mobility and flexible forms of employment [7].
We also conducted a survey in Kazan higher education institutions: KNRTU-KAI, KFU
and KSUAE. The results are shown in Fig. 6.

Fig. 6. Webinar survey

As can be seen from the presented data, the share of respondents who always attend
webinars is 7.8%. More than a third of the students (36.5%) try not to miss the webinars
but cannot always attend at the scheduled time, and a quarter (24.2%) prefer to work
with recordings of webinars, which may be a consequence of inconvenient class
schedules. One in six respondents is not satisfied with the quality or content of the
webinars (15.6% of the total sample).


4 Webinars as a Form of Virtual Mobility

Modern information and communication technologies, and the fact that most university
teachers and students are provided with means of communication, create the conditions
for organizing virtual mobility. The Bologna documents state that “virtual mobility is
not a substitute for physical mobility” [1]. However, studies have shown that the greatest
effect of participation in academic mobility is achieved by combining physical mobility
with elements of virtual learning.
Organization of Virtual Mobility includes the following tasks:
1. Formation of electronic educational resources in accordance with agreed standards.
2. Entry into the world market of educational services and the international
telecommunication space.
3. Partnership in research and the implementation of joint projects.
4. Further training of teachers.
5. Improvement of the language training of both students and teachers.
The main advantages of virtual education are:
1. The individualization of education process.
2. Mass participation: there is no limit on the number of participants in training.
3. No need for physical travel or for being bound to a place and time.
4. A high degree of flexibility in students’ training: the opportunity to build individual
educational trajectories and to accumulate credits by choosing courses from different
universities.
5. The ability to pre-test selected university curricula.
6. The ability to continue one’s education after completing the academic mobility
program.
The main disadvantage of virtual mobility is the impossibility of full immersion in the
educational process of another university, because the primary place of study remains
the student’s own university. Difficulties also arise when working on the different
information platforms that implement distance learning. In addition, there are
inconsistencies between the models used to organize the educational process in the
participating universities and differences in the assessment of the role of the teacher,
tutor or curator of the educational process, the necessary degree of interactivity, etc.
Today we can speak of significant empirical experience in the use of remote
technologies in academic mobility [9].
The active use of these technologies is driven by the following factors:
• Information is available through a standard web browser;
• All the tools and interfaces are simple and understandable.
Where additional modules are necessary, their installation is in some cases performed
automatically and does not incur costs. It should be noted, however, that the
requirements for network speed and for the hardware handling audio and video streams
increase significantly.


The most interesting solutions for virtual mobility are Adobe Acrobat Connect and
OpenMeetings. Both technologies provide streaming video to the user’s web browser
and can be used for a wide range of problems related to distance learning (e-learning).
These IT systems support the following areas of eLearning:
• Virtual classrooms;
• On-line training;
• Performance improvement systems based on an LMS (Learning Management
System).
Recognition of the possibilities of virtual mobility is supported by various international
projects in which Russian higher education institutions are actively involved [6, 8].
Along with the classic e-learning tools there are some more recent ones:
• Special websites about basic educational resources and mobility programs, and
special chat rooms introduced in educational institutions for the organizers of student
and teacher mobility to discuss and exchange information;
• E-voting systems, designed to create interactive lectures and to increase student
participation in the discussion of a proposed issue or point of view;
• Peer review systems and technologies for collective creation and editing (wikis);
• Asynchronous communication tools (forums and blogs);
• Social bookmarking and social networking;
• Podcasting, online lectures, video streaming, etc.
All of these technologies are components of an open virtual platform based on
Web 2.0 that make it possible to organize effective online collective and individual
work and integrate seamlessly into the virtual learning environment, supporting the
academic mobility of both students and teachers.

5 Conclusion

The webinar is the most effective method for teaching a particular topic to the
maximum number of students, as it combines many forms of influence on them, in
contrast to traditional forms: audio tools (communication in real time and the provision
of various materials as audio recordings); video tools (videos and other materials from
the Internet); the teacher’s presentation, demonstrated in the Internet environment;
messaging not only between teacher and students but also among the students
themselves; and, of course, live chat as a means of exchanging messages with one
click.
The webinar in many ways helps to replace direct communication between teacher and
student, since it is impossible to exclude the teacher, as a person, from the educational
process entirely. The webinar’s mode of work is indirectly direct, as this online
communication via the Internet may be perceived as close, free and personal. For the
student, the teacher is actually “here” and “face to face”. Webinars offer the opportunity
to see and hear each other even at a great distance. Conducting a webinar with a live
audience and a second teacher present also seems promising: the second teacher helps
to maintain order and also participates in the webinar, which is more costly for the
university but more useful for the students. Two teachers make it possible to have
greater control, deeper reasoning and more active discussions.
Webinars are also important in the virtual mobility of students. They provide access to
international educational resources at minimal cost. The results of the study show the
growing impact of virtual mobility on improving the quality and accessibility of
educational services.

References
1. Absalyamova, S.G., Absalyamov, T.B.: Remote employment as a form of labor mobility of
today’s youth. Mediterr. J. Soc. Sci. 6(1S3), 227–231 (2015)
2. Apple, M.W., Kenway, J., Singh, M.: Globalizing Education: Policies, Pedagogies and
Politics. Peter Lang, New York (2005). 311 pages
3. Byram, M., Dervin, F.: Students, Staff and Academic Mobility in Higher Education.
Cambridge Scholars Publishing, Cambridge (2008). 320 pages
4. Dall Alba, G., Sidhu, R.: Australian undergraduate students on the move: experiencing
outbound mobility. Stud. High. Educ. 40(21), 721–744 (2015)
5. Petrova, N.N., Sidorenko, L.P.: Challenges and prospects of webinars as online-learning
method. Vestnik of the I. Yakovlev Chuvash State Pedagogical University, no. 3(91),
pp. 134–139 (2016)
6. Sakhapov, R.L., Absalyamova, S.G.: The use of telecommunication technologies in
education network. In: Proceedings of 2015 12th International Conference on Remote
Engineering and Virtual Instrumentation, REV 2015, Bangkok, pp. 14–17 (2015)
7. Sidorenko, L.P., Petrova, N.N.: Art of the lecture in real and virtual space. In:
Psycho-Pedagogical Research of the Quality of Education in the Conditions of Innovative
Activity of Educational Organization: VII All-Russia Scientific-Practical Conference,
Slavyansk-on-Kuban, pp. 163–167 (2014)
8. Loveland, E.: Student mobility in the European Union. Int. Educ. 17, 22–25 (2008)
9. Vázquez, L.K., Mesa, F.R., López, D.A.: To the ends of the earth: student mobility in
southern Chile. Int. J. Educ. Manag. 28, 82–95 (2014)
10. The Bologna Declaration of 19 June 1999, Berlin-Bologna-Web-Page. http://www.bologna-
berlin2003.de/pdf/bologna_declaration.pdf

Port Logistics: Improvement
of Import Process Using RFID

Ignacio Angulo(&), Unai Hernandez-Jayo, and Javier García-Zubia

Department of Industrial Technologies, University of Deusto,
Avenida de las Universidades 24, 48007 Bilbao, Spain
{ignacio.angulo,unai.hernandez,zubia}@deusto.es

Abstract. This paper describes a new system developed to improve the import
process of steel coils driven into a port terminal in the Port of Bilbao. A new
RFID-based system minimizes mistakes in the identification of the coils during the
inland movements of goods.

Keywords: RFID system · Tracking system · Port logistics · Steel coil

1 Introduction

Logistics has become a defining factor in Industry 4.0. Around 90% of the world’s
merchandise and commodity trade is transported by ship. In recent years, ports have
made an enormous effort on digitization, developing infrastructure to apply advanced
techniques for the collection and analysis of information [1, 2]. However, the investment
cannot be profitable while port operators lack systems to automatically integrate
information about their processes [3].
This paper describes a new system developed to improve the import process of steel
coils driven into a port terminal in the Port of Bilbao. The large number of coils
transported complicates all the different tasks associated with their intermediate storage
until they are sent to their destination by road or rail. This translates into a significant
error rate in shipments, while improper handling of the coils can cause defects in the
goods. The project uses radio frequency identification to validate the various stages of
the import process and provides the technological infrastructure for real-time reporting
of the execution of tasks [4].

2 Problem Description

Although the city of Bilbao historically has a great tradition in steel manufacturing, the
crisis of the sector now demands the importation of a great number of coils to supply
industry, mainly automotive, in the north and center of Spain. The Port of Bilbao is one
of the country’s main entry points for steel coils, which are transported by road or rail to
their destination and account for 15% of total port traffic.


Although the average stay of a coil in the harbor is less than two weeks, some coils
can remain in the port warehouse for several months. During this period, intermediate
movements often take place to facilitate access to adjacent coils. These actions, usually
executed by the terminal staff without notification, lead to a progressively uncontrolled
stock that can cause errors when identifying a specific coil. The main features of a coil
are its material, turn thickness, weight and width. Inland operations can damage the 2D
codes, which makes identification a very hard task. It is quite common that, during the
final inspection carried out when the coils are loaded onto the trucks or wagons, errors
are detected that entail an important loss of time until the coils actually requested are
finally located and loaded. It is therefore necessary to develop a system that keeps the
location of the coils updated and ensures the correct identification of each coil
transported. In addition, one of the main requirements is to achieve this objective
without significantly altering the way the terminal staff operate.

3 Functional Description

The project covers all the activity of the port terminal, guaranteeing the traceability of
the coils from the moment they are unloaded from the vessels until they leave the port.
For this purpose, the project is divided into three fundamental stages: ship unloading,
intermediate movements and terminal exit.

3.1 Ship Unloading


Unfortunately, each supplier uses a different identification system, always consisting of
simple or two-dimensional bar codes. Currently the project is limited to the coils
intended for the automobile industry that are purchased from the supplier Tata Steel.
Due to the quality and the high price of these coils, the terminal stores them in an
intelligent warehouse that constitutes a key piece of the project.
Integration of the project with the port ERP deployed in the terminal is necessary.
Technicians from the Basque company ADUR, developers of the TRANSKAL ERP,
have participated in the definition of the project to facilitate its integration and its
portability to other terminals.
This stage covers the unloading of the coils from the vessels and includes attaching
an RFID tag to each coil to achieve traceability during the remaining stages. For its
execution, an application has been developed on the mobile computer used by the
worker in charge of supervising the unloading operation.
During this stage the worker must perform the following tasks with the help of the
mobile computer:
• Once the application is opened, the worker must authenticate and select the vessel to
unload. The ERP provides a REST service with all the features of the expected
coils.


• The operator must board the ship and proceed with a visual inspection of each coil.
The application allows each coil to be matched to the list through its bar code, and
considerations and photographs on the status of the coil prior to discharge can be
added when any defect is detected.
• Once the preliminary inspection is finished, the unloading starts. This task is carried
out entirely by the port staff hired by the terminal. Each time a coil lands on the
dock, the operator must perform another visual inspection to detect any defects
caused during the unloading. Again, the application includes incident reporting with
multimedia. Once the coil is marked as unloaded, the operator takes an RFID tag out
of a bag and places it on the front of the coil, preferably over a freight. The mobile
computer includes a UHF module that allows the RFID EPC to be mapped to the bar
code in the project database.
Every time a coil is unloaded, the application reports in real time not only to the port
terminal but also to the port authority, providing important information that can improve
the planning of the vessel’s stay in the port.
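A minimal sketch of how this handheld workflow could be wired together is shown below; the base URL, the endpoint paths and the read_nearest_epc() helper are illustrative assumptions and do not reflect the actual TRANSKAL services.

```python
import requests

ERP_API = "https://erp.example/api"          # hypothetical base URL

def expected_coils(vessel_id):
    """Fetch the features of the coils expected on a vessel from the ERP REST service."""
    response = requests.get(f"{ERP_API}/vessels/{vessel_id}/coils", timeout=5)
    response.raise_for_status()
    return response.json()

def read_nearest_epc():
    """Placeholder for the handheld's integrated UHF module (short read range)."""
    raise NotImplementedError("replace with the vendor RFID API of the handheld")

def report_unloaded_coil(barcode, remarks="", photos=()):
    """Associate the RFID EPC with the bar code and report the unload in real time."""
    epc = read_nearest_epc()
    payload = {"barcode": barcode, "epc": epc,
               "remarks": remarks, "photos": list(photos)}
    requests.post(f"{ERP_API}/coils/{barcode}/unloaded", json=payload, timeout=5)
    return epc
```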

3.2 Intermediate Movements


This stage includes all the displacements of the coils during their stay in the port.
Transfers are always done by forklifts or by the automated warehouse crane. Therefore,
the coil boom of the forklifts and the crane hook have been equipped with an embedded
RFID reader capable of detecting the loaded coil.
The embedded platform is powered on when the forklift is started. The RFID reader
includes two miniaturized ceramic antennas placed on both sides to detect the tag on
either side of the coil. The reader interrogates continuously; when a tag is detected, the
system saves the position and keeps interrogating until the tag is missed. Then the
embedded platform posts the initial and final positions to the cloud server, logging the
journey and updating the real position of the coil. As the GPS signal is lost when a
forklift enters the warehouse, in that case the system assumes the coil has been placed in
the internal storage area, waiting to be stored by the crane.
Although the terminal owns an intelligent warehouse, a crane operator always
oversees the process. Every time the crane is started, it gets from the ERP a list of tasks
to be done. The list of coils that need to be moved is also sent to the embedded platform
that manages the RFID reader placed over the crane hook. If the reader detects a coil
not included in the list, a red light is turned on to alert the crane operator to a possible
mistake. The functionality of the crane RFID system is like the one deployed on the
forklifts: every time a tag is missed, the system posts the coordinates of the crane to
update the coil map of the warehouse.
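The alert logic just described could be sketched as follows; the task-list endpoint, the GPIO pin driving the red light and the way the lifted EPC is obtained are assumptions made only for illustration.

```python
import requests

try:
    import RPi.GPIO as GPIO        # drives the red alert light from the Raspberry Pi
except ImportError:                # lets the sketch run off the target hardware
    GPIO = None

RED_LIGHT_PIN = 18                 # hypothetical wiring of the alert light

def coils_to_move(erp_url="https://erp.example/api/crane-tasks"):
    """Fetch the EPCs of the coils the ERP has scheduled for movement."""
    return set(requests.get(erp_url, timeout=5).json())

def check_lifted_coil(epc, allowed_epcs):
    """Turn the red light on when the crane lifts a coil not in the task list."""
    unexpected = epc not in allowed_epcs
    if GPIO is not None:
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(RED_LIGHT_PIN, GPIO.OUT)
        GPIO.output(RED_LIGHT_PIN, unexpected)
    return unexpected
```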
The crane operator can always access the information about the raised coil.
Sometimes, as the coils are stored in up to three levels, moving a specific coil implies an
intermediate displacement of the coils above it. Although such a situation turns on the
alert, the operator can override it, and the new locations of the displaced coils are
updated automatically.


Finally, the operation differs depending on whether the coils are transported by road or
by rail. Open trucks can enter the warehouse, and the crane loads the coils directly.
However, the coils transported to their destination by rail are arranged in the transit area
of the warehouse to be loaded onto the railroad wagons by forklifts.

3.3 Terminal Exit


Currently the shipping process is similar to the unloading process. Through an
application, an operator of the port terminal supervises the coils loaded in each trailer or
wagon and documents through photographs any defects identified in them. Finally, the
operator enters the license plate of the truck or wagon and scans the loaded coils. The
application sends the information to the server, which returns a confirmation or notifies
an error regarding the schedule. Once the load of a truck or railway wagon is completed
and verified, the application notifies the port terminal and the port authority of the exit
of the transport.

4 Implementation Details

The development of the global system required the implementation of two main
subsystems: the handheld application and the embedded RFID reader. In addition, the
nature of the stored merchandise, steel coils, posed a technological challenge for a
technology based on radiofrequency, so it was necessary to analyze the behavior of
different RFID tags in the project scenario.

4.1 RFID Tag Analysis


Although designing RFID tags for metallic objects with satisfactory performance is still
a challenge [5, 6], there are several alternatives on the market designed to operate under
these conditions. Rather than designing a custom antenna, the analysis carried out
within the project aims to validate the behavior of some commercially widespread UHF
RFID tags for metal surfaces. The study analyzed the behavior of each label located on
the outer surface of the coil, on one of its faces and in the inner hole. Table 1 shows the
behavior of each tag type evaluated at different distances with the tag placed on the
frontal side of the coil. For the tests, a Vega reader from ThingMagic was used as the
RFID reader, with an omnidirectional 9 dBi antenna (Fig. 1).

Table 1. RSSI (dBm) received by the RFID reader at different distances for the tested tags
Tag 1m 2m 3m 4m 5m
Confidex Ironside −26 −30 −39 - -
Confidex Ironside Micro −29 −36 - - -
Confidex Survivor −26 −29 −33 −36 −39
M 116431 Gao −28 −32 −37 −39 -
Confidex Carrier Tough −29 −32 −36 −41 -


Fig. 1. Image captured during the analysis of the tags behavior.

As can be seen in Table 1, the results vary markedly among the labels tested. The label
chosen for the pilot, due to its significantly cheaper price, was the “Confidex Carrier”, a
label for conventional use that is glued to the coil using a methacrylate insulation layer
1 cm thick.
One of the main problems was locating the optimum position of the RFID tag on
the coil; unfortunately, every possible position involves potential risks. A location on
the inside of the coil was ruled out because of its poor electromagnetic behavior and
because friction with the forklift cylinder can damage the device. Since coils are usually
stacked at various levels and rest against each other, it was also decided to avoid
placement on the outside of the coil. Therefore, the best alternative is to place the label
on one side of the coil. This involves placing two antennas at both ends of the forklift
cylinder and on both sides of the crane hook.

4.2 Handheld Application


For the mobile application, the help of the developers of the ERP system deployed in
the port terminal was required. Instead of developing an application from scratch, an
update was carried out on the application already being used to validate the unloading
of the coils. In the ERP database, a new field was created in the table storing the coils to
hold the EPCglobal identifier of the RFID tag associated with each coil. The “Zebra
Workabout 4” (Fig. 2) mobile computer provides an RFID reader in the UHF band,
which avoids using an external reader that would complicate the operator’s task. The
read range of the handheld can be set so that only the tag attached to the nearest coil is
accessed, avoiding mistakes in the matching [7]. After the update, the operator must
identify the coil using the bar code and associate the RFID tag using the integrated
reader.
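As an illustration only, the schema change and the association step might look like the snippet below; the table and column names are assumptions rather than the real TRANSKAL schema, and SQLite is used purely as a stand-in database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")                      # throw-away demo database
conn.execute("CREATE TABLE coils (barcode TEXT PRIMARY KEY, description TEXT)")
conn.execute("INSERT INTO coils VALUES ('TATA-000123', 'steel coil, automotive')")

# New field added to the existing coils table to hold the EPCglobal identifier.
conn.execute("ALTER TABLE coils ADD COLUMN epc TEXT")

# Done by the handheld after scanning the bar code and reading the nearest tag.
conn.execute("UPDATE coils SET epc = ? WHERE barcode = ?",
             ("3034257BF7194E4000001A85", "TATA-000123"))   # hypothetical values
conn.commit()
```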


Fig. 2. Operator of the port terminal validating the unloading of a set of coils.

4.3 Embedded RFID Reader


A forklift and a crane were equipped with an embedded RFID reader during the pilot
(Fig. 3). The system is composed of an M6e RFID reader from ThingMagic and a
Raspberry Pi 3, which is responsible for providing the system functionality. This
component of the system is fundamental for maintaining an updated map of the
warehouse [8].

Fig. 3. Image of the crane and forklift used in the pilot.

The M6e embedded reader is a small, high-performance RFID reader that supports two
monostatic RF antennas and read/write power levels adjustable by command from
−5 dBm to +30 dBm. The connection with the Raspberry Pi is made through a UART
port.


The algorithm on the embedded platform is divided into separate processes. A daemon
process continuously interrogates for RFID tags and provides, using Pyro4, the
distributed object middleware for Python [9], the tags currently in the scope of the
reader and their Received Signal Strength Indication (RSSI). Another Pyro4-enabled
process provides the current location of the forklift from a GPS module, or the X-Y-Z
coordinates of the crane supplied by the SIEMENS PLC deployed in the crane through a
Modbus connection. A third process, responsible for the business layer, analyses the
detected coils and the location of the system in order to detect the coil movements
performed. The ERP provides a web service to which every movement detected for an
individual coil is pushed, guaranteeing the real-time tracking of all labelled coils in the
warehouse.
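A minimal sketch of how these cooperating processes could be structured is given below. The Pyro4 object name and port, the host name, the read_m6e_tags() stub and the ERP endpoint are illustrative assumptions; the real M6e driver, the GPS/Modbus position process and the TRANSKAL web service are not shown.

```python
import threading
import time

import Pyro4
import requests

@Pyro4.expose
class TagService:
    """Daemon-side object publishing the tags currently seen and their RSSI."""
    def __init__(self):
        self._lock = threading.Lock()
        self._tags = {}                            # EPC -> RSSI in dBm

    def update(self, tags):
        with self._lock:
            self._tags = dict(tags)

    def current_tags(self):
        with self._lock:
            return dict(self._tags)

def read_m6e_tags():
    """Stub standing in for the M6e UART driver; should return {epc: rssi}."""
    return {}

def run_tag_daemon():
    """First process: interrogate continuously and expose the result via Pyro4."""
    service = TagService()
    daemon = Pyro4.Daemon(host="0.0.0.0", port=9090)
    daemon.register(service, objectId="coil.tags")

    def poll():
        while True:
            service.update(read_m6e_tags())
            time.sleep(0.5)

    threading.Thread(target=poll, daemon=True).start()
    daemon.requestLoop()

def run_business_layer(get_position):
    """Third process: turn tag appearance/loss into coil movements for the ERP."""
    tags = Pyro4.Proxy("PYRO:coil.tags@forklift-pi:9090")
    carried = {}                                   # EPC -> pick-up position
    while True:
        seen = set(tags.current_tags())
        for epc in seen - set(carried):            # tag appeared: coil picked up
            carried[epc] = get_position()
        for epc in set(carried) - seen:            # tag missed: coil released
            requests.post("https://erp.example/api/coil-movements",
                          json={"epc": epc,
                                "from": carried.pop(epc),
                                "to": get_position()},
                          timeout=5)
        time.sleep(1.0)
```

The second process, which reads the GPS module or the crane PLC over Modbus, is represented here only by the get_position callable passed to the business layer.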

5 Conclusion

To optimize warehouse management, every single movement of any coil is
conveniently reported to the adopted port ERP, keeping the location always up to date.
This automatic generation of a coil map drastically reduces the time spent by operators
in locating the requested units and the number of intermediate movements in the
warehouse. Failures due to misidentification of coils are also removed. The port
community system of the Port of Bilbao receives detailed information about the ship
unloading process, allowing it to anticipate deviations from the planning that may affect
the transit of other ships. Finally, the port authority and the customs authority receive
updated information on the entry and exit of trucks at the terminal.
Furthermore, RFID eradicates mistaken coil transport, significantly reducing the
error rate.

Acknowledgments. This research was supported by Basque Government, Grant ER-2015/00024.

References
1. Wu, Y., Xiong, X., Gang, X., Nyberg, T.R.: Study on intelligent port under the construction
of smart city. In: 2013 IEEE International Conference on Service Operations and Logistics,
and Informatics (SOLI), Dongguan, pp. 175–179 (2013). doi:10.1109/SOLI.2013.6611405
2. Wang, Z., Subramanian, N., Abdulrahman, M.D., Cui, H., Wu, L., Liu, C.: Port sustainable
services innovation: Ningbo port users’ expectation. Sustain. Prod. Consumption, 23 August
2016. doi:10.1016/j.spc.2016.08.002. ISSN 2352-5509
3. Fontana, C.F., Papa, F., Marte, C.L., Yoshioka, L.R., Sakurai, C.A.: Intelligent transportation
system as a part of seaport terminal management system. Int. J. Syst. Appl. Eng. Dev. 8, 41–
46 (2014)
4. Kim, M., Kim, K.: Automated RFID-based identification system for steel coils. Prog.
Electromagnet. Res. 131, 1–17 (2012)
5. Jung, S.-C., Kim, M.-S., Yang, Y.: Baseband noise reduction method using captured TX
signal for UHF RFID reader applications. IEEE Trans. Ind. Electron. 59, 592–598 (2012).
ISSN 0278-0046
6. Kuo, S.K., Chen, S.L., Lin, C.T.: Design and development of RFID label for steel coil. IEEE
Trans. Ind. Electron. 57(6), 2180–2186 (2010). doi:10.1109/TIE.2009.2034174


7. Ukkonen, L., Sydänheimo, L., Kivikoski, M.: Read range performance comparison of
compact reader antennas for a handheld UHF RFID reader. In: 2007 IEEE International
Conference on RFID, Grapevine, TX, pp. 63–70 (2007). doi:10.1109/RFID.2007.346151
8. Bong, G.H., Chang, Y.S., Oh, C.H.: A practical algorithm for reliability-based RFID event
management considering warehouse operational environment. Int. J. Adv. Logistics 3(3),
100–108 (2014)
9. Douglas, B., Kumar, D., Meeden, L., Yanco, H.: Pyro: a python-based versatile programming
environment for teaching robotics. J. Educ. Res. Comput. (JERIC) 4(3) (2004)

Integration of an LMS,
an IR and a Remote Lab

Ana Maria Beltran Pavani(&), William de Souza Barbosa,


Felipe Calliari, Daniel B. de C Pereira, Vanessa A. Palomo Lima,
and Giselen Pestana Cardoso

Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, Brazil


{apavani,dpereira,vpalomo,gpestana}@puc-rio.br,
wsbarbosa@ele.puc-rio.br,
felipe.calliari@opto.cetuc.puc-rio.br

Abstract. For over two decades an IR – Institutional Repository (at the time
referred to as Digital Library) and an LMS – Learning Management System
have been developed and integrated under the Maxwell System at Pontifícia
Universidade Católica of Rio de Janeiro (PUC-Rio). It supports traditional
face-to-face courses and offers distance and blended learning options. It is also a
publishing platform. This model has proved very practical for many reasons
mentioned in this work. To enhance the options for traditional, blended and
distance learning, a Remote Lab was added to the Maxwell System. Adding a
Remote Lab is an enhancement to the learning environment since it is “real”
equipment and not merely software for numerical computation. This work
addresses this new integration and how it benefits from the original infrastruc-
ture of an IR and an LMS implemented as a single platform.

Keywords: Remote labs · Learning Management Systems · Institutional
repositories · Digital learning resource · Learning technologies

1 Introduction

ICT – Information and Communication Technology has provided a large number and a
wide variety of tools to support teaching and learning. Engineering Education has
benefitted from these tools. Many software products have allowed students to simulate
and/or solve problems. The same products help students get ready to go to the “real”
lab by simulating experiments in advance. Videos, interactive courseware, animations,
texts, etc. also support learning and are the contents that help blended learning
(b-learning) and distance learning (e-learning) to be accomplished.
Engineering Education requires experimentation. In Electrical Engineering, there
are lab classes for Electric Circuits, Analog Electronics, Digital Electronics, Control
Systems, Electromechanical Energy Conversion, etc. They offer experiments with “real”
equipment and components, and prepare future engineers to deal with “real”
physical/technical problems. Remote Labs are a fairly new resource that is meant to be
added to the options to be used in Engineering Education. Remote Labs are “real” labs
that can remotely be used through computer networks, including the Internet. They can


be used in addition to traditional labs to provide one more step of preparation before the
traditional lab classes.
The use of ICT tools requires the management of multiple resources and different
platforms. It is of paramount importance that students and instructors can use them
easily and move among them in a seamless way. This integration of platforms,
resources and users is a task for the technical staff, who must work very closely with the
users in order to provide solutions that suit their needs. This work addresses such an
integration. It presents the results of integrating a Remote Lab into a system that is at the
same time an Institutional Repository and a Learning Management System.
Section 2 introduces some technical definitions. The context at the university is
addressed in Sect. 3; it is very important because of the university’s long experience in
using ICT both for the management of digital resources and as learning support.
Section 4 explains the integration of the Remote Lab into the local platform. The use of
the Remote Lab is presented in Sect. 5 and final remarks are in Sect. 6.

2 Some Definitions

The title of this article contains two acronyms and an expression that must be defined
so that their uses are made clear.
• LMS – Learning Management System
Wright et al. [1] defined a Learning Management System as “An LMS is com-
prehensive, integrated software that supports the development, delivery, assess-
ment, and administration of courses in traditional face-to-face, blended, or online
learning environments.”
An LMS is a software environment to support different types of learning processes.
There are many products available. Some are commercial solutions and others are free
and open source products. There are also many “home grown” LMSs.
• IR – Institutional Repository
Lynch [2] created the expression Institutional Repository as “A university-based
institutional repository is a set of services that a university offers to the members
of its community for the management and dissemination of digital materials
created by the institution and its community members. It is most essentially an
organizational commitment to the stewardship of these digital materials, including
long-term preservation where appropriate, as well as organization and access or
distribution.”
Currently it is used worldwide and has taken the place of the expression digital
library that was very popular in the 1990s. It is very broad since it aims at digital
materials created by the institution – this means that articles, ETDs (Electronic Theses
and Dissertations), senior projects, monographs, etc. can be included. But not only
these – digital learning materials can be on the IR too.
An interesting aspect of this definition is that it addresses digital materials; this
means that an IR is not a catalog of non-digital items. It hosts both the descriptions and


the digital files of the documents that belong to the collection. Since digital learning
materials were mentioned, it is necessary to present some definitions related to them.
The first is LO – Learning Object, the second is SCO – Shareable Content Object and
the third is Asset. They follow:
• LO – Learning Object
The definition of a Learning Object comes from IEEE LTSC – The Institute of
Electrical and Electronics Engineers Learning Technology Standards Committee in
page 1 of its IEEE Standard for Learning Object Metadata [3]: “For this standard, a
learning object is defined as any entity – digital or non-digital – that may be used
for learning, education, or training.”
This definition is important in the context of Engineering Education because it
allows non-digital artifacts to be classified as LOs.
• SCO – Shareable Content Object
SCORM – Shareable Content Object Reference Model is defined as: “The
Shareable Content Object Reference Model (SCORM) is a model that references
and integrates a set of interrelated technical standards, specifications, and guide-
lines designed to meet high-level requirements for e-learning content and systems.”
in [4] page 3-3. The SCORM defines SCO as: “SCOs are the smallest logical units of
information you can deliver to your learners via an LMS.” [4] page 11-4.
SCOs are always digital since they are to be delivered via an LMS. This is a
difference they present when compared to LOs. At the same time, SCOs and LOs have
two common characteristics – they are units with educational purposes and are “seen”
by LMSs.
• Asset
Assets are defined by SCORM as: “Assets are electronic representations of
media, texts, images, sounds, HTML pages, assessment objects, and other pieces of
data. They do not communicate with the LMS.” [4] page 3-2.
In 2000, the terms reusable chunks of instructional media, reusable instructional
components, reusable digital resources, reusable learning objects (LO) were introduced
by Wiley [5]. Later on, in 2009, the term Reusable Learning Object (RLO) was used by
Alsubaie [6].
Except for Asset, the terms SCO, LO, RLO, instructional media, reusable instructional
components and reusable digital resources have much in common and, therefore, fuzzy
boundaries. At the same time, reusable digital resources can be used as Assets.
• Remote Lab
The expression Remote Lab is associated with VRLs – Virtual & Remote Labs.
Virtual and Remote Labs are different from one another. Heradio et al. [7] presented
four possibilities of labs according to their physical natures and the ways they are
accessed:
• Local Access – Real Resource
• Local Access – Simulated Resource


• Remote Access – Real Resource


• Remote Access – Simulated Resource
While Virtual Labs rely on Simulated Resources, Remote Labs use Real Resources,
i.e., equipment and components found in traditional labs.
This work addresses the third case, i.e., a real resource that is remotely accessed
using computer networks, including the Internet. This is the meaning of Remote Lab in
this article.

3 The Context at the University

The context at the university is presented in three steps. The first is the system that is at
the same time an IR and an LMS; it is a single platform that hosts both functionalities.
The second is the first integration that was implemented with an external system. The
third is the Remote Lab that was integrated with the System.

3.1 The Institutional Repository and the Learning Management System


The IR and the LMS are implemented on a single platform called The Maxwell System
(http://www.maxwell.vrac.puc-rio.br/). It is also integrated with SciLab® (http://www.
scilab.org/); this will be discussed later in this section.
The Maxwell System started being deployed in the middle of the 1990s as a digital
library of courseware in Electrical Engineering. It is important to remark that course-
ware at that time was very simple since IT was quite limited. In 1999, the system was
registered by the university at the Brazilian Patent Office.
As time went by, new functions were added and new versions of the system made
available. The current version is 4.0 and it is accessible to the blind and the visually
impaired. The main IR and LMS functions are presented in the following subsections.
• Institutional Repository Features
The Maxwell System hosts over 22 K titles of digital contents. There is a large
variety of types/subtypes. ETDs (Electronic Theses and Dissertations) are the largest
collection, with over 8,600 items. The second largest collection is that of Senior
Projects with over 4,100 and the third largest is the articles collection with over 1,500
items. The courseware collection (texts, videos, interactive modules, simulators, etc.)
has over 2,500 items. There are many other types/subtypes on the system.
In order to properly describe the items, the system is compliant with three metadata
standards: DCMES – Dublin Core Metadata Element Set (ISO 15836) (http://www.
dublincore.org/), ETD-ms – an Interoperability Metadata Standard for Electronic
Theses and Dissertations (http://www.ndltd.org/standards/metadata) and MTD2-BR –
Padrão Brasileiro de Metadados para Teses e Dissertações (http://oai.ibict.br/mtd2-br/
MTD2_Fev2005.doc). The last two are specific for online theses and dissertations
(ETD); one is international and the other is Brazilian. The description of courseware
has many elements of the LOM Standard [3].
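By way of illustration, a Dublin Core description of a courseware item could be expressed as the mapping below; the element names follow DCMES, while all of the values are invented for the example and do not describe a real item of the collection.

```python
# Hypothetical DCMES description of a courseware item (all values are invented).
dc_record = {
    "dc:title": "Interactive module on three-phase circuits",
    "dc:creator": "Department of Electrical Engineering, PUC-Rio",
    "dc:subject": "Electric circuits; three-phase systems",
    "dc:description": "Interactive courseware with simulations and quizzes.",
    "dc:type": "InteractiveResource",
    "dc:format": "text/html",
    "dc:language": "pt",
    "dc:date": "2016-08-01",
    "dc:identifier": "http://www.maxwell.vrac.puc-rio.br/",   # collection URL used as a stand-in
    "dc:rights": "All rights reserved",
}
```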


The IR characteristic of the system allowed the creation of a collection of Assets


that are shared by different items of courseware [8]. The LMS does not manage Assets
but the IR does. Assets are still images (block diagrams, schematic representations,
photographs, graphics), interactive quizzes, MATLAB® code, SciLab® code, html
pieces, videos, animations, etc. Currently there are over 700 Assets but a little over
500 have been described and uploaded to the system so far. The Assets have a very
high ratio of reuse since the courseware collection is focused on Electrical Engineering.
The Assets collection has had additions due to the implementation of the Remote Lab;
this will be discussed later in this paper.
• Learning Management System Features
The Maxwell System started as a digital library of courseware in Electrical Engi-
neering meant to be used by the students of the university; some access control
functions began being implemented then. They grew in number and sophistication.
Then, it was decided that the system could support the learning process of students in
traditional face-to-face courses. As time went by, e-learning and b-learning courses
started being offered from this platform with new added functions.
Currently, it supports traditional, b-learning and e-learning courses. It is integrated
with the university administrative system.
It offers a “Classroom” (to support traditional courses) and a “Virtualroom” (for e- and b-learning). The latter has more functions. Among the functions, the following can be mentioned: access to course materials, access to recommended bibliography, bulletin board, list of participants with photos & short bios (optional), discussion forums, chats, agenda, access to grades (individual grades and statistical data on the whole class), instructions on how to use the system and contacts. The first version of online tests was implemented in the second term of 2016.
Since students and faculty are used to the system, the decision was to integrate the Remote Lab into this platform, so that they could have the functions of the Remote Lab available from a platform they feel comfortable with.
One important feature must be mentioned at this point because it impacted the way the Remote Lab was integrated into the system – the scheduling of activities. Activities are scheduled and they have many attributes – initial time and date, final time and date, place (it can be the system), whether grades are assigned and their types, etc. All this information is recorded in the database. Students are informed of the activities to participate in through two different applications – Atividades (activities), which lists all activities of a given course, and Agenda, which has three options to see all the activities of all courses a student is enrolled in, plus office/tutoring hours; one of the options is a calendar. When the online tests were implemented, the Atividades application got a new function – it connects the student to the activity if it is within the scheduled time; the online tests are run from the Maxwell System.
• Institutional Repository + Learning Management System
This model has proved very practical for many reasons but the most important is
that resources are course independent – they are independent items of the digital
collection that are described with a detailed metadata set. The items can be:
(1) courseware – resources developed to fulfill the needs of syllabi; (2) learning objects


– self contained topics that can be used as references or to support other items;
(3) simulators and interactive exercises – items that allow students to practice; and
(4) texts of various natures (theses, dissertations, senior projects, monographs, articles)
that are at the same time products of the educational process and inputs to it.

3.2 SciLab® – the First Integration


Simulation is an interesting tool in the learning process. In order to develop Learning
Objects with this characteristic – Simulator Objects – the Maxwell System was inte-
grated with SciLab®. Simulator Objects are created with some theoretical background
and access to pages where users can choose parameters, functions and scales, and
submit to SciLab®. The results (graphical and/or alphanumerical) are returned to the
system and presented to the user. The user does not “see” SciLab® since its user is the Maxwell System. The SciLab® server was installed and the communication between
the two systems was implemented.
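A minimal sketch of how such a round trip might look from the Maxwell System's side is shown below, assuming a hypothetical HTTP endpoint on the SciLab® server and a hypothetical JSON payload; the actual communication protocol between the two systems is not described in this paper.

```python
import json
from urllib import request

# Hypothetical endpoint of the SciLab server; the real interface is not documented here.
SCILAB_ENDPOINT = "http://scilab-server.example/run"

def run_simulation(module, parameters):
    """Send the user's choices to the SciLab server and return its results.

    `module` would name a .sce simulation script and `parameters` the values the
    user picked on the Simulator Object page (assumed payload format).
    """
    payload = json.dumps({"module": module, "parameters": parameters}).encode("utf-8")
    req = request.Request(SCILAB_ENDPOINT, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)  # e.g. plot data and/or numerical results

# Example call with made-up parameter names for a first-order RC circuit
results = run_simulation("rc_first_order.sce", {"R": 1e3, "C": 1e-6, "Vs": 5.0})
```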
The first object was made available in May 2015. Currently, there are 13 different
objects with a total of 50 simulating modules and over 100 Assets with extension .sce.
Additional Simulator Objects have been developed to support the Remote Lab
experiments. They are related to the topics and circuits the students use in the course.

3.3 VISIR – the Remote Lab – the Second Integration


The LMS and the IR have been available for a long time and are solid tools for learning
and teaching. Simulation was added in 2015. The number of learning materials can be
counted by the hundreds and they are of various natures.
It was then necessary to add Remote Labs to enhance the ICT support to Engi-
neering Education at the university. The objective of the faculty involved in the project
was to have Remote Labs in addition to all other available tools to prepare for the
traditional lab activities. The Remote Lab in the context of this work is defined as
“Remote Access – Real Resource” [7].
Marques et al. [9] summarized the advantages of Remote Labs presented in the
literature as: (1) accessibility; (2) availability; and (3) safety. These advantages hold even when e- and b-learning are not under consideration. Students in traditional courses can use a Remote Lab when they are not at the university or when the traditional labs are closed. This yields more opportunities to learn.
The Remote Lab that was integrated is VISIR – Virtual Instrument Systems in
Reality, a Remote Lab for Electric and Electronic Circuits. Tawfik et al. [10] presented
a good description of the main technical aspects of VISIR. Alves et al. [11] addressed
the integration of VISIR with Moodle (http://www.moodle.org/), a free and open
source LMS. It is important to remark that the focus of [11] is on the pedagogical aspects of the integration, assessed through students' performance, in contrast to this work, which addresses the informational and technical aspects of the integration. VISIR is the Remote Lab available at the university; if other Remote Lab equipment were used, the conceptual solution would be the same, though the informational and technical aspects would probably be different.
The next section presents the integration of VISIR and the Maxwell System.


4 The Integration of VISIR and the Maxwell System

The integration of VISIR and the Maxwell System had different aspects that are
complementary. All of them are necessary to achieve the model of use.
The model of use is based on three premises: (1) the Remote Lab, like the Maxwell System, is an institutional resource that must be prepared to be used in different courses with different instructors; this has consequences for the integration of both systems; (2) the Remote Lab is part of the learning resources offered to students and faculty, and for this reason is to be integrated with the platform where all other resources are made available; and (3) digital materials are to be available for students to study and be prepared for the use of VISIR before going on to the traditional lab. The three
premises are discussed in the following subsections. Another subsection addresses the
installation of VISIR and the communication with the Maxwell System.

4.1 VISIR as an Institutional Resource


VISIR requires many actions to be performed before it can be used for a set of experiments. The actions are a consequence of its architecture and implementation. It is not a “plug and play” resource, nor is it software that one simply downloads, installs and uses. This subsection is a little technical but it is necessary to
understand the solution concerning the technical documentation required for the VISIR
operation.
VISIR is made of a set of protoboards that host instruments (source, signal gen-
erator, multimeter and oscilloscope) and components that allow the experiments to be
performed. In order to be ready to use, the components must be installed on the
protoboards and the technical documentation uploaded on the VISIR server. The
documentation is:
• Component List
The Component List is a text file that contains all the elements that are physically
mounted on VISIR. Each element is described with attributes: type, number of the
protoboard where it is mounted, number of the relay in which it is installed, names of
the nodes it is connected to, and a description. Elements can be resistors, capacitors,
inductors, diodes, transistors, integrated circuits and wires. Nodes are identified as 0, A,
B, C, D, E, F, G and H.
The Component List is a result of all components that are installed on the proto-
boards, thus it is independent of the experiments. There is only one Component List for
each VISIR installation at a time. This is an important characteristic of this file because
it determines the Collection it belongs to.
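The exact file syntax is defined by VISIR; purely as an illustration, the sketch below parses a component list under an assumed whitespace-separated layout (type, protoboard, relay, nodes, description) matching the attributes listed above. The format is an assumption, not VISIR's actual syntax.

```python
from dataclasses import dataclass

@dataclass
class Component:
    comp_type: str      # e.g. resistor, capacitor, wire
    protoboard: int     # protoboard where it is mounted
    relay: int          # relay in which it is installed
    nodes: tuple        # nodes it is connected to (0, A..H)
    description: str

def parse_component_list(text):
    """Parse a component list assuming one component per line:
    <type> <protoboard> <relay> <node1>-<node2> <description...>
    (assumed layout for illustration only)."""
    components = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        comp_type, board, relay, nodes, *desc = line.split()
        components.append(Component(comp_type, int(board), int(relay),
                                    tuple(nodes.split("-")), " ".join(desc)))
    return components

sample = "resistor 1 3 A-B 1 kOhm carbon resistor"
print(parse_component_list(sample))
```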
• Max Lists
VISIR uses Max Lists to “authorize” circuits that can be mounted. A circuit that
belongs to a Max List is considered safe, i.e., it will not harm the equipment. A Max List is
a text file describing:


– The sources that can be used in the corresponding experiment and their limits of
voltage and current;
– The components that can be used in the corresponding experiments. They are
subsets of the components listed in the Component List.
A Max List is associated with an experiment. Therefore, there is one Max List for
each experiment to be performed.
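Conceptually, checking a circuit against a Max List amounts to verifying that every requested component and source setting stays within an authorized subset. The sketch below illustrates that idea with plain Python sets and made-up component names and limits; it is not VISIR's actual validation code.

```python
def circuit_is_authorized(requested_components, requested_sources, max_list):
    """Return True if the circuit only uses components and source limits
    allowed by the Max List (illustrative logic, not VISIR's implementation)."""
    allowed = set(max_list["components"])
    if not set(requested_components) <= allowed:
        return False
    for source, value in requested_sources.items():
        limit = max_list["source_limits"].get(source)
        if limit is None or abs(value) > limit:
            return False
    return True

# Hypothetical Max List for a first-order RC experiment
max_list = {"components": {"R_1k", "C_1u", "wire"},
            "source_limits": {"Vdc": 10.0}}      # volts

print(circuit_is_authorized({"R_1k", "C_1u"}, {"Vdc": 5.0}, max_list))  # True
print(circuit_is_authorized({"R_10k"}, {"Vdc": 5.0}, max_list))         # False
```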
• Equipment Configuration File
The Equipment Configuration File is a text file identified as filename.cir. It is
generated by VISIR when used in the mode that allows experiments to be created. It
contains the following information:
– Equipment and components to be used in the experiment related to the Equipment
Configuration File;
– The components that can be used in the corresponding experiments. They are
subsets of the components listed in the Component List.
An Equipment Configuration File is associated with an experiment. Therefore,
there is one Equipment Configuration File for each experiment to be performed.
When an experiment is created in VISIR, the Equipment Configuration File is
automatically generated. The person creating it must save it on the local computer. In
the case addressed by this work, the creator must send it to the information processing
staff for the file to be described, added to the corresponding Collection (next para-
graphs) and stored on the system. The creator of the experiment can ask the staff to
upload the file to the Maxwell System.
The Equipment Configuration File on the Maxwell System is of paramount
importance in the integration process. This will be addressed in Subsect. 4.4.
It is clear the Component List is a document of VISIR. On the other hand, the Max
Lists and the Equipment Configuration Files are documents that refer to experiments.
In order to store, make available and preserve these documents, the IR character-
istic of the Maxwell System was used. The DCMES Standard classifies types of
resources, one of the types is a “collection”, which means a set of resources with
specific characteristics. The Maxwell System uses two other attributes to classify
resources – “subtype” and “nature”. Subtype adds a more specific characterization of
resources; for example, “text” is very wide and needs additional specification, such as
article, manual, ETD, etc. “Nature” is used to specify a focus to the resource; two
examples are “technical” and “educational”. Combining the three possibilities –
type/subtype/nature – two sets of collections were created:
• Documentação Técnica do VISIR – Virtual Instruments System in Reality
(Collection/Technical Documentation/Technical)
This collection holds the Component List, photographs of the protoboards with the
components, schematic representations of the components on the protoboards, the Data
Sheet of VISIR provided by the manufacturer (with the necessary authorization) and
the Manual Técnico de Utilização do VISIR [12] written by Barbosa.


Two remarks are important. The first is that the manual contains information about
VISIR and its installation at PUC-Rio. The second is that all the documents, except the Data Sheet, are updated over time according to changes in the configuration. Changes in the configuration may occur due to adding and/or deleting experiments.
• Name of the Experiment(*) (Collection/Remote Experiment/Educational)
There is one such collection for each experiment to be performed. Each one contains the Experiments Descriptions & Assignments (text), a set of support digital resources, the Max Lists for the experiments and the Equipment Configuration Files.
(*) An example of a title for such a collection is Circuitos de Primera Ordem (First Order Circuits).
Figures 1 and 2 show the catalog descriptions of two collections – the Technical
Documentation Collection and a Remote Experiment Collection. There are as many of
the second type collection as the number of experiments installed on VISIR.

Fig. 1. Description of the Technical Documentation Collection.

A Remote Experiment Collection may hold all types of learning materials the
instructor chooses from the ones available on the Maxwell System. More on this
subject will be presented when the third premise is discussed.
At this point, there is a connection with the Assets that were presented in Sect. 2.
Some of the items of the collections do not have educational functions and have no
meaning outside the collections they belong to; some examples are the Max Lists, the
Component List and the photos of the protoboards. For this reason, they were classified
as Assets. On the other hand, there are many resources that are not Assets; some
examples are the Experiments Descriptions & Assignments and the Manual Técnico.


Fig. 2. Description of the First Order Experiment Collection.

4.2 VISIR Is Another Learning Resource


The university has traditional labs for the engineering courses that require them. It also
offers access to MATLAB® and CircuitLab® through licenses that students can use.
The Maxwell System has a large collection of educational resources in Electrical
Engineering. VISIR came as an additional resource for students.
Currently, there are two types of uses of VISIR: (1) laboratory classes of Electric and Electronic Circuits, which is a mandatory course in the curricula of the Computer, Control & Automation and Electrical Engineering programs; and (2) extracurricular
activities for students who do not take Circuits.
In the first case, VISIR is one of the resources to be used by students. This can be
seen by examining the experiment description of First Order Circuits [13]. It clearly
indicates that students are required to study the theory on the topic, simulate with
CircuitLab®, use VISIR and then go to the traditional lab.
A similar situation is described in one of the cases analyzed by Marques et al. [9],
i.e., VISIR as one of the resources available to students.

4.3 Digital Materials to Be Used by Students


Abundant learning resources were available from the Maxwell System before VISIR
was deployed. This happened because the system had been used to support traditional
and b-learning courses. For example, 35 videos with the complete syllabus of the
Electric and Electronic Circuits course [14] were published in 2013–2014, 51 Learning
Objects in Electrical Engineering [15] were published in 2012–2017 and 15 Simulator
Objects [16] were published in 2015–2016.
When VISIR started, a new series [17] started too. Its aim is to organize resources
developed for the use of VISIR. Additional learning materials have been developed to


support the experiments. The reason for this is that before VISIR, the lab classes used
the Maxwell System only for support – hosting experiment descriptions, posting
agendas, etc. One example of such a new resource is the Simulator Object Circuitos RLC de Segunda Ordem em Diferentes Topologias [18], developed using SciLab®. Figure 3 shows the opening screen of the object and Fig. 4 shows one of the configurations that the object supports.

Fig. 3. Opening screen of a simulator.
Fig. 4. An internal screen showing one of the configurations that the simulator object supports.

4.4 Installation of VISIR and Integration to the Maxwell System


Before VISIR was installed, the Maxwell System and NI LabVIEW® were integrated
using another process to emulate VISIR. When NI PXI® and VISIR arrived, the
installation was concluded. The challenge was to be able to command VISIR from the
Atividades application presented in Subsect. 3.1. It had to be enhanced in order to be
able to offer a “door” to enter VISIR. This was accomplished by creating a link to the
Asset that is the Equipment Configuration File for the corresponding experiment. This
Asset is on the IR feature of the system and during the process of creating the Activity
(on the Maxwell System LMS feature) the program asks for an Asset of
Type = Equipment/Subtype = Equipment Configuration. The user chooses the file
corresponding to the experiment; it becomes a link as Fig. 5 shows. The link is active
when the dates and times specified to perform the experiment are valid; the scheduling
procedure is the same that has been used for discussion forums, chats and online tests.
What is new is that it links to VISIR. This is possible because the technical docu-
mentation is on the system.
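A minimal sketch of the time-window logic that decides whether the link to the experiment is active might look like the following; the field names are assumptions, since the Maxwell System's database schema is not described in this paper.

```python
from datetime import datetime

def link_is_active(activity, now=None):
    """Return True if the remote experiment can be entered right now.

    `activity` is assumed to carry the scheduling attributes mentioned above
    (initial and final date/time); the attribute names are illustrative.
    """
    now = now or datetime.now()
    return activity["start"] <= now <= activity["end"]

# Hypothetical scheduling window for Remote Experiment 3
experiment_3 = {"start": datetime(2016, 10, 3, 8, 0),
                "end": datetime(2016, 10, 7, 22, 0)}
print(link_is_active(experiment_3, now=datetime(2016, 10, 5, 14, 30)))  # True
```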


Fig. 5. Remote Lab environment on the Maxwell System showing the link to “Entrar” (Enter),
in the ellipse, to perform Remote Experiment 3.

5 Using VISIR

VISIR is used by three different players – instructors, students and technical staff. Each
has a particular and specialized set of functions and the system must support all of
them. The Maxwell System has always been used by the same three players. Before the
profiles of the users are presented, it is important to remark that the system has always
identified persons by their roles and each role has a specific profile of functions and
authorization levels. Thus, only new functions had to be created.
As mentioned in Sect. 3, the decision to integrate the Remote Lab into the Maxwell System saved a lot of work – all users are already identified and have
profiles, information comes from the university administrative system, courseware with
defined levels of access are on the IR, tables of courses and classes are on the LMS, etc.
The players and the adjustments made to suit them follow.
• Technical Staff
The functions available to technical staff were not impacted since this set of users
has been managing persons and resources for many years. The integration of VISIR
and Maxwell was very comfortable for this group.
• Instructors
Some work has been devoted to adding functions for instructors to use and manage
VISIR from the System. Two were adjustments of functions already available for the
use and management of the “Classroom” and/or the “Virtualroom”: (1) configuration of
the environment to suit the needs and/or preferences of the instructors; and
(2) assessment of accesses by students. Two were new functions related to information


on the IR: (1) browsing Remote Experiments Collections; and (2) browsing Technical
Documentation collections. One was a completely new function that is performed from
the system on the VISIR equipment – the Creation of a New Experiment. To perform
this function VISIR has a feature that is not available to students – a button with a “+”
sign that allows the inclusion of components. Figure 6 shows the VISIR interface when
a remote experiment can be created – the “+” sign is in the ellipse.

Fig. 6. VISIR interface available for instructors to create remote experiments.

The other functions – posting news on the bulletin board, mailing list, posting
grades, assigning activities, posting bibliography, etc. – were not affected. They are
used as they have always been.
Figures 7 and 8 show, respectively, the instructors menu and an online accesses
report (students’ names were erased).
• Students
Students maintained the same functions they have always had and got a new one –
access to VISIR. Figure 5 shows the list of remote experiments where only the last is
available due to the scheduling defined by the instructors. Figure 9 shows the envi-
ronment offered to students to get ready for the experiments – the courseware is organized
according to the configuration created using the function in the Instructors’ menu. The
upper part contains the Reference Resources and the lower part (partially shown)
contains the Experiments Assignments.


Fig. 7. Instructors’ menu.

Fig. 8. Accesses report.


Fig. 9. Materials available for the remote experiments.

6 Final Remarks

This is a project under way. The implementation is compliant with the premises that
were stated.
The first run of VISIR at the university was in the second semester of 2016 which
was the first semester of the integration. It may happen that new functions will be
necessary and current functions will need enhancements.
At the end of the term, questionnaires were handed to students and are currently
under analysis. Adjustments, new functions, etc. can be implemented as a consequence of the surveys.
Beginning next March, the General Electricity course will start using VISIR; thus, new instructors and their students will be able to contribute suggestions.

References
1. Wright, C.R., Lopes, V., Montgomerie, T.C., Reju, S.A.: Selecting a learning management
system: advice from an academic perspective. EDUCAUSE Rev., 21 April 2014. http://
www.educause.edu/ero/article/selecting-learning-management-system-advice-academic-
perspective. Accessed 05 Feb 2015
2. Lynch, C.: Institutional Repositories: essential infrastructure for scholarship in the digital
age, ARL Bimonthly report, 226, United States, February 2003. http://www.arl.org/
resources/pubs/br/br226/. Accessed 05 Feb 2015
3. IEEE Standard for Learning Object Metadata, 1484.12.1TM (2002). http://dx.doi.org/10.
1109/IEEESTD.2002.94128. Accessed 05 Feb 2015


4. Advanced Distributed Learning: ADL Guidelines for Creating Reusable Content with
SCORM 2004, July 2008. http://www.adlnet.gov/wp-content/uploads/2011/07/ADL_
Guidelines_Creating_Reusable_Content.pdf. Accessed 05 Feb 2015
5. Wiley II, D.A.: Learning object design and sequencing theory. Ph.D. dissertation, Brigham
Young University, United States, June 2000. http://opencontent.org/docs/dissertation.pdf.
Accessed 07 Feb 2015
6. Alsubaie, M.: Reusable objects: learning object creation lifecycle. In: Proceedings of the
Second International Conference on Development of eSystems Engineering, DeSE 2009,
Abu Dhabi, UAE, pp. 321–325 (2009). http://dx.doi.org/10.1109/DeSE.2009.63. Accessed
07 Feb 2015
7. Heradio, R., de la Torre, L., Galan, D., Cabrerizo, F.J., Herrera-Viedma, E., Dormido, S.:
Virtual and remote labs in education: a bibliometric analysis. Comput. Educ. 98, 14–38
(2016). http://dx.doi.org/10.1016/j.compedu.2016.03.010
8. Pavani, A.M.B.: Creating a collection of assets in electrical engineering – a project under
way. In: Proceedings of ICEE 2015 – International Conference on Engineering Education on
Flash Memory, Croatia, pp. 315–322 (2015). ISBN 978–953-246-232-6. http://icee2015.
zsem.hr/images/ICEE2015_Proceedings.pdf
9. Marques, M.A., Viegas, M.C., Costa-Lobo, M.C., Fidalgo, A.V., Alves, G.R., Rocha, J.S.,
Gustavsson, I.: How remote labs impact on course outcomes: various practices using VISIR.
IEEE Trans. Educ. 57(3), 151–159 (2014). doi:10.1109/TE.2013.2284156
10. Tawfik, M., Sancristobal, E., Martín, S., Gil, C., Pesquera, A., Lozada, P., Díaz, G., Peire, J.,
Castro, M., García-Zubia, G., Hernández, U., Orduña, P., Angulo, I., Costa-Lobo, M.C.,
Marques, M.A., Viegas, M.C., Alves, G.R.: VISIR deployment in undergraduate engineer-
ing practices. In: Proceedings of the 2011 First Global Online Laboratory Consortium
Remote Laboratories Workshop, USA, pp. 1–7, October 2011. http://dx.doi.org/10.1109/
GOLC.2011.6086786
11. Alves, G.R., Viegas, M.C., Marques, M.A., Costa-Lobo, M.C., Silva, A.A., Formanski, F.,
Silva, J.B.: Student performance analysis under different Moodle course designs. In:
Proceedings of the 2012 15th International Conference on Interactive Collaborative
Learning, Austria, pp. 1–5, September 2012. http://dx.doi.org/10.1109/ICL.2012.6402181
12. Barbosa, W.S.: Manual Técnico de Utilização do VISIR, October 2016. http://www.
maxwell.vrac.puc-rio.br/27695/27695.PDF
13. http://www.maxwell.vrac.puc-rio.br/27810/27810.PDF
14. http://www.maxwell.vrac.puc-rio.br/series.php?tipBusca=dados&nrseqser=8
15. http://www.maxwell.vrac.puc-rio.br/series.php?tipBusca=dados&nrseqser=5
16. http://www.maxwell.vrac.puc-rio.br/series.php?tipBusca=dados&nrseqser=12
17. http://www.maxwell.vrac.puc-rio.br/series.php?tipBusca=dados&nrseqser=14
18. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=27280@1

Artificial Intelligence and Collaborative Robot
to Improve Airport Operations

Frédéric Donadio, Jérémy Frejaville, Stanislas Larnier(&), and Stéphane Vetault
AKKA Research Toulouse, 6 impasse Michel Labrousse, 31000 Toulouse, France
{frederic.donadio,jeremy.frejaville,stanislas.larnier,stephane.vetault}@akka.eu

Abstract. Since air traffic is increasing, airport operations have to become more efficient while remaining safe. To do so, it is important to find innovative solutions to improve those operations. Two projects are presented in this paper. The first one is an intelligent video surveillance system to monitor airport operations. The second one is a collaborative mobile robot to improve maintenance time and the traceability of maintenance operations. These two solutions are the first steps toward the airport of the future. Management of operations, autonomous vehicles, non-destructive testing and human-machine collaboration will evolve and change airport activities.

Keywords: Intelligent video surveillance · Collaborative mobile robot · Non-destructive testing · Airport operations · Aircraft maintenance

1 Introduction

As air traffic continues to grow, security remains strict and capacity is stretched to the limit, the need for efficient and safe airport operations is of prime importance. The problems created by poorly run systems are felt by everyone involved in air travel. Time and safety are critical issues. Therefore, streamlining and improving airport operations is one of the biggest challenges airlines face these days, in a market saturated with competition.
In order to optimize occupation of stopover zones and avoid airport congestion, the
European CO-FRIEND project, coordinated by AKKA Technologies, aims to use the
latest technologies in video-surveillance, video-tracking and artificial intelligence to
monitor airport operations. If the progress of the operation schedules is updated in real time, corrective actions to stay on schedule can be planned.
Airplanes are inspected periodically during maintenance operations at an airport between flights. The reduction of inspection time is a major objective for airlines. If
maintenance operations are faster, this will optimize the availability of aircraft and
reduce operating costs. Nowadays, the inspection is performed by human operators
mainly visually, sometimes with some tools to evaluate defects. The French
multi-partner AIR-COBOT project of the Aerospace Valley, led by AKKA Tech-
nologies, aims to improve maintenance time and also traceability.


Sections 2 and 3 introduce respectively the CO-FRIEND and AIR-COBOT projects. Section 4 provides an overview of actual and anticipated outcomes. Section 5 concludes with the prospects of AKKA Technologies for the airport of the future.

2 CO-FRIEND

The CO-FRIEND - COgnitive & Flexible learning system operating Robust Interpretation of Extended real sceNes by multi-sensors Datafusion - project uses a series of fixed and dome cameras to automatically detect and monitor all stopover operations around the aircraft, and therefore contributes to improving the safety of people and equipment. It is a European project. The partners are AKKA Technologies, Inria of Sophia Antipolis, University of Hamburg (Cognitive Systems Laboratory), University of Leeds, University of Reading (Computational Vision Group), and Toulouse-Blagnac Airport. It began in February 2008, lasted three years and followed up on the AVITRACK project [1–3].
The main objectives of this project and its predecessor are to develop techniques to
recognize and learn automatically all servicing operations around aircraft parked on
aprons. Both projects support a more efficient, safer and prompter management of
apron areas, in which ground operations such as fueling, baggage loading, etc., can
have a negative impact on the functioning of the airport, including traffic time delays.

2.1 Equipment
A number of main scene elements, both static and dynamic, can be recognized and interpreted by the system: aircraft, vehicles and people. In order to support this surveillance, the
system uses a set of fixed and Pan-Tilt-Zoom cameras with overlapping fields of view.
Figure 1 presents six fields of view from the airport cameras. All the cameras are
placed at strategic points in the apron areas. The camera streams are temporally syn-
chronized by a central video server.

Fig. 1. Six fields of view from the airport cameras


2.2 Scene Tracking


Object categories are extracted from the camera information. This data is further
processed and classified on the basis of pre-defined object categories (people, vehicles,
aircraft, etc.), associated with a function, and shaped in a three-dimensional form,
through a data-fusion method [1]. Algorithms have to be efficient in different weather
conditions, see Fig. 2.

Fig. 2. Different weather conditions are fully supported

The tracking of objects is achieved by means of bottom-up tracking, which is composed of two sub-processes, motion detection and object tracking. Video-event recognition algorithms analyze the results of the tracking procedure in order to recognize high-level activities taking place within the target scene.
There are frame-to-frame trackers which implement special algorithms capable of distinguishing foreground from background and moving from stationary objects, and of handling possible object interactions. As stated above, a bottom-up approach was deployed in order to categorize the various object types: objects, people, and vehicles. A top-down method is deployed to apply three-dimensional features to the detected objects.
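As a rough illustration of the bottom-up motion-detection step (not the project's actual tracker), the sketch below uses OpenCV background subtraction to extract moving blobs from a video stream; the video file name and the blob-size threshold are placeholders.

```python
import cv2

# Placeholder video source; the apron camera streams are not publicly available.
cap = cv2.VideoCapture("apron_camera.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                       # foreground mask
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to be vehicles or people (threshold is arbitrary)
    blobs = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    # `blobs` would then be handed to the frame-to-frame tracker and classifier
cap.release()
```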

2.3 Scene Understanding


Relevant activities, usually carried out in apron areas, were previously modelled and
then used as a form of “knowledge pool” with which the real-time data are compared.
The basic idea, then, would be to automatically report information on the apron area
which matches relevant scenarios to the competent human operators. A schematic
description of the main idea underlying the CO-FRIEND is shown in Fig. 3.
All the information related to the scene is then fused in a data-fusion module and
provided to the scene-understanding phase. To simplify, this last phase involves a
video event recognition which recognizes which events are taking place within the
video-streaming information. In Fig. 4, the system is able to distinguish between unloading and loading procedures.


Fig. 3. CO-FRIEND overview

Fig. 4. The scene understanding module is able to determine whether an unloading or a loading activity is taking place.

Spatial and temporal properties from detected mobile objects are modeled
employing soft computing relations, that is, spatial-temporal relations graded with
different strengths. This system is composed of three modules: the trajectory speed
analysis module, the trajectory clustering module, and the activity analysis module.
The first is aimed at segmenting the trajectory into segments of fairly similar speed.
The second is aimed at obtaining behavioral displacement patterns indicating the origin
and destination of mobile objects observed in the scene. It is achieved by clustering the
mobile trajectories and also by discovering the topology of the scene. The latter module
is aimed at extracting more complex patterns of activity, which include spatial infor-
mation coming from the trajectory analysis and temporal information related to the
interactions of mobile objects observed in the scene, either between themselves or with
contextual elements of the scene. Spatial and temporal properties from detected mobile


objects are modeled employing soft computing relations. These can then be aggregated
employing typical soft-computing algebra. A clustering algorithm based on the tran-
sitive closure calculation of the final relation allows finding spatial-temporal patterns of
activity. This approach has been applied to dock-station monitoring at the
Toulouse-Blagnac airport [4–7].
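To make the first of these modules concrete, the sketch below segments a trajectory into runs of fairly similar speed using a simple threshold on relative speed change. It is an illustrative simplification, not the soft-computing implementation used in the project; the trajectory and the threshold value are invented.

```python
import numpy as np

def segment_by_speed(positions, timestamps, rel_change=0.5):
    """Split a trajectory into segments of fairly similar speed.

    `positions` is an (N, 2) array of coordinates and `timestamps` the
    corresponding times; `rel_change` is the relative speed jump that starts
    a new segment (value chosen arbitrarily for illustration).
    """
    positions = np.asarray(positions, dtype=float)
    dt = np.diff(timestamps)
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    segments, current = [], [0]
    for i in range(1, len(speeds)):
        ref = np.mean(speeds[current])
        if ref > 0 and abs(speeds[i] - ref) / ref > rel_change:
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return [(seg[0], seg[-1]) for seg in segments]  # ranges of speed samples per segment

traj = [(0, 0), (1, 0), (2, 0), (6, 0), (10, 0), (11, 0)]
print(segment_by_speed(traj, [0, 1, 2, 3, 4, 5]))  # slow, fast, slow
```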

2.4 Management and Maintenance


Specific vision tasks are activated on demand or based on the occupancy of the scene,
to robustly recognize specific activities. Typically, the module will utilize zoomed
views from several Pan-Tilt-Zoom cameras to achieve specific tasks like observing the
refueling (pipe being plugged/unplugged), the deposit of the chocks, the
loading/unloading of containers, etc. Those operations can be monitored to check whether they are on schedule.
Different types of aircraft can park on the apron. Apart from small size differences,
the airplane models can be differentiated through their model type number. The type is
written above the right front cargo door of the plane which is always at the same spot
for any of them, allowing a unique preset. After checking the type of aircraft, the
system can zoom to check the presence or absence of the chocks or the state of some
items on the aircraft. Figure 5 illustrates the interest of a zoomed image.

Fig. 5. At left, full image. At right, zoomed image on the front landing gear.

3 AIR-COBOT

Previous robotic solutions for aircraft inspection focus on surface skin inspection with robots crawling on the airplane surface. The AIR-COBOT - Aircraft enhanced Inspection by smaRt & Collaborative rOBOT - project chose a different path, which leads to a collaborative mobile robot with cameras and a three-dimensional scanner. Thanks to its acquisitions, a database dedicated to each airplane, containing images and scans, will be updated after each maintenance check. Research has been conducted on three main problems: autonomous navigation, Non-Destructive Testing (NDT) and Human-Robot Interaction (HRI).


The partners of this French project are AKKA Technologies, Airbus Group,
LAAS-CNRS, Institut Clément Ader of Albi, Stéréla, M3 Systems and 2MoRo
Solutions. It began in January 2013 and lasted three years. At the end, the partners decided to keep the demonstrator in operation to continue research in these domains and improve the cobot.
To navigate in the airport, the robot can go to an airplane parking spot thanks to geolocalization data, or by following its human operator. To autonomously navigate around the airplane, the robot is able to use laser and vision methods to localize itself relative to the aircraft. Obstacle recognition and avoidance are also used in navigation mode. The robot can visually inspect some items of the aircraft, such as probes, static ports, trapdoors and latches, and scan some fuselage parts. It has a task checklist to follow. The human operator reviews the inspection diagnoses on his tablet. He also checks the aircraft visually and can request additional NDT checks.

3.1 Robot and Controls


The electronics equipment is carried by the 4MOB mobile platform manufactured by
Stéréla, see Fig. 6. Equipped with four-wheel drive, it can move at a maximum speed
of 2 meters per second (7.2 km per hour). Its lithium-ion battery allows an operating
time of 8 h. Two obstacle detection bumpers are located at the front and the rear. They
stop the platform if they are compressed. On the remote control, it is possible to follow
the battery level and receive 4MOB platform warnings.

Fig. 6. At left, 4MOB platform. At right, AIR-COBOT is equipped with many sensors.

In case of a problem, two emergency shutdown devices are accessible on the platform and another is present on the remote control. The human-robot duo is supposed to work at relatively close range. If the platform moves too far away from the remote control carried by the operator, an automatic emergency shutdown is triggered.
The full robot, see Fig. 6, is equipped with navigation sensors: four Point Grey
cameras, two Hokuyo laser range finders, Global Positioning System (GPS) receiver,
Inertial Measurement Unit (IMU); and NDT ones: Pan-Tilt-Zoom (PTZ) camera
manufactured by Axis Communications, Eva 3D scanner manufactured by Artec.


The open source framework Robot Operating System (ROS) has been used for
integrating device drivers and navigation algorithms. The robot has two industrial
computers, one running on Linux for the autonomous navigation module and the other
on Windows for the NDT module. The whole cobot weighs 230 kg.
The tablet interface provides several control panels to perform different actions:
changing the mission tasks or the navigation mode; checking the pose estimations or
the NDT results; reading robot warnings or interaction requests. Figure 7 presents a
view of the control panel for the NDT sensors.

Fig. 7. At left, control panel for the NDT sensors. At right, 3D scan visualization on the tablet.

At the end, the robot provides its diagnoses and asks the human to validate or refute
them. The operator can easily manipulate the pictures or the 3D scans for zooming or
rotating, see Fig. 7. Color representations of the results are put on the pictures or the 3D
scans to help the user comprehension.

3.2 Autonomous Navigation


The robot has two different types of navigation to perform: in the airport to reach the
aircraft parking and around the aircraft to reach the checking positions. For safety reasons, two methods of localization have been considered in each case. AIR-COBOT
is also able to detect, track, identify and avoid obstacles [8].
In the airport, the robot navigates in dedicated corridors and has to respect speed
limits. The first time, the human operator has to teach the trajectory to the robot by
moving it in remote control mode or follower mode. Waypoints are built from this
trajectory. Georeferenced maps of the facility with areas (forbidden, limited speed…)
are also provided and taken into consideration. In an outdoor environment, the robot is
able to go to the aircraft parking by localizing through GPS data. The GPS device
allows the use of geofencing. A visual localization based on Simultaneous Localization And Mapping (SLAM) approaches, proposed as a complement to the GPS-based one, is currently being evaluated [8].
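Geofencing boils down to testing whether the robot's GPS fix lies inside an authorized area. The sketch below shows the classic ray-casting point-in-polygon test on a made-up corridor polygon, purely for illustration; it is not the project's navigation code.

```python
def inside_geofence(point, polygon):
    """Ray-casting point-in-polygon test; `polygon` is a list of (x, y) vertices.
    Coordinates would be projected GPS positions (illustrative values below)."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)
        if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

corridor = [(0, 0), (50, 0), (50, 5), (0, 5)]   # made-up navigation corridor (metres)
print(inside_geofence((10, 2), corridor))        # True
print(inside_geofence((10, 8), corridor))        # False
```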
To perform the inspection, the robot has to navigate around the aircraft and go to
checkpoints. The position of the aircraft in the airport or factory is not known precisely;
the cobot needs to detect the aircraft in order to know its pose (position and orientation)
relative to the aircraft. To do this, the robot is able to locate itself, either with the laser
data from its laser range finders, or with image data from its cameras [9–11].


Near the aircraft, a point cloud in three dimensions is acquired thanks to the laser
scanning sensors fixed on pan-tilt units. Matching between the model of the aircraft and
the scene point cloud is performed to estimate the static pose of the robot. Figure 8
provides an example in an indoor context [9].

Fig. 8. The robot is located at the back left of the aircraft in an indoor environment. At left, 3D data acquired with a Hokuyo laser range finder moved by a pan-tilt unit. At right, the matching result between data (blue) and model (red).

The robot moves and holds this pose by considering the IMU, the wheel odometry
and the visual odometry. Laser data are also used horizontally in two dimensions. Pose
estimation of the robot is computed when enough elements from the landing gears and
engines are visible [9].
For visual localization, the robot estimates its pose relative to the aircraft using
visual elements (doors, windows, tires, static ports etc.) of the aircraft. Pattern recog-
nitions or extractions of features are used to detect those visual landmarks [10, 11]. By
detecting and tracking them, see Fig. 9, in addition to estimating its pose relative to the
aircraft, the robot can perform a visual servoing [12].

Fig. 9. Tracking of visual features for pose estimation.


A first confidence index is computed based on the number of items visible in laser
data. A second confidence index is computed based on the number of visual features. If
good data confidence is achieved, the pose is updated. Artificial intelligence arbitrating
between those pose estimation results is in development [9, 11].
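The arbitration described above can be sketched as a simple rule: update the pose only when a confidence index, derived from the number of visible landmarks, is high enough. The thresholds and the averaging below are invented for illustration; the actual arbitration logic is still in development according to the authors.

```python
def fuse_pose(laser_pose, n_laser_items, visual_pose, n_visual_features,
              min_laser_items=3, min_visual_features=20):
    """Pick or blend pose estimates based on simple confidence indices.
    Thresholds are illustrative, not the project's tuned values."""
    laser_ok = n_laser_items >= min_laser_items
    visual_ok = n_visual_features >= min_visual_features
    if laser_ok and visual_ok:
        # naive average of (x, y, heading) tuples when both modalities are trusted
        return tuple((a + b) / 2 for a, b in zip(laser_pose, visual_pose))
    if laser_ok:
        return laser_pose
    if visual_ok:
        return visual_pose
    return None  # keep the previous pose and rely on odometry until confidence returns

print(fuse_pose((1.0, 2.0, 0.1), 4, (1.2, 2.1, 0.12), 35))
```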
The laser data coming from laser range finders and visual data coming from the
cameras are used for detection, classification (moving, motionless) and recognition
(human, vehicle, other) of the obstacles [12]. The detection and the classification are
easier in the two-dimensional laser data, while identification is better in the images. The
two methods are complementary. Three kinds of avoidances are considered: stop and
wait for a free path, spiral obstacle avoidance and path planning. The chosen avoidance
approach depends on the robot’s surroundings (navigation corridor, tarmac area
without many obstacles, cluttered indoor environment etc.) at the time of the encounter
with an obstacle.

3.3 Non-destructive Testing


At the start of the project, the NDT tasks were based on the PTZ camera and the 3D
scanner. They require image analysis for the first sensor and point cloud analysis for the
second one. During the project, it became evident that the navigation cameras and the laser range finders could also provide data usable for NDT.
At given positions, the robot performs a visual inspection by analyzing acquisitions
made with the PTZ camera. Before the image analysis of the acquisition, several steps
take place: pointing the camera, detecting the element to be inspected, if needed
repointing and zooming with the camera and finally, image acquisition.
Image analysis is used in different cases: doors, to determine whether they are open or closed; the presence or absence of protection for certain equipment (static port,
probe); the state of turbofan blades; the state of the probes; or the wear of landing gear
tires [13, 14]. Figure 10 provides some examples of items to inspect.

Fig. 10. Examples of items to inspect. From left to right, static port with its protection, open air
inlet valve, AOA probe, trap door with unlocked handle, reactor with foreign element. One can
note that light conditions are very different.

The detection uses shape recognition with regular shapes (rectangles, circles,
ellipses) or more complex shapes obtained with the projection in the image plane of the
3D model of the element to be inspected. The evaluation is based on indices such as the
uniformity of segmented regions, convexity of their forms, or periodicity of the image
pixels’ intensity [13, 14].
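As an example of such an index, the sketch below computes the convexity of a segmented region with OpenCV (contour area divided by the area of its convex hull); a value close to 1 suggests the expected regular shape. This is a generic illustration, not the project's inspection code, and the synthetic mask is made up.

```python
import cv2
import numpy as np

def convexity_index(binary_mask):
    """Convexity of the largest region in a binary mask (0/255 image).
    Returns contour_area / hull_area, or 0.0 if nothing is found."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    contour = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(contour)
    hull_area = cv2.contourArea(hull)
    return float(cv2.contourArea(contour)) / hull_area if hull_area > 0 else 0.0

# Synthetic example: a filled circle should be almost perfectly convex
mask = np.zeros((100, 100), dtype=np.uint8)
cv2.circle(mask, (50, 50), 30, 255, thickness=-1)
print(convexity_index(mask))   # close to 1.0
```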


Feature extraction using Speeded Up Robust Features (SURF) can also be applied
to perform the inspection of certain elements having two possible states, such as pitot
probes or static ports being covered or not covered, see Fig. 11. For such items, in
order to decrease the mission time, visual inspection with the navigation cameras
during displacements around the aircraft is under consideration [11].
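The sketch below illustrates this kind of two-state check with feature matching. SURF is only available in the opencv-contrib package, so ORB (included in the main OpenCV distribution) is used here as a stand-in; the image file names, the distance cut-off and the match-count threshold are placeholders, and this is not the project's inspection code.

```python
import cv2

def looks_like_reference(reference_path, query_path, min_good_matches=25):
    """Decide whether the query image matches the reference state (e.g. 'covered')
    by counting ORB feature matches. Threshold and paths are illustrative."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    if ref is None or query is None:
        raise FileNotFoundError("missing input image")
    orb = cv2.ORB_create(nfeatures=1000)
    _, des1 = orb.detectAndCompute(ref, None)
    _, des2 = orb.detectAndCompute(query, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]   # arbitrary distance cut-off
    return len(good) >= min_good_matches

# Placeholder file names for the 'static port covered' reference and a new acquisition
print(looks_like_reference("static_port_covered.png", "current_view.png"))
```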

Fig. 11. Two kinds of non-destructive testing. At left, visual inspection of the static port with the SURF method. At right, 3D inspection: a tridimensional scan of the aircraft surface, with a bump visible in the middle. The writing, visible at the top, helps to locate the defect precisely.

At given positions, the pantograph elevates the 3D scanner at the fuselage level.
A pan-tilt unit moves the Eva scanner to acquire the hull. Figure 11 shows a 3D scan.
By comparing the data acquired to the aircraft model, algorithms are able to diagnose
any faults in the fuselage structure and provide information on their shape, size and
depth.
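Conceptually, this fault diagnosis compares each scanned point with the reference surface and reports where the deviation exceeds a tolerance. The sketch below does this with a nearest-neighbour query against a reference point cloud using SciPy; the point clouds and the tolerance are synthetic, and the project's actual algorithms are more elaborate.

```python
import numpy as np
from scipy.spatial import cKDTree

def fuselage_deviations(scan_points, model_points, tolerance=0.002):
    """Return indices of scanned points farther than `tolerance` (metres)
    from the reference surface, approximated by `model_points`."""
    tree = cKDTree(model_points)
    distances, _ = tree.query(scan_points)
    return np.nonzero(distances > tolerance)[0], distances

# Synthetic flat patch as the 'model' and a scan with a small bump in the middle
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
model = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
scan = model.copy()
scan[1275, 2] += 0.005                      # 5 mm bump on one point
bad, dist = fuselage_deviations(scan, model)
print(bad, dist[bad])                       # the bumped point and its deviation
```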
As explained in [9] by moving the pan-tilt units of the laser range finders, it is also
possible to obtain a point cloud in three dimensions. It is planned to make targeted
acquisitions, simpler in terms of movement, to verify, for example, the absence of
chocks in front of the landing gear wheels, or the proper closing of latches.

3.4 Collaboration
The robot has three possible navigation modes: autonomous mode; follower mode and
remote control mode. The level of robot autonomy decreases across these three modes and the human-robot interactions are adapted accordingly. The collaboration between the robot and its human operator is described in [8].
In the autonomous mode, as explained in the previous sections, the robot performs a list of tasks autonomously, such as moving to a pose in the airport frame or in the aircraft frame, inspecting an item or avoiding obstacles. The human operator has to stay in proximity of the robot and occasionally check its behavior. Even so, he can perform his own inspection tasks in the meantime.
In the follower mode, the robot follows the human operator until a change of mode.
The robot has to avoid obstacles and recognize its operator among other humans. In the remote control mode, the human operator can move the robot to a specific location thanks to the remote control or specify an NDT task thanks to the tablet
interface. The human operator is in charge of the mobile platform and the NDT sensors.


Standard robot warnings can emerge if there is a software crash or a hardware malfunction. If possible, the robot continues the mission with reduced capacities and skips the tasks linked to these problems until someone intervenes. A first example, during navigation tasks: if the GPS signal is too weak, the robot sends a soft warning to the human operator and moves a little, updating its pose with odometry measurements. At some point, however, it has to receive the GPS signal again, otherwise the confidence level of its pose estimation would become too low; the robot would then stop and send a strong warning to the user. A second example, during inspection tasks: if the elevator of the 3D scanner has a mechanical malfunction and is not elevating correctly, the human operator has to check it.
During its navigation tasks, the robot has to follow navigation corridors and safety
trajectories around the aircraft. It warns its operator if it is stuck and cannot safely avoid an obstacle without leaving the navigation corridor or straying too far from the safety trajectory. In that case, the operator can choose the follower or remote control modes to
lead the robot away from the problem or move the obstacle that blocks its path. Alerts
are also sent if the robot enters a prohibited area or exceeds a given speed.
During its inspection tasks, the robot can warn the human operator that something
is wrong. For example, it did not find the element to inspect in the image or the 3D scan
seems incorrect. The cobot can ask for a quick human intervention, for example if a chock is still in front of the landing gear or a protection is still on a pitot probe.
The operator also checks the aircraft visually. He can ask for an NDT check if he thinks there might be a problem on the aircraft which is not covered by the robot's tasks. After moving the robot, he can control the sensors and ask the robot to perform some tests. In Fig. 12, the human operator asks for a 3D scan.

Fig. 12. From left to right, elevation of the scanner in order to perform a scan of an A320 aircraft in
a hangar of Air France Industries, tridimensional scan of the aircraft where a crack and a bump
are visible, inspection result. The inspection algorithms provide shape, size and depth of those
imperfections with visual color representation to help the human operator.

If the robot confirms a defect, for example a bump, the operator can add this check for this particular aircraft. The robot remembers its pose relative to the aircraft and the performed NDT check, so it can repeat it the next time it encounters this aircraft.
Figure 12 presents a scan of the aircraft and one associated diagnostic.
At regular intervals, the robot sends its pose estimation in the airport frame or the aircraft frame to the tablet. The human operator can check it on a facility map for the first one, and on an aircraft representation for the second one. The human operator has access to the mission plan status and the diagnostics in real time. He can, for example, check the camera preview before image analysis to verify the camera pointing.


Since the human operator has a better understanding of the environment, he can take control of the robot to avoid problems before they arise or simply change the order of the list of tasks. The human is better suited to judge whether another worker interferes with the robot's mission and, conversely, whether the robot interferes with another worker. He is therefore responsible for deciding which one has priority.

4 Actual or Anticipated Outcomes

4.1 Management of the Operations


Airlines, airport management companies, ground-support service providers and, ulti-
mately, passengers will all benefit from this system of video-surveillance. The facility
surveillance system monitors in real time the turnaround progress and other airport
operations around an aircraft. Using all this knowledge, a system could provide, in real
time, adaptive navigation and action plans to the different ground operators, helping them avoid interfering with other activities, as well as early warnings about arising disruptions.

4.2 Autonomous Vehicles


AIR-COBOT is a mobile robot able to navigate autonomously in the airport and around
the aircraft. For safety reasons, the localization in each frame is provided by two different modalities. This opens the road to automation of ground support equipment, such as robots to move the chocks, vehicles to bring the luggage, etc.

4.3 Non-destructive Testing


With the video-surveillance system, if some items to control are in the field of view,
then the system can also perform image analysis for some maintenance checks like the
chocks or the probe protection, etc. The robot is able to carry and use a PTZ camera and
a 3D scanner to perform its NDT tasks and the ones requested by the operator. The human operator now has two different ways of being assisted in his NDT duties.

4.4 Human-Machine Interactions


In the AIR-COBOT project, compared to his robotic companion, the human is able to
better understand the environment. But he is not able to know exactly what the other
operators are doing around the aircraft and if they are on schedule. The video-
surveillance provides a way to solve this problem.
In the AIR-COBOT project, the two agents are able to navigate in the airport and
around the aircraft in an adaptive way. Various interaction requests reduce the whole
mission time and improve the productivity of the duo. Collaboration between these two
agents is inevitable due to the safety measures to follow in this particular working
environment and the variability of the inspection defects. The robot is able to learn
from these interactions with the human to improve its efficiency: transforming human


requested checking tasks into automatic tasks for a specific aircraft; learning new obstacles to be able to recognize them; and developing its artificial intelligence.

5 Conclusions and Prospects

5.1 Conclusions
The two projects presented in this paper can increase the efficiency of airport opera-
tions. Monitoring these operations is beneficial to detect problems and adapt them accordingly. Other vehicles or operators could intervene in case of difficulty in keeping the time schedule. The human-robot collaboration will increase the efficiency and the reliability of inspection, reduce risks and uncertainties, and self-adapt to different types of aircraft, service types, investigation contexts, and operational circumstances.
At the same time, the application of collaborative robots may add complexity to
airport management. Experiments in a real airport environment are necessary, and the feedback from airlines, airport management companies and ground-support service providers is important. In the future, it will be necessary to consider in detail the
tradeoff between management complexity and robotic intelligence.

5.2 Prospects
The mobile platform is made for inspecting the lower parts of the aircraft. It is envi-
sioned to use it with a drone for the upper parts. The partnership between them is
beneficial due to complementary inspections from different points of view (ground, air)
and better adaptability.
AKKA Technologies also envisions a Cyber-Physical System (CPS) to monitor not only the operations around the aircraft but all the activities in the airport, in order to
provide better predictions and adapt the actions of human operators, future robotic
assistants, and autonomous vehicles.
Thanks to the traceability of maintenance and other actions, aircraft operations will
not be planned according to an aircraft model but according to each particular aircraft. The state of each aircraft will be more closely studied, and constraining maintenance operations
could also be predicted in advance.

Acknowledgments. The partners (Inria of Sophia Antipolis, University of Hamburg, University


of Leeds, University of Reading, Toulouse-Blagnac Airport) of the CO-FRIEND project
(FP7-ICT-214975) are gratefully acknowledged. AIR-COBOT (http://aircobot.akka.eu) is a
Fonds Unique Interministériel (FUI) project from the competitiveness cluster of Aerospace
Valley. We thank the other members of Air-Cobot team from AKKA Research and the partners
of the project (Airbus Group, LAAS-CNRS, Armines, 2MoRO Solutions, M3 Systems and
Stéréla) for their help and support. The partners thank Toulouse-Blagnac Airport, Airbus and Air
France Industries for giving us access to video-surveillance cameras or aircrafts to do acquisitions
and validations; and their staffs which help us during those days.



Methodological Proposal for Use of Virtual
Reality VR and Augmented Reality
AR in the Formation of Professional Skills
in Industrial Maintenance
and Industrial Safety

Jose Divitt Velosa1(&), Luis Cobo1, Fernando Castillo2, and Camilo Castillo1
1 Faculty of Engineering, University EAN, Bogota, Colombia
{jvelosa,lacobo}@universidadean.edu.co, cacastilloa@ean.edu.co
2 School of Industrial Engineering of Toledo, University of Castilla-La Mancha, Toledo, Spain
Fernando.Castillo@uclm.es

Abstract. Training in industrial safety and maintenance is an important subject in the
curriculum of technicians, technologists and engineers, as it guarantees competences in the
protection of people, goods and equipment in different industrial processes. In particular,
industrial and manufacturing engineers and occupational-hazard professionals must strengthen
their skills and abilities to assess risks, find faults, detect dangerous situations and
generate mitigation and intervention plans [1]. Experimentation in real situations, or in
situations very close to real ones, is therefore relevant: it favors a better understanding
of the studied phenomenon and the preparation of mitigation plans more in line with reality.
However, exposing an inexperienced student to such situations could create real risks and
make supervision difficult. For this reason, we explore Virtual Reality (VR) and Augmented
Reality (AR) environments that place the student in situations reproducing reality, enriched
with additional information, minimizing proximity to the source of risk while accompanying
the student for a correct evaluation of performance.

Keywords: Virtual reality · Extended reality · Hybrid laboratories in engineering

1 Introduction

Different authors have developed applications based on augmented reality and virtual reality
that favor the understanding of good practice in engineering [1]. The main application areas
of VR and AR in engineering services are industrial plants and aerospace, and the main
services addressed are, in order: maintenance, training and machine inspection [1].


These two methods, Virtual Reality (VR) and Augmented Reality (AR), share modes of operation
and similar development principles. The basic principle of Virtual Reality is to take the
essential characteristics of real elements and, through computer applications, virtualize
them in environments of low interaction [2]. The virtual elements replace physical reality
[3] and the environments are pre-designed; accordingly, the evaluation of these applications
concentrates on the degree of closeness to reality, coherence with the physical environment
(laws and behaviors) and interaction with the user [4].
The AR approach, in contrast, seeks to take advantage of the real environment and to generate
points of interaction with the user on top of it [5], increasing the connection of the senses
with the recreated phenomenon, especially the perception of location and movement.
The degree of engagement of the person is greater and their perception of reality is higher
than with VR. The main characteristics on which these technologies are evaluated are
reliability, sensitivity and agility [6]. However, a variety of techniques are known that
form a continuum between virtual reality and full reality. Although many AR and VR
combinations exist, several are clearly defined. Figure 1 shows how the development of these
tools is a continuum from virtual reality to the totally real.

Fig. 1. Classification of reality concepts according to correlation between perception and action
and level of interaction [7].

2 VR and AR Tools in Engineering

For VR and AR tools to fulfill the purposes for which they have been designed, it is
necessary to follow a development and implementation methodology within the context of
engineering laboratories, starting from the training objectives and ending with the
evaluation in its different aspects.
To this end, an integrated methodology has been proposed for the elaboration of laboratories
in different modalities: face-to-face, remote and virtual (local and in the cloud).


This methodology, called SMART (Sistema Modular de Acceso Rápido Temporal, i.e. Modular
Temporal Rapid Access System), is composed of phases that incorporate documents and
information in order to build a robust and coherent proposal. Beyond generating the VR or AR
learning object for laboratories, the final teaching objective is to incorporate the
generated proposal into a broader and scalable environment, eventually reaching a
confederation of teaching laboratories (Fig. 2).

Fig. 2. Structure of a confederated laboratory scheme.

That is why the structure of laboratory practice must allow interoperability, management and
scalability, keeping to laboratory standards in order to share resources and activities. To
demonstrate the use of SMART, the development of a learning object using Virtual Reality and
Augmented Reality has been proposed.
What is explained below is the development of the virtual activity as an experimental
laboratory. The four main phases that make up SMART, applied to the case of the Maintenance
and Industrial Safety course of the EAN University in Colombia, are:

2.1 Requirements
In this phase, the requirements of the curricular design and the training strategies are
incorporated in order to develop the professional competences that evidence higher-order
development in the formation of engineering students.
Some of the best-known standards are ABET [8], Bologna, ISCED [9], World Bank and OECD [10].
One of the most important aspects is to take advantage of the structure of the standard.

2.2 Architecture
The next phase corresponds to the design of the learning object, that is, gathering the
characteristics of the student, the available equipment and systems, and the normative
standards. For example, for Dini [1], the three main methods aiding the generation of
augmented reality experiences are optical combination, video mixing and image projection.


2.3 Construction
This phase covers the preparation needed to put the first prototype into use. Here we define
the general laboratory structures implemented by the institution, the elements for its
implementation with students, and the control and student-evaluation mechanisms.

2.4 Evolution
As a final phase, the development of the VR and AR learning experience should be evaluated.
This verification is done in three aspects: evaluation of the implementation, evaluation of
the competences reached by the students, and evaluation of the learning the students have
achieved. The evaluation of the implementation is done with methods that assess the student's
perception: the Technology Acceptance Model (TAM) and ARI [11].
The evaluation of competencies is carried out using learning-process evaluation tools
oriented by a certification organization in the ABET framework, and learning is assessed with
Conceiving-Designing-Implementing-Operating (CDIO) [12], OTSM-TRIZ [13] and Learning
Analytics [14, 15].
The SMART methodology (Sistema Modular de Acceso Rápido Temporal), used to integrate all the
phases mentioned above, is presented in Fig. 3.
The methodology is valid for any type of engineering competence-generation experience. The
features of each step are integrated by means of QFD (Quality Function Deployment) tools.

3 Proposal for Industrial Maintenance Using VR and AR

Traditionally, the maintenance and industrial safety course of the EAN University (Colombia)
for manufacturing engineers has been taught through scenarios and hypothetical situations,
trying to represent reality through assumptions.
In the particular case of the maintenance subject, practices of element description have been
carried out, but in their application it is not always possible to relate and integrate, in a
single experience, the machine characteristics, the maintenance characteristics, the
equipment status and the performance with respect to the equipment.
In this kind of experience, the student develops activities with the help of the professor
and the laboratory professional in short and always assisted moments, almost always producing
superficial results with fragments of information, highly directed work and a lack of
additional information.
Similarly, in the case of occupational hazards, the experience cannot be brought to a student
in the context of a real decision-making situation with multiple sources of information
without a serious risk of accident, or without a low understanding of the phenomenon caused
by concentrating on the risky situation.
Based on this, the questions that this article seeks to answer are: how can the use of
augmented and virtual reality technological and computational tools facilitate training in
the field of maintenance and safety at work? And how can a methodological proposal be
implemented from the context of training and the generation of skills in this type of
application?

Fig. 3. SMART hybrid lab development model
The proposal is applied in the laboratory environment of the Physics Processes course at EAN
University with different groups of engineering students, in different contexts that
reproduce the different types of occupational risks, machine information, equipment and
related documentation.
For the design of the Virtual Reality experience, the SMART methodology was applied,
identifying each of the four steps.
In the Requirements phase, the occupational risk factors (physical, chemical, biological,
physical-ergonomic, physical and social environmental insecurity, and environmental
sanitation) [17] were consulted, together with the Colombian Technical Standard NTC-1461,
Hygiene and Safety: Colors and Safety Signs. This material was also shared with the students
of the study unit.
For the next phase, a logical laboratory architecture and a face-to-face physical
architecture were implemented, using virtual reality glasses and a smartphone with a 5-inch
screen.
The student is the main actor and must therefore have prepared his or her participation in
the laboratory in advance, reviewing the material and the subjects to be evaluated. The
virtual reality learning environment was prepared by taking four spherical photos of areas of
the laboratory with different real risk situations intentionally placed (Fig. 4).


Fig. 4. Logical design of the laboratory proposal

The competences to be evaluated, taken from the ABET standard, were:
• An ability to perform standard tests and measurements, and to conduct, analyze and
interpret experiments.
• An ability to apply written, oral and graphic communication in technical and non-technical
environments, and an ability to identify and use appropriate technical literature.
• An ability to identify, analyze and solve engineering technology problems.
The physical structure comprises a smartphone (with spherical-image viewing applications,
file editing and selection, and WiFi connection), virtual reality glasses, a control computer
with Internet access and a closed space (Fig. 5).

Fig. 5. Physical scheme of the proposal of virtual reality

The physical structure provides the elements of communication, control and interaction on
which the student concentrates and with which the professor, with the help of the other
students, evaluates the competencies proposed for the students.


For the construction phase of the experiment, four photos were taken of the three
most important risk factors that were evidenced in different spaces of the laboratory. An
example of this is shown in the following photo (Fig. 6).

Fig. 6. Risk-generating elements in the machining laboratory of EAN university

After that, the photos are loaded into the memory of the smartphone and edited with the
virtual reality app.
The construction, or development, of the experience is carried out in the laboratory space.
Students visualize the pre-designed spherical photo containing the risks and, based on their
knowledge, first identify the risks presented, then analyze their degree of severity and
finally propose the best way to mitigate or avoid them. The identification can be assisted by
the professor or by other students.
Finally, the professor, with the help of the evidence gathered during the experience,
evaluates three aspects of the student and his or her experience: evaluation of the
implementation, evaluation of the competences reached, and evaluation of the learning. The
evaluation of the implementation is carried out by means of a survey given to the students
after the virtual and augmented reality experience; it is based on the perception of the
technology, through the Technology Acceptance Model (TAM) proposal on the perception of
technological acquisition.
The evaluation of the competences and abilities is based on the ABET certification proposal
and on the elements delivered by the student, among them:
• Risk overview
• Risk Matrix
• Audit and risk control report


Assessment of learning is done by the professor through the CDIO (Conceive-Design-Implement-
Operate) proposal, as the student progresses through each of its steps. In the conceiving
step the student must recognize the relation between the theory and the context of the photo
given for the experience. In the designing step the student creates a risk profile from the
findings in each photo and evaluates them.
For the implementing step the student must describe the actions to be implemented and attach
the basic information to be incorporated, such as risk identification and mitigation.
Finally, the operating step relates to the handling of the written information that should
feed the plans for a possible implementation, in order to improve the process and decrease
accidents at work.

4 Application of the Proposal (Case Study)

The methodology takes as its principle the proposal of [18] and the SMART proposal, on the
application of augmented reality in operating environments and on the training process of
manufacturing engineers in the context of engineering design [19, 20].
Based on this integration proposal, a virtual reality activity was designed. First, real
spherical photos of risk situations were taken in the laboratory and presented to students
through virtual reality headsets (see Fig. 7(b)). Students, with the support of prior
information, teacher instructions and the laboratory professional in real time, could
interpret the situations presented to them (see Fig. 7(a)).

Fig. 7. Photos of the experience (a) spherical photo identification of equipment (b) Students
developing the industrial safety activity with AR glasses

4.1 Results
The data focus on the students' perception of the implementation and use of the technology
versus the activities implemented. To this end, a survey tool was applied that probes the
characteristics of perceived utility and ease of use. The instrument takes into account the
TAM (Technology Acceptance Model) method described by [21, 22].


Through this tool, the perceived usefulness of the designed technological implementation and
its ease of use are investigated. The survey was applied after the experience. Regarding
their perception, the students evaluated five questions, each on a Likert scale of 1 to 10,
where 10 is the highest degree of acceptance. The students' results are shown in Figs. 8
and 9.
It is observed that the highest values are obtained for question Q3, "The tools used
(equipment, 3D material and software) are clear". This corroborates that, with the strategy
of using 360° photos of the laboratory or of the place where the practice is performed, the
student achieves a good perception of reality.
Meanwhile, the lowest score in this respect is found for question Q0, "Your level of
knowledge of practice laboratories is (at the end)", which obtained 8.4/10. One of the main
reasons for this value is the limited time between the delivery of the support material and
the practice. The other aspects received high ratings.
Perceived utility generally shows high values. The most highly valued answer was Q4, "The
overall evaluation of laboratory integration is positive", confirming the acceptability of
these novel proposals (Fig. 8).

Fig. 8. Values of perceived utility of laboratory experiments

The lowest-rated item is the one that involves the participation and intervention of the
teacher. This is especially evident in the component that evaluates the perception of use.
The main reason for this appreciation is the orientation that the teacher gives to the
activity regarding the tools that are used (Fig. 9).
The questions were formulated against the characteristics of the TAM model; these are:
• Knowledge
Knowledge achieved when implementing the tool and when it is used by the student, together
with the application the student can make of it without needing assistance from the teacher.
The related questions are Q0 and Q6.


Fig. 9. Values of perception of use of laboratories

• Use
Characteristic observed in the student's perception of how useful this tool can be for
training, and in the ease of using the tool with the basic knowledge gained. The related
questions are Q1, Q9 and Q10.
• Coherence
Characteristic that looks for a strong relation between reality and the situations that an
engineer could normally face. The related questions are Q2.1 and Q5.
• Clarity
Both in the instructions and in the use given to the implementation of the experience. The
related questions are Q3 and Q8.
• Degree of acceptance
It measures the degree of conformity between what the student expects and what ultimately
results from his/her experience. The related questions are Q4 and Q7.
In the following graph it is observed that the characteristics that keep an equal or similar
valuation are Use and Coherence, and the characteristic with the greatest difference between
perceived utility and ease of use is the degree of knowledge that can be reached (Fig. 10).
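As a concrete illustration of how these characteristics can be aggregated from the
questionnaire, the following minimal Python sketch groups per-question Likert averages (1-10)
by the question-to-characteristic mapping listed above and averages them per characteristic.
Except for the 8.4 reported for Q0 in the text, the numeric values are invented placeholders,
not the study's data.

```python
from statistics import mean

# Question-to-characteristic mapping taken from the text above.
characteristics = {
    "Knowledge": ["Q0", "Q6"],
    "Use": ["Q1", "Q9", "Q10"],
    "Coherence": ["Q2.1", "Q5"],
    "Clarity": ["Q3", "Q8"],
    "Degree of acceptance": ["Q4", "Q7"],
}

# Hypothetical per-question averages on the 1-10 Likert scale
# (only Q0 = 8.4 comes from the text; the rest are placeholders).
responses = {"Q0": 8.4, "Q1": 9.0, "Q2.1": 9.1, "Q3": 9.5, "Q4": 9.3,
             "Q5": 9.0, "Q6": 8.8, "Q7": 9.2, "Q8": 9.1, "Q9": 8.9, "Q10": 9.0}

for name, questions in characteristics.items():
    print(f"{name}: {mean(responses[q] for q in questions):.2f}")
```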

4.2 Effectiveness in Developing Competences

For the evaluation of the effectiveness of the learning experience, it was proposed to follow
the ABET competency assessment methodology [8], with the help of performance indicators and a
qualification rubric. The evidence delivered by the students was composed of three documents,
which were given a level of performance and a grade for each performance indicator (Table 1).


Fig. 10. Value of the characteristics of each of the factors of the TAM methodology

Table 1. Competencies evaluated in the laboratory

ABET competencies evaluated versus documents delivered. The three competencies are: (i) an
ability to perform standard tests and measurements, and to conduct, analyze and interpret
experiments; (ii) an ability to apply written, oral and graphic communication in technical
and non-technical environments, and the ability to identify and use appropriate technical
literature; (iii) an ability to identify, analyze and solve engineering technology problems.
The performance indicators assigned to each document are:
• Risk overview: a
• Risk Matrix: c, e
• Audit and risk control report: b, d, f

With the help of this matrix and of the indicator level for the degree of development of the
competences evaluated, a competence evaluation tool was proposed, based on the application of
the virtual reality experience.
The main elements of the rubric and the average grade of the group evaluated are presented in
Table 2.


Table 2. Rubric of evaluation of the experience in the laboratory

Risk overview
  (a) Low (0–60): Identifies some risks in the environments shown. Medium (61–90): Identifies
  all proposed risks. High (90–100): Identifies new risks additional to those proposed.
  Average grade: 95.00

Risk Matrix
  (e) Low: Values the risks found, giving a value to the probability and severity. Medium:
  Prioritizes the risks encountered and establishes an order of importance. High: Establishes
  global mitigation strategies. Average grade: 85.00
  (c) Low: Identifies the failure of some of the signaling elements. Medium: Establishes the
  relationship between risk elements and their management. High: Proposes appropriate
  signaling for risks. Average grade: 95.00

Audit and risk control report
  (b) Low: Identifies the elements of the risk management system. Medium: Analyzes the
  elements of the audit and establishes characteristics of each system. High: Proposes
  integration of management plans. Average grade: 80.00
  (d) Low: States the most important elements in the audit report. Medium: Produces an
  orderly and complex report of the proposed risk analysis. High: Establishes a guiding
  document to manage all risks encountered. Average grade: 70.00
  (f) Low: Identifies the engineering elements required by the mitigation plan. Medium:
  Identifies the engineering elements required by the mitigation plan. High: Proposes the
  implications of implementing the mitigation plan. Average grade: 80.00
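To make the rubric bands of Table 2 explicit, the short Python sketch below maps a numeric
grade to a performance level. The bands are those printed in the table (Low 0–60, Medium
61–90, High 90–100); since 90 appears in both the Medium and High ranges, treating 90 as High
is an assumption made here for illustration.

```python
def performance_level(grade: float) -> str:
    """Map a 0-100 grade to the Table 2 bands (the boundary at 90 is assumed to be High)."""
    if grade >= 90:
        return "High"
    if grade > 60:
        return "Medium"
    return "Low"

# Group averages reported in Table 2, keyed by performance indicator.
averages = {"a": 95.0, "e": 85.0, "c": 95.0, "b": 80.0, "d": 70.0, "f": 80.0}
for indicator, grade in averages.items():
    print(indicator, grade, performance_level(grade))
```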

5 Discussion and Conclusions

With the help of these exercises and of the VR and AR activities, practices will be
established that are consistent with the skills and competencies required by the course,
including the identification of tacit and active risks, risk rating and action planning. More
tests are currently being carried out with environments developed with different groups. The
expected results are:
• To corroborate the effectiveness of the use of these augmented reality (AR) and virtual
reality (VR) tools in industrial safety practices.
• To reduce the student's exposure to danger while increasing the degree of depth of the
situations.
• To involve the outcome of this project in a larger project, such as hybrid laboratories for
laboratory federations.


• To determine the important characteristics for the development of methodologies for the
creation of hybrid laboratories.
• To provide more open and free information in order to develop the student's critical
thinking with respect to the unit.
• To develop interconnected environments between students on the subject of safety and
industrial maintenance, directed with the support of the teacher.
The use of elements that provide virtual reality experiences contributes to the development
of competences. The evaluation focuses on these elements and is evident in the evaluation
rubric.

References
1. Dini, G., Mura, M.D.: Application of augmented reality techniques in through-life
engineering services. Procedia CIRP 38, 14–23 (2015)
2. Weyrich, M., Drews, P.: An interactive environment for virtual manufacturing: the virtual
workbench. Comput. Ind. 38(1), 5–15 (1999)
3. Vora, J., Nair, S., Gramopadhye, A.K., Duchowski, A.T., Melloy, B.J., Kanki, B.: Using
virtual reality technology for aircraft visual inspection training: presence and comparison
studies. Appl. Ergon. 33(6), 559–570 (2002)
4. Murat, A., Akcayir, G., Pektas, H.M., Ocak, M.A.: Augmented reality in science
laboratories: the effects of augmented reality on university students’ laboratory skills and
attitudes toward science laboratories. Comput. Hum. Behav. 57, 334–342 (2016)
5. Chang, Y., Liu, H., Kang, Y.: Using augmented reality smart glasses to design games for
cognitive training, pp. 246–247 (2016)
6. Elia, V., Gnoni, M.G., Lanzilotto, A.: Evaluating the application of augmented reality
devices in manufacturing from a process point of view: an AHP based model. Expert Syst.
Appl. 63, 187–197 (2016)
7. Schnabel, M.A., Wang, X., Seichter, H., Kvan, T.: From virtuality to reality and back. In:
Proceedings of the International Association of Societies of Design Research, pp. 1–15
(2007)
8. A. Commission Technology Accreditation. Criteria for Accrediting Engineering Technology
(2014)
9. UNESCO: The International Standard Classification of Education 2011 (2013)
10. Tremblay, K., Lalancette, D., Roseveare, D.: Assessment of higher education learning
outcomes (AHELO) feasibility study. Feasibility Stud. Rep. 1, 113–126 (2013)
11. Georgiou, Y., Kyza, E.A.: crossmark, vol. 98, July 2016, pp. 24–37 (2017)
12. Cdio: The CDIO Initiative. vol. 0, no. 26 January 2011, pp. 1–14 (2010)
13. Becattini, N., Cascini, G., Rotini, F.: OTSM-TRIZ network of problems for evaluating the
design skills of engineering students. Procedia Eng. 131, 689–700 (2015)
14. Orduña, P., Almeida, A., López-De-Ipiña, D., Garcia-Zubia, J.: Learning Analytics on
federated remote laboratories: tips and techniques. In: IEEE Global Engineering Education
Conference EDUCON, no April, pp. 299–305 (2014)
15. Serrano-Laguna, Á.: Computer Standards & Interfaces, vol. 50, September 2016,
pp. 116–123 (2017)
16. de Trabajo, M.: Decreto 1072 de 2015 (26 May 2015). Por medio del cual se expide el
Decreto Único Reglamentario del Sector Trabajo EL, vol. Version ac, pp. 1–326 (2015)
17. Gutiérrez, A.: Guía técnica para el análisis de exposición a factores de riesgo ocupacional
(2011)


18. Porcelli, I., Rapaccini, M., Espíndola, D.B., Pereira, C.E.: Technical and organizational
issues about the introduction of augmented reality in maintenance and technical assistance
services. In: IFAC Proceedings, pp. 257–262 (2013)
19. Abdullah, F., Ward, R., Al, H., Abdallah, F., Barbar, K., Aorks, T.: Comparing copresent
robots, telepresent robots and virtual agents. Comput. Hum. Behav. 55(2), 1–10 (2015)
20. Di Donato, M., Fiorentino, M., Uva, A.E., Gattullo, M., Monno, G.: Text legibility for
projected Augmented Reality on industrial workbenches. Comput. Ind. 70, 70–78 (2015)
21. De Aceptación, M., Tam, T., Cataldo, A.: Una revisión de la literatura, no. 45, pp. 1–9
(1986)
22. Phatthana, W., Mat, N.K.N.: The application of technology acceptance model (TAM) on
health tourism e-purchase intention predictors in Thailand. In: International Conference on
Business and Economics Research, vol. 1, pp. 196–199 (2010)

Sketching 3D Immersed Experiences Rapidly
by Hand Through 2D Cross Sections

Frode Eika Sandnes1,2(&)
1 Department of Computer Science, Faculty of Technology, Art and Design, Oslo and Akershus
University College of Applied Sciences, Oslo, Norway
Frode-eika.sandnes@hioa.no
2 Faculty of Technology, Westerdals Oslo School of Art, Communication and Technology, Oslo,
Norway

Abstract. Sketching 3D immersed experiences often requires the designer to use some 3D
modelling tool. Such tools slow the designer down and can be a hindrance to the creative
process. Moreover, the results often have the appearance of finished products. Hand
sketching, however, allows the designer to express more rapidly ideas that suddenly emerge
and quickly disappear. Moreover, sketches drawn by hand have the advantage of looking
unfinished. This paper proposes a simple method for making 3D sketches by hand, even on
paper. Tool support is provided for transforming the sketches into 3D models that can be
viewed using standard viewers that give the viewer an immersed experience. The sketches can
also be overlaid on existing panoramic images used as backgrounds. Sketches of 3D scenes and
models are created by sketching various cross sections of the scene from various angles.
Several cases are used to illustrate how the framework is used. The sketches are viewed using
standard off-the-shelf panorama or point cloud viewers.

Keywords: Sketching · Immersed experiences · 3D · Panorama · Equirectangular projection ·
Augmented reality · Virtual reality

1 Introduction

Sketching is used by designers to rapidly capture and communicate ideas [1]. A sketch
can take many forms. It may be a hand drawing, constructed using computer tools, or
composed using other objects. It has been argued that computer tools generally are slow
to use, and that the delays incurred by operating the software compromise the creative
process [2]. Ideas arrive as a wave of thought and thus emerge in an instant and may also
disappear in an instant. It is therefore important to capture and express the idea in a
timely manner. Many argue for using simple sketching tools, such as drawing by hand.
Hand drawings also have the added advantage that they appear unfinished, yet
inspirational through their organic appearance [3]. Hand drawing is relatively easy to
achieve when expressing ideas for two-dimensional user interfaces [4, 5], and
sketching three-dimensional scenes and objects in perspective. However, such
three-dimensional sketches are static, representing the view in one direction from one
position. To sketch immersive interactive experiences often requires that the designer


makes a three-dimensional model in some design software. For example, architects


often make three-dimensional models, which customers may virtually walk through
before any physical construction. The design of three-dimensional objects is time-
consuming and can be difficult as it usually involves transforming 2D interaction into
3D using some mechanism [6, 7]. Interaction designers are increasingly also working in
3D as computing is becoming more pervasive in our lives. Moreover, game
designers, museum [8] and exhibition designers, and simulation designers often work
in 3D.
This paper proposes a simple framework for allowing designers to more easily create
three-dimensional sketches of immersed experiences using their existing drawing skills
and tools. The designer simply makes two-dimensional sketches of key cross sections of
the scene, either by sketching on paper or using a computer drawing program of their
choice. A tool is proposed that transforms a series of cross sectional sketches into
three-dimensional models that can be viewed using virtual or augmented reality soft-
ware or panoramic viewers that are intended for panoramic photographs [9, 10]. This
paper uses panoramic viewers [11, 12] as panoramic views are popular means of
communicating an immersed experience to a broad audience via their normal web
browser. Google Street View [13] is one example of a popular service using panoramic
views. Panoramic viewers are also used by museums to communicate immersed
experiences [14].

2 Background

The research into freehand sketching of 3D models appears to be dominated by


attempts to reconstruct 3D models from 2D line drawings [15–18]. Most of these works
are based on the realization that it is difficult to use traditional computer assisted design
tools during the conceptual design phase and these studies are demonstrated as
proof-of-concept for relatively simple sketches.
The first step of converting a two-dimensional line drawing into a three-dimensional
model involves detecting the lines using image processing techniques such as thinning,
line detection, validation and reconstruction [19]. The process is simplified if a digital
input device is used allowing stroke gestures to be captured directly and interpreted as
lines in the sketch [20, 21]. To robustly detect objects in images is generally a very hard
problem [22, 23].
Converting two-dimensional renderings into three-dimensional shapes involves
guesswork. One approach is to try to map the freehand shapes onto mathematically
parameterized curves and reconstruct the 3D-shapes from these generic shapes [24–27]
or splines [28]. Another approach to help the computer interpret the two-dimensional
sketch is by means of a drawing protocol where certain primitives are drawn in a
certain sequence [28]. Protocol-based input requires that the drawing sequence can be
captured by the input device.
Even though lines can be detected successfully, and often good guesses can be made
about the three-dimensional mapping it is often not obvious how a two-dimensional
representation maps to three-dimensional space, especially for complex shapes. It has
therefore been proposed to introduce human intervention in the process, where the


human interprets the semantics of shapes, which is relatively easy for humans and hard
for machines, and the computer performs the other tasks that are hard for the human and
easy for the machine [28]. Another approach is to rely on fuzzy logic where uncertainty
in the 2D input is propagated to the 3D output [24, 29].
To capture complex shapes with continuous contours detailed line grids are needed.
Another approach for sketching complex 3D shapes is the use of shading [30] where
shades are used to express the shape of an object in 3D, the way light reflects from the
surface of the object. A very different approach is to use the input sketch as a query into
a database of 3D models and then fit the model into a given scene [31]. This approach
obviously needs database content that matches the needs of the designer. Moreover,
using a sketch as a database query is not trivial.
Silhouette modelling has also been proposed, where a three-dimensional shape is
based on the silhouette of a two-dimensional sketch. For instance, in the Teddy system
[32] the users draws the outline of a cuddly toy in two-dimensions and the
three-dimensional shape is derived from the silhouette with a rounded shape, where
small parts are thinner and large parts become thicker. Two-dimensional silhouettes can
also be used to define three-dimensional cross sections [33], which is similar to the
approach proposed herein.
An interesting method proposed by Tolba et al. [34] mapped the perspective grid with its
vanishing lines onto a two-dimensional sketch of a 3D scene drawn in perspective and thereby
captured the model. They also had a hybrid approach that was not solely focused on capturing
the three-dimensional model but also on the viewing experience, as they experimented with
panoramic sketches where panoramic images were composed using four 3D sketches of the four
viewing directions mapped onto a unit sphere; this is similar to the approach proposed
herein.
It has been proposed to sketch directly in the equirectangular panoramic domain
[35]. This approach allows designers to use their existing three-dimensional sketching
experience. Sketches are still drawn from one position, but the omnidirectional nature
of the sketch allows the sketches to be viewed using panoramic viewing software
giving the viewer a stronger sense of presence. The equirectangular panoramic map-
ping is a projection of the world onto a sphere represented using a geographical
coordinate system with latitude and longitude [36, 37]. A complete panorama repre-
senting all directions from –180 to 180° horizontally and –90 to 90° vertically becomes
a panoramic image with an aspect ratio of 2:1.
One property of the equirectangular projection is that vertical lines remain vertical,
while horizontal lines become curved. It thus requires some experimentation and skill to
draw realistic models directly in the equirectangular domain. It was therefore proposed to
use equirectangular grid lines [38], where planes represented as grids are projected onto
the equirectangular panorama. By tracing the various lines along the x, y and z directions
it is possible to make perfect panoramic images. Translating the viewer inside a panoramic
image has also been attempted [39].
Although the grid lines reportedly helped, it is still non-trivial to sketch directly in
the panoramic domain as humans are not used to process entire panoramas on one go.
The work proposed herein can thus achieve the same results without having to operate
directly in the panoramic domain. It is based on flat two-dimensional cross sections and
humans are good at working with two-dimensional representations.


Cubic projection is another panorama representation that is particularly popular


among researchers working on interpolation between multiple panoramic images [40–
42]. Cubic representations split the panorama into six square images representing the
six different viewing directions, namely east, west, north, south, up and down. Cubic
projections are easier to understand than equirectangular projections. However, a major
disadvantage of cubic representations from a sketching perspective is that lines that are
straight in the real world may span several of the six images. Lines are thus bent across
the cube edges. Sketching a panorama on the discrete 90-degree faces of the panorama
cube is very unnatural. In contrast, equirectangular panoramas are continuous.
The use of flat planes in 3D modelling has been used for simple modelling, where a
flat drawing, or a billboard, is used to represent objects in 3D space [43]. Flat drawings,
and flat drawings projected onto curved canvases has also been used to create simple
film sets for animated films [44]. As with sketching, the objective is not to create
accurate models, but rather to create convincing effects. Flat drawings representing
cross-sections of objects have also been proposed as a 3D modelling technique [45].

3 Method

Three-dimensional sketches are built by making two-dimensional sketches representing


planes or cross sections of the scene. It could for instance be four sketches representing
the four walls of a room, and two sketches representing the floor and the ceiling. The
scene assumes that the viewer is located in the center of the coordinate system and the
positions of the planes are defined in terms of polar coordinates, that is, the angle of the
plane with respect to the viewing position and the distance to the viewing position. In
the tool prototype, it was assumed that planes are either vertical or flat, although it is
possible to implement tilted planes as well. It is also possible to specify several planes
at the same angle, but at different distances. This could for instance be used to generate
multiple walls where one may look through the door of the nearest wall and see the
wall in the neighboring room. These two walls will be at the same angle, but at different
distances.
The designer can both sketch on paper and scan the sketches, or use a computer
based drawing application. The framework tool can generate template images with grid
lines that can be used for overlay drawing [46]. These grids make it easier to see the
viewing angles at different distances, and hence make it easier to make different parts fit
together, such as a rectangular-shaped room where one set of walls is shorter than the
other set of walls.
The set of two-dimensional sketches is imported into the tool, and the tool converts
the set of images into a three-dimensional representation. The conversion is performed
by traversing all the pixels of the image. White is considered the background color and
all non-white pixels are therefore included into the three-dimensional model. Moreover,
the current implementation also ignores pixels with the same color as the support grid.
Various shades of cyan were used for this purpose. Obviously, other colors could be


used instead of white and cyan to represent the background and grid lines, respectively.
Line drawing sketches will therefore become transparent wire-frames, while sketches
where regions are colored are modelled as non-transparent faces.
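The pixel classification described above can be expressed in a few lines. The sketch below is
a Python illustration (not the author's Java implementation), assuming pure white as the
background and a small set of cyan shades as the support-grid colours; only the remaining
pixels would be added to the 3D model.

```python
# Assumed colour conventions from the text: white background, cyan grid lines.
WHITE = (255, 255, 255)
GRID_SHADES = {(0, 255, 255), (128, 255, 255), (200, 255, 255)}  # example cyan shades

def is_model_pixel(rgb):
    """Keep a pixel only if it is neither background nor a grid-line colour."""
    return rgb != WHITE and rgb not in GRID_SHADES
```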
The 3D model is built as follows. Given an image I at angle α and distance d with width w,
the pixel of image I at location [x, y] is given by I(x, y). It is assumed that the viewing
normal vector intersects the image in the middle. The color of an image pixel I(x, y) is
therefore mapped to the following coordinate of the 3D model:

$x_{i,j} = i \cdot \frac{\mathrm{width}}{\mathrm{pixels}_x} - \frac{\mathrm{width}}{2}, \quad i \in [0 .. \mathrm{pixels}_x]$    (1)

$y_{i,j} = j \cdot \frac{\mathrm{height}}{\mathrm{pixels}_y} - \frac{\mathrm{height}}{2}, \quad j \in [0 .. \mathrm{pixels}_y]$    (2)

$z_{i,j} = 1$    (3)

where $\mathrm{pixels}_x$ and $\mathrm{pixels}_y$ are the number of pixels of the sketch
plane image in the horizontal and vertical directions, respectively. The image size is given
by

$\mathrm{width} = 2 R \tan \alpha$    (4)

$\mathrm{height} = 2 R \tan \beta$    (5)

where R is the radius of the viewing sphere, set to 1 herein, and α and β are the horizontal
and vertical half-angle offsets, or angular sizes, of the sketch plane image.
Having obtained the 3D model, it is possible to render the model in various ways, such
as using a virtual reality viewer. In this work the 3D models were used to render
equirectangular panoramic images.
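Read as code, Eqs. (1)-(5) amount to the following mapping of a pixel index (i, j) on a
sketch plane to a point in the plane's local frame (a Python sketch, not the paper's Java
tool). The half-angles alpha and beta and the viewing-sphere radius R follow the definitions
above; rotating the plane to its angle and pushing it out to its distance d is a separate
placement step that is not shown here.

```python
import math

def plane_pixel_to_local_3d(i, j, pixels_x, pixels_y, alpha, beta, R=1.0):
    """Eqs. (1)-(5): map pixel (i, j) of a sketch plane to local 3D coordinates."""
    width = 2.0 * R * math.tan(alpha)          # Eq. (4)
    height = 2.0 * R * math.tan(beta)          # Eq. (5)
    x = i * width / pixels_x - width / 2.0     # Eq. (1)
    y = j * height / pixels_y - height / 2.0   # Eq. (2)
    z = 1.0                                    # Eq. (3)
    return x, y, z
```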
The panorama was built up as follows. First, any background panorama was painted directly
onto the target panorama, and then all the points in the model were rendered in decreasing
distance to the observer. This is the well-known painter's algorithm, where the foremost
objects are added last such that one achieves hidden line removal. The conversion from the
Cartesian coordinate system to the geographical coordinate system used in the equirectangular
projection was as follows.
First, the point of intersection [x′, y′, z′] between the viewing sphere S and the line going
from the center of the sphere to an image pixel point [x, y, z] is computed as:

$[x', y', z'] = \frac{R}{\sqrt{x^2 + y^2 + z^2}} \, [x, y, z]$    (6)

The point of intersection is thus defined by the vector with length R along the line
going from the sphere origin to the grid point. Finally, the intersection point [x′, y′, z′] is
transformed into geographical spherical coordinates [θ, φ], analogous to latitude and
longitude, using the following expressions

$\varphi = \tan^{-1}(x', y')$    (7)

$\theta = \sin^{-1}\!\left(\frac{z'}{R}\right)$    (8)

The arctan2 function is used as it reports angles in the full range from –180 to 180°,
compared to the arctan function that reports angles between –90 to 90°.

Fig. 1. Sketching a room
The resulting panoramic images are imported into a standard panoramic viewer or point cloud
viewer. The panoramas rendered in this paper were generated with FSPViewer [11], and the
models were rendered with CloudCompare [47].
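Eqs. (6)-(8) and the painter's-algorithm ordering can likewise be sketched compactly. The
Python illustration below keeps the atan2 argument order as written in Eq. (7); the final
mapping of (longitude, latitude) to pixel coordinates of a 2:1 equirectangular image is an
assumption consistent with the stated –180..180° by –90..90° coverage, not a detail given in
the text.

```python
import math

def project_to_panorama(point, pano_width, pano_height, R=1.0):
    """Eqs. (6)-(8): project a 3D model point onto an equirectangular panorama pixel."""
    x, y, z = point
    scale = R / math.sqrt(x * x + y * y + z * z)
    xp, yp, zp = x * scale, y * scale, z * scale   # Eq. (6)
    phi = math.atan2(xp, yp)                       # Eq. (7), arguments in the order written
    theta = math.asin(zp / R)                      # Eq. (8)
    # Assumed mapping of (longitude, latitude) onto a 2:1 panorama image.
    u = int((phi + math.pi) / (2.0 * math.pi) * (pano_width - 1))
    v = int((math.pi / 2.0 - theta) / math.pi * (pano_height - 1))
    return u, v

def render(points_with_colour, pano_width, pano_height):
    """Painter's algorithm: paint the farthest points first so nearer ones overwrite them."""
    canvas = {}
    for (x, y, z), colour in sorted(
            points_with_colour,
            key=lambda p: -(p[0][0] ** 2 + p[0][1] ** 2 + p[0][2] ** 2)):
        canvas[project_to_panorama((x, y, z), pano_width, pano_height)] = colour
    return canvas
```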


Fig. 2. Color sketch of a room

The proposed framework assumes that the viewer is located in the center, that is,
the viewer is in the center of the coordinate system. This is practical from the viewpoint
of constructing the 3D scene and for viewing using panoramic viewers. However, the
method is not limited to the panoramic views from one position. Once the
two-dimensional sketches are imported into the tool, the three-dimensional model can
be used to generate views from any location.


Fig. 3. Cage walls and roof (left) and cage floor (right)

4 Experiments

A proof-of-concept modelling framework was implemented in Java. The framework takes flat
image segments as input, for which the angle and distance are specified. Panoramic images
could also be imported as backgrounds. The implementation generated single-viewpoint
panoramic images as output, which were viewed using the FSPViewer panoramic viewer. Moreover,
the implementation allowed the models to be output as point clouds or polygon meshes in the
PLY format. These models were rendered using CloudCompare [47], an open source tool for
visualizing point clouds and polygon meshes.
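As a side note, a coloured point cloud of the kind loaded into CloudCompare can be written in
the ASCII PLY format with only a few lines; the helper below is a generic Python illustration
(a hypothetical exporter, not the framework's own code).

```python
def write_ply(path, points):
    """Write coloured 3D points, given as (x, y, z, r, g, b) tuples, to an ASCII PLY file."""
    points = list(points)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

write_ply("cage.ply", [(0.0, 0.0, 1.0, 128, 128, 128)])  # a single grey point
```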
Figure 1 illustrates how to sketch a room including the wall with the window, the
wall with the door, a blank wall that is used twice, the floor with the carpet and the
ceiling with the simple light fixture and smoke detector. All six planes are placed at the
same distance from the origin. The bottom image shows the resulting panoramic image
that can be viewed using the panoramic viewer.
Figure 2 shows an “open” room comprising two back walls, a floor and a dividing
wall with a door and a window. This dividing wall is parallel to the dark brown wall but
at the same time in front of it.
Figure 3 shows the input sketches used to make the example cage in Fig. 4. The
left sketch shows a simple black grid where the white background thus becomes
transparent. This grid defines the walls at –90, 0, 90 and 180° relative to the floor at
fixed distances from the origin. It is also used as the roof plane. The right image is used
as the floor with a gray non-transparent texture.


Fig. 4. Panoramic images with a background panoramic image by James Kennedy Monash and
the cage sketch.

Figure 4 shows two panoramic images with the cage sketch superimposed. The top
panoramic image shows the cage centered around the origin, while the cage is lifted
relatively in the bottom panorama giving a strengthened sensation of the observer
sitting at the bottom of the cage.
Figure 5 shows renderings of the example panoramas presented herein using the
FSPviewer and Fig. 6 shows views of the resulting 3D model of the cage using the
CloudCompare point cloud viewer. The figure shows how the panoramic image is
modelled using a sphere (top left) and that this sphere provides a suitable background
when the model is viewed close to the center of the sphere. Figure 6 also demonstrates
the viewing distortions when the viewer is moved towards the border of the panorama
sphere wall where one can see both the closest side and the far side of the sphere (top
right).


Fig. 5. Panoramic sketches rendered with FSPViewer.

Fig. 6. Real 3D views rendered using CloudCompare based on the resulting point cloud model.

5 Conclusions

A tool-supported framework for sketching in three dimensions using hand drawing was
presented. The designer sketches two-dimensional cross sections of the scene and the tool
combines these sketches into a three-dimensional model. The models were rendered as panoramic
images that were viewed with standard panoramic viewing software. An advantage of the
proposed approach is that the quality of the captured three-dimensional model is insufficient
for professional production purposes; the models therefore only exist with the purpose of
being sketches during ideation.

References
1. Buxton, B.: Sketching User Experiences: Getting the Design Right and the Right Design:
Getting the Design Right and the Right Design. Morgan Kaufmann, San Francisco (2010)
2. Black, A.: Visible planning on paper and on screen: the impact of working medium on
decision-making by novice graphic designers. Behav. Inf. Technol. 9, 283–296 (1990)
3. Sandnes, F.E., Jian, H.L.: Sketching with Chinese calligraphy. Interactions 19, 62–66 (2012)
4. Landay, J.A., Myers, B.A.: Interactive sketching for the early stages of user interface design.
In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems,
pp. 43–50. ACM Press/Addison-Wesley Publishing Co. (1995)
5. Landay, J., Myers, B.: Sketching interfaces: toward more human interface design. Computer
34, 56–64 (2001)
6. Olsen, L., Samavati, F.F., Sousa, M.C., Jorge, J.A.: Sketch-based modeling: a survey.
Comput. Graph. 33, 85–103 (2009)
7. Kondo, K.: Interactive geometric modeling using freehand sketches. J. Geom. Graph. 13,
195–207 (2009)
8. Huang, Y.P., Wang, S.S., Sandnes, F.E.: RFID-based guide gives museum visitors more
freedom. IT Prof. Mag. 13, 25 (2011)
9. Xiong, Y., Turkowski, K.: Registration, calibration and blending in creating high quality
panoramas. In: Fourth IEEE Workshop on of Applications of Computer Vision (WACV
1998), pp. 69–74. IEEE Press (1998)
10. Kweon, G., Choi, Y.: Image-processing based panoramic camera employing single fisheye
lens. J. Opt. Soc. Korea 14, 245–259 (2010)
11. Senore, F.: FSPViewer. http://www.fsoft.it/FSPViewer/. Accessed 20 Nov 2015
12. Keane, T.P., Cahill, N.D., Rhody, H., Hu, B., Tarduno, J., Jacobs, R., Pelz. J.: Sphere 2:
Jerry’s rig, an OpenGL application for non-linear panorama viewing and interaction. In:
Image Processing Workshop (WNYIPW), pp. 13–16. IEEE Press (2012)
13. Anguelov, D., Dulong, C., Filip, D., Frueh, C., Lafon, S., Lyon, R., Ogale, A., Vincent, L.,
Weaver, J.: Google street view: capturing the world at street level. Computer 6, 32–38
(2010)
14. Kwiatek, K., Woolner, M.: Transporting the viewer into a 360 heritage story: panoramic
interactive narrative presented on a wrap-around screen. In: 16th International Conference on
Virtual Systems and Multimedia (VSMM), pp. 234–241. IEEE Press (2010)
15. Ku, D.C., Qin, S.F., Wright, D.K.: What is on the backside of the paper? From 2D sketch to
3D model. In: The 20th BCS HCI Group Conference, British Computer Society (2006)
16. Naya, F., Jorge, J., Conesa, J., Contero, M., Gomis, J.M.: Direct modeling: from sketches to
3D models. In: Proceedings of the 1st Ibero-American Symposium in Computer
Graphics SIACG, pp. 109–117 (2002)
17. Contero, M., Naya, F., Gomis, J.M., Conesa, J.: Calligraphic interfaces and geometric
reconstruction. In: 12th ADM International Conference on Design Tools (2001)
18. Cherlin, J.J., Samavati, F., Sousa, M.C., Jorge, J.A.: Sketch-based modeling with few
strokes. In: Proceedings of the 21st Spring Conference on Computer Graphics, pp. 137–145.
ACM (2005)


19. Matondang, M.Z., Mardzuki, S., Haron, H.: Transformation of engineering sketch to valid
solid object. In: Proceedings of International Conference of the 9th Asia Pacific Industrial
Engineering & Management Systems (APIEMS 2008) Conference and The 11th Asia Pacific
Regional Meeting of International Foundation for Production Research, pp. 2707–2715
(2008)
20. Eggli, L., Hsu, C., Brüderlin, B.D., Elber, G.: Inferring 3D models from freehand sketches
and constraints. Comput. Aided Des. 29, 101–112 (1997)
21. Contero, M., Naya, F., Jorge, J., Conesa, J.: CIGRO: a minimal instruction set calligraphic
interface for sketch-based modeling. In: Computational Science and Its Applications—
ICCSA 2003, pp. 549–558. Springer, Berlin Heidelberg (2003)
22. Huang, Y.P., Chang, T.W., Chen, Y.R., Sandnes, F.E.: A back propagation based real-time
license plate recognition system. Int. J. Pattern Recognit. Artif. Intell. 22, 233–251 (2008)
23. Huang, Y.P., Hsu, L.W., Sandnes, F.E.: An intelligent subtitle detection model for locating
television commercials. IEEE Trans. Syst. Man Cybern. Part B 37(2), 485–492 (2007)
24. Qin, S.F., Wright, D.K., Jordanov, I.N.: From on-line sketching to 2D and 3D geometry: a
system based on fuzzy knowledge. Comput. Aided Des. 32, 851–866 (2000)
25. Yang, C., Sharon, D., van de Panne, M.: Sketch-based modeling of parameterized objects.
In: EG Workshop on Sketch-Based Interfaces and Modeling, pp. 63–72 (2005)
26. Fiorentino, M., Monno, G., Renzulli, P.A., Uva, A.E.: 3D sketch stroke segmentation and
fitting in virtual reality. In: Proceedings of GRAPHICON, pp. 188–191 (2003)
27. Kim, D.H., Kim, M.J.: A new modeling interface for the pen-input displays. Comput. Aided
Des. 38, 210–223 (2006)
28. Shtof, A., Agathos, A., Gingold, Y., Shamir, A., Cohen-Or, S.: Geosemantic snapping for
sketch-based modeling. Comput. Graph. Forum 32, 245–253 (2013)
29. Roth-Koch, S.: Digitalization of paper sketches integration of the non digital draft. In:
Proceedings of the 2011 Conference on Designing Pleasurable Products and Interfaces,
p. 39. ACM (2011)
30. Kerautret, B., Granier, X., Braquelaire. A.: Intuitive shape modeling by shading design. In:
Smart Graphics, pp. 163–174. Springer, Heidelberg (2005)
31. Shin, H., Igarashi, T.: Magic canvas: interactive design of a 3-D scene prototype from
freehand sketches. In: Proceedings of Graphics Interface 2007, pp. 63–70. ACM (2007)
32. Igarashi, T., Matsuoka, S., Tanaka, H.: Teddy: a sketching interface for 3D freeform design.
In: ACM SIGGRAPH 2007 Courses, p. 21. ACM (2007)
33. Tai, C.L., Zhang, H., Fong, J.C.K.: Prototype modeling from sketched silhouettes based on
convolution surfaces. Comput. Graph. Forum 23, 71–83 (2004)
34. Tolba, O., Dorsey, J., McMillan, L.: Sketching with projective 2D strokes. In: Proceedings
of the 12th Annual ACM Symposium on User Interface Software and Technology, pp. 149–
157. ACM (1999)
35. Sandnes, F.E.: Communicating panoramic 360 degree immersed experiences: a simple
technique for sketching in 3D. In: Antona, M., Stephanidis, C. (eds.) Proceedings of HCI
International 2016, Universal Access in Human-Computer Interaction. Interaction Tech-
niques and Environments. LNCS, vol. 9738, pp. 338–346. Springer, Heidelberg (2016)
36. Sandnes, F.E.: Where was that photo taken? Deriving geographical information from image
collections based on temporal exposure attributes. Multimed. Syst. 16, 309–318 (2010)
37. Sandnes, F.E.: Determining the geographical location of image scenes based on object
shadow lengths. J. Signal Process. Syst. 65, 35–47 (2011)
38. Sandnes, F.E.: PanoramaGrid – a graph paper tracing framework for sketching 360-degree
immersed experiences. In: Proceedings of the International Working Conference on
Advanced Visual Interfaces (AVI 2016), pp. 342–343. ACM (2016)


39. Sandnes, F.E., Huang, Y.P.: Translating the viewing position in single equirectangular
panoramic images. In: Proceedings of SMC 2016. IEEE Press (2016)
40. Shi, F., Laganiere, R., Dubois, E., Labrosse, F.: On the use of ray-tracing for viewpoint
interpolation in panoramic imagery. In: Canadian Conference on Computer and Robot
Vision (CRV 2009), pp. 200–207. IEEE Press (2009)
41. Kolhatkar, S., Laganiere, R.: Real-time virtual viewpoint generation on the GPU for scene
navigation. In: Canadian Conference on Computer and Robot Vision (CRV), pp. 55–62.
IEEE Press (2010)
42. Zhang, C., Zhao, Y., Wu, F.: Triangulation of cubic panorama for view synthesis. Appl. Opt.
50, 4286–4294 (2011)
43. Cohen, J.M., Hughes, J.F., Zeleznik. R.C.: Harold: a world made of drawings. In:
Proceedings of the 1st International Symposium on Non-photorealistic Animation and
Rendering. ACM (2000)
44. Fei, G.: 3D animation creation using space canvases for free-hand drawing. In: Proceedings
of the 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its
Applications in Industry. ACM (2008)
45. Dorsey, J.: The mental canvas: a tool for conceptual architectural design and analysis. In:
15th Pacific Conference on Computer Graphics and Applications (PG 2007). IEEE (2007)
46. Greenberg, S., Carpendale, S., Marquardt, N., Buxton, N.: Sketching User Experiences: The
Workbook. Elsevier, Waltham (2011)
47. Girardeau-Montaut, D.: Cloudcompare-open source project. OpenSource Project (2011)

zamfira@unitbv.ro
Analyzing Modular Robotic Systems

Reem Alattas

Department of Computer Science and Engineering,
University of Bridgeport, Bridgeport, CT, USA
ralattas@live.com
Abstract. This paper surveys modular robot systems, which consist of multiple modules and aim to create versatile, robust, and low-cost systems. Modularity allows these robots to self-assemble, self-reconfigure, self-repair, and self-replicate. The survey therefore covers these characteristics along with evolutionary robotics and 3D-printed robots. Because these fields are interdisciplinary, we organize the implemented systems according to the main feature of each one. The primary motivation is to categorize modular robots by their main function and to identify the similarities and differences in how each system is implemented.

Keywords: Evolutionary · Digital fabrication · Modular · Modularity · Robots · Self-assembly · Self-reconfigurable · Self-repair · Self-replication · Self-reproducible · 3D printing
1 Introduction

Modular robots are composed of various units or modules, hence the name. Each module incorporates actuators, sensors, and computational and communication capabilities. Usually, these modules are homogeneous; however, they could be heterogeneous to maximize versatility [1].
Modularity allows robots to self-assemble, self-reconfigure, and self-repair [2]; modularity and these capabilities are discussed in Sects. 2, 3, 4, and 5, respectively. Section 6 presents self-reproducing robots. Since developing methods for evolving controllers has been of great interest, we cover evolutionary robots in Sect. 7, followed by printable robots in Sect. 8 and automatic manufacturing in Sect. 9. To conclude, we summarize the article in Sect. 10.

2 Modularity

The concept of modularity has emerged over the past few decades and has led to the successful implementation of a number of prototypes. CEBOT is one of the first modular robots. It was developed by Fukuda and Kawauchi in 1990 as a distributed robotic system consisting of cells that can attach together to perform a function. CEBOT is capable of dynamically self-reconfiguring and self-repairing. The cells are operated over the communication network COMBUS [3]. Figure 1 shows the geometry of a mobile cell and an object cell.

© Springer International Publishing AG 2018


M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_94

Fig. 1. Geometry of mobile and object cell

In 1993, Yim created a set of modular robots that can employ several locomotion strategies [4]. Such systems are called self-reconfigurable robots; one example is PolyBot, which was implemented in 2000 to explore how realistic it is to build robots from many homogeneous hardware modules. These modular self-reconfigurable systems have three characteristics: versatility, robustness, and low cost. The first two generations of PolyBot demonstrate versatility by executing locomotion over a variety of terrains. However, as the number of modules increases, cost increases and robustness decreases due to software scalability and hardware dependency issues. Currently, the maximum number of modules utilized in one connected PolyBot system is 32, with each module having 1 DOF [5]. The third generation targets 200 modules to show a variety of capabilities, including moving like a snake, lizard, or centipede as well as humanoid walking and rolling in a loop [6].
Chiang and Chirikjian introduced a metamorphic system in 2001 that forms structures by modules rolling over each other in a plane. A cost function was also introduced to measure reconfiguration fitness and to bisect shapes. This can be viewed as a pattern-matching problem on geometric figures under rigid-body motions [7].
In the same year, Rus and Vona developed Crystalline atoms that have 3 DOF, which allow expansion and contraction by a factor of two. Robots are formed by expanding and contracting each atom's frame in order to move relative to the other atoms. These movements mimic a muscle actuation mechanism. Moreover, Crystalline robots can self-reconfigure very quickly, in O(n²) time, where n is the number of atoms [8]. Earlier, in 1998, Rus et al. had developed modules for building self-reconfigurable robots called Molecules, which support various locomotion modalities by organizing autonomously into geometric structures to best fulfill the task at hand. These Molecules have 2 DOF and can be aggregated to form 3D structures. Finally, motion planning is done in O(n) time, where n is the number of molecules [9, 10].
Suh et al. in 2002 introduced the Telecubes, cubic modules with 6 prismatic DOF and sides capable of expanding to more than twice their original length. These cubes can form a modular self-reconfigurable robot by attaching to and detaching from other cubes magnetically [11].


As mentioned earlier, robotic modules are equipped with sensors in order to collect data and provide the necessary feedback that can be used locally on the module to guide self-reconfiguration. Støy et al. proposed a methodology where raw sensor values can be used globally, and combined it with a role-based control method for the self-reconfigurable robot CONRO [12, 13].
Molecubes is an open hardware and software platform for modular robotics that was developed to remove entry barriers to the field and accelerate progress. Different types of active modules, such as a gripper, an actuated joint, a controller, a camera, and a wheel, along with a number of passive modules, were presented. Evolutionary search was used to design different types of robots rapidly [14, 15]. The following Tables 1, 2 and 3 compare the aforementioned systems on a number of parameters including geometrical, electrical, and physical properties.

Table 1. Geometrical properties comparison

System       Dimensions   Actual DOF   Lattice Geometry
PolyBot      3D           1            Cubic
Chirikjian   2D           3            Hexagonal
Crystalline  2D           1            Square
Telecubes    3D           1            Cubic
CONRO        3D           2            None
Molecubes    3D           4            Cubic

Table 2. Electrical characteristics comparison

System       CPU                    Power   Communication          Sensors
PolyBot      Motorola PowerPC 555   Yes     Optical & electrical   Joint position, docking aid, orientation, force
Crystalline  Atmel AT89C2051        Yes     Optical                Joint position
Telecubes    –                      No      Optical                Docking aid
CONRO        Basic Stamp 2          Yes     Optical                Docking aid
Molecubes    None                   No      None                   None

Table 3. Physical properties comparison

System       Weight (g)   Dimensions (cm)           Connector Type           Unisex
PolyBot      200          5 × 5 × 5                 Mech. Pin/Hole, SMA      Yes
Crystalline  375          5 × 5 × 18 (contracted)   Mech. Lock               No
Telecubes    –            6 × 6 × 6 (contracted)    Switching Perm. Magn.    Yes
CONRO        115          10.8 × 5.4 × 4.5          Mech. Pin/Hole, SMA      No
Molecubes    –            –                         Mech. Hooks              No

3 Self-assembly

One of the main benefits of modularity is the capability of self-assembly, which is the natural construction of a complex multi-unit system from simple units governed by a set of rules. The self-assembly process is ubiquitous in nature, as it generates much of a living cell's functionality [16]. However, it is still uncommon in technical fields, because it is a relatively new concept in that arena, although it could help in lowering costs and improving versatility and robustness, which are the three promises of modular robotics. The ability to form a larger, stronger robot from smaller modules allows self-assembling robots to perform tasks in remote and hazardous environments.
Jones and Mataric in 2003 introduced an intelligent self-assembly system using assembly agents and a transition rule set compiler, which takes a goal shape as input and outputs a set of rules that can be utilized by the assembly agents to assemble the target shape [17]. In additional work, Kelly and Zhang described a model for optimizing the size of the rule sets used to build a structure [18]. Furthermore, Werfel studied assembling complex structures using transition rule sets and artificial swarms to automate construction [19].
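To make the rule-compiler idea concrete, the following minimal Python sketch illustrates the flow of compiling a goal shape into local attachment rules that assembly agents could then apply; the grid representation, function names, and the deterministic growth loop are our own illustrative assumptions and not the implementation of [17].

# Toy illustration of rule-based self-assembly: a goal shape is compiled into
# local attachment rules, and wandering modules bind wherever a rule matches.

def compile_rules(goal_cells):
    """Each rule reads: a module may bind at `site` if `neighbour` is already built."""
    rules = set()
    for (x, y) in goal_cells:
        for neighbour in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if neighbour in goal_cells:
                rules.add(((x, y), neighbour))
    return rules

def self_assemble(goal_cells, seed):
    structure = {seed}
    rules = compile_rules(goal_cells)
    grew = True
    while grew:                      # agents keep binding until the shape is complete
        grew = False
        for site, neighbour in rules:
            if site not in structure and neighbour in structure:
                structure.add(site)  # a free module docks at an allowed site
                grew = True
    return structure

# Example: assemble a 2x3 rectangle starting from its lower-left corner.
goal = {(x, y) for x in range(2) for y in range(3)}
print(self_assemble(goal, seed=(0, 0)) == goal)   # True

The key property captured here is that the agents never need global knowledge of the target: the compiled rules encode only local neighbourhood conditions.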
Stochastically driven self-assembly systems were studied by White et al. in 2004, who developed algorithms and hardware for a few systems. One system uses square units with electromagnets that self-assembled into an L-shape and then into a line. The other system uses triangular units with swiveling permanent magnets that self-assembled into a line and then changed their sequence within the line. Both systems lack batteries, and the modules only receive power after they connect to the structure being self-assembled [20]. Tolley et al. extended that 2D system to 3D. Their evolutionary approach takes a target function as input, yields a shape that achieves the input function, and directs the shape's assembly. However, the units are unable to move on their own, as they need to circulate in a turbulent fluid to accrete onto the structure. This fluidic system could be scaled down to produce micro-scale modules [21].
In 2005, Bishop et al. built triangular programmable parts that are agitated on an air table by overhead oscillating fans and self-assemble into various shapes according to the mathematics of graph grammars. The modules can communicate and selectively bond using mechanically driven magnets, without global knowledge of the full shape. Despite plans to build approximately 100 parts, only six parts were built, for design simplicity reasons. Those six parts were used in an experiment showing that the parts react similarly to chemical systems [22]. Then, Napp et al. added measurements of kinetic rate data to the previous graph-grammar work in order to yield a Markov process model [23]. Figure 2 shows a number of programmable parts partially assembled into a triangle.
Sambot is a mobile self-assembly modular robot, which was implemented by Wei
et al. in 2010. Several modules can self-assemble to form a particular structure through
a 4-phase autonomous docking process. Also, the resulting shape can reconfigure into
different structures that are capable of locomotion [24]. Figure 3 shows the schematic
diagram of a Sambot module.

Fig. 2. Four programmable parts partially assembled into a triangle

Fig. 3. Schematic diagram of Sambot


4 Self-reconfiguration

Recently, modular robots have attracted attention from researchers in the robotics field due to their ability to self-reconfigure [2]. Modular self-reconfigurable robots involve various modules that can combine themselves autonomously into a meta-module or a structure that is capable of performing a specific task under certain circumstances [1]. Self-reconfigurability allows these robots to metamorphose, which in turn makes them capable of performing different sorts of kinematics. For instance, a robot may reconfigure into a manipulator, a crawler, or a legged machine [2]. This sort of adaptability enables self-reconfigurable robots to accomplish tasks in unstructured environments, such as space exploration, deep-sea applications, rescue missions, or reconnaissance [3].
Yim et al. in 2002 classified reconfigurable robots into three classes of architecture:
lattice, chain, and mobile based on how they reconfigure [25]. Then, they added
deterministic and stochastic reconfigurations in 2007 [26].
Lattice architectures have modules that are connected in a 3D pattern that can be
used as a guide for modules to determine their positions and form the new shape
accordingly. Chain/Tree architectures have modules that are connected together in a
string or tree topology. The underlying architecture is serial such that each chain is
always attached to the rest of the modules at one or more points, and they reconfigure
by attaching and detaching to and from themselves. Mobile architectures change shape
by having modules detach themselves from the main body and move independently
[25]. Deterministic architectures rely on units moving or being directly manipulated into their target location during reconfiguration. Stochastic architectures rely on units moving around via statistical processes (e.g., Brownian motion), which can also be used to guarantee reconfiguration times [26].
Since the field of reconfigurable robotics is of great interest to the robotics community, many prototype implementations have appeared. Among them is M-TRAN (Modular Transformer), a distributed, lattice-based self-reconfigurable system composed of homogeneous robotic modules. The special design of the M-TRAN module realizes both reliable, quick self-reconfiguration and versatile robotic motion. M-TRAN is able to metamorphose into robotic configurations such as a legged machine and thereby generate coordinated walking motion without any human intervention. The actual system, built from ten modules, was examined through experiments to demonstrate the basic operations of self-reconfiguration and motion generation. In order to drive the M-TRAN hardware, a series of software programs has been developed, including a kinematics simulator, a UI to design appropriate configurations and motion sequences for given tasks, and an automatic motion planner for a regular cluster of M-TRAN modules. These software programs are integrated into the M-TRAN system, supervised by a host computer [27].
In the second prototype, M-TRAN II, various improvements were integrated in order to allow complicated reconfigurations and versatile whole-body motions. Those improvements include a reliable attachment/detachment mechanism, on-board multi-computers, a high-speed inter-module communication system, low power consumption, and precise motor control. The accompanying software is also integrated to design self-reconfiguration processes, verify motions in dynamics simulation, and realize distributed control on the hardware [28].
The third prototype, M-TRAN III, has been developed with an improved connection mechanism. Various control modes, including single-master, globally synchronous control and parallel asynchronous control, are made possible by a distributed controller. Self-reconfiguration experiments using up to 24 units were performed under centralized and decentralized control. Finally, system scalability and homogeneity were maintained in all experiments [29].
SuperBot is a multifunctional network of modules that can perform as both a lattice-based and a chain-type self-reconfigurable robot. It was developed by Salemi et al. in 2006 to enhance the mechanical design of M-TRAN by adding an additional rotational DOF between the two existing rotation axes. SuperBot was designed to be a flexible, strong, and durable robot that can be used in real-world applications such as environmental exploration [30].
Another self-reconfigurable robot is ATRON, a lattice-based system consisting of spherical modules, where each sphere is constructed as two hemispheres joined by an infinite revolute joint. Although ATRON modules are minimalistic, having only one actuated DOF, a group of modules is capable of self-reconfiguring in three dimensions [31].
RoomBot is a modular robot that can self-assemble and self-reconfigure into different pieces of furniture. It introduces passive elements in the robot structure, a Central Pattern Generator for generating the motor commands, and the possibility of using a motor not only in oscillation but also in constant rotation [32]. The following Tables 4, 5 and 6 compare the aforementioned systems on a number of parameters including geometrical, electrical, and physical properties.

Table 4. Geometrical properties comparison

System   Dimensions   Actual DOF   Lattice Geometry
Fracta   2D           0            Hexagonal
M-TRAN   3D           2            Cubic
ATRON    3D           1            Surface-Centered Cubic

Table 5. Electrical characteristics comparison

System   CPU                  Power   Communication   Sensors
Fracta   Z80                  No      Optical         None
M-TRAN   3 × PIC, 1 × TNPM    Yes     Electrical      Joint position, orientation
ATRON    Atmel MEGA128L       Yes     Optical         Joint position, orientation and proximity

Table 6. Physical properties comparison

System   Weight (g)   Dimensions (cm)              Connector Type
Fracta   1200         ø12.5                        Electro Magnets
M-TRAN   400          6 × 6 × 12 (versions I&II)
ATRON    850          ø11                          Mech. Hooks

5 Self-repair

Self-repair is a special type of self-reconfiguration that allows a robot to replace damaged modules with functional ones in order to continue with the task at hand [2]. Typically, such robots are unit-modular and carry a number of redundant modules on their bodies, because a self-repair system must have two qualities: the ability to self-modify, and the availability of new parts or resources to replace broken ones. Self-repair consists of detecting the failure of a module, ejecting the deficient module, and replacing it with a working spare module. Such robots are well suited for working in unknown and remote environments.
Murata et al. developed the Fracta robotic system in 1994 as a robot that can reconfigure by rotating homogeneous modules about each other to form a goal shape [33]. It was then extended by Yoshida et al. in 1999 to a self-assembly and self-repair system that can transform from an arbitrary shape into a desired one. Self-assembly is implemented using identical software on each unit with local inter-unit communication. They considered self-repair an extension of self-assembly that detects damage and lets the whole system reconstruct itself. A simulated-annealing algorithm was developed for the self-repair operation. The system has more than ten units that successfully configured themselves and recovered from a fault [34]. A schematic 3D view of a single Fracta module – Fractum – is displayed in Fig. 4.

Fig. 4. Schematic view of 3D Fractum

Fitch et al. built on the previous work of Yoshida et al. to accomplish self-repair using the self-reconfiguring Crystalline robots, with a focus on geometric motion planning. The aforementioned Crystalline robots consist of modules that are actuated by expanding and contracting, as shown in Fig. 5. This actuation mechanism is used for self-repair, as the process consists of three phases: detect the failure, eject the failed module, and replace the failed module [35].


Fig. 5. Schematic diagram of crystalline actuation mechanism
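The three-phase repair cycle described above (detect the failure, eject the deficient module, dock a spare in its place) can be summarized by the following minimal Python sketch; the Module class, the health flag, and the list-based body are illustrative assumptions rather than the Crystalline or Fracta implementation.

# Minimal sketch of the three-phase self-repair cycle: detect failure, eject the
# failed module, replace it with a spare carried on the robot's body.

from dataclasses import dataclass, field

@dataclass
class Module:
    slot: int
    healthy: bool = True

@dataclass
class ModularRobot:
    body: list                                      # modules currently forming the robot
    spares: list = field(default_factory=list)      # redundant modules carried along

    def self_repair(self):
        for i, module in enumerate(self.body):
            if not module.healthy:                  # phase 1: detect the failure
                ejected = self.body[i]              # phase 2: eject the deficient module
                if not self.spares:
                    raise RuntimeError("no spare module available")
                replacement = self.spares.pop()     # phase 3: dock a spare in its place
                replacement.slot = ejected.slot
                self.body[i] = replacement

robot = ModularRobot(body=[Module(0), Module(1), Module(2)], spares=[Module(-1)])
robot.body[1].healthy = False
robot.self_repair()
print(all(m.healthy for m in robot.body))   # True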

6 Self-reproduction/Self-replication

The ultimate form of self-repair is self-reproduction, which allows robots to reproduce themselves from an infinite supply of parts using simple rules. If the resulting system is an exact replica of the original, the system is called a self-replicator [36]. The effort in self-reproduction is focused on the design and construction of a small seed system that grows exponentially to form a larger system over tens of generations. The resulting self-reproducible robots are capable of accomplishing very large-scale tasks, such as collecting solar energy, directly removing greenhouse gases from the Earth's atmosphere, and desalinating water for irrigation [37]. Self-reproduction differs from automatic manufacturing and self-assembly, where the resulting system does not need to make copies of itself.
Von Neumann was the first to prove the possibility of self-reproduction, in 1966, with his kinematic model of self-reproducing automata, which is close to a physical implementation [38]. More recently, Griffith et al. demonstrated that self-assembling systems may self-replicate if the intelligent modules are configured to duplicate [39]. Finally, Zykov et al. introduced an autonomous self-reproducing robot in 2007. That robot is a modular one, composed of actuated modules equipped with electromagnets to selectively control the morphology of the robotic assembly [40].

7 Evolutionary Robotics

Evolutionary robotics is the automatic creation of autonomous robots, inspired by the Darwinian principle of selective reproduction of the fittest, as captured by evolutionary algorithms [41].
Nolfi and Floreano presented a set of experiments in their book, ranging from simple to very complex, in order to address different adaptation mechanisms. The first set of experiments involves navigational tasks such as obstacle avoidance. The authors point out that in some cases the evolved solution outperformed the hand-designed solution by capitalizing on interactions between machine and environment that could not be captured by a model-based approach. On the other hand, more complex tasks expose the limits of reactive architectures. However, very complex tasks such as garbage collection and battery recharging show that emergent modular structures allow the decomposition of the global behavior into basic behaviors that emerge spontaneously. Furthermore, the achieved decomposition did not correspond to the distal decomposition an external designer would naturally expect, and it outperformed other manually designed decompositions [41].
According to Lipson, each robot comprises two major parts: controller (brain) and
morphology (body). Robot controllers can be represented in any one of a number of
ways: as logic functions, programs, differential equations, or neural networks. Various
experiments represent the controller as a neural network that maps sensory input to
actuator outputs. These neural networks can have different architectures, such as
feed-forward or recurrent. Sometimes the choice of architecture is left to the synthesis
algorithm [42].
Nolfi and Floreano described an experiment using evolutionary methods to evolve a controller that would make a legged robot, equipped with actuators and sensors, locomote towards an area of high chemical concentration [43]. Bongard explored the same concept on a legged robot in a physically realistic simulator. The robot has four legs and eight rotary actuators. A neural controller that maps sensors to actuators determines the behavior of the machine. Fitness was evaluated by trying out a candidate controller in four different concentration fields and summing the distances between the final position of the robot and the point of highest concentration; shorter distances are better. In this experiment, 200 candidate controllers were evolved for 50 generations, and the robot learned to move and to change direction towards the high concentration [44].
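The following minimal Python sketch mirrors the scale of that experiment (a population of 200 controllers evolved for 50 generations, scored by the summed final distance to the concentration peak over several fields, where smaller is better); the 2-D point robot, the linear "controller", and all remaining hyperparameters are illustrative assumptions rather than the setup of [44].

# Minimal evolutionary-robotics loop: controllers are scored by the summed final
# distance to the concentration peak over several fields; lower sums are fitter.

import random

FIELDS = [(3.0, 2.0), (-4.0, 1.0), (0.0, -3.0), (2.5, -2.5)]   # peak positions (assumed)

def simulate(weights, peak, steps=50):
    """Point robot senses the direction to the peak and turns it into a velocity."""
    x = y = 0.0
    wx, wy = weights
    for _ in range(steps):
        sx, sy = peak[0] - x, peak[1] - y      # simple gradient-like sensor reading
        x += 0.1 * wx * sx                      # actuator command = weighted sensor
        y += 0.1 * wy * sy
    return ((x - peak[0]) ** 2 + (y - peak[1]) ** 2) ** 0.5

def fitness(weights):
    # Sum of final distances over all concentration fields; smaller is better.
    return sum(simulate(weights, peak) for peak in FIELDS)

def evolve(pop_size=200, generations=50, mutation=0.1):
    population = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[: pop_size // 4]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):  # mutate copies of parents
            wx, wy = random.choice(parents)
            children.append((wx + random.gauss(0, mutation),
                             wy + random.gauss(0, mutation)))
        population = parents + children
    return min(population, key=fitness)

best = evolve()
print(round(fitness(best), 3))   # approaches 0 as the controller learns to home in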
Zykov et al. used evolved controllers for a real dynamically legged robot in 2004. The nine-legged machine, shown in Fig. 6, is composed of two Stewart platforms placed back to back. The authors used force actuators whose exact extension can be set.

Fig. 6. Nine-legged robot


The controller architecture for this machine was an open-loop pattern generator that determines when to open and close the pneumatic valves. The on-off pattern was evolved, and candidate controllers were evaluated by trying them out on the robot in a cage. Fitness was measured using a camera that tracked a red ball on the foot of one of the machine's legs [45].
Paul and Bongard designed dynamic bipedal robot controllers in simulation using
evolutionary process. The robot consists of the bottom half of a walker with six motors,
a touch sensor at each foot and an angle sensor at each joint. Fitness was the net
distance a robot could travel. Evolving 300 controllers over 300 generations generated
numerous controllers that could make the machine move while keeping it upright.
These results may suggest that evolving a controller for a fixed morphology could be
too restrictive, while co-evolving both the controller and the morphology could yield
better results [46].
Karl Sims explored in simulation the idea of giving the evolutionary process more freedom in designing both morphology and control, using 3D cubes and oscillators as building blocks [47]. Similarly, Lipson and Pollack explored physically realizable machines and started with lower-level building blocks, such as 1D elements and simple neurons. The design space comprised bars and actuators as structural building blocks and artificial neurons as control building blocks.

8 Printable Robots

Existing rapid prototyping techniques, such as 3D printing, are becoming increasingly accessible due to their ability to achieve complex geometries. Therefore, printable robots utilize these planar fabrication methods in order to create integrated electro-mechanical laminates. Moreover, 3D printing allows fabrication of low-cost, capable, agile, functional 3D robots, such as the origami robots proposed by Onal et al. in 2014. Those robots can fold themselves into functional 3D machines employing origami-inspired techniques [48]. One of these robots is displayed in Fig. 7.

Fig. 7. Origami inspired printed robot


9 Automatic Manufacturing

Automatic robot design and manufacturing combine evolutionary computation and additive fabrication, such that the former is used for design and the latter for reproduction. The evolutionary computation process operates on a population of candidate robots, each composed of some repertoire of building blocks, to iteratively select fitter machines, create offspring by adding, modifying, and removing building blocks using a set of operators, and place the offspring back into the population. Additive fabrication technology, in turn, has been developing in terms of materials and mechanical fidelity but had not been placed under the control of an evolutionary process.
Lipson and Pollack proposed an approach based on the use of only elementary building blocks and operators in the design and fabrication process. Elementary building blocks were used to minimize inductive bias and maximize architectural flexibility. They also allow the fabrication process to be more systematic and versatile [49]. The resulting robots are presented in Fig. 8.

Fig. 8. Automatic manufacturing resulting robots

10 Conclusion

In this paper, we have presented a comprehensive survey of modular robots that were created to meet three main goals: versatility, robustness, and low cost. Modularity also offers a number of features that were used to differentiate types of modular robots, from self-assembly to self-repair. Self-assembly allows a number of modules to integrate and form a robot. A self-reconfigurable robot is capable of changing its shape and locomotion kinematics according to the task at hand. A robot that can fix itself by replacing damaged modules with fresh ones is called self-repairing, while a robot that can replicate itself is called self-reproducible. Since evolutionary robotics explores the design and construction of robots using multiple modules, we covered it in this paper, followed by printable robots and automatic manufacturing. Many representative works were selected from the literature, and some of the implemented prototypes were discussed.

References
1. Faíña, A., Bellas, F., López-Peña, F., Duro, R.J.: EDHMoR: evolutionary designer of
heterogeneous modular robots. Eng. Appl. Artif. Intell. 26(10), 2408–2423 (2013)
2. White, P., Zykov, V., Bongard, J.C., Lipson, H.: Three dimensional stochastic reconfig-
uration of modular robots. In: Robotics: Science and Systems, pp. 161–168 (2005)
3. Fukuda, T., Kawauchi, Y.: Cellular robotic system (CEBOT) as one of the realization of
self-organizing intelligent universal manipulator. In: IEEE International Conference on
Robotics and Automation (ICRA 1990), pp. 662–667 (1990)
4. Yim, M.: A reconfigurable modular robot with many modes of locomotion. In: JSME
International Conference on Advanced Mechatronics, Tokyo, Japan (1993)
5. Yim, M., Duff, D., Roufas, K.: PolyBot: a modular reconfigurable robot. In: IEEE
International Conference on Robotics and Automation (ICRA 2000), pp. 514–520 (2000)
6. Golovinsky, A., Yim, M., Zhang, Y., Eldershaw, C., Duff, D.: PolyBot and PolyKinetic™
system: a modular robotic platform for education. In: Robotics and Automation (ICRA
2004), vol. 2, pp. 1381–1386 (2004)
7. Chiang, C., Chirikjian, G.: Similarity metrics with applications in modular robot motion
planning. Auton. Robots (special issue on Modular Reconfigurable Robots) 10(1), 91–106
(2001)
8. Rus, D., Vona, M.: Crystalline robots: self-reconfiguration with compressible units modules.
Auton. Robots (special issue on Modular Reconfigurable Robots) 10(1), 107–124 (2001)
9. Kotay, K., Rus, D.: Motion synthesis for the self-reconfiguring molecule. In: IEEE/RSJ
International Conference on Intelligent Robots and Systems, pp. 843–851 (1998)
10. Kotay, K., Rus, D., Vona, M., McGray, C.: The self-reconfiguring robotics molecule. In:
IEEE International Conference on Robotics and Automation (ICRA 1998), pp. 424–431
(1998)
11. Suh, J.W., Homans, S.B., Yim, M.: Telecubes: mechanical design of a module for
self-reconfigurable robotics. In: Proceedings of IEEE International Conference on Robotics
and Automation, vol. 4, no. 5, pp. 4095–4101 (2002)
12. Støy, K., Shen, W.-M., Will, P.: On the use of sensors in self reconfigurable robots. In:
Hallam, B., Floreano, D., Hallam, J., Hayes, G., Meyer, J.-A. (eds.) Proceedings of the
Seventh International Conference on the Simulation of Adaptive Behavior, pp. 48–57 (2002)
13. Castano, A., Behan, A., Will, P.: The CONRO modules for self reconfigurable robots. IEEE
Trans. Mechatron. 7(4), 403–409 (2002)
14. Zykov, V., Chan, A., Lipson, H.: Molecubes: an open-source modular robotic kit. In:
IROS-2007 Self-Reconfigurable Robotics Workshop (2007)
15. Zykov, V., Phelps, W., Lassabe, N., Lipson, H.: Molecubes extended: diversifying
capabilities of open-source modular robotics. In: IROS-2008 Self-Reconfigurable Robotics
Workshop (2008)
16. Bishop, J., Burden, S., Klavins, E., Kreisberg, R., Malone, W., Napp, N., Nguyen, T.:
Programmable parts: a demonstration of the grammatical approach to self-organization. In:
Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS), pp. 3684–3691 (2005)


17. Jones, C., Mataric, M.J.: From local to global behavior in intelligent self-assembly. In:
Proceedings of IEEE International Conference on Robotics and Automation (ICRA),
pp. 721–726 (2003)
18. Kelly, J., Zhang, H.: Combinatorial optimization of sensing for rule based planar distributed
assembly. In: Proceedings of IEEE International Conference on Intelligent Robots and
Systems, pp. 3728–3734 (2006)
19. Werfel, J.: Anthills built to order: automating construction with artificial swarms. Ph.D.
dissertation, MIT (2006)
20. White, P., Kopanski, K., Lipson, H.: Stochastic self-reconfigurable cellular robotics. In:
Proceedings of IEEE Conference on Robotics and Automation, April 2004, pp. 2888–2893
(2004)
21. Tolley, M., Hiller, J., Lipson, H.: Evolutionary design and assembly planning for stochastic
modular robots. In: Proceedings of IEEE Conference on Intelligent Robotics and Systems
(IROS), October 2009, pp. 73–78 (2009)
22. Bishop, J., Burden, S., Klavins, E., Kreisberg, R., Malone, W., Napp, N., Nguyen, T.:
Programmable parts: a demonstration of the grammatical approach to self-organization. In:
Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS), August 2005, pp. 3684–3691 (2005)
23. Napp, N., Burden, S., Klavins, E.: The statistical dynamics of programmed self-assembly.
In: Proceedings 2006 IEEE International Conference on Robotics and Automation (ICRA
2006), pp. 1469–1476 (2006)
24. Wei, H., Cai, Y., Li, H., Li, D., Wang, T.: Sambot: a self-assembly modular robot for swarm
robot. In: Robotics and Automation (ICRA), pp. 66–71 (2010)
25. Yim, M., Zhang, Y., Duff, D.: Modular robots. IEEE Spectr. 39(2), 30–34 (2002)
26. Yim, M., Shen, W.M., Salemi, B., Rus, D., Moll, M., Lipson, H., Chirikjian, G.S.: Modular
self-reconfigurable robot systems (grand challenges of robotics). Robot. Autom. Magaz. 14
(1), 43–52 (2007)
27. Murata, S., Yoshida, E., Kamimura, A., Kurokawa, H., Tomita, K., Kokaji, S.: M-TRAN:
self-reconfigurable modular robotic system. IEEE/ASME Trans. Mechatron. 7(4), 431–441
(2002)
28. Kurokawa, H., Kamimura, A., Yoshida, E., Tomita, K., Kokaji, S., Murata, S.: M-TRAN II:
metamorphosis from a four-legged walker to a caterpillar. In: Intelligent Robots and Systems
(IROS 2003), vol. 3, pp. 2454–2459 (2003)
29. Kurokawa, H., Tomita, K., Kamimura, A., Kokaji, S., Hasuo, T., Murata, S.: Distributed
self-reconfiguration of M-TRAN III modular robotic system. Int. J. Robot. Res. 2(3–4), 373–
386 (2008)
30. Salemi, B., Moll, M., Shen, W.M.: SUPERBOT: a deployable, multi-functional, and
modular self-reconfigurable robotic system. In: Intelligent Robots and Systems, pp. 3636–
3641 (2006)
31. Østergaard, E.H., Kassow, K., Beck, R., Lund, H.H.: Design of the ATRON lattice-based
self-reconfigurable robot. Auton. Robots 21(2), 165–183 (2006)
32. Wieser, S.: Locomotion in Modular Robotics: Roombot Module. Semester project,
Biologically Inspired Robotic Group (2008)
33. Murata, S., Kurokawa, H., Kokaji, S.: Self-assembling machine. In: Proceedings of the 1994
IEEE International Conference on Robotics and Automation, San Diego (1994)
34. Yoshida, E., Murata, S., Tomita, K., Kurokawa, H., Kokaji, S.: An experimental study on a
self-repairing modular machine. Robot. Auton. Syst. 29, 79–89 (1999)
35. Fitch, R., Rus, D., Vona, M.: A basis for self-repair robots using self-reconfiguring crystal
modules. Intell. Auton. Syst. 6, 903–910 (2000)


36. Lackner, S., Wendt, C.H.: Exponential growth of large self-reproducing machine systems.
Math. Comput. Model. 21(10), 55–81 (1995)
37. Ulam, S.: Random processes and transformations. In: Proceedings of the International
Congress of Mathematicians, vol. II, Cambridge, MA (1950)
38. von Neumann, J.: Theory of Self-Reproducing Automata. University of Illinois Press,
Urbana (1966). Edited and completed by A.W. Burks
39. Griffith, S., Goldwater, D., Jacobson, J.M.: Robotics: self-replication from random parts.
Nature 437, 636 (2005)
40. Zykov, V., Mytilinaios, E., Desnoyer, M., Lipson, H.: Evolved and designed
self-reproducing modular robotics. IEEE Trans. Robot. 23(2), 308–319 (2007)
41. Nolfi, S., Floreano, D.: Evolutionary Robotics – The Biology, Intelligence, and Technology
of Self-Organizing Machines. MIT Press, Cambridge (2000)
42. Lipson, H.: Evolutionary robotics and open-ended design automation. Biomimetics 17(9),
129–155 (2005)
43. Floreano, D., Husbands, P., Nolfi, S.: Evolutionary robotics. In: Siciliano, B., Khatib, O.
(eds.) Handbook of Robotics, pp. 1423–1451. Springer, Heidelberg (2007)
44. Bongard, J.C., Pfeifer, R.: Evolving complete agents using artificial ontogeny. In: Hara, F.,
Pfeifer, R. (eds.) Morpho-Functional Machines: The New Species, pp. 237–258. Springer,
Tokyo (2003)
45. Zykov, V., Bongard, J.C., Lipson, H.: Evolving dynamic gaits on a physical robot. In:
Proceedings of Genetic and Evolutionary Computation Conference (GECCO 2004) (2004)
46. Paul, C., Bongard, J.C.: The road less traveled: morphology in the optimization of biped
robot locomotion. In: Proceedings of the IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 2001), Hawaii, USA (2001)
47. Sims, K.: Evolving 3D morphology and behavior by competition. In: Artificial Life IV,
pp. 28–39 (1994)
48. Onal, C.D., Tolley, M.T., Wood, R.J., Rus, D.: Origami-Inspired Printed Robots (2011)
49. Lipson, H., Pollack, J.B.: Automatic design and manufacture of robotic lifeforms. Nature
406(6799), 974–978 (2000)

An Educational Physics Laboratory in Mobile Versus Room Scale Virtual Reality - A Comparative Study

Johanna Pirker1(B), Isabel Lesjak1, Mathias Parger1, and Christian Gütl1,2

1 Graz University of Technology, Graz, Austria
{jpirker,cguetl}@iicm.edu, {isabel.lesjak,parger}@student.tugraz.at
2 Curtin University, Perth, Western Australia

Abstract. Despite long-standing efforts in education, studying and understanding physical phenomena still proves to be a challenge for both learners and educators. However, with the current rise of Virtual Reality experiences, interactive immersive simulations in 3D are becoming a promising tool with great potential to enhance and support traditional classroom setups and experiences in an engaging and immersive way. This paper describes the evaluation of the physics laboratory Maroon presented on two distinct VR setups: first, a mobile, cost-efficient but simpler VR experience with the Samsung Gear VR and, second, a more interactive room scale experience with the HTC Vive. First results of both preliminary empirical studies indicate that the Vive environment increases user interactivity and engagement, whereas the Gear setup benefits from portability and better flexibility. In this paper we discuss device-specific design aspects and provide a comparison focusing on aspects such as immersion, engagement, presence, and motivation.

Keywords: Virtual reality · Immersion · Physics education

1 Introduction
The improvement of science education is still a topic under frequent discussion in the world today. In physics education in particular, the situation is twofold: many teachers are challenged in teaching concepts to an increasing number of students, who in turn often face issues themselves in trying to understand the concepts taught while linking theoretical formulas to natural phenomena. Engaging and interesting students in this increasingly relevant issue in our educational system is thus a matter of the utmost importance. Emerging technologies, such as virtual simulations and laboratories in VR, provide novel ways to engage and interest students in class while at the same time giving educators more possibilities to create and improve classroom experiences.
Simulations and dynamic visualizations can be used to make invisible con-
cepts visible, stretch time and space, and conduct dangerous or even impossible
experiments [2,14]. While earlier studies suggest that the use of simulations can

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_95


enhance the understanding of such conceptual topics [1,3,8,21], achieving student engagement, enthusiasm, and curiosity is still challenging. Gamified and
interactive laboratory experiences as a tool have been shown to increase learning
outcomes in an engaging way compared to traditional methods [4,16]. Especially
the current interest in VR technologies, in particular cost-effective versions such
as mobile virtual reality head mounted displays (HMD) (e.g. Samsung Gear VR,
Google Cardboard), can open up new possibilities of engaging in-class learn-
ing and remote learning. Additionally, rather expensive current state-of-the-art
devices such as HTC Vive provide room scale technologies to support and enable
fully immersive experiences in VR, which might be particularly useful for all
kinds of educational scenarios, which require a more immersive, interactive, and
hands-on exploration of learning environments.
The objective of the research described in this paper is to propose an immer-
sive and engaging form of physics education, which combines effective physics
simulations with an engaging and interactive virtual reality experience and also
to compare the potential of mobile VR technologies with room scale experiences
in order to provide recommendations for use cases. Preliminary qualitative tests of the environment were performed with a small group of students to evaluate the effectiveness of the environment itself in engaging students, to check its learning potential, and to assess the usability of such mobile VR systems, which support only interaction via gaze or taps on the HMD, compared to more advanced systems with additional hardware in a room scale environment. The final aim of
this research is to evaluate such experiences as opportunities for greater student
engagement in learning physics.
With this work we aim to discuss the potential of mobile and room scale VR headsets by making the following contributions:

1. A design and implementation description of a physics laboratory for a mobile and a room scale VR experience
2. Two case studies examining differences in immersion, usability, and engagement, and discussing benefits, issues, and interesting use cases of both implementations

In the following sections we first discuss related work on STEM education with a focus on virtual reality experiences. After that we briefly describe Maroon, the virtual laboratory developed for the experiments. In Sect. 4 two different user studies on mobile and room-scale experiences in VR are presented.

2 Related Work
Designing STEM education in an interesting and engaging manner still repre-
sents a challenge. One successful pedagogical method for teaching practice in
regular classrooms is “active learning”. In this method, students not only listen
passively to the concepts, but they are also directly involved in the learning
process. This has been shown to be an effective strategy for increasing the stu-
dents performance compared to traditional methods [10,15]. In physics education


a crucial element of the learning process is understanding various phenomena. In active learning approaches to physics education, one way to teach abstract concepts is to let students interact with these concepts through computer-based visualizations or animations, which make unseen phenomena visible and also allow small experiments [9,14]. Simulations have been shown by Wieman and Perkins to be more effective, safe, and cost-efficient than traditional experiments [20].
Other successful virtual teaching methods include physics laboratories in digital form: virtual or remote laboratories facilitate conducting dangerous, expensive, or even impossible experiments [6]. Such tools, as part of an educational model in either a remote or an in-class setup, can make learning physics more effective, interesting, and engaging [20].
However, while these environments are often a successful learning tool, they often fail to engage students and convince them of the "fun" elements of this field.
In a large-scale study with 306 participants, Corter et al. [6] examined the learning outcomes and student preferences for hands-on, remote, and simulated laboratories and found that learning outcomes after performing remote or simulated labs were as high as or higher than those of hands-on labs. Students rated virtual labs as more convenient and reliable, but would prefer hands-on experiences. The feeling of physical presence in a lab was still rated as an important factor of engaging laboratory experiences. In [12] the authors investigate various educational efforts in learning labs and conclude that such "alternative access modes must be considered pedagogical alternatives, rather than simply logistical conveniences" and point out the importance of a focus on pedagogical and interaction design. Especially in different VR environments, emotions and activities are perceived in a different way, and it is crucial to consider different design aspects for the various VR technologies [19].
A playful form of virtual laboratories has been tested in the field of biotech education by Bonde et al. [4]. They tested a laboratory designed with gamification elements and found that this form of environment significantly increased the students' learning outcomes and their performance compared with traditional teaching. Another form of more interactive and engaging learning in such a virtual physics environment is described in [18]. The authors describe a collaborative setup for physics education, where students are able to work together on experiments and discuss simulations. In a study it was shown that the collaborative aspect was rated as important; however, engagement and immersion are subject to improvement. One way to improve the interface with engaging elements is the use of gamification. In [17] the authors describe simulation design with such game-based design tools.
In many of the environments discussed above, a lack of immersion and engagement was noted. However, in this digital and playful time, engagement, immersion, or even flow [7] are described ever more frequently as factors for creating interesting experiences. Immersion can be described as the feeling of being part of the experience [5]. There is an ongoing discussion about the professional reality in remote and virtual laboratory experiences [13]. Adding immersion as a main concept to the learning experience could be used to add new ways to
create professional and interesting working and learning environments. The use
of virtual reality headsets and technologies is a promising way to create a more
immersive, engaging, and interactive environment. With the current efforts to
produce VR headsets which are affordable for private users (e.g. PlayStation
VR, Samsung Gear VR, HTC Vive), VR is also becoming more attractive as
a tool to enhance classroom experiences. Several studies have looked into the
potential of virtual reality (VR) for educational scenarios.
In this paper, we introduce Maroon, an interactive immersive physics labo-
ratory, integrated with (1) the interactive virtual reality technology HTC Vive,
supporting in-room movement and a two controller setup and (2) a mobile setup
with the Samsung Gear VR.

3 Maroon - The Immersive Physics Laboratory


The immersive physics laboratory Maroon (see Figs. 1 and 2) was designed as a reduced and simplified showcase of an interactive educational physics laboratory with a subset of educational experiments, in order to evaluate usability and user experience in VR and to measure factors such as engagement, immersion, and learning progress. With the Samsung Gear VR and the HTC Vive, we selected two very different, in-demand, state-of-the-art VR devices on which to base the comparative evaluation, investigating on the one hand a mobile virtual reality experience (Maroon Mobile VR) and on the other a fully immersive and interactive room scale VR experience (Maroon Room Scale VR).

Fig. 1. Lab overview

3.1 The Design


As our research on Maroon includes two studies, the development of the labo-
ratory was also done in two stages. In a first step, a prototype was developed
in Unity3D.1 Unity supports stereoscopic rendering for different VR devices,
1 http://www.unity3d.com.


Fig. 2. Van de Graaff Experiment

including the Samsung Gear VR. For the HTC Vive, the official SteamVR2 plu-
gin and framework was used. This lab prototype was the design basis for the two
VR variants. Of the six simulations originally implemented in the context of electromagnetism for this setup, only two were integrated in this initial prototype for the study.
In our setup, users can experience VR in two distinct ways on two conceptu-
ally different devices: either through a mobile, more light-weight setup (Samsung
Gear VR, using the Samsung Galaxy S6) or a more graphically rich, advanced
room scale system tracking both HMD and controllers (HTC Vive using two con-
trollers). In particular for user interaction, navigation, manipulation and selec-
tion of UI elements with the virtual world, two different design approaches were
chosen, considering various limitations and the different design of these two VR
devices.
The version of the immersive physics lab Maroon as introduced is designed
to support both mobile VR systems such as Google Cardboard or Samsung Gear
VR3 running on mobile phones as well as more advanced setups with roomscale
VR such as the HTC Vive.4 The designed interaction with the environment
and the experiments is mostly performed through gaze for the Samsung Gear
VR and via controllers for the HTC Vive. Samsung Gear VR additionally pro-
vides possibilities to interact through touch and slide input, whereas the HTC
Vive benefits from several buttons on both its tracked controllers which can be
specifically programmed and also visually adapted for individual user actions.
The navigation designs for the two VR alternatives are discussed in more detail
in the following.

Navigation Design in Mobile VR. Given the Samsung Gear VR system with
the smartphone inserted into a head-mounted gear, a real-life like user experience
is achieved through a combination of eye gaze, a virtual avatar and a touchpad
2 http://store.steampowered.com/steamvr.
3 http://www.samsung.com/global/galaxy/gear-vr/.
4 https://www.htcvive.com/.


mounted on the side of the device, with user actions such as double tap, long
press and swipe to rotate. Here, the user controls are mostly designed for gaze
and tap interactions. An avatar (see Fig. 1) is controlled with a gaze point to
move through the laboratory. The avatar is always placed on the gaze point -
the center of the screen - and can be moved by moving the gaze. Simulations
can be started by moving the gaze cursor to the interaction button. Movement is
designed as teleporting the avatar to different locations. Sliding (only supported
by Samsung Gear VR) can be used optionally to rotate the character or to move
specific controls (sliders) of experiments.
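The gaze-and-tap navigation described above can be summarized by the following engine-agnostic sketch; the actual Maroon implementation is built in Unity, so the Python form, the flat floor at y = 0, and all names here are illustrative assumptions.

# Engine-agnostic sketch of gaze-and-tap navigation: the gaze ray through the
# screen centre is intersected with the floor, the avatar marker follows that
# point, and a tap teleports the user there.

def gaze_point_on_floor(cam_pos, gaze_dir, floor_y=0.0):
    """Intersect the gaze ray (centre of the screen) with the floor plane."""
    px, py, pz = cam_pos
    dx, dy, dz = gaze_dir
    if dy >= 0:                       # looking up or parallel: no floor hit
        return None
    t = (floor_y - py) / dy           # ray parameter where the ray meets y = floor_y
    return (px + t * dx, floor_y, pz + t * dz)

class GazeNavigator:
    def __init__(self, start=(0.0, 0.0, 0.0)):
        self.avatar = start           # marker shown at the current gaze point
        self.player = start           # actual user position in the lab

    def update(self, cam_pos, gaze_dir, tapped):
        target = gaze_point_on_floor(cam_pos, gaze_dir)
        if target is not None:
            self.avatar = target      # avatar follows the gaze cursor
            if tapped:                # a tap on the HMD touchpad teleports there
                self.player = target

nav = GazeNavigator()
nav.update(cam_pos=(0.0, 1.7, 0.0), gaze_dir=(0.0, -0.5, 1.0), tapped=True)
print(nav.player)   # (0.0, 0.0, 3.4): teleported to the gazed-at floor spot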

Navigation Design in Room Scale VR. In contrast, the HTC Vive sys-
tem consists of a larger HMD connected to the PC as well as two additional
controllers, which include a highly-sensitive touchpad and individually program-
mable buttons with haptic feedback for improved user interaction within virtual
worlds. Each hardware element in the Vive setup is tracked by two base stations
named lighthouses, thus eliminating the need for an avatar and further enabling
the user to move around freely for a more immersive room-scale VR experience.
Simulations are started by entering a portal-like object through a button press on the controllers. Movement, as in teleporting, is achieved by pressing the touchpad on one of the controllers, which then acts like a pointer: the user aims at the preferred target, and a precise colored beam is displayed for visual orientation.
Concerning the experimental setup, the main difference between the implementations for the Samsung Gear VR and the HTC Vive was the addition of interactable objects in the HTC Vive version and its lack of a virtual avatar, which was instead implemented in the Samsung Gear version for better usability. By using several programmable controller buttons as well as touchpad presses, HTC Vive users benefit from further real-life-like interaction possibilities. A virtual avatar was not necessary here, since users carry both the HMD and the controllers, which are tracked by the lighthouse system.

Interactivities in the Lab. The main interactivities integrated into the experimental immersive setup for the study are as follows: a virtual laboratory room with different “stations” containing experiments or interactive activities (see Fig. 1), two experiments with a Van de Graaff generator [11] which, combined with a balloon or a grounding device respectively, simulate electric fields while visualizing field lines as well as a display of voltage and charge (see Fig. 2), and interactions with the controllers or the touchpad such as starting the experiment or teleporting. While the HTC Vive to some extent supports movement in the real room, the laboratory was designed as a large-room experience; thus a teleporting functionality was necessary on both devices to reach all stations. Based on these interactivities, three to five main educational experiences were included in our study setup of the virtual physics laboratory: (1) an experiment with a Van de Graaff generator and a balloon, where charges, electric fields, and field lines can be visualized, (2) another experiment with a Van de Graaff generator and a movable grounding device, where charges, electric fields, and field lines are visualized (see Fig. 2), and (3) a whiteboard with information and labeled pictures to explain the theory behind the Van de Graaff experiments. In order to showcase the manifold possibilities of user interaction with virtual objects using controller mechanisms, the HTC Vive version of this station additionally features an interactive playground with different textured objects such as throwable and grabbable cubes and metal balls. (4, HTC Vive only) A triboelectric experiment with two rods and one balloon, as well as a miniature version of the previous Van de Graaff experiment, was only fully implemented for the HTC Vive test setup. Hence, to achieve more diversity in our experimental setting, this specific station was replaced in the Samsung Gear VR version by a station featuring a laptop with an interactive, feedback-supported quiz session to test the theoretical knowledge users should have gained through their practical hands-on walk-through of Maroon Mobile VR. (5, optional) Additionally, an accurate model of a Tesla transformer can be found by users as a hidden “easter egg” by further exploring the virtual laboratory world.
In our research, these two conceptually different VR setups provide the frame for our implementation of the interactive immersive physics laboratory. Ultimately, the goal in developing these simulations is to let users act more or less the same way they would if placed in a real-life physics laboratory. As of now, users are - to some extent - able to immerse themselves in this world while being shielded from (visual) influences of their actual physical surroundings. As such, immersive 3D has been shown to be a beneficial aid for presenting difficult concepts in physics, such as the effect of switching a Van de Graaff generator on and off.
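For reference, the quantities visualized around the generator dome follow standard electrostatics (textbook relations, not anything specific to the Maroon implementation): for a dome of radius R carrying charge Q,

E(r) = \frac{Q}{4\pi\varepsilon_0 r^{2}} \quad (r \ge R), \qquad V(R) = \frac{Q}{4\pi\varepsilon_0 R},

so switching the generator on and off corresponds to the visualized field lines appearing and decaying as Q changes.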

4 User Studies
We performed two preliminary user studies with a total of 17 participants to
evaluate the system and the experience. In a first study (with 9 participants) we
focused on testing Maroon with the mobile setup only. In the second study
(with 8 participants) we focused on evaluating (1) engagement, (2) immersion,
(3) learning experience, (4) virtual reality experience, and (5) usability and user
experience in comparison to a more interactive VR experience with the HTC
VIVE.

4.1 Material and Setup

The VR setup for the Samsung Gear VR consists of the following hardware components: the mobile HMD and a Samsung Galaxy S6 smartphone. Figure 3a shows the Samsung Gear VR with the attached mobile phone. The setup for the HTC Vive contains the HMD itself, cables, and two base stations as well as two controllers. For the room-scale setup, we provided an area of about 2 m × 2 m. Furthermore, a powerful high-end PC is necessary. A mobile VR setup was chosen in order to support a widely accessible and cost-effective way to interact with the laboratory, which could be used in classroom environments (e.g. guided by an instructor) or for self-regulated learning at home.

Fig. 3. Samsung Gear VR and HTC Vive setup

4.2 Method and Procedure

For the first study with the Samsung Gear VR, we first asked the participants to fill out a pre-questionnaire. The pre-questionnaire was used to collect information about the participants' experience with virtual experiences and VR technologies, and their expertise in physics. They were then introduced briefly to the system. After this, they were asked to use Maroon Mobile VR with the Samsung Gear VR. After the experience, the participants briefly described their impressions in the form of an open dialog. Finally, they were asked to complete a post-questionnaire with 10 open-ended questions on the experience and 20 single-choice questions with ratings on a Likert scale between 1 (fully disagree) and 7 (fully agree).
In the other extended study with both devices, participants were required to
fill out a short pre-questionnaire with standard personal background information,
followed by a brief introduction to the experimental setup. The main goal was then
to complete consecutive tasks in the immersive lab Maroon, which were announced
by the study moderator during the test run. Since we examined the differences and
similarities of both devices, our eight test subjects were divided into two sepa-
rate groups of four persons each for the purpose of AB/BA testing, where users
test both devices in reverse order (specifically, four users tested the Vive first,
whereas the other four tested the Samsung Gear VR first). After each single run,
users completed a corresponding post-questionnaire containing 19 standardized
questions from the Game Engagement Questionnaire (GEQ, [5]) to measure the
level of engagement based on absorption, flow, presence, and immersion, as well
as ten open-ended questions on the experience and 20 single-choice questions with
ratings on a Likert scale between 1 (fully disagree) and 7 (fully agree). For a com-
parative evaluation, all subjects had to complete a “combined” post-questionnaire
with open-ended questions about their experience on both devices at the end of
the experiment (Fig. 4).
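To illustrate how such Likert-based questionnaire data can be aggregated, the following minimal Python sketch averages ratings per GEQ subscale. The item-to-subscale mapping shown is purely illustrative and is not the actual assignment from [5], nor the authors' analysis script.

```python
# Minimal sketch (not the authors' analysis script) of aggregating Likert
# ratings per GEQ subscale. The item-to-subscale mapping is illustrative only.
from statistics import mean, stdev

SUBSCALES = {                     # hypothetical item numbers per subscale
    "absorption": [1, 2, 3, 4, 5],
    "flow": [6, 7, 8, 9, 10, 11, 12, 13, 14],
    "presence": [15, 16, 17, 18],
    "immersion": [19],
}

def subscale_scores(responses):
    """responses: dict mapping item number -> Likert rating (1..7)."""
    return {name: mean(responses[i] for i in items)
            for name, items in SUBSCALES.items()}

def group_stats(all_responses, subscale):
    """Mean and standard deviation of one subscale over all participants."""
    scores = [subscale_scores(r)[subscale] for r in all_responses]
    return mean(scores), stdev(scores)
```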

Fig. 4. Survey results of experience with Maroon Mobile VR between 1 (not at all) and 7 (fully agree) in Gear VR

4.3 Participants

Experiment 1. In the first study 9 students (2 female) between 23 and 27 years old (AVG
= 24.78; SD = 1.47) tested Maroon Mobile VR. All students were in the field
of computer science or electrical engineering and rated their experience with
computers very high. On a Likert scale between 1 (not at all) and 5 (fully agree),
6 students also rated themselves as very experienced in the usage of video games
(AVG = 4.11; SD = 1.17), and 8 like playing video games. All of them rated themselves
as not very experienced in the usage of VR (AVG = 1.78; SD = 0.97). 7 had heard
of mobile VR devices before, 4 had used Google Cardboard and 5 the Samsung
Gear VR. Ratings of their physics expertise were very mixed (AVG =
2.89; SD = 1.05).

Experiment 2. In the second study 8 participants (1 female) were asked to test the
mobile (Maroon Mobile VR) and the interactive physics lab (Maroon Room
Scale VR). 7 were very experienced in the use of computers (AVG = 4.38; SD = 1.41),
only 2 in the usage of video games (AVG = 3; SD = 1.2), and only 1 in VR (AVG =
2.25; SD = 1.39). 4 had used a mobile VR setup before, and nobody had used the HTC Vive. 7
rated their physics knowledge at 3 or below (AVG = 2.63; SD = 0.92).
In the following sections we discuss different aspects of the outcomes of the
post-questionnaires and the interviews. The individual aspects will be mainly
described by including outcomes of the questionnaire and direct quotes describ-
ing the students’ impressions and experiences. An overview of the results can also
be found in Fig. 1.

4.4 Experiencing Immersion and Engagement

Most of the participants said they found learning in this manner more engaging
(AVG = 6.67; SD = 0.82) and fun (AVG = 6.33; SD = 0.82). When asked if
they found it engaging and motivating, most of them agreed: “very motivating
way of demonstrating stuff”. The lack of content and variety was mentioned as a
drawback here: “Not yet, but I can see how the concept would be engaging once
more variety exists.” When asked what they liked about the system, the immersive
and three-dimensional characteristics were mentioned in particular: “Immersion
makes me remember stuff better”. The VR experience was received very positively
and described as very immersive. In the second part of the study we compared
presence, absorption, flow, and immersion between the interactive VR experience
(with the HTC Vive setup) and the mobile setup. As seen in Fig. 6, the interactive
version achieves only slightly better results in all 4 categories.

4.5 Experiencing Learning

In the first part of the study, on a Likert scale between 1 (not at all) and 7
(fully agree) most of the people questioned said they would like to learn with
Maroon Mobile VR (AVG = 5.33; SD = 1.51) and felt that the content is easier
to understand (AVG = 5.67; SD = 1.21) and more motivating than ordinary exercises
(AVG = 6.0; SD = 0.89). However, the environment inspired only a few to learn more
about physics (AVG = 3.17; SD = 1.33). When we asked them if they would use it
for learning, all but one of the participants were positive about this idea. Many
positive comments mentioned the experimentation and visualization of usually
unseen things: “I would use it immediately for my mechanical engineering stud-
ies, because it is an advantage to see and rotate the machines in a 3D space;
also it can be an advantage when learning about dangerous machines: one can
still see everything without a distance”. It was also mentioned that they would
like the immersive lab as a supplement for learning (AVG = 6.16; SD = 0.98). The students
of the evaluation group would rather use Mobile VR in a classroom
environment (AVG = 5.33; SD = 1.86) than at home (AVG = 4.5; SD = 1.87). “There are a
few elements missing that would produce a good learning environment for me.
The first thing are explanations. If someone learns about the illustrated concepts
beforehand (maybe in a class), the game could certainly help with that, but it
is far from a standalone learning tool right now”.

Fig. 5. Comparison of survey results of experience with Maroon VR between 1 (not at all) and 7 (fully agree) between experiment 1 with Gear VR and experiment 2 with both devices

Concerns about using this system for learning included the topic choice (“It’s good for demonstrating something,
maybe not as good for learning facts etc., because you can’t for example take
notes etc.”) and additional overhead. The VR aspect was very well received for
learning. Participants thought it was engaging to see the physics simulations
with the VR glasses (AVG = 6.5; SD = 0.55) and also a bit more engaging than with-
out VR (AVG = 5.83; SD = 0.98): “learning with VR is gonna be awesome and I never
thought about what happens to a balloon if we place him between a Tesla-coil and
a grounder. Funny”.
In Fig. 5 we compare the above mentioned results with the results of the
second part of the study. Again, we can see that the interactive VR experiences
achieved slightly better results compared to the mobile experience. However, in
the first experiment with the mobile device only, the results were slightly better
for the mobile setup compared to the second experiment. This could be explained
by a bias introduced through the interaction with the HTC Vive setup.

Fig. 6. Comparison of GEQ

4.6 Experiencing Usability and User Experience


While some of the people had no issues with the controls and the interface, oth-
ers had problems here, especially with learning the movements. Minor usability
issues were mentioned. These included in particular the unusual movement (tele-
porting instead of walking; how to turn the avatar) and interactions (e.g. clicking
twice on the door to exit a simulation instead of just once). “Moving in the envi-
ronment was not very intuitive, but worked well. The UI was not very hard to
figure out.” Additionally, the idea to give more feedback on interaction possi-
bilities was mentioned: “I wished for some visual feedback on what’s clickable.
I wasn’t sure what I can click and what not so I clicked around quite a lot”.

4.7 Concerns and Improvements


Concerns and ideas for improvement were mainly in the areas of usability and
controls, the graphical interface, and the amount of content. The low resolution of the VR
experience was mentioned as a drawback by some students. “If the target group
is Cardboard users then theres not much to improve graphics wise I think. Maybe
having a narrator voice explaining things or physic concepts to the user would
be nice.” Several participants mentioned that they would like to see more exper-
iments and simulations in the world and that the lab still looks very empty “I
think one can learn a lot, however, many experiments or models are required for
that”, “It was nice, a bit empty, not very realistic looking, but nice.” “the VR
technology itself needs to be improved. Higher resolution and lenses will make
a huge difference. The game it self was, except of some teething troubles, well
done. The controls are good, maybe improvable with a controller. But all in all I
liked it.”
The study was designed to gain insights to improve the current prototype with
a focus on engagement, immersion, and learning outcomes. The first study only
focused on testing the mobile experience and was also used to evaluate the VR
study design for the second study. Based on these findings, the prototype will be
updated and a large-scale study with more participants will be designed.

5 Discussion and Conclusion


In this paper we have described an immersive learning environment for physics
education based on interactive physics simulations. First results report very pos-
itive experiences with the environment. The immersive physics laboratory was
described as a very engaging experience, which participants would be in favor of
using for learning and which they found more engaging and also more effective compared
to traditional learning scenarios. The participants would recommend the use of
such tools rather as supplement to traditional in-classroom learning experiences
than as a stand-alone tool for self-regulated learning at home.
The results suggest that such interactive and immersive experiences have the
potential to become an integral part of future learning. The use of VR devices as
learning devices can change guided classroom learning and self-regulated learning
at home. More specifically, our first results also suggest that interactable objects
such as a balloon or a ball placed in the virtual world can actually enhance the
feeling of total immersion for users.
Due to the mobility and cost-efficiency of the mobile VR setup (Maroon
Mobile VR), this form of VR lab can be used to extend the classroom learning
with small in-class exercises as part of active learning strategies. In an application
scenario, all students could use it at the same time, while the teacher makes
remarks and talks about the concept. It could also create a new way of making
remote learning exercises more interesting. The room scale setup (Maroon Room
Scale VR) was experienced as more immersive, but it requires a lot of space,
and due to the hardware requirements it is very cost-intensive and only
one student can use it at a time. Thus, this setup could be used as part of a
self-directed learning room for students to learn after class.
We have described preliminary tests on a first simplified prototype of the
laboratory with several simulations. The lack of further simulations and inter-
action possibilities was mentioned by the participants and influenced the
study results. To fully explore the potential of such environments we are cur-
rently extending the laboratory with other forms of simulations with different
educational goals. Additionally, we are planning to study further the effects on
learning of the VR experience of the laboratory, also in comparison to the same
desktop experience.

Acknowledgment. We would like to thank John Winston Belcher from the Depart-
ment of Physics, Massachusetts Institute of Technology, for supporting this research.
The Maroon project is a research project at Graz University of Technology:
gamelabgraz.com/maroon/. We thank all people who are and were involved in the
development process.

References
1. Adams, W.K., Reid, S., LeMaster, R., McKagan, S.B., Perkins, K.K., Dubson, M.,
Wieman, C.E.: A study of educational simulations part 1-engagement and learning.
J. Interact. Learn. Res. 19(3), 397 (2008)
2. Aldrich, C.: Learning Online with Games, Simulations, and Virtual Worlds: Strate-
gies for Online Instruction, vol. 23. Wiley, San Francisco (2009)
3. Bell, R.L., Smetana, L.K.: Using computer simulations to enhance science teaching
and learning. Natl. Sci. Teachers Assoc. 3, 23–32 (2008)
4. Bonde, M.T., Makransky, G., Wandall, J., Larsen, M.V., Morsing, M., Jarmer, H.,
Sommer, M.O.: Improving biotech education through gamified laboratory simula-
tions. Nat. Biotechnol. 32(7), 694–697 (2014)
5. Brockmyer, J.H., Fox, C.M., Curtiss, K.A., McBroom, E., Burkhart, K.M.,
Pidruzny, J.N.: The development of the game engagement questionnaire: a measure
of engagement in video game-playing. J. Exp. Soc. Psychol. 45(4), 624–634 (2009)
6. Corter, J.E., Nickerson, J.V., Esche, S.K., Chassapis, C., Im, S., Ma, J.: Construct-
ing reality: a study of remote, hands-on, and simulated laboratories. ACM Trans.
Comput.-Hum. Interact. (TOCHI) 14(2), 7 (2007)
7. Csikszentmihalyi, M., Csikszentmihalyi, I.S.: Optimal Experience: Psychological
Studies of Flow in Consciousness. Cambridge University Press, Cambridge (1992)
8. Dori, Y.J., Belcher, J.: How does technology-enabled active learning affect under-
graduate students’ understanding of electromagnetism concepts? J. Learn. Sci.
14(2), 243–279 (2005)
9. Dori, Y.J., Hult, E., Breslow, L., Belcher, J.W.: How much have they retained?
Making unseen concepts seen in a freshman electromagnetism course at MIT. J.
Sci. Educ. Technol. 16(4), 299–323 (2007)
10. Freeman, S., Eddy, S.L., McDonough, M., Smith, M.K., Okoroafor, N., Jordt, H.,
Wenderoth, M.P.: Active learning increases student performance in science, engi-
neering, and mathematics. Proc. Natl. Acad. Sci. 111(23), 8410–8415 (2014)
11. OMICS International: Van de Graaff Generator (2014). http://research.omicsgroup.org/index.php/Van_de_Graaff_generator
12. Lindsay, E., Good, M.: Virtual and distance experiments: pedagogical alternatives,
not logistical alternatives. In: American Society for Engineering Education, pp. 19–
21 (2006)
13. Lowe, D., Murray, S., Lindsay, E., Liu, D., Bright, C.: Reflecting professional reality
in remote laboratory experiences. In: Proceedings of International Conference on
Remote Engineering and Virtual Instrumentation (REV 2008) (2008)

zamfira@unitbv.ro
An Educational Physics Laboratory 1043

14. Lunce, L.M.: Simulations: bringing the benefits of situated learning to the tradi-
tional classroom. J. Appl. Educ. Technol. 3(1), 37–45 (2006)
15. Olson, S., Riordan, D.G.: Engage to excel: producing one million additional col-
lege graduates with degrees in science, technology, engineering, and mathematics.
Report to the president, Executive Office of the President (2012)
16. Pirker, J., Berger, S., Guetl, C., Belcher, J., Bailey, P.H.: Understanding physical
concepts using an immersive virtual learning environment. In: Proceedings of the
2nd European Immersive Education Summit, Paris, pp. 183–191 (2012)
17. Pirker, J., Gütl, C.: Educational gamified science simulations. In: Gamification in
Education and Business, pp. 253–275. Springer (2015)
18. Pirker, J., Gütl, C., Belcher, J.W., Bailey, P.H.: Design and evaluation of a learner-
centric immersive virtual learning environment for physics education. In: Human
factors in computing and informatics, pp. 551–561. Springer (2013)
19. Settgast, V., Pirker, J., Lontschar, S., Maggale, S., Gütl, C.: Evaluating experiences
in different virtual reality setups. In: International Conference on Entertainment
Computing, pp. 115–125. Springer (2016)
20. Wieman, C., Perkins, K.: Transforming physics education. Phys. Today 58(11), 36
(2005)
21. Windschitl, M.A.: Using computer simulations to enhance conceptual change: the
roles of constructivist instruction and student epistemological beliefs (1995)

Human Interaction Lab: All-Encompassing Computing
Applied to Emotions in Education

Hector Fernand Gomez Alvarado1(✉), Judith Nunez-R1, Luis Alberto Soria2,
Roberto Jacome-G3, Elena Malo-M4, and Claudia Cartuche4

1 Facultad de Ciencias Humanas y de la Educacion, Universidad Tecnica de Ambato,
Campus Huachi, Ambato, Ecuador
{hf.gomez,judithdnunezr}@uta.edu.ec
2 Facultad de Ingeniería Civil y Arquitectura, Universidad Catolica del Ecuador, Quito, Ecuador
lsoria089@puce.edu.ec
3 Facultad de Ingeniería en Sistemas, Universidad Nacional de Loja, Loja, Ecuador
rjacome@unl.edu.ec
4 Facultad de Arquitectura y Diseño, Universidad Tecnica Particular de Loja, Loja, Ecuador
{semalo,cpcartuche}@utpl.edu.ec

Abstract. Emotion analysis is a key variable in textual analysis, namely that
which focuses on detecting, separating or extracting information related to human
attitudes and feelings, such as opinions or value judgments. This research paper
describes a human interaction lab built to identify human emotions in the classroom.
Although this research is currently only in its developmental stage, it provides some
conclusions about the state of the art as well as details of future works that will be
carried out in the field of lab emotion analysis in an online environment.

1 Introduction

Today, working with human behavior is vitally important, especially if we consider the
impact neuroscience has had on our understanding of learning processes. The idea is to
discover the part of behavior that is essential in organizing the learning that occurs in the
brain, where emotions become involved. In this paper we analyze emotions in relation
to the opinions students have about the work teachers give them. We then try to determine
whether that work, along with tests, is an appropriate tool for evaluating output or
whether it is simply something students have to comply with as part of their curriculum.
We center our study on facial expressions and writing as they are deposits of positive
and negative emotions. In fact, we can firmly say that a camera, depending on the
algorithm used, can capture more information than the human eye. By following a specific
process, a camera directed at a student’s face can help us identify what students think
about the work they have at hand. This is where the neuroscience lab comes into play.
It is a place dedicated to identifying emotional human behavior through the signals
received from cameras located in the Faculty of Human Sciences and Education.
The scientific background that supports our work includes the following:


Understanding student progress in learning situations helps us identify contexts and
adapt environments to their needs. Identifying emotions in participants’ faces is possible
in this research through a process that involves various steps. The first is to obtain
information about the current expression of the face. This includes detection of features
such as the eyes, eyebrows and mouth. We can do this by using feature point tracking,
optical flow fields, neural networks, distortion outlines or imaging by differences, which
have been presented in many research works in which emotional analyses were obtained.
The second step is to interpret this feature information by associating it with an emotion.
Park and Lee (2008) suggest using their system that interprets gestures, including facial
expressions. The disadvantage is that an emotion can only be interpreted if the facial
expression of emotion starts from a neutral expression. It is also possible to detect
emotions and polarity in a text. The information found on forums, better known as
“content generated by the user”, is generally presented as free or unstructured text. Its
analysis requires advanced techniques in natural language processing. This has given way
to emotion and opinion analysis, a task undertaken by natural language processing that
identifies opinions related to an object (Liu 2011). In education, more research is being
centered on emotion analysis since opinions on different tools or activities can be
measured according to user criteria. Lexical tools and algorithms can be used to determine
the degree to which educational resources are acceptable and relevant in a target area
(Rodriguez et al. 2014). We work with a software and hardware structure assembled for
the identification of the emotions of the participants of an experiment. As the fundamental
contribution of our proposal, we begin with the state of the art, where we establish the
importance of studying emotions in educational environments; we then propose the
physical structure and software, and finally present some experiments and the conclusions
of our work.
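As an illustration of the first step described above (locating the face and coarse features such as the eyes in a camera frame), the following minimal Python sketch uses OpenCV's bundled Haar cascades. It is only a generic example of the approach, not the prototype software actually installed in the lab.

```python
# Minimal sketch of face and eye localization in a camera frame with OpenCV.
# This is a generic illustration, not the lab's actual prototype software.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_features(frame):
    """Return, for each detected face, its bounding box and any detected eyes."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        results.append({"face": (x, y, w, h),
                        "eyes": eyes.tolist() if len(eyes) else []})
    return results
```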

2 State of the Art

The following theoretical entries allowed us to determine the efficiency of open
education resources (OER, www.oercommons.org) used by a teacher in a specific subject area. We were able to use
them to measure their emotional impact and polarity right when they were being imple‐
mented in the classroom. This allowed us to obtain original answers from the students
with regards to the material handed out by the teacher. There is a direct estimated distri‐
bution for teacher satisfaction regarding the academic performance of the students
(Marcenaro-Gutierrez et al. 2015). In fact, the distribution of academic performance can
be checked by contrasting the teacher’s interest with his students’ opinions about race
(Hanushek and Rivkin 2009). We maintain that this proposal is feasible if we want to
determine whether teaching and learning performance indicators are appropriate in
measuring the academic performance of a student. There are several methodologies that
utilize traditional techniques for Natural Language Processing (NLP) together with
sentiment analysis processes and Semantic Web technologies. Their main objectives are
to improve results for opinion mining, which are based on specific characteristics, e.g.
by employing ontologies for the selection of techniques as well as providing a new
method for sentiment analysis that is based on vector analysis. It is comprised of four
main modules: the Natural Language Processing module (NLP), the module for the
identification of characteristics based on ontologies, the module for the identification of
polarity, and the module for opinion-based data mining (Peñalver Martínez et al. 2011).
In (Pang and Lee 2004), Pang and Lee propose a novel machine learning method that
applies techniques for the categorization of text, namely its subjective contents. The
extraction of these subjective parts is achieved by using efficient techniques to determine
minimum cuts in the networks, thereby facilitating the incorporation
of contextual sentences. Therefore, a methodology has been proposed to facilitate the
usage of a subjectivity detector, which determines whether each phrase is subjective or
not, thus highlighting the objectives and creating an extract of critical and subjective
content for the classification of a predetermined polarity. We developed an application
that would enable us to identify emotions while a person was writing a text. All this
information is recorded in a video for each participant. In our study, however, we did
not need to focus so much on the details of the photo image seeing that we had to correlate
the sentiment analysis with the emotion that the person is displaying. For this reason,
the results that we obtained during the experimental stage are promising. Below are
details of the methodology that was used and the experimental phase that was carried
out by this research team (Gomez et al. 2015).
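For readers unfamiliar with opinion mining, the following minimal sketch illustrates the simplest form of lexicon-based polarity scoring of free text. The word lists are illustrative placeholders only and do not correspond to the lexical resources or vector-based method discussed above.

```python
# Minimal sketch of lexicon-based polarity scoring of free text.
# The word lists are placeholders, not a validated sentiment lexicon.
POSITIVE = {"good", "useful", "interesting", "clear", "motivating"}
NEGATIVE = {"bad", "boring", "confusing", "useless", "difficult"}

def polarity(text):
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos == neg:
        return "neutral"
    return "positive" if pos > neg else "negative"

# Example: polarity("The exercise was useful but a bit boring") -> "neutral"
```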

3 Methodology

First, we installed the lab to collect the data. Cameras centered on participants’ faces
were placed in hallways and classrooms throughout the Faculty to identify emotion
patterns. We also took safety into account, as the cameras can also be used to identify
criminal acts. Figure 1 shows one of the cameras located in one of the halls of the Faculty.

Fig. 1. Camera installed in the hall of the Faculty of Human Sciences and Education.

The monitor that can be seen in Fig. 1 gives us a broad view of the cameras. It
also allows us to see if the software is running correctly. There are appropriate research
backup cameras, and not only in the Faculty in Ambato. The camera signals are processed
by the prototype software found at the Faculty’s Human Behavior Lab. The general
distribution of the cameras in the Faculty’s Lab is shown in the following diagram. The
methodology being used is the development of prototypes that can provide us with
software that can be used to obtain camera signals which, after being processed, can give
us emotion analysis data. The following is an example of a face captured by the prototype
(Fig. 2).

Fig. 2. Capture of a face in the prototype.

4 Experimentation

For the indicators of academic performance, we worked with students who took classes
during the 2015 semesters. The selection process and the classes involved do not require
a special statistical process since the proposed method is based on 4000 emotional
training faces that were previously included in the database. During the training phase
people were allowed to try out the prototype software to identify their emotions. Based
on that data we were able to train the prototype software and measure its validity in
recognizing emotions in students’ faces. The results obtained with 10 cameras in
classrooms, with respect to the recognition of students’ emotions, are shown below.
Figure 3 shows that recall is better than precision, with a maximal value of 0.83,
whereas precision has a maximal value of 0.73.

Fig. 3. Precision and Recall
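For reference, precision and recall are computed per emotion class from the counts of true positives (TP), false positives (FP) and false negatives (FN) as precision = TP/(TP + FP) and recall = TP/(TP + FN). The sketch below only illustrates the computation; the counts used are placeholders chosen to reproduce the reported maxima, not the study's raw data.

```python
# Minimal sketch of how precision and recall are computed per emotion class;
# the counts below are placeholders, not the study's raw data.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 83 correctly recognized faces, 31 false alarms, 17 misses
# -> precision ~ 0.73, recall = 0.83
print(precision_recall(83, 31, 17))
```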


5 Conclusions

It is imperative to take into account the opinion of students in the learning process. In
this sense, we developed tools applied to education in the Human Behavior Lab. These
can be used to accurately understand whether the materials and the academic performance
indicators are the ones to be used in the process. Thus, students and teachers are the
direct beneficiaries of this project because it contributes to the follow-up process that is
part of teaching and learning. The universities in the study have been advancing scientific
research through projects such as the one described here. The Faculty of Human Sciences
and Education has taken advantage of the mechanisms of intervention provided by the
security system put in place, but, in this case, as it is being applied in education. The
recycling of electronic devices benefits the University, science and the research that is
being carried out in the country. New areas of research are being created which open up
new areas of study, because the values of precision and recall are acceptable in comparison
with (Arunnehu and Kalaiselvi 2016). This time it is being done with the participation
of students, who we consider to be the scientific community par excellence. That is, new
research topics, with a focus on neurosciences as well as Alzheimer’s disease (Arias Tapia
et al. 2016), are being born at the Human Behavior Lab, with a special emphasis on
Education (Gomez et al. 2015). The beneficiaries of these new projects are the populations
considered to be vulnerable in these countries.

Acknowledgments. We thank UTA for supporting the Research Project Human Performance
Laboratory.

References

Peñalver Martínez, I., Valencia García, R., García Sánchez, F.: Minería de opiniones basada en
características guiada por ontologías. Sociedad Española para el Procesamiento del Lenguaje
Natural
Arias Tapia, S., Martínez-Tomás, R., Gómez, H., Del Salto, V., Guerrero, J., Mocha-Bonilla, J.,
et al.: The dissociation between polarity, semantic orientation, and emotional tone as an early
indicator of cognitive impairment. Frontiers in Computational Neuroscience (2016)
Arunnehu, J., Kalaiselvi, G.: Automatic human emotion recognition in surveillance video. In:
Intelligent Techniques in Signal Processing for Multimedia Security, pp. 321–342. Springer
(2016)
Gomez, A.H., Arias, T.S., Torres, P., Sanchez, J., Hernandez, V.: Emotions analysis techniques:
their application in the identification of criteria for selecting suitable Open Educational
Resources. In: International Conference on Interactive Collaborative and Blended Learning.
Mexico (2015)
Hanushek, E., Rivkin, S.: Harming the best: how schools affect the black-white achievement gap.
Natl. Bur. Econ. Res. 28, 366–393 (2009)
Liu, B.: Web Data Mining Exploring Hyperlinks Contents and Usage Data. University of Illinois,
Chicago (2011)
Marcenaro-Gutierrez, O., Luque-Gallego, M., Lopez-Aguado, L.: Teacher’s satisfaction as
indicator of education system performance. In: EAEE 2015, Alicante (2015)


Pang, B., Lee, L.: A sentimental education: Sentiment analysis using subjectivity summarization
based on minimum cuts. In: Proceedings of ACL, New York (2004)
Park, E., Lee, Y.: Emotion-based image retrieval using multiple queries and consistency feedback.
In: Conference on Industrial Informatics, New York (2008)
Rodriguez, P., Ortigosa, A., Carro, R.: Detecting and making use of emotions to enhance student
motivation in e-learning environments. Int. J. Continuing Eng. Educ. Life Long Learn. 24(2),
168–183 (2014)

Distance Learning System Application for Maritime
Specialists Preparing and Corresponding Challenges
Analyzing

Vladlen Shapo(✉)

National University “Odessa Maritime Academy”, Odessa, Ukraine
stani@te.net.ua

Abstract. The application of the learning management system Moodle for preparing
maritime transport specialists at a university with territorially distributed subdivisions
and students moving worldwide is described. The corresponding challenges that arise
are shown and analyzed.

Keywords: Electronic teaching and methodical materials · Learning
management system · Maritime transport · Remote subdivisions · Specialists
preparing · Training system hardware choosing

1 Introduction

In recent years, Ukrainian universities have been implementing learning management
systems (LMS) very actively. Since it is usually impossible to buy quite expensive
software, the choice commonly falls on the free and open-source LMS Moodle. It is
updated regularly, it has a convenient interface and it offers many possibilities for
students, teachers and developers of teaching and methodical materials (TMM) [1, 2].
Bit by bit, books have appeared in different languages in which the procedures of
distance course creation, TMM elaboration and LMS administration are described with
different levels of detail [3–5].
A significant difference between National University “Odessa Maritime Academy” (NU
“OMA”) [6] and many other universities is the presence of remote structural
subdivisions in different parts of Ukraine: in the city of Izmail (Izmail faculty, IF)
and in the city of Mariupol (Azov maritime institute, AMI). Moreover, every year
thousands of full-time students, correspondence students, postgraduate students,
advanced training course students and trainees undergo many months of naval training
on different types of commercial vessels and need access to TMMs and teachers’
consultations while being far from home and from NU “OMA”. Previously, these needs
could only be met by passing paper or electronic TMMs to vessels during crew member
changes (unpredictable and unreliable), sending electronic TMMs via e-mail, or
downloading electronic TMMs from NU “OMA”’s web site [6] (these approaches were
used for 13 years without any statistics, registration, efficiency analysis, etc.). The use
of LMS Moodle (begun in test mode in September 2009 and in full mode in September
2010) has significantly enhanced the quality of TMM preparation, as well as their
quantity and currency, and has prepared teachers and students step by step for regular
use of Moodle’s wide spectrum of possibilities.
The scheme of distance information interaction between participants of the educational
process, as realized at NU “OMA”, is presented in Fig. 1.

Fig. 1. Structure of information interaction between NU “OMA”’s subdivisions


It is known that in most cases students begin studying disciplines from textbooks
created by the teachers of the universities where they study. Precisely these textbooks
are much simpler to place in an LMS (compared with books by other authors and
franchisors), because in this way it is possible to minimize the problem of content
piracy and other legal aspects.
At present the distance learning system of NU “OMA” is based on Moodle 1.9.6 [5]
and supports the work of more than 8400 users (including 320 teachers). For fast
registration of large numbers of users, special additional software was created. Using
the LMS allows additional positive results to be obtained in the studies of interested
students [1, 2]. The activity and interest of students of different study forms in using
the LMS are growing permanently.

2 Methodology

In any information system a fault or failure can happen; that is why it is necessary to
store a backup copy of the LMS folder structure with the created distance courses and
other materials, as well as the user database. When moving LMS Moodle from one physical
server to another, it is possible to export all data from the previous server in SQL format
(a database dump) and to import all these data into the new server. But during LMS operation the
Fig. 2. Minimal local computer loading during database dump importing


server will store new distance courses with many different methodical materials and
tests within the framework of hundreds or even more than a thousand disciplines; archive
files containing backup copies of each course will be added, new users will be added as
well, thousands of users will upload different files and pictures and will take part in
forums, and so on. As a result the volume of stored data grows very fast, the volume of
the exported SQL file grows very fast as well, and the problem arises of importing the
database dump on the new server, even from the administrator’s console on the server
computer without using the LAN. The import time will exceed the network (browser)
timeout value, and the importing process will be interrupted without finishing.
Concurrently, the level of hardware utilization of a quite modern typical computer
configuration is very high (50–94% of 2-core CPU resources and 700–1200 MBytes of RAM).
Also, LMS Moodle uses a CRON scenario (system job) which works in an endless cycle,
checking and processing all new events in the LMS, and adds significant CPU and RAM
loading.
Graphs of computer resource utilization with small, middle and high loading are
shown in Figs. 2, 3 and 4, respectively. These graphs contain the following information:
CPU utilization (50, 76 and 94%, respectively) and the chronology of CPU utilization in
the top part of the graphs; swap file size (696 and 941 MBytes and 1.21 GBytes,
respectively) and the chronology of swap file usage.
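As an illustration of the export/import path described above, the following console commands sketch a typical MySQL dump and restore for a Moodle database; the database name and user are placeholders, and the exact procedure depends on the actual server configuration.

```bash
# Export on the old server (run locally on the server, not over the LAN):
mysqldump -u moodleuser -p moodle > moodle_dump.sql

# Import on the new server from the administrator console:
mysql -u moodleuser -p moodle < moodle_dump.sql
```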

Fig. 3. Middle local computer loading during database dump importing


Fig. 4. Maximal local computer loading during database dump importing

At the same time, the repeated manual data input necessary to restore the LMS is a
quite primitive, very boring, long and laborious process which can take between several
days and several weeks of LMS administrator work, depending on his qualification and
experience. Furthermore, it is undesirable to involve additional staff in these works
inside the LMS, because of the need to grant full administrator rights to additional
people and the unpredictable results of further work due to the human factor (a fully
non-working LMS at an unexpected moment, appearance of fake users, plagiarism of
different learning materials and so on).
Some tasks and problems which have to be solved in university information systems
are described in [7].
Based on six years of our own experience, it is possible to formulate some
recommendations for LMS Moodle administrators.
1. Do not add LMS users one by one manually, even when the number of these users is not
too big. It is much preferable to create external text files with the full list of user
data using a special template (a simplified example file is sketched after this list),
to add new users at the end of this file, and to execute the automatic import procedure
by means of LMS Moodle. A file exported from the LMS which contains the user data
will not contain user passwords, and in the case of an LMS fault it requires some
additional manual processing of the exported files, which requires additional time.
This way also involves the necessity of generating new passwords and informing the
users about this situation, with naturally arising dissatisfaction and mess, or searching
for and restoring previous versions of the passwords.
2. Before exporting the full database dump, it is necessary to delete all archive copies
of distance courses from the server beforehand, because the presence of these files
significantly increases the database dump size (sometimes even severalfold).
3. It is necessary to restrict the sizes of files (especially graphical files) that users
will upload to the server.
4. Do not import the database dump to the server over the LAN. Most Ukrainian
universities use Fast Ethernet with 100 Mbit/s bandwidth for their LANs. The LAN data
transfer speed will be the bottleneck, because even the bandwidth of the old hard drive
interface IDE is 100 MBytes/s (theoretically 8 times faster), and the modern SATA
interface offers at least 150 MBytes/s. At the same time, the LAN will be used by other
users, who will take some part of the bandwidth as well. So, when the database is
imported to the server directly from the administrator console, the LAN will not
become a restricting factor.
5. Store archive copies of each distance course on external hard, optical or flash
drives, tape streamers and so on. This will make the procedure of restoring data for
separate distance courses or for the whole database much simpler and faster.
6. When restoring the database on a new server, at first it is necessary to restore only
the structure of categories and subcategories and to create empty distance courses
without restoring their content, and then to make a database dump. Such a dump will
have a relatively small size and can be restored simply and quickly, being the skeleton
of the LMS.
7. Before importing a big database dump, it is necessary to enlarge the values of the
following parameters in the configuration file php.ini: max_execution_time,
max_input_time, memory_limit, default_socket_timeout, mysql.connect_timeout and
session.gc_maxlifetime.
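To make recommendation 7 concrete, a php.ini fragment of the kind meant there might look as follows; the values are only illustrative, since suitable limits depend on the dump size and the server hardware.

```ini
; Illustrative values only - adjust to the actual dump size and hardware
max_execution_time = 3600
max_input_time = 3600
memory_limit = 1024M
default_socket_timeout = 3600
mysql.connect_timeout = 3600
session.gc_maxlifetime = 14400
```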
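Regarding recommendation 1, a simplified example of such an external user file in CSV form is sketched below; the users are fictitious, and the exact field names and delimiter expected by the bulk upload function may vary between Moodle versions.

```csv
username,password,firstname,lastname,email
ivanov_am,Chang3Me!,Andriy,Ivanov,a.ivanov@example.edu
petrenko_ok,Chang3Me!,Olena,Petrenko,o.petrenko@example.edu
```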
Over the last 10–15 years it has become absolutely clear that the lifelong learning (LLL)
concept needs to be realized by any engineer, developer or valuable specialist working
in the field of automation, industry, transport, data transfer and control systems, etc. One
of the facilities for gaining new knowledge, practical skills, experience, etc. is the use
of training equipment.
The main drawback of any training equipment is its large or huge cost, because modern
training equipment consists of high-performance computers, network equipment, very
expensive touch panels with big diagonals (at least 40 in.) and high resolution, and so on.
Typically, training equipment may cost several tens, hundreds or even millions of dollars,
depending on functionality and sphere of application. At the same time, even super-modern
equipment becomes morally outdated very fast because of the appearance of new hardware,
software, technologies, network protocols, concepts, algorithms and so on. That is why,
from the economic point of view, such training equipment has to be used 24 h per day.
Thus it is necessary to choose the optimal hardware configuration (memory volume and
performance, disk subsystem type and performance, network and graphical interface
bandwidth, central and graphics processor productivity, network technology and data
transfer rate), taking into consideration the cost/productivity ratio.

zamfira@unitbv.ro
1056 V. Shapo

Additionally, it is very complex to achieve full training equipment loading during the
whole day in real life. Moreover, very often the trainees who have to pass the
corresponding training live in different cities and even countries, have different levels
of language proficiency, and work in different companies (for example, in the maritime
branch). These reasons quite often do not allow all of them to be brought together. That
is why a very important property of training equipment is the possibility of working
with a trainee and being controlled by an administrator and/or trainee remotely.
One more task to be solved is the optimal choice of Internet channel bandwidth and the
analysis and calculation of the additional loading in the corporate (campus) computer
network. Transferring uncompressed graphical data will be a reason for unstable
network conditions. Fast Ethernet network technology with a 100 Mbit/s data transfer
rate, which is the most popular in Ukrainian campus networks, will become the
bottleneck. There are two evident ways to solve this problem.
1. Using additional real-time data compression boards which contain their own CPU
and memory.
2. Upgrading the network equipment (switches) from Fast Ethernet to Gigabit Ethernet
or even to 10 Gigabit Ethernet for some network segments, and upgrading the structured
cabling system or some of its segments from category 5 twisted pair to a newer twisted
pair category or to fiber optics.
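To make the bottleneck concrete, the following back-of-the-envelope calculation (with assumed resolution, colour depth and frame rate, since the actual training equipment parameters vary) shows the order of magnitude involved.

```python
# Rough estimate (assumed values) of the bit rate needed for one uncompressed
# video/graphics stream, compared with a 100 Mbit/s Fast Ethernet link.
def required_mbits(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e6

# e.g. a single 1920x1080 stream at 24-bit colour and 30 frames per second:
print(required_mbits(1920, 1080, 24, 30))  # ~1493 Mbit/s, far above 100 Mbit/s
```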

3 Conclusions

In this paper some challenges and problems connected with the software and hardware of
modern university learning management systems are touched on, and some recommendations
on the operation of such systems are proposed. Currently a learning management system
is part of more complex enterprise information systems; it can be combined with other
software such as ERP, CRM and BI systems and complex computerized training systems
for seafarers within the same computer network and hardware, in particular using
virtualization technologies. Since at present learning and self-learning is a lifelong
process in any field of human activity, is very topical and will become much more
dynamic, these recommendations may be useful not only for specialists in the education
system but also for specialists who create and operate different information systems.

References

1. Vinnikov V., Shapo V.: Qualified personnel preparing in logistics field with distance learning
systems using. In: INCEL 08: E-learning in Higher School – Problems and Perspectives: Works
of International Conference. Odessa National Polytechnic University – National Technical
University, Kharkiv Polytechnic Institute (2008). http://cde.kpi.kharkov.ua/tempus/incel/
2. Vinnikov V., Shapo V.: Specialists preparing in logistics field with distance learning
application. In: Strategy of Quality in Industry and Education: Works of 4th International
Conference, V.2, pp. 536–539. Technical University, Varna (2008)
3. Miasnikova, T.S., Miasnikov, S.A.: Distance Learning System MOODLE. Publishing House
of Sheinina E, Kharkiv (2008)


4. Anisimov, A.: Working in Distance Learning System Moodle. Kharkiv National Academy of
City Management, Kharkiv (2009)
5. MOODLE project. https://moodle.org/mod/data/view.php?id=7246
6. National University “Odessa Maritime Academy”. http://www.onma.edu.ua
7. Shapo V.: Building of complex university information system. In: Problems of Information
Society Development: Materials of VI International Scientific-Practical Conference,
INFORMATIO-2009: Electronic Information Resources: Creating, Using, Access and XII
International Scientific-Practical Conference, Building of Information Society: Resources and
Technologies, Kiev, pp. 120–123 (2009)

Author Index

A Cano-Ortiz, S.D., 758


Absalyamova, Svetlana Germanovna, 940 Carro, German, 19, 125
Abuzaghleh, Omar, 901 Cartuche, Claudia, 1044
Alattas, Reem, 1014 Carvalho, Vítor, 583, 628
Alías, Francesc, 77 Casagrande, Luan C., 97, 499
Alsina-Pagès, Rosa Ma, 77 Castillo, Camilo, 987
Alves, Gustavo R., 290 Castillo, Fernando, 987
Alves, Gustavo, 298, 375 Castro Gil, Manuel, 125
Al-Zoubi, Abdallah, 424 Castro, Manuel, 19, 298, 375
Angulo, Ignacio, 77, 344, 859, 949 Centea, Dan, 68, 794, 919
Arras, Peter, 315 Chacón, J., 250
Arun Kumar, S., 190 Chandra, Yohanes, 596
Astriani, Maria Seraphina, 596 Chassapis, Constantin, 110
Azad, Abul K.M., 517 Claesson, Lena, 438
Azcuenaga, Esteban, 859 Cobo, Luis, 987
Aziz, El-Sayed, 110 Colombo, Alejandro Francisco, 290
Concari, Sonia Beatriz, 290
B Constantin, Fulvia Anca, 910
Bajči, Brajan, 144 Contreras, Alfonso, 125
Balikin, Gali, 307 Costa, R., 375
Ball, Jeremy, 660 Cristea, Luciana, 532
Barman, Oindrila Ray, 901 Crotti, Yuri, 499
Barznji, Ammar O., 731 Cunha, Renan, 499
Basu, Debarati, 652 Cuperman, Dan, 307
Belkhir, Lotfi, 68
Beltran Pavani, Ana Maria, 957 D
Bidkar, Kunal, 822 D’Mello, Reynold, 901
Boehringer, David, 459 da Silva Silveira, Wagner, 36
Boer, Attila Laszlo, 532 Datta, Avishek, 901
Boukachour, Hadhoum, 609 de C Pereira, Daniel B., 957
Brogan, Daniel Stuart, 652 de la Torre, Jaime Arturo, 469
Broisin, Julien, 220 de la Torre, Luis, 170, 205, 250, 469
Buckens, Lilianne, 555 de Souza Barbosa, William, 957
DeCusatis, C., 132
C Demir, Veysel, 517
Cai, Su, 701 Despotović, Željko V., 392
Callaghan, Michael James, 660 Diaz, Gabriel, 298
Calliari, Felipe, 957 Dobboletta, Elsa, 290, 298, 375
Cano, Jesus, 708 Donadio, Frédéric, 973


Dormido, Sebastián, 170, 205 Henriques, Renato Ventura B., 36


Drăgulin, Stela, 910 Heradio, Ruben, 205
Dudić, Slobodan, 144 Hernandez, Roberto, 708
Duman, Gazi Murat, 510 Hernandez, Unai, 375, 859
Dziabenko, Olga, 619, 833 Hernandez-Jayo, Unai, 77, 290, 344, 949
Herranz, J.P., 125
E Herrero-Betancourt, F., 758
Eguíluz, Augusto Gomez, 660 Herring, George K., 258
Elbestawi, Mo, 919 Hesselink, Lambertus, 258
Elio, San Cristobal, 125 Hetsch, Verena, 561
ElSayed, Ahmed, 510 Horine, Brent, 540
Esche, Sven, 110 Hu, Wenshan, 278
Esquembre, Francisco, 205 Hussain, Dena, 645
Hutschenreuter, René, 743
F
Fang, Amy, 307 J
Fäth, Tobias, 743 Jacome-G, Roberto, 1044
Faye, Pape Mamadou Djidiack, 764 Jacques, H., 758
Felgueiras, C., 375 Jamkojian, Hagop, 778
Fernandez, Ruben, 298 Jethra, Jasveer Singh T., 822
Fidalgo, André, 298, 375 Jinga, Vlad, 56
Fortuna, Jeff, 851 Johansson, Sven, 438
Francisco, José, 689 Jović, Nikola, 392, 809
Frejaville, Jérémy, 973 Justason, Michael D., 68, 851
Frerich, Sulamith, 160
K
G Kalyan Ram, B., 190, 235
Galan, Daniel, 205 Kist, Alexander A., 266, 483
Galinho, Thierry, 609 Klimova, Blanka, 933
Gamer, Sergei, 359 Klinger, Thomas, 452
Garbi Zutin, Danilo, 375, 452 Koike, Nobuhiko, 367
García Clemente, Félix J., 170 Kongar, Elif, 510
Garcia-Loro, Felix, 125, 298 Krbecek, Michal, 182
García-Zubia, Javier, 290, 344, 375, 859, 949 Krcmar, Helmut, 570
Gerza, Michal, 182 Kreiter, C., 375
Gillet, Denis, 170, 778, 874 Kruse, Daniel, 160
Gomez Alvarado, Hector Fernand, 1044 Kumar, Rakesh, 28
Gomez, Mario, 298 Kuska, Robert, 160
Gonzalez, Fernando, 408
Gouveia, Nuno, 681 L
Gowripeddi, Venkata Vivek, 235 Langmann, Reinhard, 3, 758
Grieu, Jean, 609 Larnier, Stanislas, 973
Gruber, Vilson, 97, 499 Larrondo Petrie, Maria M., 416
Gueye, Amadou Dahirou, 764 Lauber, Andreas, 85
Guimarães Jr., Carlos Solon S., 36 Lecroq, Florence, 609
Gütl, Christian, 1029 Lei, Zhongcheng, 278
Lerro, Federico, 290
H Lesjak, Isabel, 1029
Håkansson, Lars, 438 Liengtiraphan, P., 132
Halimi, Wissam, 778, 874 Lima, N., 375
Hambali, Sharon, 596 Lishou, Claude, 764
Hauer, Andreas, 570 Lohani, Vinod K., 652
Heininger, Robert, 570 Lombardia-Legra, L., 758
Henke, Karsten, 151, 315, 743 Long, Yu, 840


Loro, F., 375 Petrova, Natalya Nikolaevna, 940


Luculescu, Marius Cristian, 532 Pirker, Johanna, 1029
Lundberg, Jenny, 438 Plaza, Pedro, 19
Poliakov, Mykhailo, 151
M Pop, Sebastian, 532
Madritsch, Christian, 452 Pozzo, I., 375
Mahesh, B., 190 Pozzo, María Isabel, 290, 298
Maiti, Ananda, 266, 483 Prathap, S., 190
Mallikarjuna Sarma, B., 190 Punj, Roopali, 28
Malo-M, Elena, 1044 Putinelu, Victor Bogdan, 660
Marcelino, Roderval, 97, 499
Marchisio, Susana, 290 Q
Marques, A., 375 Qiao, Weifeng, 840
Martinez-Cañete, Y., 758
Martínez-Pieper, Gabriel, 344 R
Martins, Tiago, 628 Rashid, Tarik A., 731
Matijević, Milan, 392, 809 Rastogi, Aashi, 901
Maxwell, Andrew D., 266, 483 Reitman, Michael, 307
May, Dominik, 160 Reljić, Vule, 144
McShane, Niall, 660 Riedel-Kruse, Ingmar H., 331
Menezes, Paulo, 681, 689 Robles-Gomez, Antonio, 708
Michels, Lucas B., 97 Rodríguez-Gil, Luis, 344, 859
Milanović, Miloš, 392 Romm, Tal, 307
Milenković, Ivana, 144 Ros, Salvador, 708
Miller, Stephen, 540 Ruiz, Elena, 19
Moniaga, Jurike V., 596
Monni, Stefano Leone, 887 S
Moss, Kevin John, 561 Sachse, Stefan, 555, 561
Muñoz Camacho, Eugenio, 125 Sáenz, Jacobo, 250, 469
Murgia, Fabrizio, 887 Sager, A., 132
Sakhapov, Rustem Lukmanovich, 940
N Saliah-Hassane, Hamadou, 874
Neagu, Andrei, 56 Salillas, Jorge Caballero, 660
Neustock, Lars Thorben, 258 Salis, Carole, 887
Nunez-R, Judith, 1044 Salzmann, Christophe, 170, 778, 874
Samoilă, Cornel, 56, 910
O Sánchez, J., 250
Okhmak, Vyacheslav, 315 Sánchez, José, 469
Orduña, Pablo, 344, 859 Sancristobal, Elio, 19, 298
Ortelt, Tobias R., 160 Sandnes, Frode Eika, 1001
Ozvoldova, Miroslava, 182 Sax, Eric, 85
Schaeffer, Lirio, 97
P Schauer, Franz, 182
Palomo Lima, Vanessa A., 957 Schuldt, Jacqueline, 555, 561
Parger, Mathias, 1029 Šešlija, Dragan, 144
Parkhomenko, Andriy, 322 Shapo, Vladlen, 1050
Parkhomenko, Anzhelika, 322 Sidorenko, Lyudmila Pavlovna, 940
Pastor, Rafael, 708 Singh, Ishwar, 794, 851, 919
Patrão, Bruno, 681, 689 Singh, Pavneet, 822
Pavan, J., 235 Sivakumar, B., 235
Paz, Hector, 298 Smajic, Hasan, 546
Pereira, Carlos Eduardo, 36 Smith, Mark, 266
Pestana Cardoso, Giselen, 957 Soares, Filomena, 583, 628


Sokolyanskii, Aleksandr, 322 W


Soria, Fernando, 298 Wang, Tao, 701
Soria, Luis Alberto, 1044 Wanyama, Tom, 44, 794
Stiller, Michael, 3 Wessel, Niels, 546
Šulc, Jovan, 144 Wijaya, Yangky, 596
Wilson, Marie Florence, 887
T Wolfer, James, 672
Tabunshchyk, Galyna, 315, 895 Wuttke, Heinz-Dietrich, 151, 743
Tajuelo, Javier, 469
Tekkaya, A. Erman, 160
X
Tobarra, Llanos, 708
Xiong, Xingguo, 721
Torres, Marcus, 583
Xue, Xiaoru, 701
Tozanli, Ozden, 510
Tsourlidaki, Eleftheria, 833
Tulenkov, Artem, 322 Y
Yakubiv, Valentyna, 619
U Yamuna Devi, C.R., 235
Uddin, Mohammed Misbah, 517
Ursuţiu, Doru, 56, 910 Z
Utesch, Matthias, 570 Zackrisson, Johan, 438
Zalewski, Janusz, 408
V Zalyubovskiy, Yaroslav, 322
Van Merode, Dirk, 315, 895 Zamfira, Constantin Sorin, 532
Vannier, Thibault, 660 Zapata Rivera, Luis Felipe, 416
Vanvinkenroye, Jan, 459 Zhang, Han, 701
Velosa, Jose Divitt, 987 Zhang, Linfeng, 721
Venant, Rémi, 220 Zhang, Man, 840
Verner, Igor, 307, 359 Zhang, Weilong, 278
Vetault, Stéphane, 973 Zhang, Zhou, 110
Vidal, Philippe, 220 Zhou, Hong, 278
Viegas, M.C., 375 Zinyuk, Lyubov, 619
Vukosavić, Slobodan, 392 Zúñiga, Ignacio, 469
