
IPASJ International Journal of Computer Science (IIJCS)
A Publisher for Research Motivation
Web Site: http://www.ipasj.org/IIJCS/IIJCS.htm
Email: editoriijcs@ipasj.org
ISSN 2321-5992
Volume 2, Issue 12, December 2014

EVALUATING THE BEST ALTERNATIVE FOR SOFTWARE RE-ENGINEERING USING HYBRID CODE METRIC

HARMAN PREET KAUR, AMITPAL SINGH
GNDU REGIONAL CAMPUS, GURDASPUR, INDIA

ABSTRACT
Software re-engineering is an effective and economical way to give a much-needed boost to an existing software system. It is akin to taking the completed software, breaking it apart, analyzing and re-examining its contents, transforming the required parts (or the whole software, as needed), and then putting it back together again. In this paper, the process of software re-engineering and its various merits, such as reducing costs, protecting investments, reducing risks, improving the business process and increasing the maintainability of the existing software, are discussed. Moreover, a survey of various metrics has been carried out to find the best alternative for the software re-engineering process. The paper concludes with a suitable direction for finding the best alternative for the software re-engineering process.

Keywords: Software Re-engineering, Metrics

1. INTRODUCTION
Software re-engineering is the process of modifying existing software so that it adapts to changed circumstances. Put simply, it means taking the completed product (the software), breaking it apart, analyzing and re-examining its contents, transforming the required parts (or the whole software, as needed), and then putting it back together again. Since this activity is essentially the reverse of software development, it is also referred to as reverse engineering. When software re-engineering takes place, the existing business process also has to be improved simultaneously so that it stays in line with the improved version of the software. Re-engineered software can be easier to maintain than older systems because suitable methods and knowledge (skilled persons) are readily available in the market. There is a high risk associated with new software development projects, and one of the biggest risks is defects.

Figure 1: Software re-engineering vs. software development


A single defect introduced during any of the phases of the software life cycle may multiply into a bigger problem, causing the generation of flawed software. Software re-engineering can help not only to fix any defects that were introduced during development but also to reduce the tendency to recreate a similar defect when making new software.


2. MERITS OF SOFTWARE RE-ENGINEERING

There are a number of reasons why a company may choose to go for software re-engineering instead of creating brand new software altogether. These include:
2.1 Protects Investments
Re-engineering defends past investments and maintains competitive advantage in the market. As previously stated, a lot of resources, including time and money, are used when developing new software. There are a number of phases that the organization has to go through before the end product is present and functioning. If the business were to discard the existing system for a newer one, it would have to invest twice as many resources. This is easily prevented when the existing software can be re-engineered to suit current circumstances and needs.
2.2 Reduces Costs
Operating older software can be especially costly. The simple reason is the rapid evolution of computer technology. Frequent issues faced include:
a) It can be significantly difficult to find experts specialized in that particular area to maintain the existing system.
b) Replacing or finding substitute parts for the old hardware, along with the existing software, can pose a challenge as the hardware becomes obsolete.
In such a situation, the organization would be forced to throw out its existing system. Instead, it is in the best interest of the business to re-engineer the existing software. Moreover, it can be relatively less costly to re-engineer the existing system compared to creating new software.
2.3 Reduces Risks
There is a high risk associated with new software development projects. One of the biggest risks is defects. A single defect introduced during any of the phases of the software life cycle may multiply into a bigger problem, causing the generation of flawed software. Software re-engineering can help not only to fix any defects that were introduced during development but also to reduce the tendency to recreate a similar defect when making new software.

Figure 2: System re-engineering


2.4 Improves the Business Process
Any software that is made either complements or changes the existing business process. When software re-engineering takes place, the existing business process also has to be improved simultaneously so that it stays in line with the improved version of the software.
2.5 Increases the Maintainability of the Existing Software
Re-engineered software can be easier to maintain compared to older systems because suitable methods and knowledge (skilled persons) are readily available in the market.

3. PROCESS OF SOFTWARE REENGINEERING


When looking at the software re-engineering process at a glance, it is easy to see that it is carried out in a similar fashion to software development. However, it must be kept in mind that the two are quite different from each other. An overview of both software re-engineering and software development is given using a simple diagram (Figure 1). Looking further into the software re-engineering process, one notices that it has several important milestones. However, depending on the nature of the existing software and the various needs of the organization, these milestones can vary from project to project. In other words, there is no hard and fast rule that says each milestone (described below) must be carried out in every software re-engineering project. Let us look at these important milestones briefly.


3.1 Translate The Existing Software Into A Modern Language


Legacy systems, usually written in old programming languages, often need to be re-written in a modern programming language. For example, software written in the programming language COBOL may be re-written using a modern language such as Visual Basic.
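Purely as an illustration of such a translation (the legacy COMPUTE statement below is invented for this example, and Python is used here instead of the Visual Basic mentioned above), a minimal sketch might look like this:

# Hypothetical illustration of language translation during re-engineering.
# Invented legacy statement:
#   COMPUTE GROSS-PAY = HOURS-WORKED * HOURLY-RATE + OVERTIME-PAY.
def gross_pay(hours_worked: float, hourly_rate: float, overtime_pay: float = 0.0) -> float:
    """Modern-language equivalent of the invented legacy COMPUTE statement."""
    return hours_worked * hourly_rate + overtime_pay

print(gross_pay(40, 12.5, overtime_pay=30.0))  # prints 530.0

In a real project, of course, the translation covers whole programs and their data divisions rather than single statements.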
3.2 Organizing And Restructuring The Software
For example, employees in an organization may complain that a computer program is slow and takes a long time to process the data. One possible reason is that the software was written with a lot of unnecessary code. There can be several reasons for this; one common reason is the use of a legacy programming language written to match the legacy hardware. Therefore, re-organizing and restructuring the code can be classified as a part of the software re-engineering task.
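As a rough, hypothetical sketch of how code in need of restructuring might first be located (this tooling is not part of the paper; the file name, size limit and branch limit are arbitrary assumptions), one could flag unusually long or branch-heavy Python functions as candidates:

import ast

def restructuring_candidates(source: str, max_lines: int = 50, max_branches: int = 10):
    """Flag functions that are unusually long or branch-heavy.

    Only a crude proxy for 'code that may need restructuring'; a real
    re-engineering project would rely on proper metric tools.
    """
    tree = ast.parse(source)
    candidates = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            branches = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.Try))
                           for n in ast.walk(node))
            if length > max_lines or branches > max_branches:
                candidates.append((node.name, length, branches))
    return candidates

with open("legacy_module.py") as f:  # hypothetical file name
    for name, length, branches in restructuring_candidates(f.read()):
        print(f"{name}: {length} lines, {branches} branch points")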
3.3 Modify Of Data And Structures Within The Software
The data structure tells the computer how to accept data and store it within the system. A good example of this scenario is the Y2K bug. In the earlier days of computing, computer systems were programmed to accept the date in the MM/DD/YY format. After the discovery of the bug and its potential harm, systems were re-programmed to accept the format MM/DD/YYYY.
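A minimal sketch of the kind of data-format change described above (hypothetical code; the pivot year of 70 used to decide the century is an assumed windowing rule, not something prescribed by the paper):

from datetime import datetime

def expand_legacy_date(mm_dd_yy: str, pivot: int = 70) -> str:
    """Convert a legacy MM/DD/YY date to MM/DD/YYYY.

    Two-digit years at or above the pivot map to 19YY, years below it
    to 20YY (a common, but assumed, windowing rule).
    """
    month, day, yy = mm_dd_yy.split("/")
    year = 1900 + int(yy) if int(yy) >= pivot else 2000 + int(yy)
    datetime(year, int(month), int(day))  # validate the expanded date
    return f"{month}/{day}/{year}"

print(expand_legacy_date("12/31/99"))  # 12/31/1999
print(expand_legacy_date("02/28/05"))  # 02/28/2005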
3.4 Re-Documenting The Software
Documentation is a crucial element in the software development process, as it reflects all components of the entire process and acts as a blueprint for the end product. Sadly, there is a greater chance of placing a lower emphasis on this aspect when creating new software. What does this lead to? The maintenance task becomes a nightmare for anyone new to the software. Writing good documentation for the existing system is therefore another software re-engineering task.
3.5 Conclusion
Software re-engineering is an effective and economical way to give a much-needed boost to an existing software system. While there are several steps involved in the life cycle of a software re-engineering project, it is not compulsory to carry all of them out, as this depends entirely on the nature of the software and the needs of the organization.

4. LITERATURE SURVEY
M. H. Alalfi et al. [1] introduced a set of coverage criteria for web applications, based on page access, use of server variables, and interactions with the database. After an instrumentation transformation to embed dynamic tracking of these aspects, a static analysis was used to automatically create a coverage database by extracting and executing only the instrumentation statements of the program. The database was then updated dynamically during execution by the instrumentation calls themselves. They demonstrated the usefulness of their coverage criteria and the accuracy of their approach on an analysis of the popular web bulletin board system phpBB 2.0.

K. Mordal-Manet et al. [2] presented the issues they faced when designing the Squale quality model and then described an empirical solution based on weighted aggregations and continuous functions. The solution, named the Squale quality model, was validated over more than four years with two large multinational companies, Air France-KLM and PSA Peugeot-Citroën. Aggregating and weighting metrics to produce quality indices is a difficult task: certain weighting strategies may lead to anomalous situations in which a developer who improves the quality of a software component sees the overall quality degrade, and mapping combinations of metric values to quality indices can be problematic when thresholds are used.

N. Anquetil et al. [3] studied a real restructuring case (on the Eclipse platform) to better understand whether existing metrics would have helped the software architects in the task. The results showed that the cohesion and coupling metrics used in the experiment did not behave as expected and would probably not have helped the maintainers reach their goal. They also measured another possible restructuring intended to reduce the number of cyclic dependencies between modules; again, the results did not meet expectations.

F. A. Fontana et al. [4] aimed to provide supporting insights for assessing the code and design quality of a system, and in particular suggested using metric computation and antipattern detection together. They proposed metric computation based on particular kinds of micro-structures and the detection of structural and object-oriented antipatterns, with the aim of identifying areas for design improvement. They assessed the quality of a system according to different concerns, for instance by understanding its overall complexity, analyzing the cohesion and coupling of system modules, and spotting the most critical and complex components that need specific refactoring or maintenance.

M. von Detten et al. [5] presented Archimetrix, which enables the re-engineer to identify the most relevant deficiencies with respect to a reverse-engineered component-based architecture and supports them by showing the architectural consequences of removing a given deficiency. For this purpose, a reverse engineering step is required that recovers the system's components, subsystems, and connectors. However, reverse engineering techniques are severely affected by design deficiencies in the system's code base; for example, they can lead to incorrect component structures.


N. Yoshida et al. [6] proposed an approach to dividing source code into functional segments. Their approach used a cohesion metric over code fragments to identify the start and end points of each functional segment. During software maintenance, understanding source code is one of the most time-consuming activities. Good programming practice suggests that developers should insert blank lines to divide source code into functional segments, with a comment at the beginning of each one; these help engineers understand the functional division of the source code, such as the start and end points of each functional segment.

M. W. Asghar et al. [7] presented a tool that prioritizes (change) requirements by using artifact traceability information, to locate the requirements' implementation, and a set of code-based metrics, to measure several properties (e.g., coupling, size, scattering) of that implementation. The tool thus determines the requirement ordering with respect to how these requirements are implemented in a subject software system.

R. Shatnawi et al. [8] noted that software testers are regularly confronted with classes that contain faults, and that predicting a class's fault-proneness is essential for minimizing cost and increasing the effectiveness of software testing. Prior research on software metrics has shown strong relationships between software metrics and defects in object-oriented systems using a binary dependent variable; however, such models do not consider the history of defects in classes. A dependent variable was proposed that uses fault history to rate classes into four categories (none, low risk, medium risk and high risk) and to improve the predictive capability of fault models. The statistical differences among seven machine learning methods were evaluated to determine whether the proposed variable can be used to build better prediction models. The performance of the classifiers using the four categories was significantly better than with the binary variable, and the results showed improvements in the stability of the prediction models as the software matures. The fault history therefore improves the prediction of fault-proneness of classes in open-source systems.

S. Sato et al. [9] observed that the development organization frequently changes during software development; derivative developments, forks, and changes of developers due to acquisition or open-sourcing are some possible scenarios. However, the impact of such changes on software quality had yet to be elucidated. They introduced the notion of origins to study the effects of organizational changes on software quality, where a file's origin is defined by its development and change history. Applying this notion, two open source projects, each developed by a total of three organizations, were analyzed. A statistical analysis was conducted to investigate the relationship between the origins, product metrics, the number of changes, and defects. The results showed that files developed or modified by multiple organizations, or by later organizations, tend to be more fault-prone owing to the increase in complexity and change frequency.

A. Peer et al. [10] modeled the relationship between object-oriented metrics and software change proneness. The adaptive neuro-fuzzy inference system (ANFIS) was used to assess the change proneness of two open source software systems, and its performance was compared with other methods such as bagging, logistic regression and decision trees. The area under the receiver operating characteristic (ROC) curve was used to determine the effectiveness of the models. The analysis showed that, of all the methods investigated, ANFIS gave the best results for both software systems. The sensitivity and specificity of every method were also determined and used as measures of model effectiveness.

S. Ghaith et al. [11] proposed to use a concept known as the Transaction Profile, which gives a detailed, load-independent representation of a transaction, to detect anomalies in performance test runs. The approach uses information readily available in performance regression tests, together with a queueing network model of the system under test, to derive the Transaction Profiles. Their initial results showed that Transaction Profiles computed from load regression test data reveal the performance impact of any upgrade to the product. Hence they concluded that using Transaction Profiles is an effective way to allow testing teams to ensure that each new software release does not suffer from performance regression.

L. Vidács et al. [12] examined the impact of different test-suite reduction methods on the performance of fault localization and detection techniques. They also provided new combined methods that fuse both localization and detection aspects. They experimented with the SIR programs traditionally used in fault localization research, and extended the effort with large industrial software systems, including GCC and WebKit.

P. Oliveira et al. [13] described an empirical method for extracting relative thresholds from real systems and reported a study applying this method to a corpus of 106 systems. Based on the results of this study, they argued that the proposed thresholds express a balance between real and idealized design practices. The proposed thresholds are relative because they assume that metric thresholds should be followed by most source code entities, while it is also natural to have a number of entities in the "long tail" that do not follow the limits.

N. Baliyan et al. [14] performed a literature review and thereby drew a picture of ongoing research in this area. Various research gaps in the areas of software development process, software re-engineering, estimation, metrics, and quality models targeted at Software as a Service were identified, which can be a first step towards the definition of benchmarks and guidelines for Software as a Service development.
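Several of the surveyed studies (e.g., [8] and [10]) evaluate fault- or change-proneness models using the area under the ROC curve. A minimal, generic illustration of that evaluation step (hypothetical labels and scores, using scikit-learn; this is not the actual models or data of those papers) is:

from sklearn.metrics import roc_auc_score

# Hypothetical data: 1 = class found faulty, 0 = not faulty, together with
# the fault-proneness scores predicted by some metric-based model.
y_true  = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.10, 0.35, 0.80, 0.65, 0.20, 0.90, 0.55, 0.40]

# Area under the ROC curve: 1.0 is perfect ranking, 0.5 is random.
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")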


5. GAPS IN LITERATURE
Following are the gaps in the literature:
a. Many modified versions may exist for a single piece of software, so selecting the best one to re-engineer is a very critical task.
b. The role of code metrics in finding the best alternative for re-engineering has been neglected by most of the existing researchers.
c. No hybrid metric has been proposed to find a collective value.

6. CONCLUSION AND FUTURE SCOPE


In this paper, the various merits and the process of software re-engineering have been discussed. The survey of various metrics has shown that the role of code metrics in finding the best alternative for re-engineering has been neglected by most of the existing researchers. Moreover, no hybrid metric has been proposed to date to find a collective value. Therefore, in the near future, a hybrid metric approach can be proposed that efficiently finds the best alternative when selecting software to be re-engineered.
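As a purely hypothetical sketch of the direction suggested above (the metric names, weights, and min-max normalization are assumptions for illustration, not a proposal of this paper), a hybrid code metric could combine several normalized code metrics into one collective value and rank candidate software versions by it:

# Hypothetical sketch: combine several code metrics into one collective score
# so that candidate versions of a system can be ranked for re-engineering.
# Metric names, values, weights and normalization are illustrative only.
CANDIDATES = {
    "version_a": {"loc": 12000, "cyclomatic": 310, "coupling": 95,  "comment_ratio": 0.22},
    "version_b": {"loc": 15500, "cyclomatic": 420, "coupling": 120, "comment_ratio": 0.10},
    "version_c": {"loc": 9800,  "cyclomatic": 280, "coupling": 70,  "comment_ratio": 0.30},
}
# Positive weight = "higher is worse", negative weight = "higher is better".
WEIGHTS = {"loc": 0.2, "cyclomatic": 0.4, "coupling": 0.3, "comment_ratio": -0.1}

def normalize(values):
    """Min-max normalize a list of raw metric values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def hybrid_scores(candidates, weights):
    """Weighted sum of normalized metrics, one collective value per candidate."""
    names = list(candidates)
    scores = {name: 0.0 for name in names}
    for metric, weight in weights.items():
        raw = [candidates[n][metric] for n in names]
        for name, norm in zip(names, normalize(raw)):
            scores[name] += weight * norm
    return scores

# A higher collective value indicates poorer measured quality, i.e. a stronger
# candidate for re-engineering under this illustrative weighting.
for name, score in sorted(hybrid_scores(CANDIDATES, WEIGHTS).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")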

REFERENCES
[1] Alalfi, Manar H., James R. Cordy, and Thomas R. Dean. "Automating coverage metrics for dynamic web
applications." In Software Maintenance and Reengineering (CSMR), 2010 14th European Conference on, pp. 51-60. IEEE, 2010.
[2] Mordal-Manet, Karine, Jannik Laval, Stéphane Ducasse, Nicolas Anquetil, Françoise Balmas, Fabrice Bellingard,
Laurent Bouhier, Philippe Vaillergues, and Thomas J. McCabe. "An empirical model for continuous and weighted
metric aggregation." In Software Maintenance and Reengineering (CSMR), 2011 15th European Conference on,
pp. 141-150. IEEE, 2011.
[3] Anquetil, Nicolas, and Jannik Laval. "Legacy software restructuring: Analyzing a concrete case." In Software
Maintenance and Reengineering (CSMR), 2011 15th European Conference on, pp. 279-286. IEEE, 2011.
[4] Fontana, Francesca Arcelli, and Stefano Maggioni. "Metrics and Antipatterns for Software Quality Evaluation." In
Software Engineering Workshop (SEW), 2011 34th IEEE, pp. 48-56. IEEE, 2011.
[5] von Detten, Markus. "Archimetrix: A Tool for Deficiency-Aware Software Architecture Reconstruction." In
Reverse Engineering (WCRE), 2012 19th Working Conference on, pp. 503-504. IEEE, 2012.
[6] Yoshida, Norihiro, Masataka Kinoshita, and Hajimu Iida. "A cohesion metric approach to dividing source code
into functional segments to improve maintainability." In Software Maintenance and Reengineering (CSMR), 2012
16th European Conference on, pp. 365-370. IEEE, 2012.
[7] Asghar, M. Waseem, Alessandro Marchetto, Angelo Susi, and Giuseppe Scanniello. "Maintainability-based
requirements prioritization by using artifacts traceability and code metrics." In Software Maintenance and
Reengineering (CSMR), 2013 17th European Conference on, pp. 417-420. IEEE, 2013.
[8] Shatnawi, Raed. "Empirical study of fault prediction for open-source systems using the Chidamber and Kemerer
metrics." IET Software 8, no. 3 (2013): 113-119.
[9] Sato, Seiji, Hironori Washizaki, Yoshiaki Fukazawa, Sakae Inoue, Hiroyuki Ono, Yoshiiku Hanai, and Mikihiko
Yamamoto. "Effects of Organizational Changes on Product Metrics and Defects." In Software Engineering
Conference (APSEC), 2013 20th Asia-Pacific, vol. 1, pp. 132-139. IEEE, 2013.
[10] Peer, Akshit, and Ruchika Malhotra. "Application of adaptive neuro-fuzzy inference system for predicting software
change proneness." In Advances in Computing, Communications and Informatics (ICACCI), 2013 International
Conference on, pp. 2026-2031. IEEE, 2013.
[11] Ghaith, Shadi, Miao Wang, Philip Perry, and John Murphy. "Profile-based, load-independent anomaly detection
and analysis in performance regression testing of software systems." In Software Maintenance and Reengineering
(CSMR), 2013 17th European Conference on, pp. 379-383. IEEE, 2013.
[12] Vidács, László, Árpád Beszédes, Dávid Tengeri, István Siket, and Tibor Gyimóthy. "Test suite reduction for fault
detection and localization: A combined approach." In Software Maintenance, Reengineering and Reverse
Engineering (CSMR-WCRE), 2014 Software Evolution Week-IEEE Conference on, pp. 204-213. IEEE, 2014.
[13] Oliveira, Paloma, Marco Tulio Valente, and Fernando Paim Lima. "Extracting relative thresholds for source code
metrics." In Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE), 2014 Software
Evolution Week-IEEE Conference on, pp. 254-263. IEEE, 2014.
[14] Baliyan, Niyati, and Sandeep Kumar. "Towards software engineering paradigm for software as a service." In
Contemporary Computing (IC3), 2014 Seventh International Conference on, pp. 329-333. IEEE, 2014.

