
PROJECTS ON DOT NET

(IEEE 2010)
S.NO  PROJECT TITLE  TOPIC

1  ACHIEVING SECURE, SCALABLE, AND FINE-GRAINED DATA ACCESS CONTROL IN CLOUD COMPUTING  [Cloud Computing]

ABSTRACT:
Cloud computing is an emerging computing paradigm in which resources of the
computing infrastructure are provided as services over the Internet. This paper proposes
services for data security and access control when users outsource sensitive data for
sharing on cloud servers. It addresses this challenging open issue by, on one
hand, defining and enforcing access policies based on data attributes, and, on the other
hand, allowing the data owner to delegate most of the computation tasks involved in fine-
grained data access control to untrusted cloud servers without disclosing the underlying
data contents. Our proposed scheme enables the data owner to delegate tasks of data file
re-encryption and user secret key update to cloud servers without disclosing data contents
or user access privilege information. We achieve this goal by exploiting and uniquely
combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy
re-encryption. Our proposed scheme also has salient properties of user access privilege
confidentiality and user secret key accountability, and achieves fine-grainedness,
scalability, and data confidentiality for data access control in cloud computing. Extensive
analysis shows that our proposed scheme is highly efficient and provably secure under
existing security models.
2  ADAPTIVE JOIN OPERATORS FOR RESULT RATE OPTIMIZATION ON STREAMING INPUTS  [Data Mining]

ABSTRACT:
Modern information processing is moving into a realm where we often need to
process data that are pushed or pulled from autonomous data sources through
heterogeneous networks. Adaptive join algorithms have recently attracted a lot of
attention in emerging applications where data are provided by autonomous data sources
through heterogeneous network environments.
Their main advantage over traditional join techniques is that they can start
producing join results as soon as the first input tuples are available, thus, improving
pipelining by smoothing join result production and by masking source or network delays.
In this project, we first propose Double Index Nested-loops Reactive join
(DINER), a new adaptive two-way join algorithm for result rate maximization. DINER
combines two key elements: an intuitive flushing policy that aims to increase the
productivity of in-memory tuples in producing results during the online phase of the join,
and a novel reentrant join technique that allows the algorithm to rapidly switch between
processing in-memory and disk-resident tuples, thus better exploiting temporary delays
when new data are not available.
We then extend the applicability of the proposed technique for a more challenging
setup: handling more than two inputs. Multiple Index Nested-loop Reactive join
(MINER) is a multi-way join operator that inherits its principles from DINER. Our
experiments using real and synthetic data sets demonstrate that DINER outperforms
previous adaptive join algorithms in producing result tuples at a significantly higher rate,
while making better use of the available memory. Our experiments also show that in the
presence of multiple inputs, MINER manages to produce a high percentage of early
results.
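
A minimal C# sketch of the general idea behind such adaptive joins: a symmetric hash join that emits matches as soon as tuples arrive from either input. This illustrates the pipelined, rate-oriented behavior described above, not DINER itself; DINER's flushing policy and its reentrant disk phase are omitted.

    using System.Collections.Generic;

    // Symmetric hash join: every arriving tuple is inserted into its side's
    // hash table and immediately probed against the opposite side, so join
    // results are produced as soon as the first matching tuples are available.
    class SymmetricHashJoin
    {
        private readonly Dictionary<int, List<string>> left  = new Dictionary<int, List<string>>();
        private readonly Dictionary<int, List<string>> right = new Dictionary<int, List<string>>();

        // Called whenever a tuple (key, payload) arrives on either input.
        public IEnumerable<(string Left, string Right)> OnTuple(int key, string payload, bool fromLeft)
        {
            var build = fromLeft ? left : right;    // side the new tuple belongs to
            var probe = fromLeft ? right : left;    // opposite side to probe

            if (!build.TryGetValue(key, out var bucket))
                build[key] = bucket = new List<string>();
            bucket.Add(payload);

            if (probe.TryGetValue(key, out var matches))
                foreach (var m in matches)
                    yield return fromLeft ? (payload, m) : (m, payload);
        }
    }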
3  AI AND OPINION MINING  [Data Mining]
ABSTRACT:
The main goal of this project is extracting, classifying, understanding, and
assessing the opinions expressed in various online news sources. Here, opinion mining
refers to computational techniques for analyzing the opinions that are extracted from
various sources. Current opinion research focuses on business and e-commerce such as
product reviews and movie ratings.
We developed a framework for analysis with four major stages such as
stakeholder analysis, topical analysis, sentiment analysis and stock modeling. During the
stakeholder analysis stage, we identified the stakeholder groups participating in web
forum discussions. In the topical analysis stage, the major topics of discussion driving
communication in the Web forum are determined. The sentiment analysis stage consists
of assessing the opinions expressed by the Web forum participants in their discussions.
Finally, in the stock modeling stage, we examine the relationships between various
attributes of web forum discussions and the firm’s stock behavior.
Opinion target, opinion holder, and opinion are the notions used to extract
opinions from different online sources. An opinion can be expressed in one of two forms:
(1) a direct opinion or (2) a comparative opinion. All extracted opinions are stored in a document.
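
As a concrete illustration of the sentiment analysis stage, the C# sketch below scores a post with a simple opinion-word lexicon. The word lists and the scoring rule are illustrative assumptions, not the framework's actual classifier.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class SentimentSketch
    {
        // Tiny illustrative lexicons; a real system would use a large,
        // domain-tuned opinion-word list.
        static readonly HashSet<string> Positive = new HashSet<string> { "gain", "strong", "growth" };
        static readonly HashSet<string> Negative = new HashSet<string> { "loss", "weak", "decline" };

        // Returns a score in [-1, 1]: the net fraction of opinionated words.
        public static double Score(string post)
        {
            var words = post.ToLowerInvariant().Split(' ', '.', ',', '!', '?');
            int pos = words.Count(w => Positive.Contains(w));
            int neg = words.Count(w => Negative.Contains(w));
            return pos + neg == 0 ? 0.0 : (pos - neg) / (double)(pos + neg);
        }
    }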
4  AN ABUSE-FREE FAIR CONTRACT SIGNING PROTOCOL BASED ON THE RSA SIGNATURE  [Network Security]

ABSTRACT:
A fair contract signing protocol allows two potentially mistrusting parties to
exchange their commitments (i.e., digital signatures) to an agreed contract over the
Internet in a fair way, so that either each of them obtains the other’s signature, or neither
party does. Based on the RSA signature scheme, a new digital contract signing protocol is
proposed in this paper. Like the existing RSA-based solutions for the same problem, our
protocol is not only fair, but also optimistic, since the third trusted party is involved only
in the situations where one party is cheating or the communication channel is interrupted.
Furthermore, the proposed protocol satisfies a new property, i.e., it is abuse-free. That is,
if the protocol is executed unsuccessfully, none of the two parties can show the validity
of intermediate results to others. Technical details are provided to analyze the security
and performance of the proposed protocol. In summary, we present the first abuse-free
fair contract signing protocol based on the RSA signature, and show that it is both secure
and efficient.
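
The C# sketch below shows only the underlying RSA primitive, signing and verifying a contract with .NET's System.Security.Cryptography; the fair-exchange message flow and the optimistic trusted third party described above are not modeled.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class RsaSignatureSketch
    {
        static void Main()
        {
            byte[] contract = Encoding.UTF8.GetBytes("agreed contract text");

            using (RSA alice = RSA.Create())
            {
                // Alice commits to the contract by signing it with her private key.
                byte[] signature = alice.SignData(
                    contract, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

                // Bob verifies the commitment using only Alice's public key.
                RSAParameters alicePublicKey = alice.ExportParameters(false);
                using (RSA verifier = RSA.Create())
                {
                    verifier.ImportParameters(alicePublicKey);
                    bool valid = verifier.VerifyData(
                        contract, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
                    Console.WriteLine(valid ? "signature valid" : "signature invalid");
                }
            }
        }
    }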
5  CONDITIONAL SHORTEST PATH ROUTING IN DELAY TOLERANT NETWORKS  [Networking]
ABSTRACT:
This article studies delay tolerant networks (DTNs), in which each node
knows the probabilistic distribution of its contacts with other nodes. Delay tolerant networks
are characterized by the sporadic connectivity between their nodes and therefore the lack
of stable end-to-end paths from source to destination. Since the future node connections
are mostly unknown in these networks, opportunistic forwarding is used to deliver
messages. Based on the observations about human mobility traces and the findings of
previous work, we introduce a new metric called conditional intermeeting time. We
propose a Conditional Shortest Path Routing (CSPR) protocol that routes messages over
conditional shortest paths, in which the cost of links between nodes is defined by
conditional intermeeting times rather than the conventional intermeeting times. When a
node receives a message from one of its contacts, it stores the message in its buffer and
carries the message until it encounters another node which is at least as useful (in terms
of the delivery) as itself. Through trace-driven simulations, we demonstrate that CSPR
achieves higher delivery rate and lower end-to-end delay compared to the shortest path
based routing protocols that use the conventional intermeeting time as the link metric.
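
A minimal C# sketch of the path computation CSPR relies on: standard Dijkstra over a weight matrix whose entries are assumed to hold precomputed conditional intermeeting times (the protocol's distributed machinery is not shown).

    using System;
    using System.Linq;

    static class CsprSketch
    {
        // w[u, v] > 0 is the conditional intermeeting time between u and v;
        // 0 means the pair never meets. Returns the cost of the conditional
        // shortest path from src to dst.
        public static double ShortestPath(double[,] w, int src, int dst)
        {
            int n = w.GetLength(0);
            double[] dist = Enumerable.Repeat(double.PositiveInfinity, n).ToArray();
            bool[] settled = new bool[n];
            dist[src] = 0;

            for (int step = 0; step < n; step++)
            {
                int u = -1;                              // unsettled node with minimum cost
                for (int i = 0; i < n; i++)
                    if (!settled[i] && (u == -1 || dist[i] < dist[u])) u = i;
                if (u == -1 || double.IsPositiveInfinity(dist[u])) break;
                settled[u] = true;

                for (int v = 0; v < n; v++)              // relax edges weighted by the
                    if (w[u, v] > 0 && dist[u] + w[u, v] < dist[v])   // conditional intermeeting time
                        dist[v] = dist[u] + w[u, v];
            }
            return dist[dst];
        }
    }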
6  DYNAMIC AUTHENTICATION FOR CROSS-REALM SOA-BASED BUSINESS PROCESSES  [Mobile Computing]

ABSTRACT:

Modern distributed business applications are embedding an increasing degree of
automation and dynamism, from dynamic supply-chain management, enterprise
federations, and virtual collaborations to dynamic service interactions across
organizations. Such dynamism leads to new challenges in security and dependability. In
Service-Oriented Architecture (SOA), collaborating services may belong to different
security realms but often need to be engaged dynamically at runtime. If a cross-realm
authentication relationship cannot be generated dynamically at runtime between
heterogeneous security realms, it is technically difficult to enable dynamic business
processes through secure collaborations between services. A potential solution to this
problem is to generate a trust relationship across security realms so that a user can use the
credential in the local security realm to obtain the credentials to access resources in a
remote realm. However, the process of generating such kinds of trust relationships
between two disjoint security realms is very complex and time consuming, which could
involve a large number of extra operations for credential conversion and require
collaborations in multiple security realms. In this paper, we propose a new cross-realm
authentication protocol for dynamic service interactions. This protocol does not require
credential conversion or establishment of authentication paths.
7  ENSURING DATA STORAGE SECURITY IN CLOUD COMPUTING  [Cloud Computing]
ABSTRACT:

Cloud computing has been envisioned as the next-generation architecture of IT
enterprise. In contrast to traditional solutions, where the IT services are under proper
physical, logical and personnel controls, cloud computing moves the application software
and databases to the large data centers, where the management of the data and services
may not be fully trustworthy. This unique attribute, however, poses many new security
challenges which have not been well understood. In this article, we focus on cloud data
storage security, which has always been an important aspect of quality of service. To
ensure the correctness of users' data in the cloud, we propose an effective and flexible
distributed scheme with two salient features, in contrast to its predecessors. By utilizing
the homomorphic token with distributed verification of erasure-coded data, our scheme
achieves the integration of storage correctness insurance and data error localization, i.e.,
the identification of misbehaving server(s). Unlike most prior works, the new scheme
further supports secure and efficient dynamic operations on data blocks, including: data
update, delete and append. Extensive security and performance analysis shows that the
proposed scheme is highly efficient and resilient against Byzantine failure, malicious data
modification attack, and even server colluding attacks.
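
The C# sketch below is only a toy illustration of the challenge-response flavor of such verification, using an HMAC over a challenged block; the paper's actual construction is a homomorphic token over erasure-coded data, which additionally localizes errors, and is not reproduced here. The APIs used assume .NET Core 2.1 or later.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class StorageCheckSketch
    {
        static void Main()
        {
            byte[] key = new byte[32];
            RandomNumberGenerator.Fill(key);              // owner's secret verification key
            byte[] block = Encoding.UTF8.GetBytes("data block stored in the cloud");

            // Precomputed by the owner before outsourcing the block.
            byte[] expected = Hmac(key, block);

            // Later, the owner challenges the server; the server returns the
            // block it stores, and the owner recomputes the token over it.
            byte[] returned = block;                      // what an honest server sends back
            bool intact = CryptographicOperations.FixedTimeEquals(expected, Hmac(key, returned));
            Console.WriteLine(intact ? "block intact" : "misbehaving server");
        }

        static byte[] Hmac(byte[] key, byte[] data)
        {
            using (var h = new HMACSHA256(key)) return h.ComputeHash(data);
        }
    }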
8  FUZZY KEYWORD SEARCH OVER ENCRYPTED DATA IN CLOUD COMPUTING  [Cloud Computing]

ABSTRACT:
As Cloud Computing becomes prevalent, more and more sensitive information
is being centralized into the cloud. Although traditional searchable encryption schemes
allow a user to securely search over encrypted data through keywords and selectively
retrieve files of interest, these techniques support only exact keyword search. In this
paper, for the first time we formalize and solve the problem of effective fuzzy keyword
search over encrypted cloud data while maintaining keyword privacy. Fuzzy keyword
search greatly enhances system usability by returning the matching files when users’
searching inputs exactly match the predefined keywords or the closest possible matching
files based on keyword similarity semantics, when exact match fails. In our solution, we
exploit edit distance to quantify keyword similarity and develop two advanced
techniques for constructing fuzzy keyword sets, which achieve optimized storage and
representation overheads. We further propose a brand-new symbol-based trie-traverse
searching scheme, where a multi-way tree structure is built up using symbols transformed
from the resulting fuzzy keyword sets. Through rigorous security analysis, we show that
our proposed solution is secure and privacy-preserving, while correctly realizing the goal
of fuzzy keyword search. Extensive experimental results demonstrate the efficiency of
the proposed solution.
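
The two building blocks named above are easy to make concrete. The C# sketch below computes Levenshtein edit distance and builds the wildcard-based fuzzy keyword set for edit distance 1, where each possible edit position is replaced by a '*' to keep the set small.

    using System;
    using System.Collections.Generic;

    static class FuzzyKeywordSketch
    {
        // Classic dynamic-programming Levenshtein edit distance.
        public static int EditDistance(string a, string b)
        {
            var d = new int[a.Length + 1, b.Length + 1];
            for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
            for (int j = 0; j <= b.Length; j++) d[0, j] = j;
            for (int i = 1; i <= a.Length; i++)
                for (int j = 1; j <= b.Length; j++)
                    d[i, j] = Math.Min(
                        Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                        d[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1));
            return d[a.Length, b.Length];
        }

        // Wildcard set for edit distance 1, e.g. "cat" -> { cat, *cat, *at, c*at, c*t, ... }.
        public static HashSet<string> WildcardSet(string w)
        {
            var set = new HashSet<string> { w };
            for (int i = 0; i <= w.Length; i++)
            {
                set.Add(w.Substring(0, i) + "*" + w.Substring(i));          // insertion position
                if (i < w.Length)
                    set.Add(w.Substring(0, i) + "*" + w.Substring(i + 1));  // substitution/deletion position
            }
            return set;
        }
    }

Two keywords then fuzzy-match when their wildcard sets intersect, which is what lets the index stay compact instead of enumerating all edit variants.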
9  PRICING UNDER CONSTRAINTS IN ACCESS NETWORKS: REVENUE MAXIMIZATION AND CONGESTION MANAGEMENT  [Networking]

ABSTRACT:
Here we investigate pricing of Internet connectivity services in the context of a
monopoly ISP selling broadband access to consumers. We first study the optimal
combination of flat-rate and usage-based access price components for maximization of
ISP revenue, subject to a capacity constraint on the data rate demand. Next, we consider
time-varying consumer utilities for broadband data rates that can result in uneven demand
for data rate over time. Practical considerations limit the viability of altering prices over
time to smooth out the demanded data rate.
Despite such constraints on pricing, our analysis reveals that the ISP can retain the
revenue by setting a low usage fee and dropping packets of consumer-demanded data that
exceed capacity. Regulatory attention on ISP congestion management discourages such
“technical” practices and promotes economics-based approaches. We characterize the loss
in ISP revenue from an economics-based approach. Regulatory requirements further
impose limitations on price discrimination across consumers, and we derive the revenue
loss to the ISP from such restrictions. We then develop partial recovery of revenue loss
through non-linear pricing that does not explicitly discriminate across consumers. While
determination of the access price is ultimately based on additional considerations beyond
the scope of this paper, the analysis here can serve as a benchmark to structure access
price in broadband access networks.
10  SIGNATURE-FREE BUFFER OVERFLOW ATTACK BLOCKER  [Mobile Computing]
ABSTRACT:
This project proposes SigFree, a real-time, signature-free, out-of-the-box,
application-layer blocker for preventing buffer overflow attacks, one of the most serious
cyber security threats. SigFree can filter out code-injection buffer overflow attack
messages targeting various Internet services such as web services. Motivated by the
observation that buffer overflow attacks typically contain executables whereas legitimate
client requests never contain executables in most Internet services, SigFree blocks attacks
by detecting the presence of code. SigFree first blindly disassembles and extracts
instruction sequences from a request. It then applies a novel technique called code
abstraction, which uses data flow anomaly to prune useless instructions in an instruction
sequence. Finally, it compares the number of useful instructions to a threshold to
determine whether the instruction sequence contains code. SigFree is signature-free, thus it can
block new and unknown buffer overflow attacks; SigFree is also immunized from most
attack-side code obfuscation methods. Since SigFree is transparent to the servers being
protected, it is good for economical Internet-wide deployment with very low deployment
and maintenance cost. We implemented and tested SigFree; our experimental study
showed that SigFree could block all types of code injection attack packets (above 250)
tested in our experiments. Moreover, SigFree causes negligible throughput degradation to
normal client requests.
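
The final decision step lends itself to a schematic C# sketch: given the instruction sequences that survive code abstraction, a request is flagged when any sequence still holds more useful instructions than a threshold. Disassembly and the data-flow analysis are not shown, and the threshold value here is illustrative, not taken from the paper.

    using System.Collections.Generic;
    using System.Linq;

    static class SigFreeSketch
    {
        const int Threshold = 15;   // illustrative cutoff, not the paper's value

        // 'prunedSequences' holds, per extracted sequence, the instructions
        // that survived code abstraction (useless ones already removed).
        public static bool ContainsCode(IEnumerable<List<string>> prunedSequences)
            => prunedSequences.Any(seq => seq.Count > Threshold);
    }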
11  SLOW ADAPTIVE OFDMA SYSTEM THROUGH CHANCE CONSTRAINED PROGRAMMING  [Networking]
ABSTRACT:
Adaptive OFDMA has recently been recognized as a promising technique for
providing high spectral efficiency in future broadband wireless systems. The research
over the last decade on adaptive OFDMA systems has focused on adapting the allocation
of radio resources, such as subcarriers and power, to the instantaneous channel
conditions of all users. However, such “fast” adaptation requires high computational
complexity and excessive signaling overhead. This hinders the deployment of adaptive
OFDMA systems worldwide. This paper proposes a slow adaptive OFDMA scheme, in
which the subcarrier allocation is updated on a much slower timescale than that of the
fluctuation of instantaneous channel conditions. Meanwhile, the data rate requirements of
individual users are accommodated on the fast timescale with high probability, thereby
meeting the requirements except for occasional outages. Such an objective has a natural
chance constrained programming formulation, which is known to be intractable. To
circumvent this difficulty, we formulate safe tractable constraints for the problem based
on recent advances in chance constrained programming. We then develop a polynomial-
time algorithm for computing an optimal solution to the reformulated problem. Our
results show that the proposed slow adaptation scheme drastically reduces both
computational cost and control signaling overhead when compared with the conventional
fast adaptive OFDMA. Our work can be viewed as an initial attempt to apply the chance
constrained programming methodology to wireless system designs. Given that most
wireless systems can tolerate an occasional dip in the quality of service, we hope that the
proposed methodology will find further applications in wireless communications.
PROJECTS ON DOT NET

S.NO  YEAR  PROJECT TITLE  TOPIC

1. AUG 2009  A Gen2-Based RFID Authentication Protocol for Security and Privacy  [Mobile Computing]
Abstract:
EPCglobal Class-1 Generation-2 specification (Gen2 in brief) has been
approved as ISO 18000-6C for global use, but the identity of a tag (TID) is transmitted in
plaintext, which makes the tag traceable and clonable. Several solutions have been
proposed based on traditional encryption methods, such as symmetric or asymmetric
ciphers, but they are not suitable for low-cost RFID tags. Recently, some lightweight
authentication protocols conforming to Gen2 have been proposed. However, the
message flow of these protocols is different from Gen2, so existing readers may fail to
read new tags. In this paper, we propose a novel authentication protocol based on
Gen2, called Gen2+, for low-cost RFID tags. Our protocol follows every message flow
in Gen2 to provide backward compatibility. Gen2+ is a multiple-round protocol using
shared pseudonyms and Cyclic Redundancy Check (CRC) to achieve reader-to-tag
authentication. Conversely, Gen2+ uses the memory read command defined in Gen2 to
achieve tag-to-reader authentication. We show that Gen2+ is more secure under tracing
and cloning attacks.
2. APRIL 2009  A TABU Search Algorithm for Cluster Building in Wireless Sensor Networks  [Mobile Computing]
Abstract:
The main challenge in wireless sensor network deployment pertains to optimizing
energy consumption when collecting data from sensor nodes. This paper proposes a
new centralized clustering method for a data collection mechanism in wireless sensor
networks, which is based on network energy maps and Quality-of-Service (QoS)
requirements. The clustering problem is modeled as hypergraph partitioning, and its
resolution is based on a TABU search heuristic. Our approach defines moves using
largest size cliques in a feasibility cluster graph. Compared to other methods (CPLEX-
based method, distributed method, simulated annealing-based method), the results
show that our TABU search-based approach returns high-quality solutions in terms of
cluster cost and execution time. As a result, this approach is suitable for handling
network extensibility in a satisfactory manner.
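
A generic C# skeleton of the tabu search loop used here may help: iterate over neighborhood moves, forbid recently applied moves through a tabu list, and track the best solution found. The problem-specific pieces (the hypergraph model, clique-based moves, and the cluster cost function) are assumed to be supplied by the caller.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class TabuSearch
    {
        public static TSolution Run<TSolution, TMove>(
            TSolution start,
            Func<TSolution, IEnumerable<TMove>> neighborhood,
            Func<TSolution, TMove, TSolution> apply,
            Func<TSolution, double> cost,
            int tabuTenure = 20,
            int maxIterations = 1000)
        {
            TSolution current = start, best = start;
            var tabu = new Queue<TMove>();

            for (int it = 0; it < maxIterations; it++)
            {
                // Cheapest non-tabu move in the current neighborhood.
                var candidates = neighborhood(current).Where(m => !tabu.Contains(m)).ToList();
                if (candidates.Count == 0) break;
                TMove move = candidates.OrderBy(m => cost(apply(current, m))).First();

                current = apply(current, move);
                if (cost(current) < cost(best)) best = current;

                tabu.Enqueue(move);                        // forbid revisiting this move
                if (tabu.Count > tabuTenure) tabu.Dequeue();
            }
            return best;
        }
    }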
3. 2009  A Survey on Privacy Preserving Data Mining  [Data Engineering]
Abstract:
Privacy preservation has become an important issue in the development of
data mining techniques. Privacy preserving data mining has become increasingly
popular because it allows sharing of privacy-sensitive data for analysis purposes. At the
same time, privacy concerns have made people increasingly unwilling to share their data,
frequently resulting in individuals either refusing to share their data or providing incorrect data. In turn, such
problems in data collection can affect the success of data mining, which relies on
sufficient amounts of accurate data in order to produce meaningful results. In recent
years, the wide availability of personal data has made the problem of privacy
preserving data mining an important one. A number of methods have recently been
proposed for privacy preserving data mining of multidimensional data records. This
paper intends to reiterate several privacy preserving data mining technologies clearly
and then proceeds to analyze the merits and shortcomings of these technologies.
4. JUNE 2009  Analysis of Shortest Path Routing for Large Multi-Hop Wireless Networks  [Networking]
Abstract:
In this paper, we analyze the impact of straight line routing in large
homogeneous multi-hop wireless networks. We estimate the nodal load, which is
defined as the number of packets served at a node, induced by straight line routing. For
a given total offered load on the network, our analysis shows that the nodal load at each
node is a function of the node’s Voronoi cell, the node’s location in the network, and
the traffic pattern specified by the source and destination randomness and straight line
routing. In the asymptotic regime, we show that the probability that a node
serves a packet arriving at the network approaches the product of half the length of the
node’s Voronoi cell perimeter and the load density function that a packet goes through the
node’s location. The density function depends on the traffic pattern generated by
straight line routing, and determines where the hot spot is created in the network.
Hence, contrary to conventional wisdom, straight line routing can balance the load over
the network, depending on the traffic patterns.
5. APR 2009  Biased Random Walks in Uniform Wireless Networks  [Mobile Computing]
Abstract:
A recurrent problem when designing distributed applications is to search for a
node with a known property. File searching in peer-to-peer (P2P) applications, resource
discovery in service-oriented architectures (SOAs), and path discovery in routing can
all be cast as a search problem. Random walk-based search algorithms are often
suggested for tackling the search problem, especially in very dynamic systems like
mobile wireless networks. The cost and the effectiveness of a random walk-based
search algorithm are measured by the expected number of transmissions required
before hitting the target. Hence, achieving a low hitting time is a critical goal. This paper
studies the effect of biasing random walk toward the target on the hitting time. For a
walk running over a network with uniform node distribution, a simple upper bound that
connects the hitting time to the bias level is obtained. The key result is that even a
modest bias level is able to reduce the hitting time significantly. This paper also
proposes a search protocol for mobile wireless networks, whose results are interpreted
in the light of the theoretical study. The proposed solution is for unstructured wireless
mobile networks.
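
A single step of such a biased walk is simple to state; the C# sketch below moves to the neighbor closest to the target with probability equal to the bias level, and to a uniformly random neighbor otherwise. The distance function is an assumption standing in for whatever proximity information the protocol has.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class BiasedWalkSketch
    {
        static readonly Random Rng = new Random();

        public static int NextHop(
            int here, int target,
            Func<int, IReadOnlyList<int>> neighbors,     // adjacency of the network
            Func<int, int, double> distance,             // assumed proximity measure
            double bias)                                 // bias level in [0, 1]
        {
            IReadOnlyList<int> ns = neighbors(here);
            if (Rng.NextDouble() < bias)
                return ns.OrderBy(n => distance(n, target)).First();   // biased step
            return ns[Rng.Next(ns.Count)];                             // uniform step
        }
    }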
6. JUN 2009  Cell Breathing Techniques for Load Balancing in Wireless LANs
Abstract:
Maximizing network throughput while providing fairness is one of the key
challenges in wireless LANs (WLANs). This goal is typically achieved when the load
of access points (APs) is balanced. Recent studies on operational WLANs, however,
have shown that AP load is often substantially uneven. To alleviate such imbalance of
load, several load balancing schemes have been proposed. These schemes commonly
require proprietary software or hardware at the user side for controlling the user-AP
association. In this paper we present a new load balancing technique by controlling the
size of WLAN cells (i.e., AP’s coverage range), which is conceptually similar to cell
breathing in cellular networks. The proposed scheme does not require any modification
to the user side nor to the IEEE 802.11 standard. It only requires the ability to
dynamically change the transmission power of the AP beacon messages. We develop
a set of polynomial time algorithms that find the optimal beacon power settings which
minimize the load of the most congested AP. We also consider the problem of
network-wide min-max load balancing. Simulation results show that the performance
of the proposed method is comparable with or superior to the best existing association-
based methods.
7. AUG 2009  Delay Analysis for Maximal Scheduling With Flow Control in Wireless Networks With Bursty Traffic  [Networking]
Abstract:
We consider the delay properties of one-hop networks with general interference
constraints and multiple traffic streams with time-correlated arrivals. We first treat the
case when arrivals are modulated by independent finite state Markov chains. We show
that the well-known maximal scheduling algorithm achieves average delay that grows
at most logarithmically in the largest number of interferers at any link. Further, in the
important special case when each Markov process has at most two states (such as
Bursty ON/OFF sources), we prove that average delay is independent of the number of
nodes and links in the network, and hence is order-optimal. We provide tight delay
bounds in terms of the individual auto-correlation parameters of the traffic sources.
These are perhaps the first order-optimal delay results for controlled queuing networks
that explicitly account for such statistical information. Our analysis treats cases both
with and without flow control.
8. JUN 2009  Energy Maps for Mobile Wireless Networks: Coherence Time versus Spreading Period  [Mobile Computing]
Abstract:
We show that even though mobile networks are highly unpredictable when
viewed at the individual node scale, the end-to-end quality-of-service (QoS) metrics
can be stationary when the mobile network is viewed in the aggregate. We define the
coherence time as the maximum duration for which the end-to-end QoS metric remains
roughly constant, and the spreading period as the minimum duration required to spread
QoS information to all the nodes. We show that if the coherence time is greater than
the spreading period, the end-to-end QoS metric can be tracked. We focus on the
energy consumption as the end-to-end QoS metric, and describe a novel method by
which an energy map can be constructed and refined in the joint memory of the mobile
nodes. Finally, we show how energy maps can be utilized by an application that aims
to minimize a node’s total energy consumption over its near-future trajectory.
9. SEPT 2009  Enforcing Minimum-Cost Multicast Routing against Selfish Information Flows  [Parallel and Distributed Systems]
Abstract:
We study multicast in a noncooperative environment where information flows
selfishly route themselves through the cheapest paths available. The main challenge is
to enforce such selfish multicast flows to stabilize at a socially optimal operating point
incurring minimum total edge cost, through appropriate cost allocation and other
economic measures, with replicable and encodable properties of information flows
considered. We show that known cost allocation schemes are not sufficient. We
provide a shadow-price-based cost allocation for networks without capacity limits and
show that it enforces minimum-cost multicast. This improves previous result where a
2-approximate multicast flow is enforced. For capacitated networks, computing cost
allocation by ignoring edge capacities will not yield correct results. We show that an
edge tax scheme can be combined with a cost allocation to strictly enforce optimal
multicast flows in this more realistic case. If taxes are not desirable, they can be
returned to flows while maintaining weak enforcement of the optimal flow. We relate
the taxes to VCG payment schemes and discuss an efficient primal-dual algorithm that
simultaneously computes the taxes, the cost allocation, and the optimal multicast flow,
with the potential for fully distributed implementation.
10. FEB 2009  Explicit Load Balancing Technique for NGEO Satellite IP Networks with On-Board Processing Capability  [Networking]
Abstract:
Non-geostationary (NGEO) satellite communication systems offer an array of
advantages over their terrestrial and geostationary counterparts. They are seen as an
integral part of next generation ubiquitous communication systems. Given the non-
uniform distribution of users in satellite footprints, due to several geographical and/or
climatic constraints, some Inter-Satellite Links (ISLs) are expected to be heavily loaded
with data packets while others remain underutilized. Such a scenario obviously leads to
congestion of the heavily loaded links. It ultimately results in buffer overflows, higher
queuing delays, and significant packet drops. To guarantee a better distribution of
traffic among satellites, this paper proposes an explicit exchange of information on
congestion status among neighboring satellites. Indeed, a satellite notifies its
congestion status to its neighboring satellites. When it is about to get congested, it
requests its neighboring satellites to decrease their data forwarding rates by sending
them a self status notification signaling message. In response, the neighboring satellites
search for less congested paths that do not include the satellite in question and
communicate a portion of data, primarily destined to the satellite, via the retrieved
paths. This operation avoids both congestion and packet drops at the satellite. It also
ensures a better distribution of traffic over the entire satellite constellation. The
proposed scheme is dubbed the “Explicit Load Balancing” (ELB) scheme. While the
multi-path routing concept of ELB has many advantages, it may lead to persistent
packet reordering. In the case of connection-oriented protocols, this phenomenon results in
unnecessary shrinkage of the data transmission rate. A solution to this issue is also
incorporated in the design of ELB. The interactions of ELB with mechanisms that
provide different QoS by differentiating traffic (e.g., Differentiated Services) are also
discussed. The good performance of ELB, in terms of better traffic distribution, higher
throughput, and lower packet drops, is verified via a set of simulations using the
Network Simulator (NS).
11. JULY 2009  Greedy Routing with Anti-Void Traversal for Wireless Sensor Networks  [Mobile Computing]
Abstract:
The unreachability problem (i.e., the so-called void problem) that exists in
greedy routing algorithms has been studied for wireless sensor networks. Some of
the current research work cannot fully resolve the void problem, while there exist other
schemes that can guarantee the delivery of packets with the excessive consumption of
control overheads. In this paper, a greedy anti-void routing (GAR) protocol is proposed
to solve the void problem with increased routing efficiency by exploiting the boundary
finding technique for the unit disk graph (UDG). The proposed rolling-ball UDG
boundary traversal (RUT) is employed to completely guarantee the delivery of packets
from the source to the destination node under the UDG network. The boundary map
(BM) and the indirect map searching (IMS) scheme are proposed as efficient
algorithms for the realization of the RUT technique. Moreover, the hop count reduction
(HCR) scheme is utilized as a short-cutting technique to reduce the routing hops by
listening to the neighbor’s traffic, while the intersection navigation (IN) mechanism is
proposed to obtain the best rolling direction for boundary traversal with the adoption of
shortest path criterion. In order to maintain the network requirement of the proposed
RUT scheme under the non-UDG networks, the partial UDG construction (PUC)
mechanism is proposed to transform the non-UDG into UDG setting for a portion of
nodes that facilitate boundary traversal. These three schemes are incorporated within
the GAR protocol to further enhance the routing performance with reduced
communication overhead. The proofs of correctness for the GAR scheme are also
given in this paper. Compared with the existing localized routing algorithms, the
simulation results show that the proposed GAR-based protocols can provide better
routing efficiency.
12. AUG 2009  Information Content-Based Sensor Selection and Transmission Power Adjustment for Collaborative Target Tracking  [Mobile Computing]
Abstract:
For target tracking applications, wireless sensor nodes provide accurate
information since they can be deployed and operated near the phenomenon. These
sensing devices have the opportunity of collaboration among themselves to improve
the target localization and tracking accuracies. An energy-efficient collaborative target
tracking paradigm is developed for wireless sensor networks (WSNs). A mutual-
information-based sensor selection (MISS) algorithm is adopted for participation in the
fusion process. MISS allows the sensor nodes with the highest mutual information
about the target state to transmit data so that the energy consumption is reduced while
the desired target position estimation accuracy is met. In addition, a novel approach to
energy savings in WSNs is devised in the information-controlled transmission power
(ICTP) adjustment, where nodes with more information use higher transmission
powers than those that are less informative to share their target state information with
the neighboring nodes. Simulations demonstrate the performance gains offered by
MISS and ICTP in terms of power consumption and target localization accuracy.
13. 2009  Local Construction of Near-Optimal Power Spanners for Wireless Ad Hoc Networks  [Networking]
Abstract:
We present a local distributed algorithm that, given a wireless ad hoc network
modeled as a unit disk graph U in the plane, constructs a planar power spanner of U
whose degree is bounded by k and whose stretch factor is bounded by 1 + (2 sin(π/k))^ρ,
where k ≥ 10 is an integer parameter and ρ ∈ [2, 5] is the power exponent constant. For
the same degree bound k, the stretch factor of our algorithm significantly improves the
previous best bounds by Song et al. We show that this bound is near-optimal by
proving that the slightly smaller stretch factor of 1 + (2 sin(π/(k+1)))^ρ is unattainable for
the same degree bound k. In contrast to previous algorithms for the problem, the
presented algorithm is local. As a consequence, the algorithm is highly scalable and
robust. Finally, while the algorithm is efficient and easy to implement in practice, it
relies on deep insights on the geometry of unit disk graphs and novel techniques that
are of independent interest.
14. SEP 2009  Movement-Assisted Connectivity Restoration in Wireless Sensor and Actor Networks  [Parallel and Distributed Systems]
Abstract:
Recent years have witnessed a growing interest in applications of wireless
sensor and actor networks (WSANs). In these applications, a set of mobile actor nodes
are deployed in addition to sensors in order to collect sensors’ data and perform
specific tasks in response to detected events/objects. In most scenarios, actors have to
respond collectively, which requires interactor coordination. Therefore, maintaining a
connected interactor network is critical to the effectiveness of WSANs. However,
WSANs often operate unattended in harsh environments where actors can easily fail or
get damaged. An actor failure may lead to partitioning the interactor network and thus
hinder the fulfillment of the application requirements. In this paper, we present DARA,
a Distributed Actor Recovery Algorithm, which opts to efficiently restore the
connectivity of the interactor network that has been affected by the failure of an actor.
Two variants of the algorithm are developed to address 1- and 2-connectivity
requirements. The idea is to identify the least set of actors that should be repositioned
in order to reestablish a particular level of connectivity. DARA strives to localize the
scope of the recovery process and minimize the movement overhead imposed on the
involved actors. The effectiveness of DARA is validated through simulation
experiments.
15. AUG 2009  On the Planning of Wireless Sensor Networks: Energy-Efficient Clustering under the Joint Routing and Coverage Constraint  [Mobile Computing]
Abstract:
Minimizing energy dissipation and maximizing network lifetime are important
issues in the design of applications and protocols for sensor networks. Energy-efficient
sensor state planning consists in finding an optimal assignment of states to sensors in
order to maximize network lifetime. For example, in area surveillance applications,
only an optimal subset of sensors that fully covers the monitored area can be switched
on while the other sensors are turned off. In this paper, we address the optimal planning
of sensors’ states in cluster-based sensor networks. Typically, any sensor can be turned
on, turned off, or promoted to cluster head, and a different power consumption level is
associated with each of these states. We seek an energy-optimal topology that
maximizes network lifetime while ensuring simultaneously full area coverage and
sensor connectivity to cluster heads, which are constrained to form a spanning tree
used as a routing topology. First, we formulate this problem as an Integer Linear
Programming model that we prove NP-complete. Then, we implement a Tabu search
heuristic to tackle the exponentially increasing computation time of the exact
resolution. Experimental results show that the proposed heuristic provides near-optimal
network lifetime values within low computation times, which is, in practice, suitable
for large-sized sensor networks.
16. 2009  Performance of Orthogonal Fingerprinting Codes under Worst-Case Noise  [Information Forensics and Security]
Abstract:
We study the effect of the noise distribution on the error probability of the
detection test when a class of randomly rotated spherical fingerprints is used. The
detection test is performed by a focused correlation detector, and the spherical codes
studied here form a randomized orthogonal constellation. The colluders create a noise-
free forgery by uniform averaging of their individual copies, and then add a noise
sequence to form the actual forgery. We derive the noise distribution that maximizes
the error probability of the detector under average and almost-sure distortion
constraints. Moreover, we characterize the noise distribution that minimizes the
decoder's error exponent under a large-deviations distortion constraint.
17. AUG 2009  PRESTO: Feedback-Driven Data Management in Sensor Networks  [Networking]
Abstract:
This paper presents PRESTO, a novel two-tier sensor data management
architecture comprising proxies and sensors that cooperate with one another for
acquiring data and processing queries. PRESTO proxies construct time-series models
of observed trends in the sensor data and transmit the parameters of the model to
sensors. Sensors check sensed data with model-predicted values and transmit only
deviations from the predictions back to the proxy. Such a model-driven push approach
is energy-efficient, while ensuring that anomalous data trends are never missed. In
addition to supporting queries on current data, PRESTO also supports queries on
historical data using interpolation and local archival at sensors. PRESTO can adapt
model and system parameters to data and query dynamics to further extract energy
savings. We have implemented PRESTO on a sensor testbed comprising Intel Stargates
and Telos Motes. Our experiments show that in a temperature monitoring application,
PRESTO yields one to two orders of magnitude reduction in energy requirements over
on-demand, proactive or model-driven pull approaches. PRESTO also results in an
order of magnitude reduction in query latency in a 1% duty-cycled five hop sensor
network over a system that forwards all queries to remote sensor nodes.
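
The sensor-side check at the heart of this model-driven push is compact; in the C# sketch below a linear model stands in for PRESTO's seasonal time-series model, and the parameters and threshold are assumed to be pushed down by the proxy.

    using System;

    class PrestoSensorSketch
    {
        readonly double slope, intercept;   // model parameters supplied by the proxy
        readonly double threshold;          // maximum tolerated prediction error

        public PrestoSensorSketch(double slope, double intercept, double threshold)
        {
            this.slope = slope;
            this.intercept = intercept;
            this.threshold = threshold;
        }

        // Returns true when the reading deviates enough from the model's
        // prediction that it must be pushed to the proxy.
        public bool MustTransmit(double time, double reading)
        {
            double predicted = slope * time + intercept;
            return Math.Abs(reading - predicted) > threshold;
        }
    }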
18. AUG 2009  RandomCast: An Energy-Efficient Communication Scheme for Mobile Ad Hoc Networks  [Mobile Computing]
Abstract:
In mobile ad hoc networks (MANETs), every node overhears every data
transmission occurring in its vicinity and thus consumes energy unnecessarily. The IEEE
802.11 Power Saving Mechanism (PSM) can reduce this cost by letting nodes sleep and
avoid overhearing. However, since some MANET routing protocols such as Dynamic
Source Routing (DSR) collect route information via overhearing, they would suffer if used in
combination with 802.11 PSM. Allowing no overhearing may critically deteriorate the
performance of the underlying routing protocol, while unconditional overhearing may
offset the advantage of using PSM. This paper proposes a new communication
mechanism, called RandomCast, via which a sender can specify the desired level of
overhearing, making a prudent balance between energy and routing performance. In
addition, it reduces redundant rebroadcasts for a broadcast packet, and thus, saves more
energy. Extensive simulation using ns-2 shows that RandomCast is highly energy-
efficient compared to conventional 802.11 as well as 802.11 PSM-based schemes, in
terms of total energy consumption, energy goodput, and energy balance.
19. JUNE 2009  Re-sequencing Analysis of Stop-and-Wait ARQ for Parallel Multichannel Communications  [Networking]
Abstract:
In this paper, we consider a multichannel data communication system in which
the stop-and-wait automatic-repeat request protocol for parallel channels with an in-
sequence delivery guarantee (MSW-ARQ-INS) is used for error control. We evaluate
the re-sequencing delay and the re-sequencing buffer occupancy, respectively. Under
the assumption that all channels have the same transmission rate but possibly different
time-invariant error rates, we derive the probability generating function of the re-
sequencing buffer occupancy and the probability mass function of the re-sequencing
delay. Then, by assuming the Gilbert–Elliott model for each channel, we extend our
analysis to time-varying channels. Through examples, we compute the probability
mass functions of the re-sequencing buffer occupancy and the re-sequencing delay for
time-invariant channels. From numerical and simulation results, we analyze trends in
the mean re-sequencing buffer occupancy and the mean re-sequencing delay as
functions of system parameters. We expect that the modeling technique and analytical
approach used in this paper can be applied to the performance evaluation of other ARQ
protocols (e.g., the selective-repeat ARQ) over multiple time-varying channels.
20. JUNE 2009  Resource Allocation in OFDMA Wireless Communications Systems Supporting Multimedia Services  [Networking]
Abstract:
We design a resource allocation algorithm for downlink of orthogonal frequency
division multiple access (OFDMA) systems supporting real-time (RT) and best-effort
(BE) services simultaneously over a time-varying wireless channel. The proposed
algorithm aims at maximizing system throughput while satisfying the quality of service
(QoS) requirements of the RT and BE services. We take two kinds of QoS
requirements into account. One is the required average transmission rate for both RT
and BE services. The other is the tolerable average absolute deviation of transmission
rate (AADTR) just for the RT services, which is used to control the fluctuation in
transmission rates and to limit the RT packet delay to a moderate level. We formulate
the optimization problem representing the resource allocation under consideration and
solve it by using the dual optimization technique and the projection stochastic
subgradient method. Simulation results show that the proposed algorithm meets the
QoS requirements well with high throughput and outperforms the modified largest
weighted delay first (M-LWDF) algorithm, which supports similar QoS requirements.
21. SEPT 2009  Route Stability in MANETs under the Random Direction Mobility Model  [Mobile Computing]
Abstract:
A fundamental issue arising in mobile ad hoc networks (MANETs) is the
selection of the optimal path between any two nodes. A method that has been
advocated to improve routing efficiency is to select the most stable path so as to reduce
the latency and the overhead due to route reconstruction. In this work, we study both
the availability and the duration probability of a routing path that is subject to link
failures caused by node mobility. In particular, we focus on the case where the network
nodes move according to the Random Direction model, and we derive both exact and
approximate (but simple) expressions of these probabilities. Through our results, we
study the problem of selecting an optimal route in terms of path availability. Finally,
we propose an approach to improve the efficiency of reactive routing protocols.
Algorithm/Technique: Random Direction model.
22. JUNE 2009  Secure and Policy-Compliant Source Routing  [Networking]
Abstract:
In today’s Internet, inter-domain route control remains elusive; nevertheless,
such control could improve the performance, reliability, and utility of the network for
end users and ISPs alike. While researchers have proposed a number of source routing
techniques to combat this limitation, there has thus far been no way for independent
ASes to ensure that such traffic does not circumvent local traffic policies, nor to
accurately determine the correct party to charge for forwarding the traffic. We present
Platypus, an authenticated source routing system built around the concept of network
capabilities, which allow for accountable, fine-grained path selection by
cryptographically attesting to policy compliance at each hop along a source route.
Capabilities can be composed to construct routes through multiple ASes and can be
delegated to third parties. Platypus caters to the needs of both end users and ISPs: users
gain the ability to pool their resources and select routes other than the default, while
ISPs maintain control over where, when, and whose packets traverse their networks.
We describe the design and implementation of an extensive Platypus policy framework
that can be used to address several issues in wide-area routing at both the edge and the
core, and evaluate its performance and security. Our results show that incremental
deployment of Platypus can achieve immediate gains.
23. 2009  Single-Link Failure Detection in All-Optical Networks Using Monitoring Cycles and Paths  [Networking]
Abstract:
In this paper, we consider the problem of fault localization in all-optical
networks. We introduce the concept of monitoring cycles (MCs) and monitoring paths
(MPs) for unique identification of single-link failures. MCs and MPs are required to
pass through one or more monitoring locations. They are constructed such that any
single-link failure results in the failure of a unique combination of MCs and MPs that
pass through the monitoring location(s). For a network with only one monitoring
location, we prove that three-edge connectivity is a necessary and sufficient condition
for constructing MCs that uniquely identify any single-link failure in the network. For
this case, we formulate the problem of constructing MCs as an integer linear program
(ILP). We also develop heuristic approaches for constructing MCs in the presence of
one or more monitoring locations. For an arbitrary network (not necessarily three-edge
connected), we describe a fault localization technique that uses both MPs and MCs and
that employs multiple monitoring locations. We also provide a linear-time algorithm to
compute the minimum number of required monitoring locations. Through extensive
simulations, we demonstrate the effectiveness of the proposed monitoring technique.
24. MARCH 2009  Spread-Spectrum Watermarking Security  [Information Forensics and Security]
Abstract:
This paper presents both theoretical and practical analyses of the security
offered by watermarking and data hiding methods based on spread spectrum. In this
context, security is understood as the difficulty of estimating the secret parameters of
the embedding function based on the observation of watermarked signals. On the
theoretical side, the security is quantified from an information-theoretic point of view
by means of the equivocation about the secret parameters. The main results reveal
fundamental limits and bounds on security and provide insight into other properties,
such as the impact of the embedding parameters, and the tradeoff between robustness
and security. On the practical side, workable estimators of the secret parameters are
proposed and theoretically analyzed for a variety of scenarios, providing a comparison
with previous approaches, and showing that the security of many schemes used in
practice can be fairly low.
25. JAN-MAR 2009  The Effectiveness of Checksums for Embedded Control Networks  [Dependable and Secure Computing]
Abstract:
Embedded control networks commonly use checksums to detect data transmission
errors. However, design decisions about which checksum to use are difficult because of
a lack of information about the relative effectiveness of available options. We study the
error detection effectiveness of the following commonly used checksum computations:
exclusive or (XOR), two’s complement addition, one’s complement addition, Fletcher
checksum, Adler checksum, and cyclic redundancy codes (CRCs). A study of error
detection capabilities for random independent bit errors and burst errors reveals that the
XOR, two’s complement addition, and Adler checksums are suboptimal for typical
network use. Instead, one’s complement addition should be used for networks willing
to sacrifice error detection effectiveness to reduce computational cost, the Fletcher
checksum should be used for networks looking for a balance between error detection
and computational cost, and CRCs should be used for networks willing to pay a higher
computational cost for significantly improved error detection.
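
Two of the compared checksums are short enough to sketch in C#: one's complement addition over 16-bit words (with the end-around carry that distinguishes it from two's complement addition) and the byte-oriented Fletcher checksum.

    using System;

    static class ChecksumSketch
    {
        // One's complement addition over 16-bit words: carries out of the top
        // bit are wrapped back into the low bit (end-around carry).
        public static ushort OnesComplementSum(ushort[] words)
        {
            uint sum = 0;
            foreach (ushort w in words)
            {
                sum += w;
                if (sum > 0xFFFF) sum = (sum & 0xFFFF) + 1;   // end-around carry
            }
            return (ushort)sum;
        }

        // Fletcher-16: a running sum plus a running sum-of-sums, giving the
        // position sensitivity that plain addition lacks.
        public static ushort Fletcher16(byte[] data)
        {
            uint a = 0, b = 0;
            foreach (byte d in data)
            {
                a = (a + d) % 255;
                b = (b + a) % 255;
            }
            return (ushort)((b << 8) | a);
        }
    }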
26. JUNE 2009  Two Blocking Algorithms on Adaptive Binary Splitting  [Networking]
Abstract:
In radio frequency identification (RFID) systems, the reader identifies tags
through communication over a shared wireless channel. When multiple tags transmit
their IDs simultaneously, their signals collide, increasing the identification delay.
Therefore, many previous anti-collision algorithms, including an adaptive query
splitting algorithm (AQS) and an adaptive binary splitting algorithm (ABS), focused on
solving this problem. This paper proposes two blocking algorithms, a single resolution
blocking ABS algorithm (SRB) and a pair resolution blocking ABS algorithm (PRB),
based on ABS. SRB not only inherits the essence of ABS which uses the information
of recognized tags obtained from the last process of tag identification, but also adopts a
blocking technique which prevents recognized tags from being collided by
unrecognized tags. PRB further adopts a pair resolution technique which couples
recognized tags and thus needs only half the time to next identify these recognized
tags. We formally analyze the performance of SRB and PRB. Finally, the analytic and
simulation results show that SRB slightly outperforms ABS and PRB significantly
surpasses ABS.
27. JULY 2008  Active Learning Methods for Interactive Image Retrieval  [Image Processing]
Abstract:
Active learning methods have been considered with increased interest in the
statistical learning community. Initially developed within a classification framework, a
lot of extensions are now being proposed to handle multimedia applications. This paper
provides algorithms within a statistical framework to extend active learning for online
content-based image retrieval (CBIR). The classification framework is presented with
experiments to compare several powerful classification techniques in this information
retrieval context. Focusing on interactive methods, active learning strategy is then
described. The limitations of this approach for CBIR are emphasized before presenting
our new active selection process RETIN. First, as any active method is sensitive to the
boundary estimation between classes, the RETIN strategy carries out a boundary
correction to make the retrieval process more robust. Second, the criterion of
generalization error to optimize the active learning selection is modified to better
represent the CBIR objective of database ranking. Third, a batch processing of images
is proposed. Our strategy leads to a fast and efficient active learning scheme to retrieve
sets of online images (query concept). Experiments on large databases show that the
RETIN method performs well in comparison to several other active strategies.
28. MAY 2008  A New Model for Dissemination of XML  [Systems and Cybernetics]
Abstract:
The paper proposes an approach to content dissemination that exploits the
structural properties of an Extensible Markup Language (XML) document object
model in order to provide efficient dissemination while at the same time assuring
content integrity and confidentiality. Our approach is based on the notion of encrypted
post order numbers that support the integrity and confidentiality requirements of XML
content as well as facilitate efficient identification, extraction, and distribution of
selected content portions. By using such a notion, we develop a structure-based routing
scheme that prevents information leaks in the XML data dissemination, and assures
that content is delivered to users according to the access control policies, that is,
policies specifying which users can receive which portions of the contents. Our
proposed dissemination approach further enhances such structure-based, policy-based
routing by combining it with multicast in order to achieve high efficiency in terms of
bandwidth usage and speed of data delivery, thereby enhancing scalability. Our
dissemination approach thus represents an efficient and secure mechanism for use in
applications such as publish–subscribe systems for XML Documents. The publish–
subscribe model restricts the consumer and document source information to the routers
to which they register with. Our framework facilitates dissemination of contents with
varying degrees of confidentiality and integrity requirements in a mix of trusted and
un-trusted networks, which is prevalent in current settings across enterprise networks
and the web. Also, it does not require the routers to be aware of any security policy in
the sense that the routers do not need to implement any policy related to access control.
29. APRIL 2008  A Novel Framework for Semantic Annotation and Personalized Retrieval of Sports Video  [Multimedia]
Abstract:
Sports video annotation is important for sports video semantic analysis such as
event detection and personalization. We propose a novel approach for sports video
semantic annotation and personalized retrieval. Different from state-of-the-art sports
video analysis methods, which heavily rely on audio/visual features, the proposed
approach incorporates web-casting text into sports video analysis.
Compared with previous approaches, the contributions of our approach include
the following. 1) The event detection accuracy is significantly improved due to the
incorporation of web-casting text analysis. 2) The proposed approach is able to detect
exact event boundary and extract event semantics that is very difficult or impossible to
be handled by previous approaches. 3) The proposed method is able to create a
personalized summary from both general and specific points of view related to a
particular game, event, player, or team according to the user’s preference. We present the
framework of our approach and details of text analysis, video analysis, text/video
alignment, and personalized retrieval. The experimental results on event boundary
detection in sports video are encouraging and comparable to the manually selected
events. The evaluation on personalized retrieval is effective in helping meet users’
expectations.
30. APRIL-JUNE 2008  An Efficient Time-Bound Hierarchical Key Management Scheme for Secure Broadcasting  [Dependable and Secure Computing]
Abstract:
In electronic subscription and pay TV systems, data can be organized and encrypted
using symmetric key algorithms according to predefined time periods and user
privileges and then broadcast to users. This requires an efficient way of managing the
encryption keys. In this scenario, time-bound key management schemes for a hierarchy
were proposed. Both schemes are insecure against collusion attacks. In this paper, we
propose a new key assignment scheme for access control, which is both efficient and
secure. Elliptic-curve cryptography is deployed in this scheme. We also provide the
analysis of the scheme with respect to security and efficiency issues.
31. JULY 2008  Analyzing and Managing Role-Based Access Control Policies  [Data Engineering]
Abstract:
In the computerized world, all data are stored electronically, including highly
sensitive data. In computer systems security, role-based access control is an approach
to restricting system access to authorized users. It is a newer alternative approach to
mandatory access control and discretionary access control. Security critical business
processes are mapped to their digital governments. It needs different security
requirements, such as healthcare industry, digital government, and financial service
institute. So the authorization and authentication play a vital role. Authorization
constraints help the policy architect design and express higher level organizational
rules. Access is the ability to do something with a computer resource (e.g., use, change,
or view). Access control is the means by which the ability is explicitly enabled or
restricted in some way (usually through physical and system-based controls).
Computer- based access controls can prescribe not only who or what process may have
access to a specific system resource, but also the type of access that is permitted. These
controls may be implemented in the computer system or in external devices.
32. 2008 Compaction of Schedules and a Two-Stage Approach for Duplication (Parallel and Distributed Systems)
Abstract:
Many DAG scheduling algorithms generate schedules that require a prohibitively
large number of processors. To address this problem, we propose a generic algorithm,
SC, to minimize the processor requirement of any given valid schedule. SC preserves
the schedule length of the original schedule and reduces processor count by merging
processor schedules and removing redundant duplicate tasks. To the best of our
knowledge, this is the first algorithm to address this highly unexplored aspect of DAG
scheduling. On average, SC reduced the processor requirement by 91%, 82%, and 72%
for schedules generated by the PLW, TCSD, and CPFD algorithms, respectively. SC
algorithm has a low complexity when compared to most duplication based algorithms.
Moreover, it decouples processor economization from schedule length minimization
problem. To take advantage of these features of SC, we also propose a scheduling
algorithm SDS, having the same time complexity as SC. Our experiments demonstrate
that schedules generated by SDS are only 3% longer than those of CPFD, one of the best
algorithms in that respect. SDS and SC together form a two-stage scheduling algorithm
that produces schedules with high quality and low processor requirement, and has
lower complexity than the comparable algorithms that produce similar high quality
results.
33. JAN-MAR 2008 Credit Card Fraud Detection Using Hidden Markov Models (Secure Computing)
Abstract:
Due to a rapid advancement in the electronic commerce technology, the use of
credit cards has dramatically increased. As the credit card becomes the most popular mode
of payment for both online and regular purchases, cases of fraud associated with it
are also rising. In this paper, we model the sequence of operations in credit card
transaction processing using a Hidden Markov Model (HMM) and show how it can be
used for the detection of frauds. An HMM is initially trained with the normal behavior
of a cardholder. If an incoming credit card transaction is not accepted by the trained
HMM with sufficiently high probability, it is considered to be fraudulent. At the same
time, we try to ensure that genuine transactions are not rejected. We present detailed
experimental results to show the effectiveness of our approach and compare it with
other techniques available in the literature.
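To make the detection idea concrete, the following minimal Python sketch (not the paper's implementation; the model matrices, the quantization of amounts, and the threshold are all assumed placeholders) scores a transaction sequence with the forward algorithm and flags a new transaction whose acceptance ratio falls below a threshold:

import numpy as np

# Illustrative HMM of spending behavior; observations are quantized
# transaction amounts: 0 = low, 1 = medium, 2 = high.
A = np.array([[0.7, 0.2, 0.1],    # hidden-state transition probabilities
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
B = np.array([[0.6, 0.3, 0.1],    # emission probabilities per hidden state
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.3, 0.1])    # initial state distribution

def sequence_likelihood(obs):
    """Forward algorithm: P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def is_fraudulent(history, new_obs, threshold=0.1):
    """Flag the new observation if appending it makes the sequence far
    less likely than the history alone (threshold is an assumed tuning
    parameter, not a value from the paper)."""
    before = sequence_likelihood(history)
    after = sequence_likelihood(history + [new_obs])
    return after / before < threshold

history = [0, 0, 1, 0, 1]          # cardholder's recent quantized amounts
print(is_fraudulent(history, 2))   # a sudden high-value purchase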
34. JAN 2008 Efficient 2-D Grayscale Morphological Transformations With Arbitrary Flat Structuring Elements (Image Processing)
Abstract:
An efficient algorithm is presented for the computation of grayscale
morphological operations with arbitrary 2-D flat structuring elements (S.E.). The
required computing time is independent of the image content and of the number of gray
levels used. It always outperforms the only existing comparable method, which was
proposed in the work by Van Droogenbroeck and Talbot, by a factor between 3.5 and
35.1, depending on the image type and shape of S.E. So far, filtering using multiple
S.E.s is always done by performing the operator for each size and shape of the S.E.
separately. With our method, filtering with multiple S.E.s can be performed by a single
operator for a slightly reduced computational cost per size or shape, which makes this
method more suitable for use in granulometries, dilation-erosion scale spaces, and
template matching using the hit-or-miss transform. The discussion focuses on erosions
and dilations, from which other transformations can be derived.
35. APRIL 2008 Efficient Broadcasting Using Network Coding (Networking)
Abstract:
We consider the problem of broadcasting in an ad hoc wireless network, where
all nodes of the network are sources that want to transmit information to all other
nodes. Our figure of merit is energy efficiency, a critical design parameter for wireless
networks since it directly affects battery life and thus network lifetime. We prove that
applying ideas from network coding allows realizing significant benefits in terms of
energy efficiency for the problem of broadcasting, and we propose very simple
algorithms that realize these benefits in practice. In particular, our theoretical
analysis shows that network coding improves performance by a constant factor in fixed
networks. We calculate this factor exactly for some canonical configurations. We then
show that in networks where the topology dynamically changes, for example due to
mobility, and where operations are restricted to simple distributed algorithms, network
coding can offer improvements of a factor of log n, where n is the number of nodes in the
network. We use the insights gained from the theoretical analysis to propose low-
complexity distributed algorithms for realistic wireless ad hoc scenarios, discuss a
number of practical considerations, and evaluate our algorithms through packet level
simulation.
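As a toy illustration of why coding saves transmissions (a sketch of the general idea, not the paper's algorithms): a relay holding packets destined for two neighbors can broadcast their XOR once, and each neighbor recovers its missing packet from the one it already has:

def xor_packets(p1: bytes, p2: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(p1, p2))

a = b"packet-A"
b_ = b"packet-B"

coded = xor_packets(a, b_)        # relay broadcasts a single coded packet

# Neighbor 1 already holds b and recovers a; neighbor 2 holds a and recovers b.
assert xor_packets(coded, b_) == a
assert xor_packets(coded, a) == b_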
36. 2008 Efficient Broadcasting with Guaranteed Coverage in Mobile Ad Hoc Networks (Mobile Computing)
Abstract:
We study an efficient broadcast scheme in mobile ad hoc networks (MANETs).
The objective is to determine a small set of forward nodes to ensure full coverage. We
first study several methods that guarantee coverage when the local view of each node
on its neighborhood information is updated in a timely manner. Then we consider a
general case, where nodes move even during the broadcast process, making it
impractical to maintain up-to-date and consistent local views. A formal framework is
used to model inaccurate local views in MANETs, where full coverage is guaranteed if
three sufficient conditions, connectivity, link availability, and consistency, are met.
Three solutions are proposed to satisfy those conditions. First we give a minimal
transmission range that maintains the connectivity of the virtual network constructed
from local views. Then we use two transmission ranges, one for neighborhood
information collection and the other for actual data transmission, to form a buffer zone
that guarantees the availability of logical links in the physical network. Finally we
propose a mechanism called aggregated local view to ensure consistent local views. With
these solutions, we extend Wu and Dai's coverage condition for broadcasting in a network with
mobile nodes. The effectiveness of the proposed scheme is confirmed via both
performance analysis and simulation study.
37. APRIL 2008 Efficient Resource Allocation for Wireless Multicast (Mobile Computing)
Abstract:
In this paper, we propose a bandwidth-efficient multicast mechanism for
heterogeneous wireless networks. We reduce the bandwidth cost of an Internet Protocol
(IP) multicast tree by adaptively selecting the cell and the wireless technology for each
mobile host to join the multicast group. Our mechanism enables more mobile hosts to
cluster together and leads to the use of fewer cells to save the scarce wireless
bandwidth. Besides, the paths in the multicast tree connecting to the selected cells
share more common links to save the wireline bandwidth. Our mechanism supports the
dynamic group membership and offers mobility of group members. Moreover, our
mechanism requires no modification to the current IP multicast routing protocols. We
formulate the selection of the cell and the wireless technology for each mobile host in
the heterogeneous wireless networks as an optimization problem. We use Integer
Linear Programming to model the problem and show that the problem is NP-hard. To
solve the problem, we propose a distributed algorithm based on Lagrangean relaxation
and a network protocol based on the algorithm. The simulation results show that our
mechanism can effectively save the wireless and wireline bandwidth as compared to
the traditional IP multicast.
38. FEB 2008 Efficient Routing in Intermittently Connected Mobile Networks: The Multiple-Copy Case (Networking)
Abstract:
Intermittently connected mobile networks are wireless networks where most of
the time there does not exist a complete path from the source to the destination. There
are many real networks that follow this model, for example, wildlife tracking sensor
networks, military networks, vehicular ad hoc networks, etc. In this context,
conventional routing schemes fail, because they try to establish complete end-to-end
paths, before any data is sent. To deal with such networks researchers have suggested
using flooding-based routing schemes. While flooding-based schemes have a high
probability of delivery, they waste a lot of energy and suffer from severe contention
which can significantly degrade their performance. Furthermore, proposed efforts to
reduce the overhead of flooding-based schemes have often been plagued by large
delays. With this in mind, we introduce a new family of routing schemes that “spray” a
few message copies into the network, and then route each copy independently towards
the destination. We show that, if carefully designed, spray routing not only performs
significantly fewer transmissions per message, but also has lower average delivery
delays than existing schemes; furthermore, it is highly scalable and retains good
performance under a large range of scenarios.
39. MAY-JUNE 2008 Estimation of Defects Based on Defect Decay Model: ED3M (Software Engineering)
Abstract:
An accurate prediction of the number of defects in a software product during
system testing contributes not only to the management of the system testing process but
also to the estimation of the product's required maintenance. Here, a new approach,
called Estimation of Defects based on Defect Decay Model (ED3M) is presented that
computes an estimate of the total number of defects in an ongoing testing process.
ED3M is based on estimation theory. Unlike many existing approaches, the technique
presented here does not depend on historical data from previous projects or any
assumptions about the requirements and/or testers' productivity. It is a completely
automated approach that relies only on the data collected during an ongoing testing
process. This is a key advantage of the ED3M approach as it makes it widely
applicable in different testing environments. Here, the ED3M approach has been
evaluated using five data sets from large industrial projects and two data sets from the
literature. In addition, a performance analysis has been conducted using simulated data
sets to explore its behavior using different models for the input data. The results are
very promising; they indicate the ED3M approach provides accurate estimates with as
fast or better convergence time in comparison to well-known alternative techniques,
while only using defect data as the input.
40. MARCH 2008 Fuzzy Control Model Optimization for Behavior-Consistent Traffic Routing Under Information Provision (Intelligent Transportation Systems)
Abstract:
This paper presents an H-infinity filtering approach to optimize a fuzzy control
model used to determine behavior consistent (BC) information-based control strategies
to improve the performance of congested dynamic traffic networks. By adjusting the
associated membership function parameters to better respond to nonlinearities and
modeling errors, the approach is able to enhance the computational performance of the
fuzzy control model. Computational efficiency is an important aspect in this problem
context because the information strategies are required in sub-real time to be real-time
deployable. Experiments are performed to evaluate the effectiveness of the approach.
The results indicate that the optimized fuzzy control model contributes in determining
the BC information-based control strategies in significantly less computational time
than when the default controller is used. Hence, the proposed H-infinity approach
contributes to the development of an efficient and robust information-based control
approach.
41. JUNE 2008 Grid Service Discovery with Rough Sets (Data Engineering)
Abstract:
The computational grid is rapidly evolving into a service-oriented computing
infrastructure that facilitates resource sharing and large-scale problem solving over the
Internet. Service discovery becomes an issue of vital importance in utilizing grid
facilities. This paper presents ROSSE, a Rough sets-based search engine for grid
service discovery. Building on the Rough sets theory, ROSSE is novel in its capability
to deal with the uncertainty of properties when matching services. In this way, ROSSE
can discover the services that are most relevant to a service query from a functional
point of view. Since functionally matched services may have distinct nonfunctional
properties related to the quality of service (QoS), ROSSE introduces a QoS model to
further filter matched services with their QoS values to maximize user satisfaction in
service discovery. ROSSE is evaluated from the aspects of accuracy and efficiency in
discovery of computing services.
42. JUNE 2008 Hardware-Enhanced Association Rule Mining with Hashing and Pipelining (Data Engineering)
Abstract:
Generally speaking, to implement Apriori-based association rule mining in
hardware, one has to load candidate itemsets and a database into the hardware. Since
the capacity of the hardware architecture is fixed, if the number of candidate itemsets
or the number of items in the database is larger than the hardware capacity, the items
are loaded into the hardware separately. The time complexity of those steps that need
to load candidate itemsets or database items into the hardware is in proportion to the
number of candidate itemsets multiplied by the number of items in the database. Too
many candidate itemsets and a large database would create a performance bottleneck.
In this paper, we propose a Hash-based and Pipelined (abbreviated as HAPPI)
architecture for hardware enhanced association rule mining. We apply the pipeline
methodology in the HAPPI architecture to compare itemsets with the database and
collect useful information for reducing the number of candidate itemsets and items in
the database simultaneously. When the database is fed into the hardware, candidate
itemsets are compared with the items in the database to find frequent itemsets. At the
same time, trimming information is collected from each transaction. In addition,
itemsets are generated from transactions and hashed into a hash table. The useful
trimming information and the hash table enable us to reduce the number of items in the
database and the number of candidate itemsets. Therefore, we can effectively reduce
the frequency of loading the database into the hardware. As such, HAPPI solves the
bottleneck problem in Apriori-based hardware schemes. We also derive some
properties to investigate the performance of this hardware implementation. As shown
by the experiment results, HAPPI significantly outperforms the previous hardware
approach and the software algorithm in terms of execution time.
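The hardware pipeline itself cannot be reproduced here, but the following small software analogue (with assumed data and bucket size) illustrates the hash-based trimming idea: itemsets are hashed into a fixed table on a first pass, and only candidates falling into frequent buckets survive to the support-counting pass:

from itertools import combinations
from collections import defaultdict

transactions = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c"}, {"a", "b", "c"}]
MIN_SUPPORT = 3
NUM_BUCKETS = 8  # assumed hash-table size; fixed in a hardware realization

# Pass 1: hash every 2-itemset of every transaction into buckets.
bucket_count = defaultdict(int)
for t in transactions:
    for pair in combinations(sorted(t), 2):
        bucket_count[hash(pair) % NUM_BUCKETS] += 1

# Pass 2: keep a candidate pair only if its bucket is frequent, trimming
# candidates before the expensive support count.
candidates = set()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        if bucket_count[hash(pair) % NUM_BUCKETS] >= MIN_SUPPORT:
            candidates.add(pair)

support = {c: sum(set(c) <= t for t in transactions) for c in candidates}
frequent = {c for c, s in support.items() if s >= MIN_SUPPORT}
print(frequent)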
43. JUNE 2008 HBA: Distributed Metadata Management for Large Cluster-Based Storage Systems (Parallel and Distributed Systems)
Abstract:
An efficient and distributed scheme for file mapping or file lookup is critical in
decentralizing metadata management within a group of metadata servers. This paper
presents a novel technique called Hierarchical Bloom Filter Arrays (HBA) to map
filenames to the metadata servers holding their metadata. Two levels of probabilistic
arrays, namely, the Bloom filter arrays with different levels of accuracies, are used on
each metadata server. One array, with lower accuracy and representing the distribution
of the entire metadata, trades accuracy for significantly reduced memory overhead,
whereas the other array, with higher accuracy, caches partial distribution information
and exploits the temporal locality of file access patterns. Both arrays are replicated to
all metadata servers to support fast local lookups. We evaluate HBA through extensive
trace-driven simulations and implementation in Linux. Simulation results show our
HBA design to be highly effective and efficient in improving the performance and
scalability of file systems in clusters with 1,000 to 10,000 nodes (or superclusters) and
with the amount of data in the petabyte scale or higher. Our implementation indicates
that HBA can reduce the metadata operation time of a single-metadata-server
architecture by a factor of up to 43.9 when the system is configured with 16 metadata
servers.
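A minimal Python sketch of the lookup idea (illustrative only, not the HBA implementation; a single filter per server stands in for the paper's two-level arrays): each server's filenames are summarized in a Bloom filter, the filters are replicated to every node, and a lookup is answered locally by testing each filter:

import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = 0  # bitmap stored as one big integer

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

# One filter per metadata server, replicated to every node for local lookup.
servers = {0: ["/home/a", "/home/b"], 1: ["/var/log", "/etc/conf"]}
filters = {}
for sid, names in servers.items():
    bf = BloomFilter()
    for name in names:
        bf.add(name)
    filters[sid] = bf

def lookup(filename):
    # Bloom filters allow false positives, so hits are candidates to verify.
    return [sid for sid, bf in filters.items() if filename in bf]

print(lookup("/var/log"))  # -> [1] (barring false positives)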
44. MAY 2008 Localized Sensor Area Coverage with Low Communication Overhead (Mobile Computing)
Abstract:
We propose several localized sensor area coverage protocols for heterogeneous
sensors, each with arbitrary sensing and transmission radii. The approach has a very
small communication overhead since prior knowledge about neighbor existence is not
required. Each node selects a random timeout and listens to messages sent by other
nodes before the timeout expires. Sensor nodes whose sensing area is not fully covered
(or fully covered but with a disconnected set of active sensors) when the deadline
expires decide to remain active for the considered round and transmit an activity
message announcing it. There are four variants in our approach, depending on whether
or not withdrawal and retreat messages are transmitted. Covered nodes decide to sleep,
with or without transmitting a withdrawal message to inform neighbors about the
status. After hearing from more neighbors, active sensors may observe that they
became covered and may decide to alter their original decision and transmit a retreat
message. Our simulations show a largely reduced message overhead while preserving
coverage quality for the ideal MAC/physical layer. Compared to an existing method
(based on hello messages followed by retreat ones and where excessive message loss
contributed to excessive coverage holes), our approach has shown robustness in a
model with collisions and/or a realistic physical layer.
45. JUNE 2008 On Guaranteed Smooth Switching for Buffered Crossbar Switches (Networking)
Abstract:
Scalability considerations drive the evolution of switch design from output
queuing to input queuing and further to combined input and crosspoint queuing
(CICQ). However, CICQ switches with credit-based flow control face new challenges
of scalability and predictability. In this paper, we propose a novel approach of rate-
based smoothed switching, and design a CICQ switch called the smoothed buffered
crossbar or sBUX. First, the concept of smoothness is developed from two
complementary perspectives of covering and spacing, which, commonly known as
fairness and jitter, are unified in the same model. Second, a smoothed multiplexer
sMUX is designed that allocates bandwidth among competing flows sharing a link and
guarantees almost ideal smoothness for each flow. Third, the buffered crossbar sBUX
is designed that uses the scheduler sMUX at each input and output, and a two-cell
buffer at each crosspoint. It is proved that sBUX guarantees 100% throughput for real-
time services and almost ideal smoothness for each flow. Fourth, an on-line bandwidth
regulator is designed that periodically estimates bandwidth demand and generates
admissible allocations, which enables sBUX to support best-effort services. Simulation
shows almost 100% throughput and multi-microsecond average delay. In particular,
neither credit-based flow control nor speedup is used, and arbitrary fabric-internal
latency is allowed between line cards and the switch core, simplifying the switch
implementation.
46. APRIL 2008 Orthogonal Data Embedding for Binary Images in Morphological Transform Domain: A High-Capacity Approach (Multimedia)
Abstract:
This paper proposes a data-hiding technique for binary images in
morphological transform domain for authentication purpose. To achieve blind
watermark extraction, it is difficult to use the detail coefficients directly as a location
map to determine the data-hiding locations. Hence, we view flipping an edge pixel in
binary images as shifting the edge location one pixel horizontally and vertically. Based
on this observation, we propose an interlaced morphological binary wavelet transform
to track the shifted edges, which thus facilitates blind watermark extraction and
incorporation of a cryptographic signature. Unlike existing block-based approaches, in
which the block size is constrained to 3×3 pixels or larger, we process an image in
2×2 pixel blocks. This allows flexibility in tracking the edges and also achieves low
computational complexity. The two processing cases in which flipping the candidates of one
does not affect the flippability conditions of the other are employed for orthogonal
embedding, which allows more suitable candidates to be identified so that a larger
capacity can be achieved. A novel and effective Backward-Forward Minimization method
is proposed, which considers both backwardly those neighboring processed
embeddable candidates and forwardly those unprocessed flippable candidates that may
be affected by flipping the current pixel. In this way, the total visual distortion can be
minimized. Experimental results demonstrate the validity of our arguments.
47. FEB 2008 Probabilistic Packet Marking for Large-Scale IP Traceback (Networking)
Abstract:
This paper presents an approach to IP Traceback based on the probabilistic
packet marking paradigm. Our approach, which we call randomize-and-link, uses large
checksum cords to “link” message fragments in a way that is highly scalable, for the
checksums serve both as associative addresses and data integrity verifiers. The main
advantage of these checksum cords is that they spread the addresses of possible router
messages across a spectrum that is too large for the attacker to easily create messages
that collide with legitimate messages.
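A toy sketch of the probabilistic-marking paradigm under simplified assumptions (one whole address per mark, an assumed marking probability, and a CRC standing in for the paper's checksum cords): routers mark packets probabilistically, and the victim reconstructs the path statistically from many packets:

import random
import zlib

MARK_PROB = 0.04  # assumed per-router marking probability

def traverse(path, packet):
    """Each router on the path overwrites the mark with some probability."""
    for router_addr in path:
        if random.random() < MARK_PROB:
            # Store the address plus a checksum so the victim can verify
            # integrity when collecting marks (cf. the paper's checksum cords).
            packet["mark"] = (router_addr, zlib.crc32(router_addr.encode()))
    return packet

path = ["10.0.0.1", "10.0.1.1", "10.0.2.1"]   # source-to-victim router path
marks = {}
for _ in range(10_000):                       # victim observes many packets
    pkt = traverse(path, {"mark": None})
    if pkt["mark"]:
        addr, crc = pkt["mark"]
        if zlib.crc32(addr.encode()) == crc:  # discard corrupted marks
            marks[addr] = marks.get(addr, 0) + 1

# Marks from routers nearer the source are overwritten more often, so mark
# frequency increases toward the victim; sorting by count recovers path order.
print(sorted(marks, key=marks.get))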
48. APRIL 2008 Rate Allocation & Network Lifetime Problem for Wireless Sensor Networks (Networking)
Abstract:
An important performance consideration for wireless sensor networks is the
amount of information collected by all the nodes in the network over the course of
network lifetime. Since the objective of maximizing the sum of rates of all the nodes in
the network can lead to a severe bias in rate allocation among the nodes, we advocate
the use of lexicographic max-min (LMM) rate allocation. To calculate the LMM rate
allocation vector, we develop a polynomial-time algorithm by exploiting the
parametric analysis (PA) technique from linear programming (LP), which we call serial LP
with Parametric Analysis (SLP-PA). We show that SLP-PA can also be employed
to address the LMM node lifetime problem much more efficiently than a state-of-the-art
algorithm proposed in the literature. More importantly, we show that there exists an
elegant duality relationship between the LMM rate allocation problem and the LMM
node lifetime problem. Therefore, it is sufficient to solve only one of the two problems.
Important insights can be obtained by inferring duality results for the other problem.
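SLP-PA itself rests on parametric LP analysis, but the flavor of lexicographic max-min allocation can be conveyed by a simple progressive-filling sketch under an assumed model of independent per-node rate caps:

def lmm_allocation(capacities, step=0.01):
    """Progressive filling: raise all rates uniformly and freeze a node's
    rate once its (assumed) individual capacity constraint becomes tight.
    A stand-in for the LMM idea, not the SLP-PA algorithm itself."""
    rates = {n: 0.0 for n in capacities}
    active = set(capacities)
    while active:
        for n in list(active):
            if rates[n] + step > capacities[n]:
                active.discard(n)      # bottleneck reached: rate is frozen
            else:
                rates[n] += step
    return rates

# Nodes with heterogeneous bottlenecks (illustrative numbers).
print(lmm_allocation({"n1": 0.5, "n2": 1.2, "n3": 0.8}))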
49. APRIL 2008 Rateless Forward Error Correction for Topology-Transparent Scheduling (Networking)
Abstract:
Topology-transparent scheduling for mobile wireless ad hoc networks has
been treated as a theoretical curiosity. This paper makes two contributions towards its
practical deployment: 1) We generalize the combinatorial requirement on the schedules
and show that the solution is a cover-free family. As a result, a much wider number and
variety of constructions for schedules exist to match network conditions. 2) In
simulation, we closely match the theoretical bound on expected throughput. The bound
was derived assuming acknowledgments are available immediately. We use rateless
forward error correction (RFEC) as an acknowledgment scheme with minimal
computational overhead. Since the wireless medium is inherently unreliable, RFEC
also offers some measure of automatic adaptation to channel load. These contributions
renew interest in topology-transparent scheduling when delay is a principal objective.
50. JUNE 2008 Securing User-Controlled Routing Infrastructures (Networking)
Abstract:
In this paper, we design an infrastructure that prevents untrusted parties from
gaining access to routing information while still achieving flexible and efficient
communication. The difficulty in doing so is the new security vulnerability that such
flexibility introduces: the flexible control plane of these infrastructures can be
exploited to launch many types of powerful attacks with little effort. We make
several contributions towards studying security issues in
forwarding infrastructures (FIs). We present a general model for an FI; analyze
potential security vulnerabilities, and present techniques to address these
vulnerabilities. The main technique that we introduce in this paper is the use of simple
lightweight cryptographic constraints on forwarding entries. We show that it is possible
to prevent a large class of attacks on end-hosts and to bound the flooding attacks that can
be launched on the infrastructure nodes to a small constant value.
51. JAN-FEB 2008 Semantics-Based Design for Secure Web Services (Software Engineering)
Abstract:
We outline a methodology for designing and composing services in a secure
manner. In particular, we are concerned with safety properties of service behavior.
Services can enforce security policies locally and can invoke other services that respect
given security contracts. This call-by-contract mechanism offers a significant set of
opportunities, each driving secure ways to compose services. We discuss how we can
correctly plan service compositions in several relevant classes of services and security
properties. With this aim, we propose a graphical modeling framework based on a
foundational calculus called λreq [13]. Our formalism features dynamic and static
semantics, thus allowing for formal reasoning about systems. Static analysis and model
checking techniques provide the designer with useful information to assess and fix
possible vulnerabilities.
52. FEB 2008 Semantic Ideation Learning for Agent-Based E-Brainstorming (web application) (Data Engineering)
Abstract:
Brainstorming can assist organizations in generating creative ideas using
teamwork and collaboration. However, the role of information technology in
brainstorming is merely that of an assistant that passively supports the progression of
brainstorming sessions rather than proactively engaging in the sessions. This paper
integrates the unique association thinking of humans with an intelligent agent
technique to devise an automated decision agent called the Semantic Ideation Learning
Agent (SILA) that can represent a session participant who is actively participating in
brainstorming. SILAs are grounded on the three association capabilities of human
thinking (similarity, contiguity, and contrast). Furthermore, a Collective Brainstorming
Decision System (CBDS) is built to construct an environment where SILAs can learn
and share their knowledge with each other. Additionally, CBDS is integrated into an
intelligent care project (i-Care) for the purpose of innovative e-service recommendation.
Preliminary evaluation results indicate that the proposed system advances
e-brainstorming by crossing three key boundaries of human ideation capability
(understanding, cognition, and endurance).
53. JUN 2008 Statistical Techniques for Detecting Traffic Anomalies Through Packet Header Data (Networking)
Abstract:
This paper proposes a traffic anomaly detector, operated in postmortem and in
real-time, by passively monitoring packet headers of traffic. The frequent attacks on
network infrastructure, using various forms of denial of service attacks, have led to an
increased need for developing techniques for analyzing network traffic. If efficient
analysis tools were available, it could become possible to detect the attacks, anomalies
and to take action to contain the attacks appropriately before they have had time to
propagate across the network. In this paper, we suggest a technique for traffic anomaly
detection based on analyzing correlation of destination IP addresses in outgoing traffic
at an egress router. These address correlation data are transformed using the discrete
wavelet transform for effective detection of anomalies through statistical analysis.
Results from trace-driven evaluation suggest that the proposed approach could provide an
effective means of detecting anomalies close to the source. We also present a multidimensional
indicator using the correlation of port numbers and the number of flows as a means of
detecting anomalies.
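A condensed sketch of the detection pipeline on synthetic data (the correlation signal, the single Haar decomposition level, and the 3-sigma rule are simplifying assumptions):

import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(10.0, 1.0, 128)   # synthetic per-interval correlation values
signal[100] = 25.0                    # injected anomaly (e.g., an attack interval)

# One-level Haar DWT: pairwise averages (approximation) and differences (detail).
pairs = signal.reshape(-1, 2)
approx = pairs.mean(axis=1)
detail = (pairs[:, 0] - pairs[:, 1]) / 2.0

# Flag interval pairs whose detail coefficient deviates beyond 3 sigma.
threshold = 3.0 * detail.std()
anomalous = np.flatnonzero(np.abs(detail) > threshold)
print("anomalous interval pairs:", anomalous)  # index ~50 covers sample 100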
54. APR-JUN 2008 Temporal Partitioning of Communication Resources in an Integrated Architecture (Dependable and Secure Computing)
Abstract:
Integrated architectures in the automotive and avionic domain promise
improved resource utilization and enable a better coordination of application
subsystems compared to federated systems. An integrated architecture shares the
system’s communication resources by using a single physical network for exchanging
messages of multiple application subsystems. Similarly, the computational resources
(for example, memory and CPU time) of each node computer are available to multiple
software components. In order to support seamless system integration without
unintended side effects in such an integrated architecture, it is important to ensure that
the software components do not interfere through the use of these shared resources. For
this reason, the DECOS integrated architecture encapsulates application subsystems
and their constituting software components. At the level of the communication system,
virtual networks on top of an underlying time-triggered physical network exhibit
predefined temporal properties (that is, bandwidth, latency, and latency jitter). Due to
encapsulation, the temporal properties of messages sent by a software component are
independent from the behavior of other software components, in particular from those
within other application subsystems. This paper presents the mechanisms for the
temporal partitioning of communication resources in the Dependable Embedded
Components and Systems (DECOS) integrated architecture. Furthermore, experimental
evidence is provided in order to demonstrate that the messages sent by one software
component do not affect the temporal properties of messages exchanged by other
software components. Rigid temporal partitioning is achievable while at the same time
meeting the performance requirements imposed by present-day automotive applications
and those envisioned for the future (for example, X-by-wire). For this purpose, we use
an experimental framework with an implementation of virtual networks on top of a
Time Division Multiple Access (TDMA)-controlled Ethernet network.
55. MAR-APRIL 2008 The Effect of Pairs in Program Design Tasks (Software Engineering)
Abstract:
Pair programming involves two developers simultaneously collaborating with each
other on the same programming task to design and code a solution. Algorithm design
and its implementation are normally interwoven in that implementation often provides
feedback to enhance the design. Previous controlled pair programming experiments did
not explore the efficacy of pairs versus individuals in program design-related tasks
separately from coding. Variations in programmer skills in a particular language or an
integrated development environment and the understanding of programming
instructions can mask the skill of subjects in program design-related tasks.
Programming aptitude tests (PATs) have been shown to correlate with programming
performance. PATs do not require understanding of programming instructions and do
not require a skill in any specific computer language. Two controlled experiments were
conducted, with full-time professional programmers being the subjects who worked on
increasingly complex programming aptitude tasks related to problem solving and
algorithmic design. In both experiments, pairs significantly outperformed individuals,
providing evidence of the value of pairs in program design-related tasks.
56. JAN-MAR 2008 Trustworthy Computing Under Resource Constraints with the DOWN Policy
Abstract:
Trustworthy computing modules like secure coprocessors (ScP) are already in
extensive use today, albeit limited predominantly to scenarios where constraints on
cost are not a serious limiting factor. However, inexpensive trustworthy computers are
required for many evolving application scenarios. The problem of realizing
inexpensive ScPs for large-scale networks consisting of low-complexity devices has
not received adequate consideration thus far. We introduce two strategies toward
realizing low-cost ScPs. The first is the decrypt only when necessary (DOWN) policy,
which can substantially improve the ability of low-cost ScPs to protect their secrets.
The DOWN policy relies on the ability to operate with fractional parts of secrets.
Taking full advantage of the DOWN policy requires consideration of the nature of
computations performed with secrets and even the mechanisms employed for
distribution of secrets. We discuss the feasibility of extending the DOWN policy to
various asymmetric and symmetric cryptographic primitives. The second is
cryptographic authentication strategies which employ only symmetric cryptographic
primitives, based on novel ID-based key predistribution schemes that demand very low
complexity of operations to be performed by the ScP and can take good advantage of
the DOWN policy.
57. FEB 2008 Vision-Based Processing for Real-Time 3-D Data Acquisition Based on Coded Structured Light (Image Processing)
Abstract:
Structured light vision systems have been successfully used for accurate
measurement of 3-D surfaces in computer vision. However, their applications are
mainly limited to scanning stationary objects so far since tens of images have to be
captured for recovering one 3-D scene. This paper presents an idea for real-time
acquisition of 3-D surface data by a specially coded vision system. To achieve 3-D
measurement for a dynamic scene, the data acquisition must be performed with only a
single image. A principle of uniquely color-encoded pattern projection is proposed to
design a color matrix for improving the reconstruction efficiency. A color projector is
controlled by a computer to generate the desired color patterns in the scene. The unique
indexing of the light codes is crucial here for color projection since it is essential that
each light grid be uniquely identified by incorporating local neighborhoods so that 3-D
reconstruction can be performed with only local analysis of a single image. A scheme
is presented to describe such a vision processing method for fast 3-D data acquisition.
Practical experimental performance is provided to analyze the efficiency of the
proposed methods.
58. JAN 2008 Watermarking Relational Databases Using Optimization-Based Techniques (NETWORKING) (Data Engineering)
Abstract:
Proving ownership rights on outsourced relational databases is a crucial
issue in today’s internet-based application environments and in many content
distribution applications. In this paper, we present a mechanism for proof of ownership
based on the secure embedding of a robust imperceptible watermark in relational data.
We formulate the watermarking of relational databases as a constrained optimization
problem and discuss efficient techniques to solve the optimization problem and to
handle the constraints. Our watermarking technique is resilient to watermark
synchronization errors because it uses a partitioning approach that does not require
marker tuples. Our approach overcomes a major weakness in previously proposed
watermarking techniques. Watermark decoding is based on a threshold-based technique
characterized by an optimal threshold that minimizes the probability of decoding
errors. We developed a proof-of-concept implementation of our watermarking
technique and showed by experimental results that our technique is resilient to tuple
deletion, alteration, and insertion attacks.
59. 2008 Watermarking Relational Databases Using Optimization-Based Techniques (web based) (Data Engineering)
Abstract:
This project provides proof of ownership based on the secure embedding of a robust
imperceptible watermark in relational data. We formulate the watermarking of relational databases as
a constrained optimization problem and discuss efficient techniques to solve the
optimization problem and to handle the constraints. Watermark decoding is based on a
threshold-based technique characterized by an optimal threshold that minimizes the
probability of decoding errors. We developed a proof-of-concept implementation of
our watermarking technique and showed by experimental results that our technique is
resilient to tuple deletion, alteration, and insertion attacks. A watermark describes
information that can be used to prove the ownership of data such as the owner, origin,
or recipient of the content. We develop an efficient technique for watermark detection
that is based on an optimal threshold. The optimal threshold is selected by minimizing
the probability of decoding error.
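A simplified sketch of the threshold-based decoding described above (illustrative; the embedding rule, the partition statistic, and the threshold choice here are assumptions, not the paper's optimized construction):

import numpy as np

rng = np.random.default_rng(1)
SHIFT = 0.5      # assumed embedding strength
THRESHOLD = 0.0  # midpoint between the +SHIFT and -SHIFT hypotheses; the
                 # paper instead selects a threshold minimizing decoding error

def embed_bit(values, bit):
    """Shift a partition's values up for bit 1, down for bit 0."""
    return values + (SHIFT if bit else -SHIFT)

def decode_bit(marked, original_mean):
    """Compare the partition mean against the threshold; redundancy across
    the partition makes the decision robust to tuple-level attacks."""
    return int(marked.mean() - original_mean > THRESHOLD)

# One partition of a numeric attribute (synthetic data).
partition = rng.normal(100.0, 2.0, 50)
marked = embed_bit(partition, bit=1)

# Simulate attacks: alter some tuples, delete others.
marked[:5] += rng.normal(0, 5, 5)     # alteration
marked = np.delete(marked, [7, 13])   # deletion

print(decode_bit(marked, original_mean=100.0))  # -> 1 with high probability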
60. 2008 XML Database Migration
Abstract:
As technology evolves over time, a common problem is the migration of
software applications from one technology base to another. This paper is a practical
experience report based on the IBM Net.Commerce to WebSphere Commerce Suite
(WCS) migration. It identifies the problems and issues in the migration of applications
using traditional database access (SQL) to applications using the Enterprise Java Bean
(EJB) programming model, and presents a practical methodology in facilitating such
migration. It also describes a tool built on this methodology that has been released on
IBM’s alpha Works site. From the experience so gained, this paper points to a number
of future enhancement areas in the methodology and associated technology research.
61. 2007 A Coupled Statistical Model for Face Shape Recovery from Brightness Images (Image Processing)
Abstract:
We focus on the problem of developing a coupled statistical model that can
be used to recover facial shape from brightness images of faces. We study three
alternative representations for facial shape. These are the surface height function, the
surface gradient, and a Fourier basis representation. We jointly capture variations in
intensity and the surface shape representations using a coupled statistical model. The
model is constructed by performing principal components analysis on sets of
parameters describing the contents of the intensity images and the facial shape
representations. By fitting the coupled model to intensity data, facial shape is implicitly
recovered from the shape parameters. Experiments show that the coupled model is able
to generate accurate shape from out-of-training-sample intensity images.
62. JULY 2007 An SSL Back-End Forwarding Scheme in Cluster-Based Web Servers (Parallel and Distributed Systems)
Abstract:
State-of-the-art cluster-based data centers consisting of three tiers (Web server,
application server, and database server) are being used to host complex Web services
such as e-commerce applications. The application server handles dynamic and sensitive
Web contents that need protection from eavesdropping, tampering, and forgery.
Although the Secure Sockets Layer (SSL) is the most popular protocol to provide a
secure channel between a client and a cluster-based network server, its high overhead
degrades the server performance considerably and, thus, affects the server scalability.
Therefore, improving the performance of SSL-enabled network servers is critical for
designing scalable and high-performance data centers. In this paper, we examine the
impact of SSL offering and SSL-session-aware distribution in cluster-based network
servers. We propose a back-end forwarding scheme, called ssl_with_bf, that employs a
low-overhead user-level communication mechanism like Virtual Interface Architecture
(VIA) to achieve a good load balance among server nodes. We compare three
distribution models for network servers, Round Robin (RR), ssl_with_session, and
ssl_with_bf, through simulation. The experimental results with 16-node and 32-node
cluster configurations show that, although the session reuse of ssl_with_session is
critical to improve the performance of application servers, the proposed back-end
forwarding scheme can further enhance the performance due to better load balancing.
The ssl_with_bf scheme can reduce the average latency by about 40 percent and
improve throughput across a variety of workloads.
63. 2007 Clustering and Sequential Pattern Mining of Online Collaborative Learning Data (Data Engineering)
Abstract:
Group work is widespread in education. The growing use of online tools
supporting group work generates huge amounts of data. We aim to exploit this data to
support mirroring: presenting useful high-level views of information about the group,
together with desired patterns characterizing the behavior of strong groups. The goal is
to enable the groups and their facilitators to see relevant aspects of the group’s
operation and provide feedback if these are more likely to be associated with positive
or negative outcomes and where the problems are.
We explore how useful mirror information can be extracted via a theory-driven
approach and a range of clustering and sequential pattern mining techniques. The context is a
senior software development project where students use the collaboration tool TRAC.
We extract patterns distinguishing the better from the weaker groups and gain insights into
the success factors. The results point to the importance of leadership and group
interaction, and give promising indications of whether they are occurring. Patterns indicating
good individual practices were also identified. We found that some key measures can
be mined from early data. The results are promising for advising groups at the start and
early identification of effective and poor practices, in time for remediation.
64. 2007 Detecting Wide Lines Using Isotropic Nonlinear Filtering (Image Processing)
Abstract:
Lines provide important information in images, and line detection is crucial in
many applications. However, most of the existing algorithms focus only on the
extraction of line positions, ignoring line thickness. This paper presents a novel wide
line detector using an isotropic nonlinear filter. Unlike most existing edge and line
detectors which use directional derivatives, our proposed wide line detector applies a
nonlinear filter to extract a line completely without any derivative. The detector is
based on the isotropic responses via circular masks. A general scheme for the analysis
of the robustness of the proposed wide line detector is introduced and the dynamic
selection of parameters is developed. In addition, this paper investigates the
relationship between the size of circular masks and the width of detected lines. A
sequence of tests has been conducted on a variety of image samples and our
experimental results demonstrate the feasibility and effectiveness of the proposed
method.
65. JAN 2007 Hiding Sensitive Association Rules with Limited Side Effects (Knowledge and Data Engineering)
Abstract:
Data mining techniques have been widely used in various applications.
However, the misuse of these techniques may lead to the disclosure of sensitive
information. Researchers have recently made efforts at hiding sensitive association
rules. Nevertheless, undesired side effects, e.g., non-sensitive rules falsely hidden and
spurious rules falsely generated, may be produced in the rule hiding process. In this
paper, we present a novel approach that strategically modifies a few transactions in the
transaction database to decrease the supports or confidences of sensitive rules without
producing the side effects. Since the correlation among rules can make it impossible to
achieve this goal, in this paper, we propose heuristic methods for increasing the
number of hidden sensitive rules and reducing the number of modified entries. The
experimental results show the effectiveness of our approach, i.e., undesired side effects
are avoided in the rule hiding process. The results also report that in most cases, all the
sensitive rules are hidden without spurious rules falsely generated. Moreover, the good
scalability of our approach in terms of database size and the influence of the correlation
among rules on rule hiding are observed.
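A toy sketch of the underlying sanitization step (greatly simplified; the side-effect checks that are the paper's contribution are omitted): to hide a sensitive rule X -> Y, items of Y are removed from supporting transactions until the rule's support drops below threshold:

MIN_SUPPORT = 0.4  # assumed threshold for illustration

def support(db, itemset):
    return sum(itemset <= t for t in db) / len(db)

def hide_rule(db, lhs, rhs):
    """Remove one rhs item from supporting transactions until the sensitive
    rule's support falls below MIN_SUPPORT (this sketch omits the paper's
    safeguards against falsely hiding non-sensitive rules)."""
    victim = next(iter(rhs))
    for t in db:
        if support(db, lhs | rhs) < MIN_SUPPORT:
            break
        if lhs | rhs <= t:
            t.discard(victim)   # modify the transaction in place
    return db

db = [{"bread", "milk"}, {"bread", "milk"}, {"bread", "milk", "eggs"},
      {"milk", "eggs"}, {"bread", "eggs"}]
hide_rule(db, {"bread"}, {"milk"})
print(support(db, {"bread", "milk"}))  # now below 0.4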
66. JUNE 2007 Hierarchical Pattern-Based Clustering for Grouping Web Transactions (Networking)
Abstract:
Grouping customer transactions into segments is important in order to obtain a
better understanding of customers' patterns. Currently, hierarchical pattern-based
clustering has been used to group customer transactions into segments. However, the
processing time is still high due to the difference parameter used between two clusters. In
this paper, the difference is instead based on the difference between the summations of
each cluster. Simulations involving several sets of web data reveal that the proposed
model improves on the greedy hierarchical pattern-based clustering model by up to fifty
percent.
67. JUNE 2007 P-Fusion: A P2P Architecture for Internet-Scale Content-Based Search and Retrieval (Parallel and Distributed Systems)
Abstract:
The emerging Peer-to-Peer (P2P) model has become a very powerful and attractive
paradigm for developing Internet-scale systems for sharing resources, including files
and documents. The distributed nature of these systems, where nodes are typically
located across different networks and domains, inherently hinders the efficient retrieval
of information. In this paper, we consider the effects of topologically aware overlay
construction techniques on efficient P2P keyword search algorithms. We present the
Peer Fusion (P-Fusion) architecture that aims to efficiently integrate heterogeneous
information that is geographically scattered on peers of different networks. Our
approach builds on work in unstructured P2P systems and uses only local knowledge.
Our empirical results, using the P-Fusion middleware architecture and data sets from
AKAMAI’s Internet mapping infrastructure (AKAMAI), the Active Measurement
Project (NLANR), and the Text Retrieval Conference (TREC) show that the
architecture we propose is both efficient and practical.
68. JAN 2007 API-Based and Information-Theoretic Metrics for Measuring the Quality of Software Modularization (Software Engineering)
Abstract:
We present in this paper a new set of metrics that measure the quality of
modularization of a non-object-oriented software system. We have proposed a set of
design principles to capture the notion of modularity and defined metrics centered on
these principles. These metrics characterize the software from a variety of perspectives:
structural, architectural, and notions such as the similarity of purpose and commonality
of goals. (By structural, we are referring to inter-module coupling-based notions, and
by architectural, we mean the horizontal layering of modules in large software
systems.) We employ the notion of API (Application Programming Interface) as the
basis for our structural metrics. The rest of the metrics we present are in support of
those that are based on API. Some of the important support metrics include those that
characterize each module on the basis of the similarity of purpose of the services
offered by the module. We tested our metrics on some popular open source systems
and some large legacy-code business applications. To validate the metrics, we
compared the results obtained on human-modularized versions of the software with
those obtained on randomized versions of the code. For randomized versions, the
assignment of the individual functions to modules was randomized.
69. JAN-MAR 2007 Provably Secure Three-Party Authenticated Quantum Key Distribution Protocols (Dependable and Secure Computing)
Abstract:
This work presents quantum key distribution protocols (QKDPs) to safeguard
security in large networks, ushering in new directions in classical cryptography and
quantum cryptography. Two three-party QKDPs, one with implicit user authentication
and the other with explicit mutual authentication, are proposed to demonstrate the
merits of the new combination, which include the following: 1) security against such
attacks as man-in-the-middle, eavesdropping and replay, 2) efficiency is improved as
the proposed protocols contain the fewest number of communication rounds among
existing QKDPs, and 3) two parties can share and use a long-term secret (repeatedly).
To prove the security of the proposed schemes, this work also presents a new primitive
called the Unbiased-Chosen Basis (UCB) assumption.
70. 2007 Virus Spread in Networks (Networking)
Abstract:
The influence of the network characteristics on the virus spread is analyzed in a
new – the N-intertwined Markov chain – model, whose only approximation lies in the
application of mean field theory. The mean field approximation is quantified in detail.
The N-intertwined model has been compared with the exact 2^N-state Markov model
and with previously proposed “homogeneous” or “local” models. The sharp epidemic
threshold τ_c, which is a consequence of mean field theory, is rigorously shown to be
equal to τ_c = 1/λ_max(A), where λ_max(A) is the largest eigenvalue – the spectral
radius – of the adjacency matrix A. A continued fraction expansion of the steady-state
infection probability at node j is presented as well as several upper bounds.
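The threshold formula is straightforward to evaluate; as a short illustration (the star graph is just an example):

import numpy as np

# Adjacency matrix of a 5-node star graph (node 0 is the hub).
A = np.zeros((5, 5))
A[0, 1:] = A[1:, 0] = 1

# Spectral radius = largest eigenvalue of the (symmetric) adjacency matrix.
lam_max = np.linalg.eigvalsh(A).max()

tau_c = 1.0 / lam_max   # epidemic threshold tau_c = 1 / lambda_max(A)
print(f"lambda_max = {lam_max:.3f}, tau_c = {tau_c:.3f}")  # lambda_max = 2 here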
71. MAY 2007 Morphological Granulometric Features of Nucleus in Automatic Bone Marrow White Blood Cell Classification (Information Technology in Biomedicine)
Abstract:
The proportion of counts of different types of white blood cells in the bone
marrow, called differential counts, provides invaluable information to doctors for
diagnosis. Due to the tedious nature of the differential white blood cell counting
process, an automatic system is preferable. In this paper, we investigate whether
information about the nucleus alone is adequate to classify white blood cells. This is
important because segmentation of nucleus is much easier than the segmentation of the
entire cell, especially in the bone marrow where the white blood cell density is very
high. In the experiments, a set of manually segmented images of the nucleus are used
to decouple segmentation errors.
We analyze a set of white-blood-cell-nucleus-based features using mathematical
morphology. Fivefold cross validation is used in the experiments in which Bayes’
classifiers and artificial neural networks are applied as classifiers. The classification
performances are evaluated by two evaluation measures: traditional and class-wise
classification rates. Furthermore, we compare our results with other classifiers and
previously proposed nucleus-based features. The results show that features of the
nucleus alone can be utilized to achieve a classification rate of 77% on the test sets.
Moreover, the classification performance is better in the class-wise sense when the a
priori information is suppressed in both classifiers.
72. JAN 2006 Digital Image Processing Techniques for the Detection and Removal of Cracks in Digitized Paintings (Image Processing)
Abstract:
An integrated methodology for the detection and removal of cracks on
digitized paintings is presented in this paper. The cracks are detected by thresholding
the output of the morphological top-hat transform. Afterward, the thin dark brush
strokes which have been misidentified as cracks are removed using either a median
radial basis function neural network on hue and saturation data or a semi-automatic
procedure based on region growing. Finally, crack filling using order statistics filters or
controlled anisotropic diffusion is performed. The methodology has been shown to
perform very well on digitized paintings suffering from cracks.
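A compact sketch of the crack-detection step on synthetic data (only the top-hat stage; the brush-stroke filtering and inpainting stages are omitted, and the structuring-element size and threshold are assumed values). Cracks are thin dark structures, so a black top-hat transform highlights them:

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
painting = rng.normal(0.6, 0.05, (64, 64))  # synthetic bright "painting"
painting[32, 10:54] -= 0.4                  # a dark, thin horizontal crack

# Black top-hat = closing(image) - image: it responds to dark details
# smaller than the structuring element.
tophat = ndimage.black_tophat(painting, size=(5, 5))

crack_mask = tophat > 0.2   # assumed threshold on the top-hat response
print("crack pixels found:", int(crack_mask.sum()))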
73. OCT 2006 Policies for Caching OLAP Queries (Parallel and Distributed Systems)
Abstract:
The Internet now offers more than just simple information to the users.
Decision makers can now issue analytical queries in order to identify useful trends and
patterns. Such queries are often referred to as On-Line Analytical Processing (OLAP).
Typically, pages carrying query results do not exhibit temporal locality and, therefore,
are not considered for caching at Internet proxies. This project reduces the response
time for OLAP queries originating from geographically distributed private LANs. The
theme of our project is to formulate a cost model for characterizing the respective
latencies, taking into consideration the combined effects of both common Web access
and query processing. We use a cache admittance policy that operates on a Web-OLAP
input, outperforming both pure-Web and pure-OLAP caching schemes.
74. 2006 Visual Web Mining of Organizational Web Sites
Abstract:
Existing web usage mining (WUM) tools do not indicate which data mining
algorithms are used or provide effective graphical visualizations of the results obtained.
WUM techniques can be used to determine typical navigation patterns in an
organizational web site. The process of combining WUM and information visualization
techniques in order to discover useful information about web usage patterns is called
visual web mining. The goal of this paper is to discuss the development of a visual web
mining prototype, called Web Patterns, which allows the user to effectively visualize
web usage patterns.
75. SEP 2004 Workflow Mining: Discovering Process Models from Event Logs (Knowledge and Data Engineering)
Abstract:
Contemporary workflow management systems are driven by explicit process
models, i.e., a completely specified workflow design is required in order to enact a
given workflow process. Creating a workflow design is a complicated time-consuming
process and, typically, there are discrepancies between the actual workflow processes
and the processes as perceived by the management. Therefore, we have developed
techniques for discovering workflow models. The starting point for such techniques is
a so-called “workflow log” containing information about the workflow process as it is
actually being executed. We present a new algorithm to extract a process model from
such a log and represent it in terms of a Petri net. However, we will also demonstrate
that it is not possible to discover arbitrary workflow processes. In this paper, we
explore a class of workflow processes that can be discovered. We show that the
algorithm can successfully mine any workflow represented by a so-called SWF-net.
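As a minimal illustration of the first step of such log-based discovery (in the spirit of the relations used by workflow-mining algorithms; the Petri-net construction itself is omitted), the following sketch derives causality relations from direct succession in a toy log:

from collections import defaultdict

# A workflow log: each trace is the task sequence of one executed case.
log = [["A", "B", "C", "D"], ["A", "C", "B", "D"], ["A", "E", "D"]]

# Direct succession: x > y iff y immediately follows x in some trace.
succ = defaultdict(set)
for trace in log:
    for x, y in zip(trace, trace[1:]):
        succ[x].add(y)
succ = dict(succ)

# Causality: x -> y iff x > y holds but y > x does not.
causal = {(x, y) for x, ys in succ.items() for y in ys
          if x not in succ.get(y, set())}
print(sorted(causal))
# B and C are recognized as parallel, since both B > C and C > B hold.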

Vous aimerez peut-être aussi