
MOBILE COMPUTING

CREATED BY ANUBHAV TRIVEDI {aka  Evilanubhav}

Q 1: What is Cellular IP? Explain the difference between Cellular IP and Mobile IP?

Ans.

1. The Mobile IP protocol is considered to have limitations in its capability to
handle large numbers of Mobile Stations moving fast between different
radio cells.
2. To overcome this problem Cellular IP was introduced.

3. Hosts connecting to the Internet via a wireless interface are likely to change
their point of access frequently. A mechanism is required that ensures that
packets addressed to moving hosts are successfully delivered with high
probability.
4. During a handover, packet losses may occur due to delayed propagation of
new location information. These losses should be minimized in order to
avoid a degradation of service quality as handovers become more frequent.
5. Cellular IP provides mobility and handover support for frequently moving
hosts. It is intended to be used on a local level, for instance in a campus or
metropolitan area network.
6. Cellular IP can interwork with Mobile IP to support wide area mobility, that
is, mobility between Cellular IP Networks.
7. A Cellular IP network comprises a gateway router that connects the Cellular
IP network to the Internet, several Cellular IP nodes that are responsible
for the Cellular IP routing, and mobile hosts which support the Cellular IP
protocol.
8. In Cellular IP, none of the nodes know the exact location of a mobile host.
Packets addressed to a mobile host are routed to its current base station on a
hop-by-hop basis.
9. Mappings are created and updated based on the packets transmitted by
mobile hosts. It uses two parallel structures of mappings: Paging
Caches (PC) and Routing Caches (RC). The PC maintains mappings for mobile
hosts and has large timeout intervals, on the order of seconds or minutes. The RC
maintains mappings for mobile hosts currently receiving data or expecting to
receive data, and its timeouts are on the packet timescale.
10. Mobile hosts that are not actively transmitting or receiving data (idle state)
but want to stay reachable, have the opportunity to maintain paging cache
entries. A mobile host with installed route cache entries is said to be in
active state.
11. On Cellular IP nodes where both a route and a paging cache are maintained,
packet forwarding in the downlink direction is done in the same way for routing
and paging, with priority given to the route cache entries. If a Cellular IP node
that does not maintain a paging cache receives a downlink packet for a mobile
host for which it has no routing entry in its route cache, it broadcasts the
packet to all its downlink neighbours. By this mechanism, groups of several,
usually adjacent, base stations are built in which idle mobile hosts are
searched when a packet has to be delivered to them. These groups of base
stations are called paging areas.
12.Cellular IP provides two handover mechanisms: A hard handover and a
semi-soft handover mechanism. For a hard handover, the wireless interface
of a mobile host changes from one base station to another at once. For the
semi-soft handover the mobile host switches to the new base station,
transmits a route update message with a flag indicating the semi-soft
handover and returns immediately to the old base station in order to listen
for packets destined to it.
13. The route update message reconfigures the route caches on the way to the
gateway router as usual, except for the route cache on the cross-over node,
where the new path branches off from the old path. In that node, downlink
packets are duplicated and sent along both paths until a new route update
message arrives.
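The two mapping structures in points 9-11 can be sketched as a small soft-state cache model. This is an illustrative sketch, not the actual Cellular IP implementation; the class names and the timeout values (60 s for paging, 0.5 s for routing) are assumptions chosen for the example.

```python
import time

class MappingCache:
    """Soft-state cache mapping a mobile host to a downlink neighbour.
    Entries expire unless refreshed by packets from the mobile host."""

    def __init__(self, timeout):
        self.timeout = timeout   # seconds until an unrefreshed entry expires
        self.entries = {}        # host -> (neighbour, last_refresh_time)

    def refresh(self, host, neighbour):
        # Uplink packets from `host` (re)install the mapping hop by hop.
        self.entries[host] = (neighbour, time.monotonic())

    def lookup(self, host):
        entry = self.entries.get(host)
        if entry is None:
            return None
        neighbour, stamp = entry
        if time.monotonic() - stamp > self.timeout:
            del self.entries[host]   # soft state timed out
            return None
        return neighbour

class CellularIPNode:
    def __init__(self):
        # Assumed example timeouts: paging entries live for minutes,
        # routing entries only on the packet timescale.
        self.paging_cache = MappingCache(timeout=60.0)
        self.routing_cache = MappingCache(timeout=0.5)

    def next_hop(self, host):
        # Downlink forwarding: route-cache entries take priority,
        # the paging cache is the fallback for idle hosts.
        return self.routing_cache.lookup(host) or self.paging_cache.lookup(host)

node = CellularIPNode()
node.paging_cache.refresh("mh1", "bs-east")    # idle host stays reachable
print(node.next_hop("mh1"))                    # found via paging cache
node.routing_cache.refresh("mh1", "bs-west")   # host turns active after handover
print(node.next_hop("mh1"))                    # route cache now takes priority
```

Note how an idle host is still reachable via the long-lived paging entry, while an active host is forwarded along the fresher routing entry.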
[Figure: Paging Caches (PC) and Routing Caches (RC) mapping mobile hosts to
locations across the Cellular IP service area.]
Mobile IP vs. Cellular IP:

1. Mobile IP is not suitable for fast-moving mobile hosts; Cellular IP was
introduced for fast-moving mobile hosts.
2. Smooth handoff is not possible in Mobile IP; in Cellular IP it is possible.
3. Mobile IP makes no distinction between active and idle users; Cellular IP
distinguishes between them.
4. Mobile IP has high handoff latency; Cellular IP has low handoff latency.
5. Mobile IP gives low battery life; Cellular IP gives high battery life.
6. Mobile IP is based on macromobility; Cellular IP is based on micromobility.
7. Mobile IP was introduced in 1996; Cellular IP was introduced in 1998.
8. In Mobile IP, handoff is initiated by the foreign agent; in Cellular IP, by
the mobile host.
9. Mobile IP is coarse-grained; Cellular IP is fine-grained.
10. In Mobile IP, mapping is done by mobility bindings and visitor lists; in
Cellular IP, by Paging Caches and Routing Caches.
11. Mobile IP uses the concepts of discovery, registration, and tunneling;
Cellular IP does not.
Q 2: Explain the predictive location management scheme in detail.
Ans.
1. In the operation of wireless personal communication service (PCS) networks,
mobility management deals with the tracking, storage, maintenance, and retrieval
of mobile location information. Two commonly used standards, the EIA/TIA
Interim Standard 41 in North America and the Global System for Mobile
Communications in Europe, partition their coverage areas into a number of
location areas (LAs), each consisting of a group of cells.

2. When a mobile enters an LA, it reports to the network the information about its
current new location (location update).

3. When an incoming call arrives, the network simultaneously pages the mobile
(terminal paging) in all cells within the LA where the mobile currently resides. In
these standards, the LA coverage is fixed for all users.
4. Although dynamic LA management is possible, LA-based schemes, in general,
are not flexible enough to adapt to different and changing user traffic and
mobility patterns.

5. In order to reduce the paging costs, mobiles inform the network of their
locations from time to time; this is called location updating.

6. There is a tradeoff between location update cost and paging cost. If a mobile
expends power and bandwidth to update its location more often, it can reduce the
area that needs to be paged when a call arrives. On the other hand, if updates are
performed less frequently, more power and bandwidth will be expended on paging,
since larger areas need to be paged.

7. The cost of mobility management over any given time period is the sum of the
cost of the location updates and the cost of paging.
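The tradeoff in points 6 and 7 can be illustrated with a toy cost model; all unit costs and mobility figures below are made-up example values, not numbers from any standard.

```python
# Toy cost model: total cost = update cost + paging cost over a period.
UPDATE_COST = 5.0          # cost of sending one location update
PAGE_COST_PER_CELL = 1.0   # cost of paging one cell
CELLS_CROSSED = 60         # cell borders crossed during the period
CALLS = 10                 # incoming calls during the period

def total_cost(update_every_n_cells):
    updates = CELLS_CROSSED // update_every_n_cells
    # Waiting longer between updates enlarges the uncertainty area:
    # with a threshold of n cells, roughly n*n cells must be paged per call.
    paging = CALLS * PAGE_COST_PER_CELL * update_every_n_cells ** 2
    return updates * UPDATE_COST + paging

for n in (1, 2, 3, 6):
    print(n, total_cost(n))
```

Very frequent updates (n=1) pay mostly in update messages, very rare updates (n=6) pay mostly in paging, and the minimum total cost lies in between, which is exactly the tradeoff the text describes.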

8. Dynamic mobility management schemes discard the notion of LA borders. A
mobile in these schemes updates its location based on either elapsed time, number
of crossed cell borders, or traveled distance.

9. In particular, in the distance-based scheme, a mobile performs location update
whenever it is some threshold distance away from the location where it last
updated. For a system with a memoryless random-walk mobility pattern, the
distance-based scheme has been proven to result in less mobility management
cost (location update cost plus paging cost) than schemes based on time or number
of cell boundary crossings.

10. In our proposed predictive distance-based mobility management scheme, the
future location of a mobile is predicted based on the probability density function of
the mobile's location, which is, in turn, given by the Gauss-Markov model based
on its location and velocity at the time of the last location update. The prediction
information is made available to both the network and the mobiles. Therefore, a
mobile is aware of the network's prediction of its location in time.

11. The mobile checks its position periodically (location inspection) and performs
location update whenever it reaches some threshold distance (update distance)
away from the predicted location. To locate a mobile, the network pages the
mobile starting from the predicted location and outwards, in a shortest-distance-
first order, and until the mobile is found.
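The update check and the shortest-distance-first paging described in point 11 can be sketched as follows. The planar grid, the threshold of 3 cell units, and the fixed predicted position are illustrative assumptions; in the real scheme the prediction comes from the Gauss-Markov mobility model.

```python
import math

UPDATE_DISTANCE = 3.0   # assumed update-distance threshold, in cell units

def needs_update(actual, predicted):
    """Location inspection: the mobile updates only when it has drifted
    beyond the threshold distance from the network's predicted location."""
    return math.dist(actual, predicted) >= UPDATE_DISTANCE

def page_order(cells, predicted):
    """Terminal paging: page cells outwards from the predicted location,
    in shortest-distance-first order, until the mobile is found."""
    return sorted(cells, key=lambda c: math.dist(c, predicted))

predicted = (0.0, 0.0)
print(needs_update((1.0, 1.0), predicted))   # small drift: no update needed
print(needs_update((3.0, 2.0), predicted))   # beyond threshold: update

cells = [(2, 2), (0, 0), (1, 0), (0, 2)]
print(page_order(cells, predicted))          # nearest cells are paged first
```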

Q 3: Explain data dissemination by broadcast.

Ans. Many emerging applications involve the dissemination of data to large
populations of clients. Examples of such dissemination-based applications include
information feeds (e.g., stock and sports tickers or news wires), traffic information
systems, electronic newsletters, software distribution, and entertainment delivery.
A key enabling technology for dissemination-based applications has been the
development and increasing availability of mechanisms for high-bandwidth data
delivery.
Data broadcast technology stands to play a primary role in dissemination-based
applications for two reasons. First, data dissemination is inherently a 1-to-n
process. That is, data is distributed from a small number of sources to a much
larger number of clients that have overlapping interests. Thus, any particular data
item is likely to be distributed to many clients. Second, much of the
communication technology that has enabled large-scale dissemination supports
broadcast, and is, in some cases, intended primarily for broadcast use.

Communication Asymmetry
A key aspect of dissemination-based systems is their inherent communications
asymmetry. By communications asymmetry, we mean that the volume of data
transmitted in the downstream direction (i.e., from server(s) to clients) is much
greater than the volume transmitted in the upstream direction.
Such asymmetry can result from several factors, including:
Network Asymmetry — In many networks the bandwidth of the communication
channel from the server to the clients is much greater than the bandwidth in the
opposite direction. An environment in which clients have no backchannel is an
extreme example of network asymmetry. Less extreme examples include wireless
networks with high-speed downlinks but slow (e.g., cellular) uplinks, and cable
television networks, where bandwidth into the home is on the order of megabits
per second, while a small number of slow channels (on the order of tens of kilobits
per second) connect a neighborhood to the cable head.

Client to Server Ratio — A system with a small number of servers and a much
larger number of clients (as is the case in dissemination-based applications) also
results in asymmetry because the server message and data processing capacity
must be divided among all of the clients. Thus, clients must be careful to avoid
“swamping” the server with too many messages and/or too much data.

Data Volume — A third way that asymmetry arises is due to the volume of data that
is transmitted in each direction. Information retrieval applications typically involve
a small request message containing a few query terms or a URL (i.e., the “mouse
and key clicks”), and result in the transfer of a much larger object or set of objects
in response. For such applications, the downstream bandwidth requirements of
each client are much higher than the upstream bandwidth requirements.

Updates and New Information — Finally, asymmetry can also arise in an
environment where newly created items or updates to existing data items must be
disseminated to clients. In such cases, there is a natural (asymmetric) flow of data
in the downstream direction.

There are four types of broadcast techniques:

1. Static Broadcast
2. Dynamic Broadcast
3. Selective Broadcast
4. Optimized Broadcast

Static Broadcast: A single sender transmits information simultaneously to many
receivers. The transmitter and all the receivers are synchronously connected for
the entire communication time. An interesting result is that it is possible to
transmit at a higher rate than is achieved by time-sharing the channel between
the receivers.

In a typical situation, there is common information that needs to be
transmitted to all receivers.

Dynamic Broadcast: Video-on-demand (VOD) will one day allow customers to
select any given video from a large on-line video library and watch it on their
televisions without any further delay.
This situation has resulted in many proposals aimed at reducing the bandwidth
requirements of video-on-demand services. Despite all their differences, all these
proposals are based on the same idea, namely, sharing as many data as possible
among overlapping requests for the same video.

Selective Broadcast: Servers broadcast selected information only. The overall
system is divided into two overlapping groups:

1. Publication Groups: Here it is decided what data is to be broadcasted,
depending upon its importance.

2. On Demand: Here the server chooses the next item based on client requests.
The frequently requested items are called "Hot Spots", e.g., stock information,
airline schedules, etc. They are demanded frequently, so they are broadcasted
regularly after some time interval.

Optimized Broadcast: Here the server takes the active role and pushes updated
data as soon as a new piece of data is available on the network. Server
resources are used only when the information content has changed, and the
clients get the updated data as soon as possible after the change has
taken place.
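The push-on-change behaviour of optimized broadcast can be sketched with a small observer-style model; the class and method names here are illustrative assumptions, not part of any specific broadcast system.

```python
class OptimizedBroadcastServer:
    """Sketch of push-on-change: the server uses resources only when an
    item actually changes, then pushes the new value to all clients."""

    def __init__(self):
        self.clients = []
        self.data = {}
        self.pushes = 0   # how many broadcasts the server performed

    def subscribe(self, client):
        self.clients.append(client)

    def update(self, key, value):
        if self.data.get(key) == value:
            return                       # content unchanged: do nothing
        self.data[key] = value
        self.pushes += 1
        for client in self.clients:      # push as soon as the change happens
            client.receive(key, value)

class Client:
    def __init__(self):
        self.seen = {}

    def receive(self, key, value):
        self.seen[key] = value

server = OptimizedBroadcastServer()
client = Client()
server.subscribe(client)
server.update("stock:XYZ", 101.5)   # new data: pushed immediately
server.update("stock:XYZ", 101.5)   # same value: no broadcast
print(client.seen, server.pushes)
```

The second `update` call with an unchanged value consumes no server resources, which is the defining property of this technique.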

Q 4: Explain energy-efficient indexing for the push-based data delivery model.
Ans. Two fundamental modes of providing users with information are
distinguished:
1. Data Broadcasting: Accessing broadcasted data does not require any uplink
transmission and is listen only. Querying involves simple filtering of the
incoming data stream according to a user specified filter.

2. Interactive/On demand: The client requests a piece of data on the uplink
channel and the server responds by sending this data to the client.

In practice, a mixture of the above two is used. The most frequently demanded
items, the so-called "Hot Spots", are broadcasted, creating a storage on the air.
For example, stock information, airline schedules, etc.

[Figure: Broadcast channel layout with index segments (Index 1 ... Index m)
interleaved among data buckets (Data 1 ... Data M), linked by previous/next
pointers.]

The constraint of limited available energy is expected to drive all solutions to
mobile computing on palmtops.
The power consumption in doze mode is only 50 µW, while in active mode it is
250 mW (the ratio of power consumption in active mode to doze mode is 5000).

When the palmtop is listening to the channel, the CPU must be in active mode to
examine data packets.
The CPU is a much more significant energy consumer than the receiver itself, and
since it has to be active to examine the incoming packets, energy may be wasted.

Therefore, it is beneficial if palmtops can slip into doze mode most of the time
and come into active mode only when data of interest is expected to arrive.
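The figures above make the payoff of dozing concrete. The one-hour period and the 99%-doze duty cycle below are assumed example values; only the 50 µW and 250 mW figures come from the text.

```python
doze_uw = 50          # doze-mode power: 50 µW, in microwatts
active_uw = 250_000   # active-mode power: 250 mW, in microwatts

print(active_uw // doze_uw)   # ratio of active to doze power: 5000

# Energy over one hour (in microjoules): staying active the whole time
# versus dozing 99% of the hour and waking only for data of interest.
always_active = active_uw * 3600
mostly_dozing = active_uw * 36 + doze_uw * 3564
print(always_active // mostly_dozing)   # roughly two orders of magnitude saved
```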

 Energy-efficient solutions are important due to the following reasons:

1. They make it possible to use smaller and less powerful batteries to run the
same set of applications for the same amount of time.
2. Smaller batteries are important from the portability point of view, as
palmtops can be compact and weigh less.
3. With the same batteries, the unit can run for a very long time without the
problem of changing the battery or recharging.
4. Every battery which is disposed of is an environmental hazard.

Due to the above reasons, the directory of the file is broadcasted in the
form of an index on the broadcast channel. The index we consider is a
multilevel index.

For a file being broadcasted the two parameters are:

1. Access Time: Average time elapsed from the moment a client wants a record,
identified by a primary key, to the point when the record is downloaded. The
access time is determined by two parameters:
 Probe wait: When an initial probe is made into the broadcast channel, the
client gets information about the occurrence of the next nearest index
information relevant to the required data. The average duration for getting
to this nearest index information is called the probe wait.
 Broadcast wait: The average duration from the point when the index
information relevant to the required data is encountered, to the point when
the required record is downloaded, is called the broadcast wait.

2. Tuning Time: Amount of time spent by a client listening to the channel.

(1,m) indexing: It is an allocation method in which the index is broadcasted m
times during the broadcast of one version of a file. All buckets have an offset to
the beginning of the next index segment.
The access protocol for a record with a given key is as follows:
1. Tune into the current bucket in the broadcast channel.
2. Read the offset to determine the address of the next nearest index segment.
3. Go into doze mode and tune in at the broadcast of the index segment.
4. From the index segment, determine when the data bucket containing the
record with the primary key is broadcasted. This is accomplished by successive
probes, by following the pointers in the multilevel index.
5. Tune in again when the bucket containing the record with the primary key is
broadcasted, and download the record.
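The access protocol above can be simulated on a flat broadcast schedule. The bucket layout, the six-record file, and the single-level (one-probe) index are simplifying assumptions; the text's index is multilevel, which would add a few index probes to the tuning time.

```python
def build_cycle(keys, m):
    """Broadcast one file version as m index segments interleaved with the
    data buckets ((1,m) allocation). Each slot is a (kind, payload) bucket."""
    per_segment = len(keys) // m
    cycle = []
    for i in range(m):
        cycle.append(("index", tuple(keys)))   # full index, repeated m times
        for key in keys[i * per_segment:(i + 1) * per_segment]:
            cycle.append(("data", key))
    return cycle

def access(cycle, key, start):
    """Return (access_time, tuning_time), in bucket units, for fetching
    `key` after first tuning in at slot `start`."""
    n = len(cycle)
    t, tuned = start, 1            # step 1: probe the current bucket
    while cycle[t % n][0] != "index":
        t += 1                     # steps 2-3: doze until the index segment
    tuned += 1                     # step 4: read the index (one probe here)
    while cycle[t % n] != ("data", key):
        t += 1                     # step 5: doze until the wanted data bucket
    tuned += 1
    return t - start + 1, tuned

cycle = build_cycle(["a", "b", "c", "d", "e", "f"], m=2)
print(access(cycle, "e", start=1))   # access time grows with the wait,
                                     # but tuning time stays small
```

The point of the scheme shows up in the two numbers: the access time includes the probe wait and broadcast wait spent dozing, while the tuning time counts only the few buckets the client actually listens to.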

Q 5: Describe the design of the Coda file system and explain the different
states of Venus; also draw the state transition diagram.
Ans.

1. Coda was designed to be a scalable, secure, and highly available distributed
file system. An important goal was to achieve a high degree of naming and
location transparency so that the system would appear to its users very similar to a
pure local file system. By also taking high availability into account, the designers
of Coda have also tried to reach a high degree of failure transparency.

2. Coda is a descendant of version 2 of the Andrew File System (AFS), which
was also developed at CMU, and inherits many of its architectural features from
AFS. AFS was designed to support the entire CMU community, which implied that
approximately 10,000 workstations would need to have access to the system. To
meet this requirement, AFS nodes are partitioned into two groups. One group
consists of a relatively small number of dedicated Vice file servers, which are
centrally administered. The other group consists of a very much larger collection of
Virtue workstations that give users and processes access to the file system.
3. Coda follows the same organization as AFS. Every Virtue workstation hosts
a user-level process called Venus, whose role is similar to that of an NFS
client. A Venus process is responsible for providing access to the files that
are maintained by the Vice file servers. In Coda, Venus is also responsible
for allowing the client to continue operation even if access to the file servers
is (temporarily) impossible.

4. There is a separate Virtual File System (VFS) layer that intercepts all calls
from client applications, and forwards these calls either to the local file
system or to Venus. This organization with VFS is the same as in NFS.
Venus, in turn, communicates with Vice file servers using a user-level RPC
system. The RPC system is constructed on top of UDP datagrams and
provides at-most-once semantics.

There are three different server-side processes. The great majority of the work is
done by the actual Vice file servers, which are responsible for maintaining a local
collection of files.

Communication
Interprocess communication in Coda is performed using RPCs. However, the
RPC2 system for Coda is much more sophisticated than traditional RPC systems
such as ONC RPC, which is used by NFS.

RPC2 offers reliable RPCs on top of the (unreliable) UDP protocol. Each time
a remote procedure is called, the RPC2 client code starts a new thread that sends
an invocation request to the server and subsequently blocks until it receives an
answer.

An interesting aspect of RPC2 is its support for side effects. A side effect is a
mechanism by which the client and server can communicate using an
application-specific protocol.

For example, RPC2 allows the client and the server to set up a separate
connection for transferring video data to the client on time. Connection setup is
done as a side effect of an RPC call to the server.

Processes
Coda maintains a clear distinction between client and server processes.
Clients are represented by Venus processes; servers appear as Vice processes.
Both types of processes are internally organized as a collection of concurrent
threads. Threads in Coda are nonpreemptive and operate entirely in user space.

Naming
Coda maintains a naming system analogous to that of UNIX. Files are grouped into
units referred to as volumes. Usually a volume corresponds to a collection of files
associated with a user. Examples of volumes include collections of shared binary
or source files, and so on. Like disk partitions, volumes can be mounted.

Volumes are important for two reasons :

First, they form the basic unit by which the entire name space is constructed. This
construction takes place by mounting volumes at mount points. A mount point in
Coda is a leaf node of a volume that refers to the root node of another volume.

The second reason why volumes are important is that they form the unit for
server-side replication.

State Transition Diagram

                  +-----------+
        +-------->| HOARDING  |
        |         +-----------+
        |               | Disconnection
  Reintegration         v
    completed     +-----------+    Disconnection
        |         | EMULATION |<-----------------+
        |         +-----------+                  |
        |               | Reconnection           |
        |               v                        |
        |       +---------------+                |
        +-------| REINTEGRATION |----------------+
                +---------------+
Filling the cache in advance with the appropriate files is called hoarding.
Normally, a client will be in the HOARDING state. In this state, the client is
connected to (at least) one server that contains a copy of the volume. While in this
state, the client can contact the server and issue file requests to perform its work.
Simultaneously, it will also attempt to keep its cache filled with useful data (e.g.,
files, file attributes, and directories).

At a certain point, the number of servers in the client's AVSG will drop to zero,
bringing it into the EMULATION state, in which the behavior of a server for
the volume has to be emulated on the client's machine. In practice, this
means that all file requests will be serviced directly using the locally cached copy
of the file. Note that while a client is in its EMULATION state, it may still be able
to contact servers that manage other volumes. In such cases, the disconnection will
generally have been caused by a server failure rather than by the client being
disconnected from the network.

Finally, when reconnection occurs, the client enters the REINTEGRATION
state, in which it transfers updates to the server in order to make them permanent.
It is during reintegration that conflicts are detected and, where possible,
automatically resolved. As shown in Figure, it is possible that during reintegration
the connection with the server is lost again, bringing the client back into the
EMULATION state.
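The three Venus states and the transitions in the state transition diagram above can be encoded as a small state machine. This is an illustrative sketch of the transitions only, not Coda's actual implementation; the event names are assumptions taken from the figure's labels.

```python
# Venus client states and the events that move between them,
# following the state transition diagram.
TRANSITIONS = {
    ("HOARDING", "disconnection"): "EMULATION",
    ("EMULATION", "reconnection"): "REINTEGRATION",
    ("REINTEGRATION", "reintegration completed"): "HOARDING",
    ("REINTEGRATION", "disconnection"): "EMULATION",
}

class Venus:
    def __init__(self):
        self.state = "HOARDING"   # normal case: connected, hoarding the cache

    def on(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"no transition for {event!r} in {self.state}")
        self.state = TRANSITIONS[key]
        return self.state

venus = Venus()
print(venus.on("disconnection"))              # serve from the local cache
print(venus.on("reconnection"))               # replay updates to the server
print(venus.on("disconnection"))              # connection lost mid-reintegration
print(venus.on("reconnection"))
print(venus.on("reintegration completed"))    # back to normal operation
```

The scripted sequence exercises the loop described in the text, including losing the connection again during reintegration and falling back to EMULATION.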