Research Projects

The following are some of the research projects currently under investigation by various members of the Oceans research group. For more details about each project, please visit that project's Home Page.

Client Prefetching for the Web
Is it possible to prefetch WWW documents? Is it possible for clients to perform prefetching based on previous user access patterns? How effective would such techniques be in terms of reducing server load and service time? These and other questions are the subject of this research project, which attempts to characterize the spatial locality of reference for Web documents and to exploit this locality to perform client-initiated prefetching.
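The client-side prediction idea above can be sketched as a simple first-order model of which document tends to follow which in a user's access history; the class name, threshold, and example documents below are illustrative assumptions, not part of the project:

```python
from collections import defaultdict

class PrefetchModel:
    """First-order model of which document tends to follow which,
    built from a client's past accesses (hypothetical sketch)."""

    def __init__(self, threshold=0.5):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.threshold = threshold

    def record(self, prev_doc, next_doc):
        # Count how often next_doc followed prev_doc in past sessions.
        self.counts[prev_doc][next_doc] += 1

    def candidates(self, current_doc):
        # Prefetch documents whose estimated probability of being
        # requested next exceeds the threshold.
        followers = self.counts[current_doc]
        total = sum(followers.values())
        return [d for d, c in followers.items() if total and c / total >= self.threshold]

model = PrefetchModel()
for prev, nxt in [("index.html", "papers.html"), ("index.html", "papers.html"),
                  ("index.html", "people.html")]:
    model.record(prev, nxt)
print(model.candidates("index.html"))  # ['papers.html']
```

The threshold trades bandwidth wasted on mispredicted fetches against the latency saved on hits, which is exactly the effectiveness question the project studies.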

Investigators: Azer Bestavros, Carlos Cunha, and Martin Mroz.

The Commonwealth Server
The goal of the Commonwealth Server Project is to make high-performance servers common, by demonstrating that they can be built in a scalable manner from relatively low-cost components. The strength of the Commonwealth Server is its combination of an appealing performance/cost ratio with easily scaled performance. The project's design goal is software that allows inexpensive PC-class machines to act as a single server that can be scaled up, cost-effectively, to 100 million hits/day.
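One way to picture scaling a server out of low-cost components is a front end that spreads requests over a pool of PC-class nodes; this round-robin dispatcher is a hypothetical sketch, not the actual Commonwealth software:

```python
import itertools

class Dispatcher:
    """Front end that assigns requests round-robin over PC-class nodes;
    adding a node raises aggregate capacity roughly linearly
    (hypothetical sketch, not the Commonwealth design itself)."""

    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)

    def assign(self, request):
        # Hand the request to the next node in rotation.
        return next(self._cycle)

d = Dispatcher(["pc1", "pc2", "pc3"])
print([d.assign(i) for i in range(5)])  # ['pc1', 'pc2', 'pc3', 'pc1', 'pc2']
```

Scaling to 100 million hits/day then becomes a matter of adding nodes, provided the front end and shared state do not themselves become the bottleneck.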

Investigators: Mark Crovella, Azer Bestavros, David Yates, and Virgilio Almeida.

Dynamic Network Measurement and Prediction
The goal of this project is to develop techniques and tools for dynamically measuring and predicting resource availability in wide-area networks. Originally motivated by the Server Selection Problem, our techniques have grown into the more general notion of application-level congestion avoidance. Our tools are designed to give applications information about the current latency, link speed, and congestion to arbitrary hosts, and to provide estimates of these values in the future. Using such information, applications can find "nearby" servers, avoid congested paths, and minimize transfer latency.
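One simple way to measure a path, sketched below under assumed probe sizes and timings, is to compare round-trip times of a small and a large probe: the small probe's RTT approximates latency, and the extra time taken by the extra bytes estimates the bottleneck bandwidth. Real tools would use repeated probes and statistical filtering.

```python
def estimate_path(rtt_small, rtt_large, small_bytes, large_bytes):
    """Estimate round-trip latency and bottleneck bandwidth from the
    RTTs of two probes of different sizes (hypothetical sketch)."""
    latency = rtt_small                 # small probe RTT ~ pure latency
    extra_time = rtt_large - rtt_small  # time to push the extra bytes
    bandwidth = (large_bytes - small_bytes) / extra_time  # bytes/sec
    return latency, bandwidth

lat, bw = estimate_path(0.050, 0.060, 64, 1064)
print(lat, round(bw))  # 0.05 100000
```

A single measurement like this is noisy; predicting future values requires smoothing a series of such estimates over time.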

Investigators: Bob Carter and Mark Crovella.

Wave: Distributed Document Dissemination
In this project we develop Wave, a fully distributed protocol and an associated set of policies that maximize total server throughput by balancing load among document and cache servers across an internet. Wave is designed to respond very quickly to global changes in load without introducing instabilities. We believe that distributed document dissemination services should integrate, to a limited extent, caching with routing, broadcasting, and name resolution. In particular, Wave places cache copies along the virtual routes that client requests follow to document home sites, so as to intercept and fulfill these requests on the fly. If virtual routes are constrained to be close approximations of physical routes, the communication overhead of our scheme can be kept small. We have extended MaRS (the Maryland Routing Simulator) to model caching behavior and to monitor and gossip load information.
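The interception idea can be sketched as a walk along the virtual route: the request travels toward the document's home site and the first node holding a cached copy answers it. The route, node names, and cache contents below are illustrative assumptions:

```python
def service_request(route, doc, caches):
    """Walk the virtual route toward the document's home site; the first
    node holding a copy intercepts and services the request
    (hypothetical sketch of the interception idea)."""
    for hop, node in enumerate(route):
        if doc in caches.get(node, set()):
            return node, hop          # intercepted before the home site
    return route[-1], len(route) - 1  # fell through to the home site

route = ["client-gw", "r1", "r2", "home"]
caches = {"r2": {"paper.ps"}}
print(service_request(route, "paper.ps", caches))  # ('r2', 2)
```

Placing copies at nodes that many virtual routes share is what lets a single cached copy absorb load from many clients at once.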

Investigators: Abdelsalam Heddaya and Sulaiman Mirdad.

Speculative Service for the Web
Is it possible for servers to predict what their clients will request in the future, and to service such requests (or provide hints about them) speculatively? How effective would such techniques be in reducing server load and service time? These and other questions are the subject of this research project, which tries to capitalize on the temporal and spatial locality of reference for Web documents to perform server-initiated speculative service.
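Unlike client-side prefetching, here the server holds the aggregated log and can piggyback hints on each response. One hedged sketch, with an illustrative log format of (previous, next) request pairs:

```python
def speculative_hints(server_log, just_served, k=2):
    """From the server's aggregated access log, pick the k documents most
    often requested right after `just_served`, to piggyback as hints on
    the response (hypothetical sketch)."""
    followers = {}
    for prev, nxt in server_log:
        if prev == just_served:
            followers[nxt] = followers.get(nxt, 0) + 1
    ranked = sorted(followers, key=followers.get, reverse=True)
    return ranked[:k]

log = [("a.html", "b.html"), ("a.html", "b.html"),
       ("a.html", "c.html"), ("b.html", "c.html")]
print(speculative_hints(log, "a.html"))  # ['b.html', 'c.html']
```

Because the server sees the requests of all its clients, its model can exploit locality that no single client's history would reveal.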

Investigators: Azer Bestavros and Chau Anh Nguyen.

Image Search on the World Wide Web
The primary goal of this project is to develop a World Wide Web image search tool for searching Web documents based on image content. Unlike keyword-based search, search by image content allows users to guide a search through the selection (or creation) of example images. The technical challenges of this project stem in part from the staggering scale of the World Wide Web, and in part from the problem of developing image representations that support very fast content-based search. In addition, the project will address the design of user interfaces for a Web search-by-image-content browser.
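Search by example reduces to ranking indexed images by distance between feature vectors. The sketch below uses color histograms as one simple image representation; the project's actual features and index structure may well differ, and the file names are invented:

```python
import math

def hist_distance(h1, h2):
    # Euclidean distance between normalized color histograms -- one
    # simple image representation among many possible choices.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def search_by_example(query_hist, index, k=2):
    """Rank indexed images by visual similarity to an example image."""
    return sorted(index, key=lambda url: hist_distance(query_hist, index[url]))[:k]

index = {"sunset.gif": [0.7, 0.2, 0.1],
         "ocean.gif":  [0.1, 0.3, 0.6],
         "sky.gif":    [0.2, 0.3, 0.5]}
print(search_by_example([0.1, 0.2, 0.7], index))  # ['ocean.gif', 'sky.gif']
```

At Web scale, a linear scan like this is too slow, which is why the project emphasizes representations that admit very fast search.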

Principal Investigator: Stan Sclaroff.

Demand-Based Information Dissemination
The primary goal of this project is to develop protocols for the dissemination of information on a supply/demand basis from servers to their clients. The project assumes a future model of the Internet in which, in addition to clients and servers, service proxies offer their storage and bandwidth capacities "for rent". By analyzing the access patterns of its clients, a server can capitalize on the geographical locality of reference for its popular Web documents to decide on replication and placement strategies.
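A minimal placement decision, sketched under the assumption that each request is tagged with a client region and that the server can rent a fixed number of proxies, is to replicate where the demand is:

```python
from collections import Counter

def placement_plan(request_regions, budget):
    """Choose the regions where proxy copies of a popular document should
    be placed, by per-region demand (hypothetical supply/demand sketch)."""
    demand = Counter(request_regions)
    # Rent proxies in the `budget` regions generating the most requests.
    return [region for region, _ in demand.most_common(budget)]

log = ["eu", "eu", "us", "eu", "asia", "us"]
print(placement_plan(log, 2))  # ['eu', 'us']
```

A full protocol would also weigh proxy rental cost against the bandwidth and latency saved, which is the supply side of the supply/demand trade-off.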

Investigators: Azer Bestavros and Carlos Cunha.

Maintainer: A. Bestavros. Created on: 1994.05.02. Updated on: 1996.08.30.