by Nancy Ann Daly.; Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1987.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.; This research was supported by the National Science Foundation and the Defense Advanced Research Projects Agency.; Bibliography: leaves 112-114.
In this thesis, a computer vision application was implemented in the StreamIt language. The reference application was an image-based motion estimation algorithm written in MATLAB. The intent of this thesis is to contribute a case study of StreamIt, an architecture-independent language designed for stream programming. In particular, it observes StreamIt's ability to preserve block structure and to handle otherwise complicated functionality elegantly, especially in comparison to the MATLAB reference implementation. Furthermore, we observe a runtime of the StreamIt application 2% faster than that of MATLAB. Lastly, the potential for improved performance through parallelization of subroutines within the program is discussed.; by Abdulbasier Aziz.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.; Includes bibliographical references (p. 57-59).
Paper insulated lead covered (PILC) cables have played an important role in underground power distribution for a hundred years. Replacing aged PILC before failure is critical to managing power distribution. Three prominent failure mechanisms accelerate cable aging: temperature stresses, moisture ingress, and partial discharge. This research focuses on the effect of temperature and moisture on the effective (complex) permittivity of the cable insulation. Measurements are performed using cylindrical dielectrometry sensors designed to be wrapped around the cable. The lead sheath of the cable is removed so that the sensors can be placed directly in contact with the insulation. From the measurements, the electrical properties of the material as a function of temperature and of transient moisture diffusion are determined. A theoretical treatment of interdigital dielectrometry sensors with a cylindrical geometry is presented. Two classes of the geometry are studied. The periodic sensor geometry has electrodes aligned with the cylindrical axis and periodic around the circumference. The z-periodic sensor geometry has the electrodes forming rings around the cylinder that are periodic along the cylindrical axis. The material is modeled by concentric rings of homogeneous materials. The electric field solution consists of an infinite summation of Fourier series terms. In finding the field solution and as a consequence of it...
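The separated form of the field solution described in that abstract can be sketched for the periodic (axis-aligned electrode) geometry, where the potential has no variation along z and Laplace's equation in each homogeneous annular ring separates in (r, θ). This is a minimal sketch from standard potential theory; the coefficient names and normalization are assumptions, not the thesis's notation:

```latex
% Laplace's equation in an annular ring, no z-dependence:
\nabla^2 \Phi
  = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\,\frac{\partial \Phi}{\partial r}\right)
  + \frac{1}{r^2}\frac{\partial^2 \Phi}{\partial \theta^2} = 0 .

% General Fourier-series solution in ring k, to be matched at ring
% interfaces and at the electrode surface:
\Phi_k(r,\theta) = A_{k,0} + B_{k,0}\ln r
  + \sum_{n=1}^{\infty}\Bigl[
      \bigl(A_{k,n} r^{\,n} + B_{k,n} r^{-n}\bigr)\cos n\theta
    + \bigl(C_{k,n} r^{\,n} + D_{k,n} r^{-n}\bigr)\sin n\theta
    \Bigr].
```

With an electrode pattern of N identical periods around the circumference, only harmonics with n a multiple of N survive, which is why the solution reduces to an infinite summation of Fourier series terms matched across the concentric material rings.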
This research demonstrates that Suspended Particle Device (SPD) film is a viable option for the development of large-scale transparent display systems. The thesis analyzes the SPD film from an architectural display application standpoint, observing its steady-state and dynamic electro-optical response and evaluating different strategies to address, drive, and control the independent transparency levels of an SPD pixel array. Although passive-matrix multiplexing can also be implemented, it is determined that a directly addressed, multi-line strobing technique is best suited given the constraints of AC drive, power consumption, and bi-directional load conductivity of the currently available film. With regard to transparency control, a high-voltage pulse-width-modulation strategy proves optimal in terms of functionality and component overhead. A prototype display system that incorporates these findings is built, illustrating the key electrical considerations, design features, and versatility that ought to be part of future implementations of the system.; (cont.) "A huge electronic display on a skyscraper facade can be interesting to passing pedestrians, but if you're inside the building it simply blocks your view. Researchers at MIT's Media Laboratory and Department of Urban Studies and Planning are developing a transparent display that doesn't entirely block incoming light. The group is adapting a commercially available film used in electronic window shades...
Conventionally, high-speed routers are built using custom hardware, typically dubbed network processors. A prominent example of such a network processor is the Intel IXP1200. Such a network processor typically takes years of effort to design, fabricate, and refine, and, worse, must frequently be redesigned to meet the oft-changing requirements of emerging network applications. This thesis presents the design and implementation of a software gigabit network router on a general-purpose microprocessor, MIT's Raw. The Raw processor, developed by the Computer Architecture Group at MIT, has sixteen RISC processors, arranged as a grid, that communicate through programmable switches and hardware network interconnects with single-cycle latencies. As opposed to previous high-speed network routers, the Raw router is built without any custom hardware, and achieves its performance by carefully programming and orchestrating, in software, the interconnects within the Raw chip. Our Raw implementation uses stream-oriented abstractions and differs significantly from that of commercial network processors, which use memory-oriented semantics. Consequently, the Raw router is not only flexible in its architecture and easy to upgrade...
Intelligent autonomous agents working cooperatively can accomplish tasks more efficiently than single agents, as well as tasks that are infeasible for a single agent. For example, a single agent transporting a ton of blocks will need to take multiple trips, possibly more trips than it can feasibly make, and will take longer than several agents transporting the blocks in parallel. Unexpected events can force agents to change their plans in order to adapt. During replanning, the expense of communication can limit the amount of information passed. In this thesis, agents reduce the communication load while adapting to environmental events by examining what changes will be made to teammates' plans and by passing information only on a need-to-know basis. More specifically, we describe a method whereby cooperating agents use identical planners and knowledge of the other agents' capabilities to pass information about the environmental conditions they observe that are necessary for their teammates to infer the correct actions to take. An agent must also pass conditions only to the teammates who are affected by the changes. Given that not all agents will have the same information about environmental conditions...
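The need-to-know filtering that the abstract describes can be sketched as a set computation: intersect each observation set with what is relevant to a teammate's plan, then subtract what that teammate already knows. This is a toy sketch of the idea only; the function and variable names are mine, not the thesis's:

```python
def conditions_to_send(observed, relevant_to, already_known):
    """Need-to-know filter for cooperative replanning.

    For each teammate, send only those observed environmental conditions
    that (a) affect that teammate's plan and (b) the teammate does not
    already know about.

    observed:      set of conditions this agent has observed
    relevant_to:   dict mapping teammate -> set of conditions its plan depends on
    already_known: dict mapping teammate -> set of conditions it already knows
    """
    messages = {}
    for mate, relevant in relevant_to.items():
        to_send = (observed & relevant) - already_known.get(mate, set())
        if to_send:  # stay silent when a teammate needs nothing new
            messages[mate] = to_send
    return messages
```

For instance, an agent that observed `{"bridge_out", "rain"}` would send only `{"bridge_out"}` to a teammate whose plan crosses the bridge, and nothing to a teammate unaffected by either condition.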
This thesis presents a tool that assists in the development of timed systems, giving users simulation and verification capabilities to guarantee timing properties in computer programs. The IOA Toolkit, a toolset for developing distributed systems, is extended to permit time-dependent properties to be described using a subset of the Timed IOA (TIOA) language. This thesis also includes a translator from TIOA into UPPAAL, another toolset. This translator allows the simulation of IOA and TIOA programs in UPPAAL's easy-to-use interface, and the checking of their properties with UPPAAL's model checker. This thesis lays the framework for an extension of the toolkit to utilize the full TIOA language, and to allow composed automata to be simulated. A scheme for translating composed automata into UPPAAL is also presented.; by Christine Margaret Robson.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.; Includes bibliographical references (p. 83-87).
Shadows are important to computer graphics because they add realism and help the viewer identify spatial relationships. Shadows are also useful story-telling devices. For instance, artists carefully choose the shape, softness, and placement of shadows to establish mood or character. Many shadow generation techniques developed over the years have been used successfully in offline movie production. It is still challenging, however, to compute high-quality shadows in real-time for dynamic scenes. This thesis presents two efficient shadow algorithms. Although these algorithms are designed to run in real-time on graphics hardware, they are also well-suited to offline rendering systems. First, we describe a hybrid algorithm for rendering hard shadows accurately and efficiently. Our method combines the strengths of two existing techniques, shadow maps and shadow volumes. We first use a shadow map to identify the pixels in the image that lie near shadow discontinuities. Then, we perform the shadow-volume computation only at these pixels to ensure accurate shadow edges. This approach simultaneously avoids the edge aliasing artifacts of standard shadow maps and avoids the high fillrate consumption of standard shadow volumes. The algorithm relies on a hardware mechanism that we call a computation mask for rapidly rejecting non-silhouette pixels during rasterization. Second...
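The computation-mask idea in the hybrid algorithm, flagging only the pixels that lie near shadow-map depth discontinuities so that the expensive shadow-volume test runs just there, can be illustrated with a small array sketch. This is a minimal sketch using a simple neighbor-difference test on the depth values; the function name and threshold are assumptions of mine, not the thesis's formulation:

```python
import numpy as np

def silhouette_mask(shadow_map, threshold=0.1):
    """Flag pixels whose shadow-map depth differs sharply from a neighbor.

    Such pixels lie near shadow discontinuities and would receive the
    (expensive) shadow-volume computation; all other pixels keep the cheap
    shadow-map result, avoiding both edge aliasing and high fillrate cost.
    """
    d = shadow_map.astype(float)
    # Absolute forward differences along each image axis (same shape as d).
    dx = np.abs(np.diff(d, axis=1, prepend=d[:, :1]))
    dy = np.abs(np.diff(d, axis=0, prepend=d[:1, :]))
    return (dx > threshold) | (dy > threshold)
```

On a toy 4x4 depth map that jumps from 0 to 1 at column 2, only the pixels in that column are masked, which mirrors how the hardware mask rejects non-silhouette pixels during rasterization.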
One of the goals of AI is to produce a computer system that can plan and act intelligently in the real world. It is difficult to do so, in part because real-world domains are very large. Existing research generally deals with the large domain size using a static representation and exploiting a single type of domain structure. This leads either to an inability to complete planning on larger domains or to poor solution quality because pertinent information is discarded. This thesis creates a framework that encapsulates existing and new abstraction and approximation methods into modules and combines arbitrary modules into a hierarchy that allows for dynamic representation changes. The combination of different abstraction methods allows many qualitatively different types of structure in the domain to be exploited simultaneously. The ability to change the representation dynamically allows the framework to take advantage of how different domain subparts are relevant in different ways at different times. Since the current plan tracks the current representation, choosing to simplify (or omit) distant or improbable areas of the domain sacrifices little in the way of solution quality while making the planning problem considerably easier.; (cont.) The module hierarchy approach leads to greater abstraction that is tailored to the domain and therefore need not give up hope of creating reasonable solutions. While there are no optimality guarantees...
Reliability and security are quickly becoming users' biggest concerns due to the increasing reliance on computers in all areas of society. Hardware-enforced, fine-grained memory protection can increase the reliability and security of computer systems, but will be adopted only if the protection mechanism does not compromise performance, and if the hardware mechanism can be used easily by existing software. Mondriaan memory protection (MMP) provides fine-grained memory protection for a linear address space, while supporting an efficient hardware implementation. MMP's use of linear addressing makes it compatible with current software programming models and program binaries, and it is also backwards compatible with current operating systems and instruction sets. MMP can be implemented efficiently because it separates protection information from program data, allowing protection information to be compressed and cached efficiently. This organization is similar to paging hardware, where the translation information for a page of data bytes is compressed to a single translation value and cached in the TLB. MMP stores protection information in tables in protected system memory, just as paging hardware stores translation information in page tables. MMP is well suited to improve the robustness of modern software. Modern software development favors modules (or plugins) as a way to structure and provide extensibility for large systems...
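The table organization the abstract describes, protection information kept apart from program data in protected tables and consulted per access, can be modeled at the level of its semantics with a toy lookup. This sketch deliberately omits MMP's compressed table encoding and hardware caching; the word size and the 'r'/'w'/'x' permission strings are assumptions for illustration, not the actual MMP format:

```python
class ProtectionTable:
    """Toy model of word-granularity permission lookup, MMP-style semantics.

    The real design compresses runs of identical permissions and caches
    entries in a hardware lookaside buffer (analogous to a TLB); here a
    plain dictionary stands in for the protected in-memory tables.
    """

    WORD = 4  # bytes per protected word (an assumption for this sketch)

    def __init__(self):
        self._perms = {}  # word index -> permission string, e.g. "rw"

    def set_perms(self, start, length, perms):
        """Grant `perms` to every word overlapping [start, start+length)."""
        for byte in range(start, start + length, self.WORD):
            self._perms[byte // self.WORD] = perms

    def check(self, addr, access):
        """Return True iff `access` ('r', 'w', or 'x') is allowed at byte addr."""
        return access in self._perms.get(addr // self.WORD, "")
```

A kernel could use such a table to give a plugin write access to only its own buffer: any stray write outside the granted range fails the check, which is the robustness benefit the abstract points to.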
People's names, dates, locations, organizations, and various numeric expressions, collectively called Named Entities, are used to convey specific meanings to humans in the same way that identifiers and constants convey meaning to a computer language interpreter. Natural Language Question Answering can benefit from understanding the meaning of these expressions because answers in a text are often phrased differently from questions and from each other. For example, "9/11" might mean the same as "September 11th" and "Mayor Rudy Giuliani" might be the same person as "Rudolph Giuliani". Sepia, the system presented here, uses a lexicon of lambda expressions and a mildly context-sensitive parser to create a data structure for each named entity. The parser and grammar design are inspired by Combinatory Categorial Grammar. The data structures are designed to capture semantic dependencies using common syntactic forms. Sepia differs from other natural language parsers in that it does not use a pipeline architecture. As yet there is no statistical component in the architecture. To evaluate Sepia, I use examples to illustrate its qualitative differences from other named entity systems, I measure component performance on Automatic Content Extraction (ACE) competition held-out training data, and I assess end-to-end performance in the Infolab's TREC-12 Question Answering competition entry. Sepia will compete in the ACE Entity Detection and Tracking track at the end of September.; by Gregory A. Marton.; Thesis (S.M.)--Massachusetts Institute of Technology...
The proliferation of mobile devices and their tendency to present information proactively has led to an increase in device generated interruptions experienced by users. These interruptions are not confined to a particular physical space and are omnipresent. One possible strategy to lower the perceived burden of these interruptions is to cluster non-time-sensitive interruptions and deliver them during a physical activity transition. Since a user is already "interrupting" the current activity to engage in a new activity, the user will be more receptive to an interruption at this moment. This work compares the user's receptivity to an interruption triggered by an activity transition against a randomly generated interruption. A mobile computer system detects an activity transition with the use of wireless accelerometers. The results demonstrate that using this strategy reduces the perceived burden of the interruption.; by Joyce Ho.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.; Includes bibliographical references (p. 113-116).
This thesis presents the nonlinear analysis, design, fabrication, and testing of an axial-gap magnetic induction micro machine, which is a two-phase planar motor in which the rotor is suspended above the stator via mechanical springs, or tethers. The micro motor is fabricated from thick layers of electroplated NiFe and copper by our collaborators at the Georgia Institute of Technology. The rotor and the stator cores are 4 mm in diameter each, and the entire motor is about 2 mm thick. During fabrication, SU-8 epoxy is used as a structural mold material for the electroplated cores. The tethers are designed to be compliant in the azimuthal direction, while preventing axial deflections and maintaining a constant air gap. This enables accurate measurements of deflections within the rotor plane via a computer microvision system. The small scale of the magnetic induction micro machine, in conjunction with the good thermal contact between its electroplated stator layers, ensures an isothermal device which can be cooled very effectively. Current densities over 10⁹ A/m² simultaneously through each phase are repeatedly achieved during experiments; this density is over two orders of magnitude larger than what can be achieved in conventional macro-scale machines. More than 5 µN·m of torque is obtained for an air gap of about 5 µm...
The proliferation of mobile computing devices and wireless networking products over the past decade has led to an increasingly nomadic computing lifestyle. A computer is no longer an immobile, gargantuan machine that remains in one place for the lifetime of its operation. Today's personal computing devices are portable, and Internet access is becoming ubiquitous. A well-traveled laptop user might use half a dozen different networks throughout the course of a day: a cable modem from home, wide-area wireless on the commute, wired Ethernet at the office, a Bluetooth network in the car, and a wireless, local-area network at the airport or the neighborhood coffee shop. Mobile hosts are prone to frequent, unexpected disconnections that vary greatly in duration. Despite the prevalence of these multi-homed mobile devices, today's operating systems on both mobile hosts and fixed Internet servers lack fine-grained support for network applications on intermittently connected hosts. We argue that network communication is well-modeled by a session abstraction, and present Migrate, an architecture based on system support for a flexible session primitive. Migrate works with application-selected naming services to enable seamless, mobile "suspend/resume" operation of legacy applications and provide enhanced functionality for mobile-aware...
A class of novel computers uses solute concentrations of distinct chemical species as logic signals and diffusion for signal transport. I establish a bound on the speed, density, and error rate of such computers from first principles. I let the chemical computer engineer choose a "design tuple" of independent parameters: number of chemical species, total solute concentration, signal molecule size, and a parameter called the "cell size". I establish a functional relation between the design tuple and the "performance tuple": (operating frequency, signal density, error rate). I give a lower bound on the probability of a logic error in one computation step, and an upper bound on the frequency of operation, both as functions of the design tuple. I evaluate these bounds for ssDNA oligomers, and conclude that DNA computation has unacceptable error rates if the hybridization regions are fewer than eight nucleotides in length. I then argue that, given a suitable scalar-valued performance metric as a function over performance tuples, there is a globally optimal design tuple maximizing performance. I present two conjectures seeking to explain (a) why neurons use small molecules to transport information, and (b) why cells have the size they do. In part two I develop such a performance metric based on Toffoli's computation capacity and computation density...
Computer decision-making systems are aids being used in an increasing number and variety of applications. These systems allow people to interface with sources of information far too large and complicated to process themselves. As these systems are applied to more advanced and complicated tasks, they will need to become more intelligent. One step toward this is the creation of a multi-agent decision-making system that uses behavior inspired by the interactions of groups of people to allow a set of agents with humanistic characteristics to interact with one another. I implemented a program, AgentCommittee, which incorporates such behavior. AgentCommittee uses a set of characteristics that include extroversion, fatigue, resistance, confidence, and competitiveness in a series of one-on-one interactions to arrive at a group decision. The goal of AgentCommittee is not to find the best or optimal answer, but to produce a variety of good answers. I tested this program on a set of data from Compaq's EachMovie database and found that AgentCommittee was 20% more successful at finding relationships between the genre of a movie and a user's opinion of that movie than a random output generator.; by John J. Bevilacqua.; Thesis (M. Eng.)--Massachusetts Institute of Technology...
In order to deploy intelligent, next-generation applications on Unmanned Aerial Vehicles (UAVs), we must first develop a software architecture that supports onboard computation and flexible communication. This thesis presents an Onboard Planning Module developed from an embedded PC/104 Linux-based computer that communicates directly with the UAV's autopilot to retrieve telemetry data and update the UAV's flight path. A serial communication program exchanges data with the UAV's autopilot while a multithreaded module enables concurrent onboard Mixed-Integer Linear Programming (MILP) optimization. The Mission Manager Graphical User Interface (GUI) monitors the status of each Onboard Planning Module on a team of UAVs using the onboard planning protocol. Two task assignment scenarios are simulated to demonstrate the system operating with both a single and multiple UAV task selection algorithm.; by Michael J. Matczynski.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.; Includes bibliographical references (leaves 52-54).
Recently, additive combinatorics has blossomed into a vibrant area in
mathematical sciences. But it seems to be a difficult area to define - perhaps
because of the blend of ideas and techniques from several seemingly unrelated
contexts that it draws upon. One might say that additive combinatorics is a
branch of mathematics concerning the study of combinatorial properties of
algebraic objects, for instance, Abelian groups, rings, or fields. This
emerging field has seen tremendous advances over the last few years, and has
recently become a focus of attention among both mathematicians and computer
scientists. This fascinating area has been enriched by its formidable links to
combinatorics, number theory, harmonic analysis, ergodic theory, and some other
branches; all deeply cross-fertilize each other, holding great promise for all
of them! In this exposition, we attempt to provide an overview of some
breakthroughs in this field, together with a number of seminal applications to
sundry parts of mathematics and some other disciplines, with emphasis on
computer science and cryptography.; Comment: 37 pages. In Proceedings of the International Number Theory
Conference in Memory of Alf van der Poorten, to appear
We study the statistics of height and balanced height in the binary search
tree problem in computer science. The search tree problem is first mapped to a
fragmentation problem which is then further mapped to a modified directed
polymer problem on a Cayley tree. We employ the techniques of traveling fronts
to solve the polymer problem and translate back to derive exact asymptotic
properties in the original search tree problem. The second mapping allows us
not only to re-derive the already known results for random binary trees but
also to obtain new exact results for search trees where the entries arrive
according to an arbitrary distribution, not necessarily uniformly at random.
Moreover, it allows us to derive the asymptotic shape of the full probability
distribution of the height, not just its moments. Our results are then
generalized to $m$-ary search trees
with arbitrary distribution. An attempt has been made to make the article
accessible to both physicists and computer scientists.; Comment: 16 pages, two-column revtex
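The height statistic studied in that abstract is easy to illustrate by direct simulation. The sketch below builds a binary search tree from a sequence of keys and measures its height; for uniformly random insertion orders, the height is known to concentrate near c ln n with c ≈ 4.311 (the constant recovered by traveling-front analyses). The code is a toy illustration of the quantity being studied, not the paper's method:

```python
import math
import random

def bst_height(keys):
    """Insert `keys` in the given order into a binary search tree and
    return its height (nodes on the longest root-to-leaf path)."""
    root = None  # each node is a list: [key, left_child, right_child]

    def insert(node, key):
        if node is None:
            return [key, None, None]
        i = 1 if key < node[0] else 2  # go left for smaller keys
        node[i] = insert(node[i], key)
        return node

    def height(node):
        if node is None:
            return 0
        return 1 + max(height(node[1]), height(node[2]))

    for k in keys:
        root = insert(root, k)
    return height(root)

# A random insertion order yields height near c * ln(n), c ≈ 4.311,
# while a sorted order degenerates to a path of height n.
random.seed(0)
n = 2000
h = bst_height(random.sample(range(10 * n), n))
```

Because the function accepts any key sequence, it also illustrates the paper's more general setting, where entries arrive according to an arbitrary distribution rather than uniformly at random.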