by John Michael Spinelli.; Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1989.; GRSN 409686; Includes bibliographical references (leaves 107-110).
by Joan Marie Sulecki.; Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1983.; MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING; Bibliography: leaf 89.
The utility and purpose of a node in a wireless sensor network are intimately tied to the physical space in which it is deployed. As such, it is advantageous under most circumstances for a sensor node to know its position. In this work, we present two systems for localizing a network of roughly 60 sensor nodes distributed over an area of 1 m². One is based on a linear lateration technique, while the second approach utilizes non-linear optimization techniques, namely spectral graph drawing and mesh relaxation. In both cases, localization is accomplished by generating distance constraints based on ultrasound time-of-flight measurements to distinct, global sensor stimuli. These distance constraints alone are sufficient to achieve localization; no a priori knowledge of sensor node coordinates or of the coordinates of the global sensor events is required. Using this technique, we have achieved a localization error of 2.30 cm and an error standard deviation of 2.36 cm.; by Michael Joseph Broxton.; Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.; Includes bibliographical references (p. 117-124).
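As a rough illustration of the linear-lateration idea (a hypothetical Python sketch, not the thesis's implementation; unlike the thesis, it assumes the stimulus coordinates are known a priori), subtracting pairs of range equations cancels the quadratic terms and leaves a small linear system:

```python
import math

def trilaterate(anchors, dists):
    """Recover (x, y) from distances to three reference points.

    Subtracting the first range equation from the other two cancels
    the x^2 + y^2 terms, leaving a 2x2 linear system solved here by
    Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical setup: three "global stimuli" at known spots, one node.
anchors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
node = (0.3, 0.4)
dists = [math.dist(node, a) for a in anchors]
estimate = trilaterate(anchors, dists)
```

In the thesis's setting the stimulus coordinates are themselves unknown, which is why the non-linear alternatives (spectral graph drawing, mesh relaxation) operate on the distance constraints alone.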
Ventricular Tachycardia (VT) is a rapid arrhythmia, most commonly due to reentrant electrical activity in the heart. A common treatment for VT is Radio-Frequency Ablation (RFA), which is minimally invasive, but requires maintenance of VT until the target site for ablation is determined. Most patients with VT cannot tolerate this maintenance phase due to hemodynamic instability and for those who are hemodynamically stable, the RFA procedure is successful in permanently terminating the VT in only approximately half of the cases. Therefore, the need for an RFA procedure that accurately localizes the site for ablation, or exit site of the reentry circuit, and is safe for unstable patients is evident. We believe utilization of the Single Equivalent Moving Dipole model and inverse problem in cardiology will prove to be efficient in localizing the exit site of the reentry circuit and guiding the ablation catheter to that localized site during the RFA procedure. In principle, our RFA technique only requires a single beat of VT to localize the exit site of the reentry circuit. The objective of this thesis is to determine in a simulation model if one can guide a catheter to the exit site of the reentry circuit using body surface potentials in order to ablate that site with radio-frequency energy.; (cont.) In our new approach to RFA...
The goal of the "Fresh Breeze Project" is to develop a multi-core chip architecture that supports a better programming model for parallel computing. This architecture combines simultaneous multithreading, a global shared address space, no memory update, and a cycle-free heap to provide a platform for robust, general-purpose, parallel computation. These design choices help simplify classically hard problems such as memory coherency, control flow, and synchronization. An HDL implementation of the core execution unit of a single processing core (many cores are on a single chip) forms the basis of further simulation and synthesis. The design must first be broken down into functional logic blocks and translated into hardware modules. The language Bluespec Verilog allows this description to be constructed in terms of higher-level "guarded atomic actions" triggered by a rule-based system.; by Albert Chiou.; Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.; Includes bibliographical references (p. 32).
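The "guarded atomic actions" style can be mimicked in plain Python (an illustrative analogy only; Bluespec compiles rules to hardware, and the names below are hypothetical). Each rule pairs a guard with an action, and a scheduler fires enabled rules until quiescence, shown here on the classic subtract-and-swap GCD example:

```python
# Guarded atomic actions, mimicked in software: each rule is a
# (guard, action) pair over shared state, and the scheduler fires
# one enabled rule per step until no guard holds (quiescence).
state = {"a": 15, "b": 27}

rules = [
    # flip: maintain a >= b by swapping
    (lambda s: s["a"] < s["b"],
     lambda s: s.update(a=s["b"], b=s["a"])),
    # sub: subtract the smaller value from the larger
    (lambda s: s["b"] != 0 and s["a"] >= s["b"],
     lambda s: s.update(a=s["a"] - s["b"])),
]

def run(state, rules):
    """Fire one enabled rule per step; stop when all guards are false."""
    while True:
        for guard, action in rules:
            if guard(state):
                action(state)
                break
        else:
            return state

result = run(state, rules)   # Euclid's GCD by repeated subtraction
```

At quiescence `a` holds gcd(15, 27) = 3; in hardware, the compiler additionally schedules non-conflicting rules to fire in the same clock cycle.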
Proofs by induction are central to many computer science areas such as data structures, theory of computation, programming languages, program efficiency-time complexity, and program correctness. Proofs by induction can also improve students’ understanding of and performance with computer science concepts such as programming languages, algorithm design, and recursion, as well as serve as a medium for teaching them. Even though students are exposed to proofs by induction in many courses of their curricula, they still have difficulties understanding and performing them. This impacts the whole course of their studies, since proofs by induction are omnipresent in computer science. Specifically, students do not gain conceptual understanding of induction early in the curriculum and, as a result, they have difficulties applying it to more advanced areas later on in their studies. The goal of my dissertation is twofold: (1) identifying sources of computer science students’ difficulties with proofs by induction, and (2) developing a new approach to teaching proofs by induction by way of an interactive and multimodal electronic book (e-book). For the first goal, I undertook a study to identify possible sources of computer science students’ difficulties with proofs by induction. Its results suggest that there is a close correlation between students’ understanding of inductive definitions and their understanding and performance of proofs by induction. For designing and developing my e-book...
In this work, the benefits of using 3-D integration in the fabrication of Field Programmable Gate Arrays (FPGAs) are analyzed. A CAD tool has been developed to specify 3-dimensional FPGA architectures and map RTL descriptions of circuits to these 3-D FPGAs. The CAD tool was created from the widely used Versatile Place and Route (VPR) CAD tool for 2-D FPGAs. The tool performs timing-driven placement of logic blocks in the 3-dimensional grid of the FPGA using a two-stage Simulated Annealing (SA) process. The SA algorithm in the original VPR tool has been modified to focus more directly on minimizing the critical path delay of the circuit and hence maximizing the performance of the mapped circuit. After placing the logic blocks, the tool generates a Routing-Resource graph from the 3-D FPGA architecture for the VPR router. This allows the efficient Pathfinder-based VPR router to be used without any modification for the 3-D architecture. The CAD tool that was developed for mapping circuits to the fabricated 3-D FPGA is also used for exploring the design space for the 3-D FPGA architecture. A significant contribution of this work is a dual-interconnect architecture for the 3-D FPGA which has parasitic capacitance comparable to 2-D FPGAs. The nets routed in a 3-D FPGA are divided into intra-layer nets and inter-layer nets...
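A toy sketch of the annealing step described above (hypothetical Python; the actual tool's cost function is timing-driven, while this sketch minimizes wirelength only):

```python
import math, random

def anneal_placement(nets, grid, n_blocks, t0=2.0, alpha=0.95, steps=200):
    """Toy simulated-annealing placer: blocks occupy distinct sites in a
    3-D grid, a move swaps two blocks, and the cost is the total
    half-perimeter wirelength (HPWL) summed over all nets."""
    sites = [(x, y, z) for x in range(grid[0])
                       for y in range(grid[1])
                       for z in range(grid[2])]
    random.shuffle(sites)
    pos = {b: sites[b] for b in range(n_blocks)}

    def hpwl(net):
        xs, ys, zs = zip(*(pos[b] for b in net))
        return (max(xs) - min(xs)) + (max(ys) - min(ys)) + (max(zs) - min(zs))

    def cost():
        return sum(hpwl(net) for net in nets)

    t, cur = t0, cost()
    for _ in range(steps):
        a, b = random.sample(range(n_blocks), 2)
        pos[a], pos[b] = pos[b], pos[a]           # propose: swap two blocks
        new = cost()
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new                             # accept (always if downhill)
        else:
            pos[a], pos[b] = pos[b], pos[a]       # reject: undo the swap
        t *= alpha                                # cool the temperature
    return pos, cur

random.seed(1)
placement, wirelength = anneal_placement([(0, 1), (1, 2), (0, 2)], (2, 2, 2), 3)
```

A production placer such as VPR adds range-limited moves and an adaptive cooling schedule; here the schedule is a fixed geometric decay for brevity.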
Prostate cancer's high incidence and high survivability motivate its treatment using tightly focused radiation therapy. Brachytherapy treatment, the implantation of radioactive seeds into the prostate, is increasing in popularity, spurred by advances in medical imaging techniques for prostate visualization. Successful brachytherapy requires precise positioning of implant seeds within the pelvic anatomy. Following implantation, precise localization of individual seeds is required to evaluate treatment, but this remains an open challenge. This thesis addresses the seed localization problem with contributions for improving seed-based registration of MR and CT post-implant images. A model for non-rigid, affine prostate motion is presented and demonstrated to improve on current techniques of rigid registration. Also, an evaluation of the benefit of using multiple, rather than a few, seeds is presented, along with a scheme for validating registrations using manually detected seeds in MR and CT volumes. Finally, a scheme for automatic seed-based MR and CT registration by aligning all seeds is suggested, with supporting algorithms for CT seed-finding and unmatched feature registration. A call for an MR seed-finder is issued, for this is the final component needed to achieve automatic and complete seed-based MR and CT registration.; by Elizabeth S. Kim.; Thesis (M. Eng.)--Massachusetts Institute of Technology...
Access to the work of others is something that is too often taken for
granted, yet it is problematic and difficult to obtain unless someone pays for
it. Green and gold open access are claimed to be a solution to this problem.
While open access is gaining momentum in some fields, knowledge about
self-archiving in computer science is limited and dated. In particular,
there is an inadequate understanding of author-based self-archiving awareness,
practice, and inhibitors. This article reports an exploratory study of the
awareness of self-archiving, the practice of self-archiving, and the inhibitors
of self-archiving among authors in an Italian computer science faculty.
Forty-nine individuals among interns, PhD students, researchers, and professors
completed a questionnaire (response rate of 72.8%). The quantitative
and qualitative responses suggested that work is still needed to advocate
green open access to computer science authors, who seldom self-archive and,
when they do, often infringe the copyright transfer agreements (CTAs) of the
publishers. In addition, tools from the open-source community are needed to
facilitate author-based self-archiving; these should include an automatic
check of the CTAs. The study identified nine factors inhibiting the act of
self-archiving among computer scientists. As a first...
Computer science enrollments have started to rise again, but the percentage
of women undergraduates in computer science is still low. Some studies indicate
this might be due to a lack of awareness of computer science at the high school
level. We present our experiences running a five-year high school outreach
program that introduces information about computer science within the context
of required chemistry courses. We developed interactive worksheets using
Molecular Workbench that help the students learn chemistry and computer science
concepts related to relevant events such as the Gulf oil spill. Our evaluation
of the effectiveness of this approach indicates that the students do become
more aware of computer science as a discipline, but system support issues in
the classroom can make the approach difficult for teachers and discouraging for
the students.; Comment: 8 pages, 2 figures
Leveraging the prevailing interest in computer games among college students,
both for entertainment and as a possible career path, is a major reason for the
increasing prevalence of computer game design courses in computer science
curricula. Because implementing a computer game requires strong programming
skills, game design courses are most often restricted to more advanced computer
science students. This paper reports on a ready-made game design and
experimentation framework, implemented in Java, that makes game programming
more widely accessible. This framework, called Labyrinth, enables students at
all programming skill levels to participate in computer game design. We
describe the architecture of the framework, and discuss programming projects
suitable for a wide variety of computer science courses, from capstone to
non-major.; Comment: 5 pages, 3 figures
In contrast to many other scientific disciplines, computer science treats
conference publications as a primary means of disseminating research.
Conferences have the advantage of providing fast publication of papers and of
bringing researchers together to present and discuss their work with peers.
Previous work on knowledge mapping focused on the map of all sciences or a
particular domain based on the ISI-published JCR (Journal Citation Report).
Although this data covers most of the important journals, it lacks computer
science conference and workshop proceedings, resulting in an imprecise and
incomplete analysis of computer science knowledge. This paper
presents an analysis on the computer science knowledge network constructed from
all types of publications, aiming at providing a complete view of computer
science research. Based on the combination of two important digital libraries
(DBLP and CiteSeerX), we study the knowledge network created at
journal/conference level using citation linkage, to identify the development of
sub-disciplines. We investigate the collaborative and citation behavior of
journals/conferences by analyzing the properties of their co-authorship and
citation subgraphs. The paper draws several important conclusions. First,
conferences constitute social structures that shape the computer science...
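The venue-level aggregation described above can be sketched as follows (hypothetical Python with made-up venue names and edges; the study itself mines DBLP and CiteSeerX records):

```python
from collections import defaultdict

# Hypothetical edge list: (citing venue, cited venue) pairs, standing
# in for paper-level citation links lifted to journal/conference level.
citations = [
    ("SIGMOD", "VLDB"), ("VLDB", "SIGMOD"), ("ICDE", "SIGMOD"),
    ("STOC", "FOCS"), ("FOCS", "STOC"), ("SIGMOD", "TODS"),
]

def venue_graph(edges):
    """Aggregate citation links into a weighted venue-to-venue graph."""
    g = defaultdict(lambda: defaultdict(int))
    for src, dst in edges:
        g[src][dst] += 1
    return g

def in_strength(g):
    """Total citations received per venue: a crude impact signal that,
    unlike JCR-based maps, can include conference proceedings."""
    s = defaultdict(int)
    for src, nbrs in g.items():
        for dst, w in nbrs.items():
            s[dst] += w
    return dict(s)

g = venue_graph(citations)
strength = in_strength(g)
```

Co-authorship subgraphs can be built the same way by replacing citation edges with author-pair edges per venue.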
This volume contains the proceedings of the Eighth Workshop on Fixed Points
in Computer Science which took place on 24 March 2012 in Tallinn, Estonia as an
ETAPS-affiliated workshop. Past workshops have been held in Brno (1998,
MFCS/CSL workshop), Paris (2000, LC workshop), Florence (2001, PLI workshop),
Copenhagen (2002, LICS (FLoC) workshop), Warsaw (2003, ETAPS workshop), Coimbra
(2009, CSL workshop), and Brno (2010, MFCS-CSL workshop).
Fixed points play a fundamental role in several areas of computer science and
logic by justifying induction and recursive definitions. The construction and
properties of fixed points have been investigated in many different frameworks
such as: design and implementation of programming languages, program logics,
and databases. The aim of this workshop is to provide a forum for researchers
to present their results to those members of the computer science and logic
communities who study or apply the theory of fixed points.; Comment: For more information about FICS 2012, please visit the webpage of the...
This volume contains the proceedings of the Ninth Workshop on Fixed Points in
Computer Science which took place on September 1st, 2013 in Torino, Italy
as a CSL-affiliated workshop. Past workshops have been held in Brno (1998,
MFCS/CSL workshop), Paris (2000, LC workshop), Florence (2001, PLI workshop),
Copenhagen (2002, LICS (FLoC) workshop), Warsaw (2003, ETAPS workshop), Coimbra
(2009, CSL workshop), Brno (2010, MFCS-CSL workshop), and Tallinn (2012,
ETAPS workshop). Fixed points play a fundamental role in several areas of computer
science. They are used to justify (co)recursive definitions and associated
reasoning techniques. The construction and properties of fixed points have been
investigated in many different settings such as: design and implementation of
programming languages, logics, verification, databases. The aim of this
workshop is to provide a forum for researchers to present their results to
those members of the computer science and logic communities who study or apply
the theory of fixed points.
Computer science is a relatively young discipline combining science,
engineering, and mathematics. The main flavors of computer science research
involve the theoretical development of conceptual models for the different
aspects of computing and the more applied building of software artifacts
and assessment of their properties. In the computer science publication
culture, conferences are an important vehicle to quickly move ideas, and
journals often publish deeper versions of papers already presented at
conferences. These peculiarities of the discipline make computer science an
original research field within the sciences, and, therefore, the assessment of
classical bibliometric laws is particularly important for this field. In this
paper, we study the skewness of the distribution of citations to papers
published in computer science publication venues (journals and conferences). We
find that the skewness in the distribution of mean citedness of different
venues combines with the asymmetry in citedness of articles in each venue,
resulting in a highly asymmetric citation distribution with a power-law tail.
Furthermore, the skewness of conference publications is more pronounced than
the asymmetry of journal papers. Finally, the impact of journal papers...
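The asymmetry measure at the heart of this analysis can be illustrated with a small sketch (hypothetical Python on synthetic data, not the paper's dataset):

```python
import random

def skewness(xs):
    """Population moment coefficient of skewness: m3 / m2**1.5.
    Positive values indicate a long right tail."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

random.seed(7)
# Stand-in for per-paper citation counts: a lognormal draw has the
# heavy right tail qualitatively typical of citation data.
citations = [random.lognormvariate(1.0, 1.0) for _ in range(5000)]
# A symmetric sample for contrast; its skewness is near zero.
symmetric = [random.uniform(0.0, 10.0) for _ in range(5000)]
```

Here `skewness(citations)` comes out strongly positive while `skewness(symmetric)` stays near zero, mirroring the paper's contrast between heavy-tailed citation counts and symmetric baselines.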
Despite its relative youth, computer science has become a well-established discipline, granting over 2% of the bachelor's degrees in the United States (U.S. Department of Education, 2010). For this reason, it is important that we understand the nature of computer science and the likely direction for the development of inquiry in computer science in the future. This paper examines several perspectives on the nature of the methods of computer science inquiry. These are empiricist methods, rationalist methods, and an engineering stance. It argues that empiricist and rationalist stances play identifiable roles in the scientific nature of computer science reasoning but that the engineering stance does not. Following the trend in the maturation of other sciences, this paper recommends an overhaul in computer science curricula.