Page 1 of results: 34 digital items found in 0.014 seconds

‣ Dynamical models for computer viruses propagation

PIQUEIRA, Jose R. C.; CESAR, Felipe Barbosa
Source: HINDAWI PUBLISHING CORPORATION Publisher: HINDAWI PUBLISHING CORPORATION
Type: Journal Article
Portuguese
Search Relevance
38.65484%
Nowadays, digital computer systems and networks are the main engineering tools, used in the planning, design, operation, and control of buildings, transportation, machinery, businesses, and life-sustaining devices of all sizes. Consequently, computer viruses have become one of the most important sources of uncertainty, reducing the reliability of vital activities. Many antivirus programs have been developed, but they are limited to detecting and removing infections based on prior knowledge of the virus code. Despite their good adaptive capability, these programs work only as vaccines against diseases and cannot prevent new infections based on the network state. Here, an attempt to model the propagation dynamics of computer viruses relates them to other notable events occurring in the network, permitting preventive policies to be established in network management. Data on three different viruses are collected from the Internet, and two identification techniques, autoregressive and Fourier analyses, are applied, showing that it is possible to forecast the dynamics of a new virus propagation by using data collected from other viruses that formerly infected the network. Copyright (c) 2008 J. R. C. Piqueira and F. B. Cesar. This is an open access article distributed under the Creative Commons Attribution License...
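
The autoregressive identification step described above can be sketched as a least-squares AR fit followed by iterated one-step prediction. The incidence series below is a synthetic epidemic-shaped curve and the model order is an arbitrary choice; the paper's collected virus data are not reproduced here.

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares fit of an AR(p) model: x[t] ~ sum_i a[i] * x[t-1-i]."""
    X = np.column_stack([series[p - 1 - i:len(series) - 1 - i] for i in range(p)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(series, coeffs, steps):
    """Iterate the fitted AR model forward to predict future incidence."""
    hist = list(series)
    for _ in range(steps):
        hist.append(sum(c * hist[-1 - i] for i, c in enumerate(coeffs)))
    return hist[len(series):]

# Hypothetical incidence curve shaped like an epidemic wave (not the paper's data).
t = np.arange(60)
incidence = 100.0 * np.exp(-((t - 30) / 10.0) ** 2)
coeffs = fit_ar(incidence, p=3)
pred = forecast(incidence, coeffs, steps=5)
```

Data from a previously observed virus would play the role of `incidence` here, with the fitted coefficients reused to forecast a new outbreak.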

‣ Análise do processo de conformação de chapas utilizando simulação computacional e engenharia reversa como ferramentas integradas no desenvolvimento e construção de estampos automotivos.; Sheet metal forming process analysis using computer simulation and reverse engineering as integrated tools in automotives stamping design and construction.

Damoulis, Gleiton Luiz
Source: Biblioteca Digitais de Teses e Dissertações da USP Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Doctoral Thesis Format: application/pdf
Published 05/10/2010 Portuguese
Search Relevance
48.60023%
In recent years, automotive sheet metal forming processes have been drastically modified. The use of non-contact optical metrology equipment and its photogrammetry-based software, as well as stamping simulation programs based on the Finite Element Method (FEM), is becoming routine in the development of stamping tooling, since the reliability, precision of results, and ease of use with respect to the tooling's surface topology represented a great technological leap. However, despite this progress, problems remain concerning the cost-benefit of adopting certain techniques and the possibility of using the two systems so that one complements the other. In this sense, the objective of this thesis is to analyze the sheet metal forming process using computer simulation and reverse engineering as integrated tools in the development and construction of automotive stamping dies. Industrial cases are described whose results demonstrate that new techniques can be applied in the definition and modeling of the sheet metal stamping process, using optical metrology...

‣ SafeJava : a unified type system for safe programming; Unified type system for safe programming

Boyapati, Chandrasekhar, 1973-
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 164 p.; 8038595 bytes; 8038403 bytes; application/pdf; application/pdf
Portuguese
Search Relevance
69.13485%
Making software reliable is one of the most important technological challenges facing our society today. This thesis presents a new type system that addresses this problem by statically preventing several important classes of programming errors. If a program type checks, we guarantee at compile time that the program does not contain any of those errors. We designed our type system in the context of a Java-like object-oriented language; we call the resulting system SafeJava. The SafeJava type system offers significant software engineering benefits. Specifically, it provides a statically enforceable way of specifying object encapsulation and enables local reasoning about program correctness; it combines effects clauses with encapsulation to enable modular checking of methods in the presence of subtyping; it statically prevents data races and deadlocks in multithreaded programs, which are known to be some of the most difficult programming errors to detect, reproduce, and eliminate; it enables software upgrades in persistent object stores to be defined modularly and implemented efficiently; it statically ensures memory safety in programs that manage their own memory using regions; and it also statically ensures that real-time threads in real-time programs are not interrupted for unbounded amounts of time because of garbage collection pauses. Moreover...

‣ SYSGEN : production costing and reliability model user documentation

Finger, Susan
Source: MIT Energy Laboratory Publisher: MIT Energy Laboratory
Type: Report Format: 6040020 bytes; application/pdf
Portuguese
Search Relevance
47.763228%
"Updated February 1980".; Sponsored by the Dept. of Energy under Contract no. EX-76-A-01-2295.

‣ Auto-configuration of Savants in a complex, variable network

Yu, Joseph Hon
Source: Massachusetts Institute of Technology Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 64 p.; 2569339 bytes; 2571914 bytes; application/pdf; application/pdf
Portuguese
Search Relevance
68.452773%
In this thesis, I present a system design that enables Savants to automatically configure both their network settings and their required application programs when connected to an intelligent data management and application system. Savants are intelligent routers in a large network used to manage the data and events related to communications with electronic identification tags [10]. The ubiquitous nature of the identification tags and the access points that communicate with them requires an information and management system that is equally ubiquitous and able to deal with huge volumes of data. The Savant systems were designed to be such a ubiquitous information and management system. Deploying any ubiquitous system is difficult, and automation is required to streamline its deployment and improve system management, reliability, and performance. My solution to this auto-configuration problem uses NETCONF as a standard language and protocol for configuration communication among Savants. It also uses the Content-Addressable Network (CAN) as a discovery service to help Savants locate configuration information, since a new Savant may not have information about the network structure. With these tools, new Savants can configure themselves automatically with the help of other Savants.; (cont.) Specifically...
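
The CAN discovery step can be illustrated by its core idea: hash a key to a point in a d-dimensional unit torus, then find the node whose zone contains that point. The node names, zone split, and key below are invented for illustration and are not from the thesis.

```python
import hashlib

def can_point(key, dim=2):
    """Hash a key to a point in the unit d-torus (CAN's key-to-coordinate mapping)."""
    h = hashlib.sha256(key.encode()).digest()
    return tuple(int.from_bytes(h[4 * i:4 * i + 4], 'big') / 2**32 for i in range(dim))

def owner(zones, point):
    """Return the node whose rectangular zone contains the point."""
    for node, (lo, hi) in zones.items():
        if all(l <= p < h for p, l, h in zip(point, lo, hi)):
            return node
    return None

# Hypothetical split of the unit square between two Savants (names invented):
# savant-a owns x in [0, 0.5), savant-b owns x in [0.5, 1).
zones = {'savant-a': ((0.0, 0.0), (0.5, 1.0)),
         'savant-b': ((0.5, 0.0), (1.0, 1.0))}
p = can_point('config/savant-profile')
node = owner(zones, p)
```

A new Savant would route toward `p` through neighboring zones rather than scanning all nodes, which is what makes the lookup scalable.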

‣ Construction and testing of an 80C86 based communications controller for the Petite Amateur Navy Satellite (PANSAT)

Tobin, Stephen M.
Source: Monterey, California: Naval Postgraduate School Publisher: Monterey, California: Naval Postgraduate School
Type: Doctoral Thesis
Portuguese
Search Relevance
28.275005%
Approved for public release; distribution is unlimited.; This thesis describes the testing of a prototype satellite computer communications controller, based on the 80C86 microprocessor, to be placed on the Petite Amateur Navy Satellite (PANSAT) for launch in 1993 on a two-year mission. First, the background and justification for PANSAT are described. PANSAT will serve primarily as an inexpensive store-and-forward orbiting mailbox, and secondarily as a teaching and learning vehicle for NPS faculty and students. The requirements and design concepts involved in the initial paper design by a previous thesis student are then reviewed. A reliability analysis is performed, validating the reliability of the design. Finally, concepts considered in constructing the wire-wrapped prototype are explained, followed by an extensive description of initial circuit testing and the development of various machine-language circuit test programs.

‣ Seismic Reliability of Flow and Communication Networks

Barlow, R.E.; Der Kiureghian, A.; Moghtaderizadeh, M.; Sato, T.; Wood, R.K.
Source: Escola de Pós-Graduação Naval Publisher: Escola de Pós-Graduação Naval
Type: Journal Article
Portuguese
Search Relevance
38.229194%
Lifeline Earthquake Engineering, The Current State of Knowledge 1981, Proceedings of the Second TCLEE Specialty Conference, ASCE, J. Smith editor, American Society of Civil Engineers, pp. 81-96.; An efficient method for seismic reliability assessment of lifeline networks is developed. Lifeline component failures resulting from ground shaking and differential fault movement are analyzed using performance functions given in terms of earthquake variables. An improved fault-rupture model, which considers the ruptured area produced by an earthquake on the fault plane, is utilized. A new, polynomially bounded method for computing lifeline network reliability is developed. It is shown that for a fixed earthquake magnitude on a fault, the network will take on at most 2n states, where n is the number of network components. Computing the seismic reliability of large networks becomes feasible using these new techniques. A water distribution system is analyzed using newly developed computer programs.

‣ A software prototype for a Command, Control, Communications and Intelligence (C3I) workstation

Coskun, Vedat; Kesoglu, Cengiz
Source: Monterey, California: Naval Postgraduate School Publisher: Monterey, California: Naval Postgraduate School
Type: Doctoral Thesis Format: xvii, 309 p. ill.
Portuguese
Search Relevance
48.140757%
Approved for public release; distribution unlimited.; Developing large hard-real-time systems in the traditional way usually creates inconsistencies among the user's needs, the requirements, and the implementation. Rapid prototyping using the Prototype System Description Language (PSDL) and the Computer Aided Prototyping System (CAPS) minimizes time and resource costs and maximizes reliability. In this technique, the designer builds the prototype from the initial requirements, and the user evaluates the actual behavior of the prototype against its expected behavior. If the prototype fails to execute properly, the user and the designer work together to change the requirements and the prototype, until the prototype captures the critical aspects of the software system. This thesis uses the rapid prototyping approach to produce an Ada software prototype of a C3I workstation, which provides commonality and connectivity between naval platforms and land bases through the ability to process tactical data from many interfaces in real time. The major emphasis of the prototype is to support C3I information management functions, message generation, and information display.; Ltjg., Turkish Navy; Ltjg., Turkish Navy

‣ Analysis of the reliability disparity and reliability growth analysis of a combat system using AMSAA extended reliability growth models

Er, Kim Hua.
Source: Monterey, California. Naval Postgraduate School Publisher: Monterey, California. Naval Postgraduate School
Type: Doctoral Thesis Format: xiv, 89 p. : ill.
Portuguese
Search Relevance
58.803604%
The first part of this thesis aims to identify and analyze what aspects of the MIL-HDBK-217 prediction model cause the large variation between predicted and field reliability. The key findings of the literature research suggest that the main reason for the inaccuracy in prediction is that the constant failure rate assumption used in MIL-HDBK-217 is usually not applicable. Secondly, even if the constant failure rate assumption is applicable, the disparity may still exist in the presence of design and quality related problems in new systems. A possible solution is to apply reliability growth testing (RGT) to new systems during the development phase in an attempt to remove these design deficiencies so that the system's reliability will grow and approach the predicted value. In view of the importance of RGT in minimizing the disparity, this thesis provides a detailed application of the AMSAA Extended Reliability Growth Models to the reliability growth analysis of a combat system. It shows how program managers can analyze test data using commercial software to estimate the system's demonstrated reliability and the increase in reliability due to delayed fixes.
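
As a rough illustration of what such growth analysis computes, here is a sketch of the maximum-likelihood estimators for the basic Crow-AMSAA (power-law NHPP) model, a simpler relative of the AMSAA Extended models the thesis applies; the failure times are invented, not taken from the thesis.

```python
import math

def crow_amsaa_mle(times):
    """MLE for the Crow-AMSAA (power-law NHPP) model, failure-truncated at t_n.
    Expected cumulative failures are E[N(t)] = lam * t**beta; beta < 1 indicates
    reliability growth (failures arriving more slowly over time)."""
    n = len(times)
    T = times[-1]
    beta = n / sum(math.log(T / t) for t in times)  # last term is log(1) = 0
    lam = n / T ** beta
    mtbf = 1.0 / (lam * beta * T ** (beta - 1))  # instantaneous (demonstrated) MTBF
    return beta, lam, mtbf

# Hypothetical cumulative failure times from a development test, in hours.
failure_times = [12, 40, 95, 180, 320, 500, 780, 1100]
beta, lam, mtbf = crow_amsaa_mle(failure_times)
```

With widening gaps between failures, the fitted `beta` comes out below 1, which is the growth signature a program manager would look for before crediting delayed fixes.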

‣ Qualitative and quantitative reliability analysis of safety systems

Karimi, Roohollah
Source: MIT Energy Laboratory Publisher: MIT Energy Laboratory
Type: Report Format: 11009616 bytes; application/pdf
Portuguese
Search Relevance
69.14928%
A code has been developed for the comprehensive analysis of a fault tree. The code, designated UNRAC (UNReliability Analysis Code), calculates the following characteristics of an input fault tree: a) minimal cut sets, b) top event unavailability as a point estimate and/or in time-dependent form, c) quantitative importance of each component involved, and d) error bounds on the top event unavailability. UNRAC can analyze fault trees with any kind of gates (EOR, NAND, NOR, AND, OR), up to a maximum of 250 components and/or gates. For generating minimal cut sets, the method of bit manipulation is employed. In order to calculate each component's time-dependent unavailability, a general and consistent set of mathematical models is developed, and the repair time density function is allowed to be represented by constant, exponential, 2nd-order Erlangian, and log-normal distributions. A normally operating component is represented by a three-state model in order to incorporate probabilities for revealed faults, non-revealed faults, and false failures in unavailability calculations. For importance analysis, a routine is developed that rearranges the fault tree to evaluate the importance of each component to system failure...
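
The bit-manipulation idea behind minimal cut set generation can be sketched by encoding each cut set as a bitmask of basic events, so that subset tests reduce to bitwise operations. This is an illustrative sketch of the general technique, not UNRAC's actual algorithm; the example tree is invented.

```python
def cut_sets(node):
    """Return the minimal cut sets of a fault-tree node as bitmasks of basic events.
    A node is ('event', i), ('or', children), or ('and', children)."""
    kind, arg = node
    if kind == 'event':
        return {1 << arg}
    child_sets = [cut_sets(c) for c in arg]
    if kind == 'or':
        sets = set().union(*child_sets)          # any child's cut set suffices
    else:  # 'and': one cut set per child, merged with bitwise OR
        sets = {0}
        for cs in child_sets:
            sets = {a | b for a in sets for b in cs}
    # Keep only minimal masks: drop any mask that contains another as a subset
    # (o is a subset of m exactly when m | o == m).
    return {m for m in sets
            if not any(o != m and (m | o) == m for o in sets)}

# Hypothetical 3-component tree: TOP = (e0 OR e1) AND (e0 OR e2).
tree = ('and', [('or', [('event', 0), ('event', 1)]),
                ('or', [('event', 0), ('event', 2)])])
mcs = cut_sets(tree)  # minimal cut sets {e0} and {e1, e2}
```

Because masks are machine integers, the subset check `(m | o) == m` is a single bitwise operation per pair, which is the efficiency the bit-manipulation method exploits.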

‣ The Use of Cuckoo Search in Estimating the Parameters of Software Reliability Growth Models

AL-Saati, Dr. Najla Akram; Abd-AlKareem, Marwa
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Journal Article
Published 23/07/2013 Portuguese
Search Relevance
48.54702%
This work aims to investigate the reliability of software products as an important attribute of computer programs; it helps to decide the degree of trustworthiness a program has in accomplishing its specific functions. This is done using Software Reliability Growth Models (SRGMs) through the estimation of their parameters. The parameters are estimated in this work from the available failure data using a search technique from Swarm Intelligence, namely Cuckoo Search (CS), chosen for its efficiency, effectiveness, and robustness. A number of SRGMs are studied, and the results are compared to Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and extended ACO. Results show that CS outperformed both PSO and ACO in finding better parameters when tested on identical datasets. It was sometimes outperformed by the extended ACO. Also in this work, the ratios of training data to testing data are investigated to show their impact on the results.
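
The parameter-estimation task can be sketched as minimizing the squared error between an SRGM's mean value function and observed failure counts. The sketch below fits the Goel-Okumoto model with a heavily simplified cuckoo-style search (Gaussian steps standing in for Lévy flights, a fraction of worst nests abandoned each round); the data are synthetic, and none of this reproduces the paper's actual formulation or datasets.

```python
import math, random

def m(t, a, b):
    """Goel-Okumoto mean value function: expected cumulative failures by time t."""
    return a * (1.0 - math.exp(-b * t))

def sse(params, data):
    a, b = params
    return sum((m(t, a, b) - y) ** 2 for t, y in data)

def cuckoo_search(data, n_nests=15, iters=300, pa=0.25, seed=1):
    """Toy Cuckoo Search: perturb a random nest, replace a random nest if better,
    then abandon the worst fraction pa and resample them."""
    rng = random.Random(seed)
    new = lambda: (rng.uniform(1, 500), rng.uniform(1e-4, 1.0))
    nests = [new() for _ in range(n_nests)]
    for _ in range(iters):
        a, b = nests[rng.randrange(n_nests)]
        cand = (max(a + rng.gauss(0, 5), 1e-3),
                min(max(b + rng.gauss(0, 0.01), 1e-6), 2.0))
        j = rng.randrange(n_nests)
        if sse(cand, data) < sse(nests[j], data):
            nests[j] = cand
        nests.sort(key=lambda p: sse(p, data))
        for k in range(int(pa * n_nests)):   # abandon the worst nests
            nests[-1 - k] = new()
    return min(nests, key=lambda p: sse(p, data))

# Synthetic failure data generated from a=100, b=0.05 (not the paper's datasets).
data = [(t, m(t, 100.0, 0.05)) for t in range(0, 100, 5)]
a_hat, b_hat = cuckoo_search(data)
```

The fitted `a_hat` estimates the total number of faults and `b_hat` the detection rate; swapping in other SRGM mean value functions only changes `m`.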

‣ Robust solutions of uncertain mixed-integer linear programs using structural-reliability and decomposition techniques

Mínguez, Roberto
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Journal Article
Published 30/09/2014 Portuguese
Search Relevance
38.537932%
Structural-reliability-based techniques have been an area of active research in structural design during the last decades, and different methods have been developed, such as First Order Second Moment (FOSM) or First Order Reliability Method (FORM) approaches. The same has occurred with robust optimization, which is a framework for modeling optimization problems involving data uncertainty. Focusing on linear programming (LP) problems with uncertain data and hard constraints within an ellipsoidal uncertainty set, this paper provides a different interpretation of their robust counterpart (RC), inspired by structural-reliability methods. This new interpretation allows the proposal of an ad hoc decomposition technique to solve the RC problem with the following advantages: i) it improves tractability, especially for large-scale problems and those including binary decisions, and ii) it provides exact bounds on the probability of constraint violation when the probability distributions of the uncertain parameters are completely defined by their first and second moments. An attractive aspect of our method is that it decomposes the initial linear mathematical programming problem into a deterministic linear master problem of the same size as the original problem and several quadratically constrained problems (QCP) of considerably smaller size. The optimal solution is achieved through the solution of these master and subproblems within an iterative scheme.; Comment: 35 pages...
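
The ellipsoidal robust counterpart mentioned above has a standard closed form: for a constraint a·x <= b with the row a ranging over the ellipsoid {a_bar + P u : ||u||_2 <= omega}, the worst case is attained at u aligned with P^T x, giving a_bar·x + omega * ||P^T x||_2 <= b. A minimal feasibility check of that form (with invented numbers, unrelated to the paper's instances):

```python
import math

def robust_feasible(x, a_bar, P, b, omega=1.0):
    """Check the robust counterpart of a.x <= b over the ellipsoidal uncertainty
    set {a_bar + P @ u : ||u||_2 <= omega}: a_bar.x + omega*||P^T x||_2 <= b."""
    nominal = sum(ai * xi for ai, xi in zip(a_bar, x))
    Ptx = [sum(P[i][j] * x[i] for i in range(len(x))) for j in range(len(P[0]))]
    return nominal + omega * math.sqrt(sum(v * v for v in Ptx)) <= b

# Toy 2-variable instance: perturbation matrix P scales independent deviations.
a_bar = [1.0, 2.0]
P = [[0.1, 0.0],
     [0.0, 0.2]]
x = [1.0, 1.0]
# Nominal value is 3.0; the robust left-hand side is 3.0 + sqrt(0.05) ~ 3.224,
# so x is robustly feasible for b = 3.3 but not for b = 3.1.
```

The gap between the nominal and robust left-hand sides is exactly the safety margin that the structural-reliability interpretation relates to a probability of constraint violation.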

‣ Automatic Coding Rule Conformance Checking Using Logic Programs

Marpons-Ucero, Guillem; Mariño, Julio; Herranz, Ángel; Fredlund, Lars-Åke; Carro, Manuel; Moreno-Navarro, Juan José
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Journal Article
Published 02/11/2007 Portuguese
Search Relevance
38.229473%
Some approaches to increasing program reliability involve a disciplined use of programming languages so as to minimise the hazards introduced by error-prone features. This is realised by writing code that is constrained to a subset of the a priori admissible programs, and that, moreover, may use only a subset of the language. These subsets are determined by a collection of so-called coding rules.; Comment: Paper presented at the 17th Workshop on Logic-based Methods in Programming Environments (WLPE2007)

‣ Collective Mind: cleaning up the research and experimentation mess in computer engineering using crowdsourcing, big data and machine learning

Fursin, Grigori
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Journal Article
Published 11/08/2013 Portuguese
Search Relevance
38.435183%
Software and hardware co-design and optimization of HPC systems has become intolerably complex, ad hoc, time-consuming, and error-prone due to the enormous number of available design and optimization choices, complex interactions between all software and hardware components, and multiple strict requirements placed on performance, power consumption, size, reliability, and cost. We present our novel long-term holistic and practical solution to this problem based on a customizable, plugin-based, schema-free, heterogeneous, open-source Collective Mind repository and infrastructure with unified web interfaces and an online advice system. This collaborative framework distributes analysis and multi-objective off-line and on-line auto-tuning of computer systems among many participants while utilizing any available smart phone, tablet, laptop, cluster, or data center, and continuously observing, classifying, and modeling their realistic behavior. Any unexpected behavior is analyzed using shared data mining and predictive modeling plugins or exposed to the community at cTuning.org for collaborative explanation, top-down complexity reduction, incremental problem decomposition, and detection of correlating program, architecture, or run-time properties (features). Gradually increasing optimization knowledge helps to continuously improve the optimization heuristics of any compiler...

‣ Reflection and Hyper-Programming in Persistent Programming Systems

Kirby, Graham
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Journal Article
Published 17/06/2010 Portuguese
Search Relevance
28.540205%
The work presented in this thesis seeks to improve programmer productivity in the following ways: - by reducing the amount of code that has to be written to construct an application; - by increasing the reliability of the code written; and - by improving the programmer's understanding of the persistent environment in which applications are constructed. Two programming techniques that may be used to pursue these goals in a persistent environment are type-safe linguistic reflection and hyper-programming. The first provides a mechanism by which the programmer can write generators that, when executed, produce new program representations. This allows the specification of programs that are highly generic yet depend in non-trivial ways on the types of the data on which they operate. Genericity promotes software reuse which in turn reduces the amount of new code that has to be written. Hyper-programming allows a source program to contain links to data items in the persistent store. This improves program reliability by allowing certain program checking to be performed earlier than is otherwise possible. It also reduces the amount of code written by permitting direct links to data in the place of textual descriptions. Both techniques contribute to the understanding of the persistent environment through supporting the implementation of store browsing tools and allowing source representations to be associated with all executable programs in the persistent store. This thesis describes in detail the structure of type-safe linguistic reflection and hyper-programming...

‣ Stochastic Contracts for Runtime Checking of Component-based Real-time Systems

Nandi, Chandrakana; Monot, Aurelien; Oriol, Manuel
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Journal Article
Published 10/01/2015 Portuguese
Search Relevance
28.243848%
This paper introduces a new technique for dynamic verification of component-based real-time systems based on statistical inference. Verifying such systems requires checking two types of properties: functional and real-time. For functional properties, a standard approach for ensuring correctness is Design by Contract: annotating programs with executable pre- and postconditions. We extend contracts for specifying real-time properties. In the industry, components are often bought from vendors and meant to be used off-the-shelf which makes it very difficult to determine their execution times and express related properties. We present a solution to this problem by using statistical inference for estimating the properties. The contract framework allows application developers to express contracts like "the execution time of component $X$ lies within $\gamma$ standard deviations from the mean execution time". Experiments based on industrial case studies show that this framework can be smoothly integrated into existing control applications, thereby increasing their reliability while having an acceptable execution time overhead (less than 10%).; Comment: 6 pages, 4 figures
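
A contract of the quoted form ("the execution time lies within gamma standard deviations of the mean") can be sketched as a simple statistical postcondition check. The gamma value and timing samples below are invented for illustration; this is not the paper's framework.

```python
import statistics

def within_sigma_contract(samples, gamma=3.0):
    """Check that the latest execution time lies within gamma standard deviations
    of the mean of the earlier samples (a sigma-band runtime contract)."""
    *history, latest = samples
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(latest - mu) <= gamma * sigma

# Hypothetical component timings in milliseconds: a steady run, then an outlier.
ok = within_sigma_contract([10.1, 9.8, 10.3, 10.0, 9.9, 10.2])   # within band
bad = within_sigma_contract([10.1, 9.8, 10.3, 10.0, 9.9, 25.0])  # violates band
```

In a runtime monitor, a violation would raise a contract failure rather than return False; estimating `mu` and `sigma` online is what lets off-the-shelf components be checked without vendor-supplied timing specs.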

‣ A Symbolic Execution Algorithm for Constraint-Based Testing of Database Programs

Marcozzi, Michaël; Vanhoof, Wim; Hainaut, Jean-Luc
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Journal Article
Published 23/01/2015 Portuguese
Search Relevance
38.245298%
In so-called constraint-based testing, symbolic execution is a common technique used as part of the process to generate test data for imperative programs. Databases are ubiquitous in software, and testing of programs manipulating databases is thus essential to enhance the reliability of software. This work proposes and evaluates experimentally a symbolic execution algorithm for constraint-based testing of database programs. First, we describe SimpleDB, a formal language which offers a minimal and well-defined syntax and semantics, to model common interaction scenarios between programs and databases. Secondly, we detail the proposed algorithm for symbolic execution of SimpleDB models. This algorithm considers a SimpleDB program as a sequence of operations over a set of relational variables, modeling both the database tables and the program variables. By integrating this relational model of the program with classical static symbolic execution, the algorithm can generate a set of path constraints for any finite path to test in the control-flow graph of the program. Solutions of these constraints are test inputs for the program, including an initial content for the database. When the program is executed with respect to these inputs...

‣ Synthesis of Parametric Programs using Genetic Programming and Model Checking

Katz, Gal; Peled, Doron
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Journal Article
Published 26/02/2014 Portuguese
Search Relevance
38.338726%
Formal methods apply algorithms based on mathematical principles to enhance the reliability of systems. It would only be natural to try to progress from verifying, model checking, or testing a system against its formal specification to constructing it automatically. Classical algorithmic synthesis theory provides interesting algorithms but also alarmingly high complexity and undecidability results. The use of genetic programming, in combination with model checking and testing, provides a powerful heuristic for synthesizing programs. The method is not completely automatic, as it is fine-tuned by a user who sets up the specification and parameters. It also does not guarantee to always succeed and converge towards a solution that satisfies all the required properties. However, we applied it successfully to quite nontrivial examples and managed to find solutions to hard programming challenges, as well as to improve and correct code. We describe here several versions of our method for synthesizing sequential and concurrent systems.; Comment: In Proceedings INFINITY 2013, arXiv:1402.6610

‣ Adaptive Robust Transmission Network Expansion Planning using Structural Reliability and Decomposition Techniques

Mínguez, Roberto; García-Bertrand, Raquel; Arroyo, José Manuel
Source: Universidade Cornell Publisher: Universidade Cornell
Type: Journal Article
Published 26/01/2015 Portuguese
Search Relevance
38.229194%
Structural reliability and decomposition techniques have recently proved to be appropriate tools for solving robust uncertain mixed-integer linear programs using ellipsoidal uncertainty sets. In fact, their computational performance makes this type of problem an alternative, in terms of tractability, to robust problems based on cardinality-constrained uncertainty sets. This paper extends the use of these techniques to solving an adaptive robust optimization (ARO) problem, i.e., the adaptive robust solution of transmission network expansion planning for energy systems. This type of problem materializes in a three-level mixed-integer optimization formulation, which, based on structural reliability methods, can be solved using an ad hoc decomposition technique. The method allows the use of the correlation structure of the uncertain variables involved by means of their variance-covariance matrix, and besides, it provides a new interpretation of the robust problem based on quantile optimization. We also compare results with respect to robust optimization methods that consider cardinality-constrained uncertainty sets. Numerical results on an illustrative example, the IEEE 24-bus and IEEE 118-bus test systems, demonstrate that the algorithm is comparable in computational performance to existing robust methods, with the additional advantage that the correlation structure of the uncertain variables involved can be incorporated straightforwardly.; Comment: 32 pages...

‣ Soft error propagation in floating-point programs

Li, Sha
Source: University of Delaware Publisher: University of Delaware
Type: Doctoral Thesis
Portuguese
Search Relevance
48.36839%
Li, Xiaoming; As technology scales, VLSI performance has experienced exponential growth. As feature sizes shrink, however, we face new challenges, such as soft errors (single-event upsets), in maintaining the reliability of circuits. Recent studies have tried to address soft errors with error detection and correction techniques such as error-correcting codes or redundant execution. However, these techniques come at the cost of additional storage or lower performance. We present a different approach to addressing soft errors. We start by building a quantitative understanding of error propagation in software and propose a systematic evaluation of the impact of bit flips caused by soft errors on floating-point operations. Furthermore, we introduce a novel model to deal with soft errors. More specifically, we assume soft errors have occurred in memory and try to determine how the errors will manifest in the results of programs. Therefore, some soft errors can be tolerated if the error in the result is smaller than the intrinsic inaccuracy of floating-point representations or within a predefined range. We focus on analyzing error propagation for floating-point arithmetic operations. Our approach is motivated by interval analysis. We model the rounding effect of floating-point numbers...
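
The starting observation, that a memory bit flip's effect on a float depends on which bit is hit, can be demonstrated directly on the IEEE 754 double representation. This is a generic illustration of the phenomenon, not the thesis's propagation model.

```python
import struct

def flip_bit(x, k):
    """Flip bit k of a float64's stored representation (bit 0 = least-significant
    mantissa bit; bits 52-62 are the exponent), mimicking a single-event upset."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    (y,) = struct.unpack('<d', struct.pack('<Q', bits ^ (1 << k)))
    return y

# A flip in a low mantissa bit perturbs the value far less than one in a high
# mantissa bit, which is why some soft errors stay below the intrinsic
# inaccuracy of the representation and can be tolerated.
x = 3.141592653589793
low = abs(flip_bit(x, 0) - x) / x    # tiny relative error (~1e-16)
high = abs(flip_bit(x, 51) - x) / x  # large relative error (top mantissa bit)
```

Flipping the same bit twice restores the original value, so the corruption is purely positional: the error magnitude is set by the bit's weight, which is what an interval-style propagation analysis can bound through subsequent arithmetic.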