The best tool for your research, coursework, and undergraduate thesis!
Page 1 of results: 84,250 digital items found in 0.084 seconds
- Biblioteca Digital de Teses e Dissertações da USP
- Universidade Federal do Rio Grande do Sul
- Escola Superior de Tecnologia da Saúde de Lisboa
- Associação Brasileira de Ciências Mecânicas
- Massachusetts Institute of Technology, Operations Research Center
- Universidade Federal de Goiás; Brasil; UFG; Programa de Pós-graduação em Ciência da Computação (INF); Instituto de Informática - INF (RG)
- Australian National University
- IEEE; United States
- University of Tübingen
- Elsevier
- IEEE Computer Society; online
- Universidade Nova de Lisboa
- IEEE: FUZZY-IEEE/IFES'95
- Institute of Electrical and Electronics Engineers (IEEE Inc)
- UNAM, Centro de Ciencias Aplicadas y Desarrollo Tecnológico
- More publishers...
‣ Avaliação dos testes e algoritmos empregados na triagem de doadores de sangue para o vírus da hepatite C; Evaluation of anti-HCV and HCV RNA tests and analysis of algorithm for blood donors screening
Source: Biblioteca Digital de Teses e Dissertações da USP
Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Master's thesis
Format: application/pdf
Published 08/12/2004
Portuguese
Search relevance
36.249407%
#Algorithm#Blood transfusion#Living donation (evaluation)#Hematologic tests#Hepatitis C virus#Immunoblot#Test reactivity index#Polymerase chain reaction
The diagnosis of hepatitis C virus (HCV) infection is obtained through anti-HCV screening tests by ELISA and, to confirm positive results, through a more specific supplemental test, the immunoblot (IB). Active infection is determined by molecular techniques such as PCR. Serological tests are indirect methods based on the detection of specific antibodies; they are therefore subject to several factors that limit their diagnostic efficiency and can produce nonspecific results. One of the aims of this work was to evaluate the diagnostic efficiency of the tests for diagnosing HCV infection, under routine diagnostic conditions, on a large number of blood donor samples, and to analyze the cost-benefit of three different algorithms recently proposed by the Centers for Disease Control and Prevention (USA). A total of 692 serum samples from blood donors were studied: 522 positive and 170 inconclusive on the screening ELISA (ELISA-T) performed at the time of donation. These donors returned to Fundação Pró-Sangue/Hemocentro de São Paulo for the collection of a second blood sample to confirm the results obtained at donation. ELISA (ELISA-R) and PCR were performed on all return samples...
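The screening flow described in the abstract (screening ELISA, supplemental immunoblot for confirmation, PCR for active infection) can be sketched as a small decision function. This is only an illustrative reading of a CDC-style flow; the function name, inputs, and result labels are invented here and are not the thesis's actual algorithm.

```python
def classify_donor(elisa_reactive: bool, immunoblot: str, pcr_positive: bool) -> str:
    """Toy decision flow for anti-HCV donor screening (illustrative only).

    elisa_reactive: result of the screening ELISA (anti-HCV antibodies)
    immunoblot: supplemental test result: 'positive' | 'indeterminate' | 'negative'
    pcr_positive: HCV RNA detected by PCR (marker of active infection)
    """
    if not elisa_reactive:
        return "anti-HCV negative"
    if pcr_positive:
        return "active HCV infection"            # RNA detected
    if immunoblot == "positive":
        return "resolved or past HCV infection"  # antibodies confirmed, no RNA
    if immunoblot == "indeterminate":
        return "inconclusive - retest"
    return "likely false-reactive ELISA"         # supplemental test negative
```

The point of the three published algorithms compared in the thesis is precisely which of these branches to run, and in what order, to minimize cost.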
Permanent link for citations:
‣ Metaheurística para otimização de forma e dimensão de estruturas mecânicas com restrições de tensões e frequências naturais pelo algoritmo Firefly; Shape and size optimization of mechanical structures with stress and dynamic constraints by the Firefly algorithm
Source: Universidade Federal do Rio Grande do Sul
Publisher: Universidade Federal do Rio Grande do Sul
Type: Undergraduate final project (TCC)
Format: application/pdf
Portuguese
Search relevance
36.249407%
#Mechanical engineering#Shape and size optimization#Spatial trusses#Natural frequencies#Firefly algorithm#Metaheuristic algorithm
In this work, studies are carried out on the optimization of some mechanical structures using an algorithm called "Firefly". The aim of this study is to achieve a possible mass reduction and an eventual change in the shape and size of these structures, while respecting the mechanical limits of the material, design constraints, and dynamic constraints such as the natural frequencies. The results are compared with results of algorithms already available in the literature, where such results exist. This analysis shows that the method is, in some cases, more efficient at optimizing the presented examples than the methods found in the literature. The work is justified by the fact that the algorithm is metaheuristic and therefore easily programmable. Moreover, the algorithm does not require the evaluation of gradients of the function being optimized, and it has a random component that makes it robust in optimization problems such as shape and size optimization with stress and natural-frequency constraints.; This work deals with the optimization of some mechanical structures using a metaheuristic algorithm called the Firefly Algorithm. This work aims at reducing mass and eventually changing the shape and size of such mechanical structures while keeping material mechanical strengths and dynamic constraints such as natural frequencies in safe regions. The results are compared with those...
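For readers unfamiliar with the Firefly Algorithm named above, a minimal continuous-optimization sketch follows. It is the generic textbook scheme (brightness = lower cost, attractiveness beta0*exp(-gamma*r^2), damped random walk), not the thesis's implementation, and all parameter values are illustrative.

```python
import math
import random

def firefly_minimize(f, bounds, n_fireflies=15, n_iter=100,
                     beta0=1.0, gamma=0.05, alpha=0.2, seed=0):
    """Minimize f over box bounds with a textbook Firefly Algorithm.

    Brighter fireflies are those with lower cost; firefly i moves toward
    every brighter j with attractiveness beta0 * exp(-gamma * r^2), plus
    a random step scaled by alpha (damped each iteration).
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fireflies)]
    cost = [f(x) for x in pop]
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:  # j is brighter than i
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d, (lo, hi) in enumerate(bounds):
                        step = (beta * (pop[j][d] - pop[i][d])
                                + alpha * (rng.random() - 0.5) * (hi - lo))
                        pop[i][d] = min(hi, max(lo, pop[i][d] + step))
                    cost[i] = f(pop[i])
        alpha *= 0.97  # damp the random walk as the swarm converges
    best = min(range(n_fireflies), key=cost.__getitem__)
    return pop[best], cost[best]

# Example: minimize the sphere function on [-5, 5]^2
x_best, f_best = firefly_minimize(lambda x: sum(v * v for v in x),
                                  [(-5.0, 5.0)] * 2)
```

Note that the brightest firefly never moves in this scheme, so the best cost in the population is non-increasing; gradient information is never used, matching the abstract's point.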
Permanent link for citations:
‣ A Modified Branch and Bound Algorithm to Solve the Transmission Expansion Planning Problem
Source: IEEE
Publisher: IEEE
Type: Conference or conference object
Format: 234-238
Portuguese
Search relevance
36.249407%
In this paper a novel Branch and Bound (B&B) algorithm is presented for solving the transmission expansion planning problem, a non-convex mixed-integer nonlinear programming (MINLP) problem. Because it defines the options of the separating variables and searches breadth-first, we call it the B&BML algorithm. The proposed algorithm is implemented in AMPL, and the open-source Ipopt solver is used to solve the nonlinear programming (NLP) problems of all candidates in the B&B tree. Strategies have been developed to address the non-linearity and non-convexity of the search region. The proposed algorithm is applied to the long-term transmission expansion planning problem modeled as an MINLP problem, and has been carried out on five commonly used test systems: the Garver 6-bus, IEEE 24-bus, 46-bus South Brazilian, Bolivian 57-bus, and Colombian 93-bus systems. Results show that the proposed methodology not only finds the best known solution but also yields a large reduction, between 24% and 77.6% depending on the size of the system, in the number of NLP problems solved.
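The branch-and-bound pattern the abstract relies on — bound each node by solving a relaxed subproblem, prune nodes whose bound cannot beat the incumbent — can be shown on a much smaller problem. The sketch below applies it to 0/1 knapsack with an LP-style fractional bound; it is a generic illustration of B&B, not the paper's B&BML algorithm or its AMPL/Ipopt setup.

```python
import heapq

def knapsack_bb(values, weights, capacity):
    """Best-first branch and bound for 0/1 knapsack (maximization)."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)

    def bound(pos, value, room):
        # Relaxation: greedily fill the remaining room, last item fractional.
        for i in order[pos:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]  # (-bound, pos, value, room)
    while heap:
        neg_b, pos, value, room = heapq.heappop(heap)
        if -neg_b <= best:
            continue                  # prune: bound cannot beat the incumbent
        best = max(best, value)       # skipping all remaining items is feasible
        if pos == len(order):
            continue
        i = order[pos]
        if weights[i] <= room:        # branch 1: take item i
            heapq.heappush(heap, (-bound(pos + 1, value + values[i], room - weights[i]),
                                  pos + 1, value + values[i], room - weights[i]))
        # branch 2: skip item i
        heapq.heappush(heap, (-bound(pos + 1, value, room), pos + 1, value, room))
    return best

knapsack_bb([60, 100, 120], [10, 20, 30], 50)   # → 220
```

In the paper the relaxation at each node is an NLP solved by Ipopt rather than a greedy fraction, but the prune-by-bound logic is the same.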
Permanent link for citations:
‣ Comparison between the pencil beam convolution algorithm and the analytical anisotropic algorithm in breast tumors
Source: Escola Superior de Tecnologia da Saúde de Lisboa
Publisher: Escola Superior de Tecnologia da Saúde de Lisboa
Type: Master's thesis
Published //2013
Portuguese
Search relevance
36.26778%
#Radiotherapy#Breast cancer#Analytical Anisotropic Algorithm#AAA#Pencil Beam Convolution Algorithm
Master's degree in Radiotherapy; This work sets out to compare the PBC algorithm and the AAA in breast tumors. Dose calculation was performed for 40 clinical cases, with the PBC algorithm used in all dosimetric plans of the sample. Next, the doses and volume percentages of all volumes of interest were evaluated on the dose-volume histogram. Subsequently, for the same patients, doses and percentages were calculated with the AAA and the same evaluation performed for the PBC algorithm was repeated. The results were obtained with SPSS version 21.0 and are presented in table form. Statistically significant differences were identified in dose calculation time between the PBC algorithm and the AAA, with the AAA showing a longer calculation time in the vast majority of cases. Statistically significant differences between the PBC algorithm and the AAA are observed in the mean (p=0.021) and maximum (p=0.000) dose of the PTV; in the mean dose (p=0.000), maximum dose (p=0.000), V60% (p=0.000), V80% (p=0.000), and V100% (p=0.000) of the skin; and in the mean dose (p=0.000), maximum dose (p=0.000), V10% (p=0.000), V20% (p=0.000), and V30% (p=0.000) of the lung. Only the D95% (p=0.830) of the PTV shows no statistically significant difference between the two algorithms. Studies based on experimental measurements show that the AAA is more accurate...
Permanent link for citations:
‣ An improved version of Inverse Distance Weighting metamodel assisted Harmony Search algorithm for truss design optimization
Source: Associação Brasileira de Ciências Mecânicas
Publisher: Associação Brasileira de Ciências Mecânicas
Type: Scientific journal article
Format: text/html
Published 01/03/2013
Portuguese
Search relevance
36.249407%
#Surrogate-based optimization#Metamodeling#Harmony Search Algorithm#Inverse Distance Weighting model
This paper focuses on a metamodel-based design optimization algorithm, with the intention of improving its computational cost and convergence rate. The metamodel-based optimization method introduced here provides the means to reduce the computational cost and improve the convergence rate of the optimization through a surrogate. The algorithm combines a high-quality approximation technique called Inverse Distance Weighting with a meta-heuristic algorithm called Harmony Search; the outcome is then polished by a semi-tabu search algorithm. The algorithm adopts a filtering system and determines the solution vectors on which exact simulation should be applied. Its performance is evaluated on standard truss design problems, showing a significant decrease in computational effort and an improved convergence rate.
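Inverse Distance Weighting itself is a very small piece of machinery: the surrogate's prediction at a new point is a weighted average of the exact (expensive) evaluations collected so far, with weights decaying as an inverse power of distance. A sketch, with an illustrative power of 2:

```python
import math

def idw_predict(x_new, samples, power=2.0):
    """Inverse Distance Weighting surrogate prediction.

    samples: list of (x, f(x)) pairs from exact evaluations.
    The prediction is a weighted average with weights 1 / d(x_new, x)^power,
    so nearby exact evaluations dominate the estimate.
    """
    num = den = 0.0
    for x, fx in samples:
        d = math.dist(x_new, x)
        if d == 0.0:
            return fx            # exact sample point: no interpolation needed
        w = d ** -power
        num += w * fx
        den += w
    return num / den

# Toy usage on f(x, y) = x + y sampled at three points
pts = [((0.0, 0.0), 0.0), ((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0)]
idw_predict((0.0, 0.0), pts)   # → 0.0 (coincides with a sample)
```

In the paper's setting, Harmony Search queries this cheap surrogate instead of the exact truss simulation, and only the filtered candidates are simulated exactly.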
Permanent link for citations:
‣ A Potential Reduction Algorithm With User-Specified Phase I - Phase II Balance, for Solving a Linear Program from an Infeasible Warm Start
Source: Massachusetts Institute of Technology, Operations Research Center
Publisher: Massachusetts Institute of Technology, Operations Research Center
Type: Work in progress
Format: 1329655 bytes; application/pdf
Portuguese
Search relevance
36.249407%
This paper develops a potential reduction algorithm for solving a linear-programming problem directly from a "warm start" initial point that is neither feasible nor optimal. The algorithm is of an "interior point" variety that seeks to reduce a single potential function which simultaneously coerces feasibility improvement (Phase I) and objective value improvement (Phase II). The key feature of the algorithm is the ability to specify beforehand the desired balance between infeasibility and nonoptimality in the following sense. Given a prespecified balancing parameter β > 0, the algorithm maintains the following Phase I - Phase II "β-balancing constraint" throughout: (cᵀx − z*) ≤ β·ξ(x), where cᵀx is the objective function, z* is the (unknown) optimal objective value of the linear program, and ξ(x) measures the infeasibility of the current iterate x. This balancing constraint can be used either to emphasize rapid attainment of feasibility (set β large) at the possible expense of good objective function values, or to emphasize rapid attainment of good objective values (set β small) at the possible expense of slower progress toward feasibility. The algorithm exhibits the following advantageous features: (i) the iterate solutions monotonically decrease the infeasibility measure...
Permanent link for citations:
‣ Algoritmo evolutivo multi-objetivo em tabelas para seleção de variáveis em classificação multivariada; Multi-objective evolutionary algorithm on tables for variable selection in multivariate classification
Source: Universidade Federal de Goiás; Brasil; UFG; Programa de Pós-graduação em Ciência da Computação (INF); Instituto de Informática - INF (RG)
Publisher: Universidade Federal de Goiás; Brasil; UFG; Programa de Pós-graduação em Ciência da Computação (INF); Instituto de Informática - INF (RG)
Type: Dissertation
Format: application/pdf
Portuguese
Search relevance
36.249407%
#Variable selection#Multivariate classification#Linear discriminant analysis#Multi-objective evolutionary algorithm on tables#CIENCIA DA COMPUTACAO::MATEMATICA DA COMPUTACAO
This work proposes the use of a multi-objective evolutionary algorithm on tables (AEMT) for variable selection in classification problems, using linear discriminant analysis. The proposed algorithm aims to find minimal subsets of the original variables that yield robust classifiers without significant loss of classification ability. The classifiers modeled from the solutions found by this algorithm are compared with those found by mono-objective formulations (such as PLS, APS, and our own implementation of a simple genetic algorithm) and by multi-objective formulations (such as the simple multi-objective genetic algorithm, MULTI-GA, and NSGA II). As a case study, the algorithm was applied to the selection of spectral variables for classification of biodiesel/diesel samples by linear discriminant analysis (LDA). The results showed that the evolutionary formulations find solutions with fewer variables (on average) and a better average error rate compared with PLS and APS. The proposed AEMT formulation with the fitness functions mean classification risk, number of selected variables, and number of correlated variables in the model found solutions with lower average errors than those found by NSGA II and MULTI-GA...
Permanent link for citations:
‣ An error analysis of a unitary Hessenberg QR algorithm
Source: Australian National University
Publisher: Australian National University
Type: Working/Technical Paper
Format: 319681 bytes; 356 bytes; application/pdf; application/octet-stream
Portuguese
Search relevance
36.249407%
Several direct implementations of the QR algorithm for a unitary Hessenberg matrix are numerically unstable. In this paper we give an analysis showing how the instability in a particular rational form of the algorithm specialized to the case of a unimodular shift comes from two sources: loss of accuracy due to cancellation in a particular formula, and a dynamic instability in the propagation of the normalization conditions on the Schur parameters and complementary parameters used to represent the matrix. The first problem can be fixed through the use of an alternate formula proposed by Gragg. The second problem can be controlled by not relying on the fact that the matrix is numerically unitary to enforce implicitly the unimodularity of the computed shift; if the shift is explicitly normalized, then experiments suggest that the algorithm is stable in practice, although stability cannot be proven. A third small modification, introduced to eliminate a potential for a relatively slow exponential growth in normalization errors, leads to a provably stable algorithm. This stable rational algorithm for computing the eigenvalues leads directly to a stable algorithm for computing a complete eigenvalue decomposition.
Permanent link for citations:
‣ Solving a real-world wheat blending problem using a hybrid evolutionary algorithm
Source: IEEE; United States
Publisher: IEEE; United States
Type: Conference paper
Published //2013
Portuguese
Search relevance
36.28287%
A novel hybrid algorithm is proposed to solve the Australian wheat blending problem. The major part of the problem can be modeled with a linear programming model but the unique constraints make many existing algorithms fail. The algorithm starts with a heuristic that follows pre-defined rules to reduce the search space. Then the linear-relaxed problem is solved using a standard linear programming algorithm, and the result is used to guide an evolutionary-based algorithm while exploring the infeasible regions. Constraint violations are de-penalised if the same choice is made in the linear-relaxed solution. In fact, a hybrid of an evolutionary algorithm, a heuristic method and a linear programming solver is used in the main loop to improve the solution while maintaining the feasibility. A heuristic based initialization method and a local search based post-tuning method are also incorporated into the algorithm. The proposed algorithm has been tested on real data from past years, from small to large cases. Results show that the algorithm is able to find quality results in all cases and outperforms the existing method in use in terms of both quality and speed.; Xiang Li, Mohammad Reza Bonyadi, Zbigniew Michalewicz, Luigi Barone
Permanent link for citations:
‣ Multichannel FSLMS algorithm based active headrest
Source: IEEE; United States
Publisher: IEEE; United States
Type: Conference paper
Published //2013
Portuguese
Search relevance
36.28287%
#Active noise control (ANC)#filtered-s LMS algorithm#multichannel ANC#nonlinear ANC#active headrest.
The multichannel Filtered-S LMS (FSLMS) algorithm has been used efficiently in nonlinear active noise control (ANC) because of its improved performance and low computational complexity. However, the performance of this algorithm has not yet been evaluated in a real-time ANC system. This correspondence shows the real-time performance of the trigonometric functional expansion based FSLMS algorithm compared to the Filtered-X LMS (FXLMS) algorithm. A multichannel active headrest with one reference, two control sources and two error microphones is used to test the algorithm. Three different primary nonlinear noise cases are studied and it is shown that in all three cases, the FSLMS algorithm is capable of attenuating the primary noise whereas the FXLMS algorithm completely fails to achieve any noise reduction. Insight into the FSLMS algorithm as a suitable noise controller for transformer noise is also presented.; Debi Prasad Das, Danielle J. Moreau and Ben S. Cazzolato
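The "LMS" core shared by FXLMS and FSLMS is a stochastic-gradient tap update, w ← w + μ·e·x. The sketch below is the plain LMS filter only: the functional (trigonometric) expansion of the reference and the secondary-path filtering that define FSLMS/FXLMS are omitted, and the system being identified is invented for the demo.

```python
import random

def lms_filter(x, d, n_taps=4, mu=0.05):
    """Plain LMS adaptive filter (the common ancestor of FXLMS/FSLMS).

    x: reference samples; d: desired samples. Returns the adapted tap
    weights and the error history.
    """
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    errors = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]                       # shift in the newest sample
        y = sum(wi * bi for wi, bi in zip(w, buf))  # filter output
        e = dn - y                                  # error signal
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]  # LMS tap update
        errors.append(e)
    return w, errors

# Identify a toy 2-tap system h = [0.5, -0.3] driven by white input
rng = random.Random(0)
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
d = [0.5 * x[n] + (-0.3 * x[n - 1] if n > 0 else 0.0) for n in range(len(x))]
w, errors = lms_filter(x, d)
```

After adaptation the leading taps approach 0.5 and -0.3 and the error shrinks toward zero; the papers' algorithms add nonlinear pre-processing so the same update can cancel nonlinear noise paths.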
Permanent link for citations:
‣ A stable cubically convergent GR algorithm and Krylov subspace methods for non-Hermitian matrix eigenvalue problems; Ein stabiles kubisch konvergentes GR-Verfahren und Krylov-Verfahren für nichthermitesche Matrixeigenwertprobleme
Source: University of Tübingen
Publisher: University of Tübingen
Type: Dissertation
Portuguese
Search relevance
36.26778%
#Eigenvalue#510#eigenvalue, QR algorithm, GR algorithm, Lanczos algorithm, Krylov methods
In this dissertation, Krylov methods and decomposition algorithms (GR algorithms) for computing the eigenvalues of arbitrary matrices are investigated. It is shown that the general restarted Krylov method is mathematically equivalent to the general GR algorithm. Starting from this result, a new, numerically stable GR method is developed. It is proved that this method, applied to an arbitrary matrix with pairwise distinct eigenvalues, converges cubically under very mild assumptions. Note that under these assumptions the QR method in general converges only quadratically.; In this thesis, Krylov methods and algorithms of decomposition type (GR algorithms) for the eigenvalue computation of arbitrary matrices are discussed. It is shown that the general restarted Krylov method is mathematically equivalent to the general GR algorithm. Using this connection, a new, numerically stable GR algorithm is developed. It is proved that this algorithm converges cubically under mild conditions when applied to any given matrix with distinct eigenvalues. Note that the QR algorithm typically converges quadratically under these conditions.
Permanent link for citations:
‣ Entwurf und Evaluierung eines adaptiven Ersetzungsalgorithmus für den Diskcache eines Hierarchischen-Speicher-Management-Systems; Design and evaluation of an adaptive replacement algorithm for the disk cache of a hierarchical storage management system
Source: University of Tübingen
Publisher: University of Tübingen
Type: Dissertation
Portuguese
Search relevance
36.29549%
#Memory, algorithm, adaptive system, computer simulation#004#Hierarchical storage management, cache, adaptive algorithm, computer simulation
Doc. 2 consists of an ISO 9660 CD image and contains the source code and the input data of the simulation.
*********************************
The goal of this work was to develop an adaptive replacement algorithm for the disk cache of a hierarchical storage management system, one that exploits the particular properties of the disk cache and improves replacement behavior. The Object LRU replacement algorithm (OLRU) is presented. The OLRU replacement algorithm differs from previous replacement algorithms in that it uses the attributes of the cache objects to influence replacement.
One particular property of the disk cache is that entire files are stored. Because the files differ in size, a combination of objects is replaced. The attributes of the objects are aggregated by an evaluation function, which makes it possible to replace the most suitable combination of objects.
To make the OLRU algorithm adaptive, an online optimization of the parameters of the evaluation function was integrated into the algorithm. The optimization is carried out using a genetic algorithm.
The OLRU algorithm was deployed within a simulation environment. Using two access patterns (traces)...
Permanent link for citations:
‣ LOW-COMPLEXITY AND HIGH-PERFORMANCE SOFT MIMO DETECTION BASED ON DISTRIBUTED M-ALGORITHM THROUGH TRELLIS-DIAGRAM
Source: IEEE
Publisher: IEEE
Type: Conference paper
Portuguese
Search relevance
36.26778%
This paper presents a novel low-complexity multiple-input multiple-output (MIMO) detection scheme using a distributed M-algorithm (DM) to achieve high-performance soft MIMO detection. To reduce the search complexity, we build a MIMO trellis graph and split the search operations among different nodes, where each node applies the M-algorithm. Instead of keeping a global candidate list as a traditional detector does, this algorithm keeps multiple small candidate lists to generate soft information. Since the DM algorithm can achieve good BER performance with a small M, its sorting cost is lower than that of the conventional K-best MIMO algorithm. The proposed algorithm is very suitable for high-speed parallel processing.
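The distributed M-algorithm builds on the classical M-algorithm: expand the search tree one level at a time and keep only the M best partial paths by cumulative metric. A generic sketch follows; the trellis splitting and soft-output bookkeeping of the paper are not shown, and the toy metric is invented for the demo.

```python
def m_algorithm(symbols, depth, path_metric, M=4):
    """M-algorithm tree search: keep only the M best partial paths per level.

    path_metric(path) returns the cumulative cost of a partial path
    (for MIMO detection this would be a partial Euclidean distance).
    """
    survivors = [()]                       # start with the empty path
    for _ in range(depth):
        candidates = [p + (s,) for p in survivors for s in symbols]
        candidates.sort(key=path_metric)   # rank by cumulative metric
        survivors = candidates[:M]         # prune: keep the M best
    return survivors[0]                    # best full-length path

# Toy metric: prefer the path closest (per symbol) to a target sequence
target = [1, -1, 1]
metric = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best = m_algorithm([-1, 1], depth=3, path_metric=metric, M=2)
# best == (1, -1, 1)
```

The complexity win in the paper comes from running this pruned search at several trellis nodes in parallel, each with its own small candidate list, instead of sorting one global list.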
Permanent link for citations:
‣ On Euclid's algorithm and elementary number theory
Source: Elsevier
Publisher: Elsevier
Type: Scientific journal article
Published //2011
Portuguese
Search relevance
36.28287%
#Number theory#Calculational method#Greatest common divisor#Euclid’s algorithm#Invariant#Eisenstein array#Stern–Brocot tree#Algorithm derivation#Enumeration algorithm#Rational number
Algorithms can be used to prove and to discover new theorems. This paper shows how algorithmic skills in general, and the notion of invariance in particular, can be used to derive many results from Euclid’s algorithm. We illustrate how to use the algorithm as a verification interface (i.e., how to verify theorems) and as a construction interface (i.e., how to investigate and derive new theorems). The theorems that we verify are well-known and most of them are included in standard number-theory books. The new results concern distributivity properties of the greatest common divisor and a new algorithm for efficiently enumerating the positive rationals in two different ways. One way is known and is due to Moshe Newman. The second is new and corresponds to a deforestation of the Stern-Brocot tree of rationals. We show that both enumerations stem from the same simple algorithm. In this way, we construct a Stern-Brocot enumeration algorithm with the same time and space complexity as Newman’s algorithm. A short review of the original papers by Stern and Brocot is also included.
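Two of the ingredients discussed are compact enough to state directly: Euclid's algorithm with its gcd-preserving invariant, and Newman's one-line step that enumerates every positive rational exactly once. The code follows the standard published forms of both.

```python
from fractions import Fraction
from math import floor

def euclid_gcd(a: int, b: int) -> int:
    """Euclid's algorithm; invariant: gcd(a, b) is unchanged by (a, b) -> (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def newman_next(x: Fraction) -> Fraction:
    """Newman's step: from one positive rational to the next in the
    enumeration of all positive rationals (each appears exactly once)."""
    return 1 / (2 * floor(x) + 1 - x)

# Enumerate the first few positive rationals starting from 1
xs, x = [], Fraction(1)
for _ in range(6):
    xs.append(x)
    x = newman_next(x)
# xs == [1, 1/2, 2, 1/3, 3/2, 2/3]
```

The paper's contribution is to derive such enumeration algorithms (including a deforested Stern-Brocot variant with the same cost) from invariants of Euclid's algorithm rather than to state them ad hoc.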
Permanent link for citations:
‣ A parallel algorithm for 2D square packing
Source: IEEE Computer Society; online
Publisher: IEEE Computer Society; online
Type: Conference paper
Published //2013
Portuguese
Search relevance
36.249407%
We focus on the parallelization of the two-dimensional square packing problem. In the square packing problem, a list of square items must be packed into a minimum number of unit square bins. All square items have side length smaller than or equal to 1, which is also the side length of each unit square bin, and the total area of the items packed into one bin cannot exceed 1. Using the idea of harmonic partitioning, some squares can be put into the same bin without exceeding the bin's side-length limit of 1. We try to pack all the corresponding squares into one bin concurrently on a parallel computing system. An algorithm with a 9/4 worst-case asymptotic error bound and O(n) time complexity is presented. Let OPT(I) and A(I) denote, respectively, the cost of an optimal solution and the cost produced by an approximation algorithm A for an instance I of the square packing problem. The best upper bound for on-line square packing to date is 2.1439, proved by Han et al. [23] using complexity weighting functions. Although the upper bound of our parallel algorithm is a little worse than that of Han's algorithm, the analysis of our algorithm is simpler and the time complexity is improved: Han's algorithm needs O(n log n) time, while our method needs only O(n) time.; http://pdcat13.csie.ntust.edu.tw/; Xiaofan Zhao and Hong Shen
Permanent link for citations:
‣ EDP - A new estimation algorithm
Source: Universidade Nova de Lisboa
Publisher: Universidade Nova de Lisboa
Type: Master's thesis
Published /01/2012
Portuguese
Search relevance
36.26778%
#Algorithm#Electricity#Estimation#Temperature#Domínio/Área Científica::Ciências Sociais::Economia e Gestão
The aim of this work project is to analyze the current algorithm used by EDP to estimate their clients’ electrical energy consumptions, create a new algorithm and compare the advantages and disadvantages of both. This new algorithm is different from the current one as it incorporates some effects from temperature variations. The results of the comparison show that this new algorithm with temperature variables performed better than the same algorithm without temperature variables, although there is still potential for further improvements of the current algorithm, if the prediction model is estimated using a sample of daily data, which is the case of the current EDP algorithm.
Permanent link for citations:
‣ An enhanced version of the heat exchange algorithm with excellent energy conservation properties
Source: AIP
Publisher: AIP
Type: Article; accepted version
Portuguese
Search relevance
36.249407%
This is the author accepted manuscript. The final version is available from AIP via http://dx.doi.org/10.1063/1.4931597; We propose a new algorithm for non-equilibrium molecular dynamics simulations of thermal gradients. The algorithm is an extension of the heat exchange algorithm developed by Hafskjold and co-workers [Mol. Phys. 80, 1389 (1993); Mol. Phys. 81, 251 (1994)], in which a certain amount of heat is added to one region and removed from another by rescaling velocities appropriately. Since the amount of added and removed heat is the same and the dynamics between velocity rescaling steps is Hamiltonian, the heat exchange algorithm is expected to conserve the energy. However, it has been reported previously that the original version of the heat exchange algorithm exhibits a pronounced drift in the total energy, the exact cause of which remained hitherto unclear. Here, we show that the energy drift is due to the truncation error arising from the operator splitting and suggest an additional coordinate integration step as a remedy. The new algorithm retains all the advantages of the original one whilst exhibiting excellent energy conservation as illustrated for a Lennard-Jones liquid and SPC/E water.; PW gratefully acknowledges stimulating discussions with Chongli Qin...
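The velocity-rescaling step at the heart of the heat exchange algorithm can be written down directly: scaling the velocities in a region by α = sqrt(1 + ΔQ/KE) changes that region's kinetic energy by exactly ΔQ. A minimal sketch; the momentum-conserving correction used in the published algorithm and the integrator coupling discussed in the abstract are omitted.

```python
import math

def hex_rescale(velocities, masses, delta_q):
    """Velocity-rescaling step of a heat-exchange-style thermostat (sketch).

    Adds heat delta_q (may be negative) to a region by scaling all
    velocities in it by alpha = sqrt(1 + delta_q / KE), which changes
    the region's kinetic energy by exactly delta_q.
    """
    ke = 0.5 * sum(m * sum(v * v for v in vel)
                   for m, vel in zip(masses, velocities))
    alpha = math.sqrt(1.0 + delta_q / ke)
    return [[alpha * v for v in vel] for vel in velocities]

# Two particles of unit mass; add 0.5 energy units to their region
vels = [[1.0, 0.0, 0.0], [0.0, -2.0, 0.0]]
masses = [1.0, 1.0]
new_vels = hex_rescale(vels, masses, delta_q=0.5)   # KE goes 2.5 → 3.0
```

The paper's point is subtler than this step itself: interleaving such rescalings with a split-operator integrator introduces a truncation error, and an extra coordinate-integration step removes the resulting energy drift.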
Permanent link for citations:
‣ Evolutionary algorithm based fuzzy c-means algorithm
Source: IEEE: FUZZY-IEEE/IFES'95
Publisher: IEEE: FUZZY-IEEE/IFES'95
Type: Proceedings
Portuguese
Search relevance
36.26778%
In this paper, a new approach to fuzzy clustering is introduced. This approach, which is based on the application of an evolutionary strategy to the fuzzy c-means clustering algorithm, utilizes the relationship between the various definitions of distance and structures implied in each given data set. As soon as a particular definition of distance is chosen, a particular structure in the data set is implied. Therefore, the search for a structure in given data can be viewed as a search for an appropriate definition of distance. We describe an evolutionary algorithm for determining the “best” distance for given data, where the criterion of goodness is defined in terms of the performance of the fuzzy c-means clustering method. We discuss relevant theoretical aspects as well as experimental results that characterize the utility of the proposed algorithm.; "Evolutionary algorithm based fuzzy c-means algorithm," Proceedings of FUZZY-IEEE/IFES'95. Held in Yokohama, Japan: 20-24 March 1995.
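The fuzzy c-means inner loop that the evolutionary strategy wraps is itself short: alternate membership updates and membership-weighted center updates. The sketch below uses plain Euclidean distance; the paper's search over alternative distance definitions is exactly the part not reproduced here, and all parameters are illustrative.

```python
import math
import random

def fuzzy_c_means(points, c=2, m=2.0, n_iter=50, seed=1):
    """Basic fuzzy c-means with Euclidean distance (sketch)."""
    rng = random.Random(seed)
    n, dim = len(points), len(points[0])
    # Random initial memberships, each row normalized to sum to 1.
    u = []
    for _ in range(n):
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        u.append([v / s for v in row])
    for _ in range(n_iter):
        # Centers: membership-weighted means of the points.
        centers = []
        for k in range(c):
            w = [u[i][k] ** m for i in range(n)]
            tot = sum(w)
            centers.append([sum(w[i] * points[i][d] for i in range(n)) / tot
                            for d in range(dim)])
        # Memberships: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        for i in range(n):
            d = [max(math.dist(points[i], centers[k]), 1e-12) for k in range(c)]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                    for j in range(c))
    return centers, u

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
centers, u = fuzzy_c_means(pts)
```

The fuzzifier m > 1 controls how soft the memberships are; the evolutionary layer in the paper scores candidate distance definitions by how well this loop performs under each.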
Permanent link for citations:
‣ A Minimax Robust Decoding Algorithm
Source: Institute of Electrical and Electronics Engineers (IEEE Inc)
Publisher: Institute of Electrical and Electronics Engineers (IEEE Inc)
Type: Scientific journal article
Portuguese
Search relevance
36.29549%
#Decoding algorithm#Impulsive noise#Sum product algorithm#Turbo decoding#Viterbi algorithm#Algorithms#Communication channels (information theory)#Optimization#Probability density function#Signal processing#Spurious signal noise
In this correspondence we study the decoding problem in an uncertain noise environment. If the receiver knows the noise probability density function (pdf) at each time slot or its a priori probability, the standard Viterbi algorithm (VA) or the a posteriori probability (APP) algorithm can achieve optimal performance. However, if the actual noise distribution differs from the noise model used to design the receiver, there can be significant performance degradation due to the model mismatch. The minimax concept is used to minimize the worst possible error performance over a family of possible channel noise pdf's. We show that the optimal robust scheme is difficult to derive; therefore, alternative, practically feasible, robust decoding schemes are presented and implemented on VA decoder and two-way APP decoder. Performance analysis and numerical results show our robust decoders have a performance advantage over standard decoders in uncertain noise channels, with no or little computational overhead. Our robust decoding approach can also explain why for turbo decoding overestimating the noise variance gives better results than underestimating it.
Permanent link for citations:
‣ Mixed-Integer Constrained Optimization Based on Memetic Algorithm
Source: UNAM, Centro de Ciencias Aplicadas y Desarrollo Tecnológico
Publisher: UNAM, Centro de Ciencias Aplicadas y Desarrollo Tecnológico
Type: Scientific journal article
Format: text/html
Published 01/04/2013
Portuguese
Search relevance
36.26778%
#Evolutionary algorithm#memetic algorithm#mixed-integer hybrid differential evolution#Lagrange method
Evolutionary algorithms (EAs) are population-based global search methods that have been successfully applied to many complex optimization problems. However, EAs frequently fail to converge to a solution in the absence of local search mechanisms. Memetic algorithms (MAs) are hybrid EAs that combine genetic operators with local search methods; with global exploration and local exploitation of the search space, MAs can obtain higher-quality solutions. On the other hand, mixed-integer hybrid differential evolution (MIHDE), an EA-based search algorithm, has been successfully applied to many mixed-integer optimization problems. In this paper, a memetic algorithm based on MIHDE is developed for solving mixed-integer optimization problems. Most real-world mixed-integer optimization problems, however, include equality and/or inequality constraints. In order to handle constraints effectively, an evolutionary Lagrange method based on the memetic algorithm is developed to solve mixed-integer constrained optimization problems. The proposed algorithm is implemented and tested on two benchmark mixed-integer constrained optimization problems. Experimental results show that the proposed algorithm can find better optimal solutions compared with some other search algorithms. Therefore...
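The EA + local-search division of labor that defines a memetic algorithm fits in a few lines. The sketch below is a generic continuous, unconstrained MA (truncation selection, uniform crossover, Gaussian mutation, hill-climbing refinement); MIHDE's mixed-integer machinery and the evolutionary Lagrange constraint handling are deliberately not reproduced, and all parameters are illustrative.

```python
import random

def memetic_minimize(f, bounds, pop_size=20, gens=60, seed=3):
    """Tiny memetic algorithm: an elitist EA whose offspring are refined
    by a short hill-climbing step (the 'meme')."""
    rng = random.Random(seed)

    def clamp(x):
        return [min(hi, max(lo, v)) for v, (lo, hi) in zip(x, bounds)]

    def refine(x):
        """Local search 'meme': short random hill climb around x."""
        fx = f(x)
        for _ in range(10):
            y = clamp([v + rng.uniform(-0.1, 0.1) for v in x])
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
        return fx, x

    pop = sorted(refine([rng.uniform(lo, hi) for lo, hi in bounds])
                 for _ in range(pop_size))
    for _ in range(gens):
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        for _ in range(pop_size - len(parents)):
            (_, a), (_, b) = rng.sample(parents, 2)
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            child = clamp([v + rng.gauss(0.0, 0.05) for v in child])
            children.append(refine(child))      # refine every offspring
        pop = sorted(parents + children)
    return pop[0]

f_best, x_best = memetic_minimize(lambda x: sum(v * v for v in x),
                                  [(-3.0, 3.0)] * 2)
```

The abstract's point is visible in the structure: the EA supplies global exploration, while `refine` supplies the local exploitation that plain EAs lack.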
Permanent link for citations: