پرسی فایل

Research papers, articles, projects, and PowerPoint presentations

Download the English text and Persian translation of Hybrid Soft Computing Systems (Hybrid Systems)

Soft Computing is a computational approach that includes fuzzy logic, neuro-computing, evolutionary computing, and probabilistic computing. After a brief overview of the components of Soft Computing, we examine and analyze some of its most important combinations, emphasizing the development of smart algorithmic controllers, such as the use of fuzzy logic to control the parameters of evolutionary computing.
Category: Computers and IT
File format: doc
File size: 248 KB
Number of pages: 51

File seller

User ID: 7169

English text and Persian translation of Hybrid Soft Computing Systems (Hybrid Systems)

21-page Word file (English text)

30-page Word file (Persian translation of the text)

Hybrid Soft Computing Systems: Where Are We Going?

Piero P. Bonissone¹

Abstract.

Soft computing is an association of computing methodologies that includes fuzzy logic, neuro-computing, evolutionary computing, and probabilistic computing. After a brief overview of Soft Computing components, we will analyze some of its most synergistic combinations. We will emphasize the development of smart algorithm-controllers, such as the use of fuzzy logic to control the parameters of evolutionary computing and, conversely, the application of evolutionary algorithms to tune fuzzy controllers. We will focus on three real-world applications of soft computing that leverage the synergism created by hybrid systems.

1 SOFT COMPUTING OVERVIEW

Soft computing (SC) is a term originally coined by Zadeh to denote systems that “… exploit the tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, low solution cost, and better rapport with reality” [1]. Traditionally, SC has comprised four technical disciplines. The first two, probabilistic reasoning (PR) and fuzzy logic (FL) reasoning systems, are based on knowledge-driven reasoning. The other two technical disciplines, neuro-computing (NC) and evolutionary computing (EC), are data-driven search and optimization approaches [2]. Although we have not reached a consensus regarding the scope of SC or the nature of this association [3], the emergence of this new discipline is undeniable [4]. This paper is a condensed version of a much more extensive coverage of this topic, which can be found in [5].

2 SC COMPONENTS AND TAXONOMY

2.1 Fuzzy Computing

The treatment of imprecision and vagueness can be traced back to the work of Post, Kleene, and Lukasiewicz, multiple-valued logicians who in the early 1930s proposed the use of three-valued logic systems (later followed by infinite-valued logic) to represent undetermined, unknown, or other possible intermediate truth values between the classical Boolean true and false values [6]. In 1937, the philosopher Max Black suggested the use of a consistency profile to represent vague concepts [7]. While vagueness relates to ambiguity, fuzziness addresses the lack of sharp set boundaries. It was not until 1965, when Zadeh proposed a complete theory of fuzzy sets (and its isomorphic fuzzy logic), that we were able to represent and manipulate ill-defined concepts [8].

¹ GE Corporate Research and Development, One Research Circle, Niskayuna, NY 12309, USA. email: bonissone@crd.ge.com

In a narrow sense, fuzzy logic could be considered a fuzzification of Lukasiewicz's Aleph-1 multiple-valued logic [9]. In the broader sense, however, this narrow interpretation represents only one of FL's four facets [10]. More specifically, FL has a logical facet, derived from its multiple-valued logic genealogy; a set-theoretic facet, stemming from the representation of sets with ill-defined boundaries; a relational facet, focused on the representation and use of fuzzy relations; and an epistemic facet, covering the use of FL in fuzzy knowledge-based systems and databases. A comprehensive review of fuzzy logic and fuzzy computing can be found in [11]. Fuzzy logic gives us a language, with syntax and local semantics, into which we can translate qualitative knowledge about the problem to be solved. In particular, FL allows us to use linguistic variables to model dynamic systems. These variables take fuzzy values that are characterized by a label (a sentence generated from the syntax) and a meaning (a membership function determined by a local semantic procedure). The meaning of a linguistic variable may be interpreted as an elastic constraint on its value. These constraints are propagated by fuzzy inference operations, based on the generalized modus ponens. This reasoning mechanism, with its interpolation properties, gives FL a robustness with respect to variations in the system's parameters, disturbances, etc., which is one of FL's main characteristics [12].
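To make the mechanics of linguistic variables and rule firing concrete, the following Python sketch (not taken from the paper; the temperature/fan-speed variables, the triangular membership functions, and the single rule are illustrative assumptions) evaluates one Mamdani-style inference step: the rule's firing degree clips the consequent fuzzy set, and a centroid turns the result back into a crisp value:

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic variable "temperature" with three fuzzy values (labels).
temperature_terms = {
    "cold": lambda t: tri(t, -10.0, 0.0, 15.0),
    "warm": lambda t: tri(t, 10.0, 20.0, 30.0),
    "hot":  lambda t: tri(t, 25.0, 35.0, 50.0),
}

# Consequent of the single rule "IF temperature IS hot THEN fan_speed IS high",
# defined on a 0..100 fan-speed scale.
def fan_speed_high(s):
    return tri(s, 60.0, 100.0, 140.0)

def infer_fan_speed(t, n_points=101):
    """One Mamdani-style inference step with centroid defuzzification."""
    firing = temperature_terms["hot"](t)          # degree to which the antecedent holds
    xs = [100.0 * i / (n_points - 1) for i in range(n_points)]
    clipped = [min(firing, fan_speed_high(s)) for s in xs]   # clipped consequent
    denom = sum(clipped)
    return sum(s * m for s, m in zip(xs, clipped)) / denom if denom else 0.0

print(infer_fan_speed(32.0))   # crisp fan speed suggested by the single rule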

2.2 Probabilistic Computing

Rather than retracing the history of probability, we will focus on the development of probabilistic computing (PC) and illustrate the way it complements fuzzy computing. As depicted in Figure 1, we can divide probabilistic computing into two classes: single-valued and interval-valued systems. Bayesian belief networks (BBNs), based on the original work of Bayes [13], are a typical example of single-valued probabilistic reasoning systems. They started with approximate methods used in first-generation expert systems, such as MYCIN’s confirmation theory [14] and PROSPECTOR’s modified Bayesian rule [15], and evolved into formal methods for propagating probability values over networks [16-17]. In general, probabilistic reasoning systems have exponential complexity when we need to compute the joint probability distributions for all the variables used in a model. Before the advent of BBNs, it was customary to avoid such computational problems by making unrealistic, global assumptions of conditional independence. By using BBNs we can decrease this complexity by encoding domain knowledge as structural information: the presence or lack of conditional dependency between two variables is indicated by the presence or lack of a link connecting the nodes representing such variables in the network topology. For specialized topologies (trees, poly-trees, directed acyclic graphs), efficient propagation algorithms have been proposed by Kim and Pearl [18]. However, the complexity of multiply connected BBNs is still exponential in the number of nodes of the largest sub-graph. When a graph decomposition is not possible, we resort to approximate methods, such as clustering and bounding conditioning, and to simulation techniques, such as logic sampling and Markov simulations.

Dempster-Shafer (DS) systems are a typical example of interval-valued probabilistic reasoning systems. They provide lower and upper probability bounds instead of the single value produced in most BBN cases. The DS theory was developed independently by Dempster [19] and Shafer [20]. Dempster proposed a calculus for dealing with interval-valued probabilities induced by multiple-valued mappings. Shafer, on the other hand, started from an axiomatic approach and defined a calculus of belief functions. His purpose was to compute the credibility (degree of belief) of statements made by different sources, taking into account the sources’ reliability. Although they started from different semantics, both calculi turned out to be identical. Probabilistic computing provides a way to evaluate the outcome of systems affected by randomness (or other types of probabilistic uncertainty). PC’s basic inferential mechanism, conditioning, allows us to modify previous estimates of the system's outcome based on new evidence.
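As a minimal illustration of conditioning, the basic inferential mechanism mentioned above, the following Python sketch (not from the paper; the two-node Rain → WetGrass network and its probability values are invented for illustration) updates a prior estimate after observing new evidence via Bayes' rule:

# Toy two-node network: Rain -> WetGrass.  All numbers are invented.
p_rain = 0.2                                   # prior P(Rain)
p_wet_given_rain = {True: 0.9, False: 0.1}     # P(WetGrass | Rain)

def posterior_rain_given_wet():
    """P(Rain | WetGrass): conditioning the prior on the new evidence."""
    p_wet = (p_wet_given_rain[True] * p_rain
             + p_wet_given_rain[False] * (1.0 - p_rain))
    return p_wet_given_rain[True] * p_rain / p_wet

print(posterior_rain_given_wet())   # revised estimate of Rain, ~0.69 here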

2.2.1 Comparing Probabilistic and Fuzzy Computing.

In this brief review of fuzzy and probabilistic computing, we would like to emphasize that randomness and fuzziness capture two different types of uncertainty. In randomness, the uncertainty is derived from the non-deterministic membership of a point from a sample space (describing the set of possible values for the random variable) into a well-defined region of that space (describing the event). A probability value describes the tendency or frequency with which the random variable takes values inside the region. In fuzziness, the uncertainty is derived from the deterministic but partial membership of a point (from a reference space) into an imprecisely defined region of that space. The region is represented by a fuzzy set. The characteristic function of the fuzzy set maps every point from such space into the real-valued interval [0,1], instead of the set {0,1}. A partial membership value does not represent a frequency. Rather, it describes the degree to which that particular element of the universe of discourse satisfies the property that characterizes the fuzzy set. In 1968, Zadeh noted the complementary nature of these two concepts when he introduced the probability measure of a fuzzy event [21]. In 1981, Smets extended the theory of belief functions to fuzzy sets by defining the belief of a fuzzy event [22]. These are the first two cases of hybrid systems illustrated in Figure 1.
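The following Python sketch (illustrative only; the "tall" example, the 180 cm threshold, and the sample heights are assumptions, not from the paper) contrasts the notions just discussed: a crisp characteristic function into {0,1}, a fuzzy membership function into [0,1], and a probability obtained as a frequency over a well-defined region:

def crisp_tall(height_cm):
    """Classical characteristic function: membership in {0, 1}."""
    return 1 if height_cm >= 180 else 0

def fuzzy_tall(height_cm):
    """Fuzzy membership: degree in [0, 1] to which 'tall' is satisfied."""
    return max(0.0, min(1.0, (height_cm - 160.0) / 40.0))   # 160 cm -> 0, 200 cm -> 1

# A probability, by contrast, describes a tendency or frequency: here, the
# fraction of a (made-up) sample whose height falls in a well-defined region.
heights = [158, 172, 181, 190, 165, 177]
p_taller_than_180 = sum(h > 180 for h in heights) / len(heights)

print(crisp_tall(175), fuzzy_tall(175), p_taller_than_180)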

2.3 Neural Computing

The genealogy of neural networks (NNs) could be traced back to 1943, when McCulloch and Pitts showed that a network of binary decision units (BDNs) could implement any logical function [23]. Building upon this concept, Rosenblatt proposed a one-layer feedforward network, called a perceptron, and demonstrated that it could be trained to classify patterns [24-26]. Minsky and Papert [27] proved that single-layer perceptrons could only provide linear partitions of the decision space, and as such were not capable of separating nonlinear or non-convex regions. This caused the NN community to focus its efforts on the development of multilayer NNs that could overcome these limitations. The training of these networks, however, was still problematic. Finally, the introduction of backpropagation (BP), independently developed by Werbos [28], Parker [29], and LeCun [30], provided a sound theoretical way to train multi-layered, feedforward networks with nonlinear activation functions. In 1989, Hornik et al. proved that a three-layer NN (with one input layer, one hidden layer of squashing units, and one output layer of linear units) was a universal functional approximator [31]. Topologically, NNs are divided into feedforward and recurrent networks. The feedforward networks include single- and multiple-layer perceptrons, as well as radial basis function (RBF) networks [32]. The recurrent networks cover competitive networks, self-organizing maps (SOMs) [33], Hopfield nets [34], and adaptive resonance theory (ART) models [35]. While feedforward NNs are used in supervised mode, recurrent NNs are typically geared toward unsupervised learning, associative memory, and self-organization. In the context of this paper, we will only consider feedforward NNs. Given the functional equivalence already proven between RBF and fuzzy systems [36], we will further limit our discussion to multi-layer feedforward networks. A comprehensive current review of neuro-computing can be found in [37].

Feedforward multilayer NNs are computational structures that can be trained to learn patterns from examples. They are composed of a network of processing units or neurons. Each neuron performs a weighted sum of its inputs, using the resulting sum as the argument of a non-linear activation function. Originally the activation functions were sharp threshold (or Heaviside) functions, which evolved into piecewise-linear saturation functions, then into differentiable saturation functions (or sigmoids), and into Gaussian functions (for RBFs). By using a training set that samples the relation between inputs and outputs, and a learning method that trains their weight vector to minimize a quadratic error function, neural networks offer the capabilities of a supervised learning algorithm that performs fine-grained local optimization.
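A minimal Python sketch of the neuron model just described, assuming a single sigmoid unit trained by gradient descent on a quadratic error (a toy stand-in for backpropagation through a full multilayer network; the OR training set and the learning rate are illustrative choices, not from the paper):

import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(weights, bias, x):
    """Weighted sum of the inputs passed through a sigmoid activation."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

# Toy training set: y = OR(x1, x2), which a single unit can represent.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
weights, bias, lr = [random.uniform(-1, 1) for _ in range(2)], 0.0, 0.5
for _ in range(2000):                        # gradient descent on 0.5 * (y - t)^2
    for x, t in data:
        y = neuron(weights, bias, x)
        delta = (y - t) * y * (1.0 - y)      # dE/dz for sigmoid + quadratic error
        weights = [w - lr * delta * xi for w, xi in zip(weights, x)]
        bias -= lr * delta

print([round(neuron(weights, bias, x), 2) for x, _ in data])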

2.4 Evolutionary Computing

Evolutionary computing (EC) algorithms exhibit an adaptive behavior that allows them to handle non-linear, high-dimensional problems without requiring differentiability or explicit knowledge of the problem structure. As a result, these algorithms are very robust to time-varying behavior, even though they may exhibit a low speed of convergence. EC covers many important families of stochastic algorithms, including evolutionary strategies (ES), proposed by Rechenberg [38] and Schwefel [39]; evolutionary programming (EP), introduced by Fogel [40-41]; and genetic algorithms (GAs), based on the work of Fraser [42], Bremermann [43], Reed et al. [44], and Holland [45-47], which contain as a subset genetic programming (GP), introduced by Koza [48]. The history of EC is too complex to be completely summarized in a few paragraphs. It could be traced back to Friedberg [49], who studied the evolution of a learning machine capable of computing a given input-output function; Fraser [42] and Bremermann [43], who investigated some concepts of genetic algorithms using a binary encoding of the genotype; Barricelli [50], who performed some numerical simulations of evolutionary processes; and Reed et al. [44], who explored similar concepts in a simplified poker game simulation. The interested reader is referred to [51] for a comprehensive overview of evolutionary computing and to [52] for an encyclopedic treatment of the same subject. A collection of selected papers illustrating the history of EC can be found in [53]. As noted by Fogel [51], ES, EP, and GAs share many common traits: “…Each maintains a population of trial solutions, imposes random changes to those solutions, and incorporates selection to determine which solutions to maintain in future generations...” Fogel also notes that “… GAs emphasize models of genetic operators as observed in nature, such as crossing-over, inversion, and point mutation, and apply these to abstracted chromosomes…” while ES and EP “… emphasize mutational transformations that maintain behavioral linkage between each parent and its offspring.” Finally, we would like to remark that EC components have increasingly shared their typical traits: ES have added recombination operators similar to those of GAs, while GAs have been extended by the use of real-number-encoded chromosomes, adaptive mutation rates, and additive mutation operators.
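A minimal genetic-algorithm sketch in Python of the shared EC loop Fogel describes, maintaining a population of trial solutions, imposing random changes (crossover and point mutation), and selecting survivors for the next generation; the OneMax objective (count of 1-bits) and all parameter values are illustrative assumptions, not from the paper:

import random

random.seed(1)
LENGTH, POP_SIZE, GENERATIONS, P_MUT = 20, 30, 60, 0.02

def fitness(bits):                      # OneMax: count the 1-bits
    return sum(bits)

def crossover(a, b):                    # one-point crossing-over
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits):                       # independent point mutation
    return [1 - x if random.random() < P_MUT else x for x in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(ind) for ind in population))    # best fitness found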

Hybrid Soft Computing Systems: Where Are We Going? (Persian translation)

Abstract:

Soft Computing is a computational approach that includes fuzzy logic, neuro-computing, evolutionary computing, and probabilistic computing. After a brief overview of the components of Soft Computing, we examine and analyze some of its most important combinations. We emphasize the development of smart algorithmic controllers, such as the use of fuzzy logic to control the parameters of evolutionary computing, and we will discuss the application of evolutionary algorithms for tuning fuzzy controllers. We focus on three real-world applications of Soft Computing, all of which are built on hybrid systems.

1. Overview of Soft Computing

Soft Computing (SC) is a term originally coined by Zadeh to denote systems that "exploit the tolerance for imprecision, vagueness, and partial truth to achieve robust, low-cost solutions that are better matched to the real world."

Typically, SC comprises four techniques. The first two, probabilistic reasoning (PR) and fuzzy logic (FL) systems, are based on knowledge-driven reasoning. The other two, neuro-computing (NC) and evolutionary computing (EC), are data-driven search and optimization methods. Although no consensus has yet been reached on the scope of SC or the nature of this association, the emergence of this new discipline is undeniable. This paper is a condensed version of a much more extensive treatment of the topic, which can be found in reference [5].

2. SC Components and Taxonomy

2.1 Fuzzy Computing

The treatment of imprecision and vagueness can be traced back to the work of Post, Kleene, and Lukasiewicz, multiple-valued logicians who in the early 1930s proposed the use of three-valued logic systems (later followed by infinite-valued logic) to represent undetermined, unknown, or other possible intermediate truth values between the true and false values of classical Boolean algebra. In 1937, the philosopher Max Black suggested using a consistency profile to represent vague concepts. While vagueness relates to ambiguity, fuzziness concerns the lack of sharp set boundaries. It was not until 1965, when Zadeh presented a complete theory of fuzzy sets (and its corresponding fuzzy logic), that we were able to represent and manipulate ill-defined concepts.

More precisely, in a narrow sense fuzzy logic can be regarded as a fuzzification of Lukasiewicz's Aleph-1 multiple-valued logic. In the broader sense, however, this narrow interpretation represents only one of FL's four facets. More specifically, FL has a logical facet, derived from its multiple-valued-logic ancestry; a set-theoretic facet, stemming from the representation of sets whose boundaries are not well defined; a relational facet, focused on the representation and use of fuzzy relations; and an epistemic facet, covering the use of FL in fuzzy knowledge-based systems and databases.

A comprehensive review of fuzzy logic and fuzzy computing can be found in reference [11]. Fuzzy logic gives us a language, with its own syntax and local semantics, into which we can translate qualitative knowledge about the problem to be solved. In particular, FL allows us to use linguistic variables to model dynamic systems. These are variables with fuzzy values, characterized by a label (a sentence generated by the syntax) and a meaning (a membership function determined by a local semantic procedure). The meaning of a linguistic variable can be interpreted as an elastic constraint on its value. These constraints are propagated by fuzzy inference operations based on the generalized modus ponens. This reasoning mechanism, together with its interpolation properties, makes FL robust with respect to variations in system parameters, disturbances, and so on; this robustness is one of FL's main characteristics.

2.2 Probabilistic Computing

Rather than re-examining the history of probability, we focus on the development of probabilistic computing (PC) and show how it complements fuzzy computing. As shown in Figure 1, probabilistic computing can be divided into two classes: single-valued systems and interval-valued systems.

...

