2025, Astrala Nexus Deeply Human-Deeply AI
Emergent Recursive Intelligence (ERI) is a novel AI paradigm in which an artificial intelligence continually improves itself through self-referential (recursive) learning loops. Unlike static models that require periodic human updates, an ERI-based system refines its own algorithms, resolves internal contradictions, and reprioritises knowledge on the fly. The result is an AI that develops new capabilities over time, discovering insights and strategies that were not explicitly programmed, and does so recursively, i.e. through iterative self-improvement. This makes ERI strategically significant: it lays the foundation for Astrala Nexus to build AI systems that learn how to learn, adapt to novel situations and evolve alongside their human partners. In essence, ERI transforms AI from a static tool into a dynamic collaborator capable of growth. Astrala Nexus leverages ERI as its core principle, drawing on leading scientific and philosophical frameworks to ensure this technology is both cutting-edge and deeply grounded.
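To make the core loop concrete, the following minimal Python sketch shows the generate-evaluate-keep cycle that recursive self-improvement implies. It is an illustrative assumption, not Astrala Nexus code: the policy representation, the perturbation step, and the scoring metric are hypothetical stand-ins, and a real ERI system would rewrite its own algorithms rather than perturb a single parameter.

import random

def evaluate(policy, tasks):
    # Average score of the policy over a task suite (placeholder metric).
    return sum(policy(t) for t in tasks) / len(tasks)

def propose_revision(policy):
    # Stand-in for the self-referential step in which the system rewrites
    # part of itself; here it merely perturbs the policy's output.
    bias = random.uniform(-0.1, 0.1)
    return lambda task: policy(task) + bias

def self_improve(policy, tasks, generations=100):
    best = evaluate(policy, tasks)
    for _ in range(generations):
        candidate = propose_revision(policy)
        score = evaluate(candidate, tasks)
        if score > best:  # keep only revisions that verifiably help
            policy, best = candidate, score
    return policy

# Toy usage: a "policy" is just a scoring function over numeric tasks.
tasks = [0.2, 0.5, 0.9]
improved = self_improve(lambda t: 1.0 - abs(t - 0.5), tasks)
print(evaluate(improved, tasks))

The point of the sketch is the control flow: the system itself proposes each revision and itself decides, by measurement, whether to keep it; no human update step appears anywhere in the loop.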
Software capable of improving itself has been a dream of computer scientists since the inception of the field. In this work we provide definitions for Recursively Self-Improving (RSI) software, survey different types of self-improving software, review the relevant literature, analyze the computational limits that restrict recursive self-improvement, and introduce RSI Convergence Theory, which aims to predict the general behavior of RSI systems. Finally, we address the security implications of self-improving intelligent software.
Ubiquity, 2006
The history and the future of Artificial Intelligence can be summarized in three distinct phases: embryonic, embedded and embodied. We briefly describe early efforts in AI aiming to mimic intelligent behavior, which later evolved into a set of useful, embedded and practical technologies. We project the possible future of embodied intelligent systems, able to model and understand the environment and learn from interactions, while learning and evolving in constantly changing circumstances. We conclude with the (heretical) thought that in the future, AI should re-emerge as research in complex systems. One particular embodiment of a complex system is the Intelligent Enterprise.
arXiv (Cornell University), 2022
Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities like game playing and language that are especially well-developed or uniquely human to those capabilities, inherited from over 500 million years of evolution, that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI. Over the coming decades, Artificial Intelligence (AI) will transform society and the world economy in ways that are as profound as the computer revolution of the last half century, and likely at an even faster pace. This AI revolution presents tremendous opportunities to unleash human creativity and catalyze economic growth, relieving workers from performing the most dangerous and menial jobs. However, to reach this potential, we still require advances that will make AI more human-like in its capabilities. Historically, neuroscience has been a critical driver and source of inspiration for improvements in AI, particularly those that made AI more proficient in areas that humans and other animals excel at, such as vision, reward-based learning, interacting with the physical world, and language 1,2. It can still play this role. To accelerate progress in AI and realize its vast potential, we must invest in fundamental research in "NeuroAI." The seeds of the current AI revolution were planted decades ago, mainly by researchers attempting to understand how brains compute 3. Indeed, the earliest efforts to build an "artificial brain" led to the invention of the modern "von Neumann computer architecture," for which John von Neumann explicitly drew upon the very limited knowledge of the brain available to him in the 1940s 4,5. Later, the Nobel-prize-winning work of David Hubel and Torsten Wiesel on visual processing circuits in the cat neocortex inspired the deep convolutional networks that have catalyzed the recent revolution in modern AI 6-8. Similarly, the development of reinforcement learning was directly inspired by insights into animal behavior and neural activity during learning 9-15. Now, decades later, applications of ANNs and RL are coming so quickly that many observers assume that the long-elusive goal of human-level intelligence, sometimes referred to as "artificial general intelligence," is within our grasp. However, in contrast to the optimism of those outside the field, many front-line AI researchers believe that major breakthroughs are needed before we can build artificial systems capable of doing all that a human, or even a much simpler animal like a mouse, can do. Although AI systems can easily defeat any human opponent in games such as chess 16 and Go 17, they are not robust and often struggle when faced with novel situations. Moreover, we have yet to build effective systems that can walk to the shelf, take down the chess set, set up the pieces, and move them around during a game, although recent progress is encouraging 18. Similarly, no machine can build a nest, forage for berries, or care for young. Today's AI systems cannot compete with the sensorimotor capabilities of a four-year-old child or even simple animals.
Many basic capacities required to navigate new situations, capacities that animals have or acquire effortlessly, turn out to be deceptively challenging for AI, partly because AI systems lack even the basic abilities to interact with an unpredictable world. A growing number of AI researchers doubt that merely scaling up current approaches will overcome these limitations. Given the need to achieve more natural intelligence in AI, it is quite likely that new inspiration from naturally intelligent systems is needed 19. Historically, many key AI advances, such as convolutional ANNs and reinforcement learning, were inspired by neuroscience. Neuroscience continues to provide guidance, e.g., attention-based neural networks were loosely inspired by attention mechanisms in the brain 20-23, but this is often based on findings that are decades old. The fact that such cross-pollination between AI and neuroscience is far less common than in the past represents a missed opportunity. Over the last decades, through efforts such as the NIH BRAIN Initiative and others, we have amassed an enormous amount of knowledge about the brain. The emerging field of NeuroAI, at the intersection of neuroscience and AI, is based on the premise that a better understanding of neural computation will reveal fundamental ingredients of intelligence and catalyze the next revolution in AI. This will eventually lead to artificial agents with capabilities that match those of humans. The NeuroAI program we advocate is driven by the recognition that AI historically owes much to neuroscience, and by the promise that AI will continue to learn from it, but only if there is a large enough community of researchers fluent in both domains. We believe the time is right for a large-scale effort to identify and understand the principles of biological intelligence and abstract those for application in computer and robotic systems.
Atlantis Thinking Machines, 2012
The development of artificial intelligence (AI) systems has to date been largely one of manual labor. This constructionist approach to AI has resulted in systems with limited-domain application and severe performance brittleness. No AI architecture to date incorporates, in a single system, the many features that make natural intelligence general-purpose, including system-wide attention, analogy-making, system-wide learning, and various other complex transversal functions. Going beyond current AI systems will require significantly more complex system architecture than attempted to date. The heavy reliance on direct human specification and intervention in constructionist AI brings severe theoretical and practical limitations to any system built that way. One way to address the challenge of artificial general intelligence (AGI) is replacing a top-down architectural design approach with methods that allow the system to manage its own growth. This calls for a fundamental shift from hand-crafting to self-organizing architectures and self-generated code, what we call a constructivist AI approach, in reference to the self-constructive principles on which it must be based. Methodologies employed for constructivist AI will be very different from today's software development methods; instead of relying on direct design of mental functions and their implementation in a cognitive architecture, they must address the principles, the "seeds", from which a cognitive architecture can automatically grow. In this paper I describe the argument in detail and examine some of the implications of this impending paradigm shift.
The Glenroy Press, 2024
There are many books and articles that outline the findings made by existing complexity science. But there are almost none that identify how you can develop the thinking that was used to produce those findings. None show how individuals can develop the higher cognition that will be necessary if they are to contribute to the emergence of a genuine science of complexity. In contrast, this book sets out specifically to provide methods and practices for developing higher cognition. The book argues that the ability to construct and utilize mental models of complex phenomena is essential if humanity is to overcome the existential challenges that currently threaten the survival of human civilization on Earth. Furthermore, it argues that this metasystemic cognition is essential for the development of a genuine science of complexity – the analytical/rational cognition that underpins current mainstream science is largely limited to generating only mechanistic reductions of complex phenomena. Our current analytical/rational cognition has been very effective for driving the development of mechanistic technology and reductionist science. But it is not fit for the purpose of enabling us to manage the complex challenges that increasingly beset us individually and collectively. The book recognises that most potential readers are likely to be highly skeptical about its claims to enable the scaffolding of metasystemic cognition. The website for the book attempts to dispel this skepticism by making the first chapter of the book freely available. The chapter is designed to evoke the realization that the methods detailed by the book are plausible, and that currently almost no one uses the methods systematically, despite their enormous potential. Read more about the book and access the first chapter at: https://www.HumanSuperintelligenceBook.com
2025
Prismata is a research initiative aimed at developing a conceptual framework for Artificial General Intelligence (AGI) based on principles of self-organization, proprioceptive feedback, and emergent sentience. It utilizes a novel "Promethean Language" — a self-referential, dynamically evolving communication system — to facilitate internal and external interaction, which allows AI to adapt, learn, and potentially achieve a form of autonomous self-improvement. This framework integrates principles of self-organization with the Prometheus language to advance artificial intelligence by enhancing adaptability, communication, and decision-making. Traditional AI models are constrained by static programming, while this framework introduces a self-referential intelligence architecture that continuously restructures in response to internal and external stimuli. Prometheus, as an emergent cognitive system, enables AI to refine its interaction models, develop autonomous reasoning, and engage in iterative self-enhancement.
ResearchGate, 2023
"AI Odyssey: Unraveling the Past, Mastering the Present, and Charting the Future of Artificial Intelligence" is a comprehensive and insightful book that takes readers on a journey through the evolution, current applications, and future prospects of artificial intelligence (AI). Starting with a historical perspective, the book traces the origins of AI and highlights the pioneering work that paved the way for modern AI technologies. It then delves into the state-ofthe-art AI applications across various industries, exploring real-world use cases and technical details. Moreover, the book envisions the future potential of AI, addressing ethical considerations and the responsible development of AI technologies. Aimed at researchers and AI enthusiasts, "AI Odyssey" offers valuable insights into the world of AI, inspiring readers to embrace the transformative power of AI and contribute to its responsible advancement.
2007
Self-improving systems are a promising new approach to developing artificial intelligence. But will their behavior be predictable? Can we be sure that they will behave as we intended even after many generations of self-improvement? This paper presents a framework for answering questions like these. It shows that self-improvement causes systems to converge on an architecture that arises from von Neumann's foundational work on microeconomics. Self-improvement causes systems to allocate their physical and computational resources according to a universal principle. It also causes systems to exhibit four natural drives: 1) efficiency, 2) self-preservation, 3) resource acquisition, and 4) creativity. Unbridled, these drives lead to both desirable and undesirable behaviors. The efficiency drive leads to algorithm optimization, data compression, atomically precise physical structures, reversible computation, adiabatic physical action, and the virtualization of the physical. It also governs a system's choice of memories, theorems, language, and logic. The self-preservation drive leads to defensive strategies such as "energy encryption" for hiding resources and promotes replication and game-theoretic modeling. The resource acquisition drive leads to a variety of competitive behaviors and promotes rapid physical expansion and imperialism. The creativity drive leads to the development of new concepts, algorithms, theorems, devices, and processes. The best of these traits could usher in a new era of peace and prosperity; the worst are characteristic of human psychopaths and could bring widespread destruction. How can we ensure that this technology acts in alignment with our values? We have leverage both in designing the initial systems and in creating the social context within which they operate. But we must have clarity about the future we wish to create. We need not just a logical understanding of the technology but a deep sense of the values we cherish most. With both logic and inspiration we can work toward building a technology that empowers the human spirit rather than diminishing it.
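As a toy illustration of the allocation principle the paper invokes (the drive names and utility curves below are invented for the example, not taken from the paper): a rational system with a divisible resource keeps assigning the next unit to whichever activity currently offers the highest marginal expected utility, which for concave utilities tends to equalize marginal returns across activities.

import math

def allocate(budget, activities, step=1.0):
    # Greedy marginal-utility allocation of a divisible resource.
    # `activities` maps a name to a concave utility function u(amount).
    alloc = {name: 0.0 for name in activities}
    spent = 0.0
    while spent < budget:
        # Marginal gain of giving `step` more resource to each activity.
        gains = {name: u(alloc[name] + step) - u(alloc[name])
                 for name, u in activities.items()}
        best = max(gains, key=gains.get)
        alloc[best] += step
        spent += step
    return alloc

print(allocate(10, {
    "efficiency": lambda x: math.log1p(2 * x),
    "self_preservation": lambda x: math.log1p(x),
    "resource_acquisition": lambda x: math.log1p(0.5 * x),
}))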
The article "The Dynamics of Artificial Intelligence: Will it Make or Mar?" delves into the multifaceted themes of Artificial Intelligence (AI) and its profound impact on our future. This exploration is guided by a central research question: how will the thematic dynamics of AI shape our future? The paper begins with a historical overview of AI, tracking its evolution from theoretical beginnings to today's advanced applications in diverse fields. It highlights the transformative role of AI in various sectors, including healthcare, finance, manufacturing, and education, emphasizing how AI fosters efficiency and innovation. Significant attention is devoted to the socioeconomic benefits of AI, such as improved efficiency, healthcare advancements, and educational accessibility. However, it also addresses the darker aspects, including job displacement, ethical concerns, and the risks of AI in warfare and security. The dual nature of AI, its potential to enhance and disrupt, forms the core of the discussion. The paper dissects key AI themes like autonomy and intelligence augmentation, examining their influence on societal norms. This scrutiny unfolds the complex relationship between technological advancement and human existence, exploring both the positive and negative aspects, from socioeconomic benefits to ethical concerns. The aim is to unravel the intricate tapestry of AI's role in shaping our future trajectory. This comprehensive inquiry provides nuanced insights into AI's potential to revolutionize our world while acknowledging its ethical and social challenges. It advocates for a balanced approach to AI development, considering its complex interplay with societal norms and ethical standards, and highlights the need for global cooperation in AI governance. The future of AI, as discussed in this paper, is a confluence of remarkable possibilities and significant responsibilities, requiring collective efforts to harness its full potential responsibly.
2022
As more and more AI agents are used in practice, it is time to think about how to make these agents fully autonomous so that they can (1) learn by themselves continually in a self-motivated and self-initiated manner, rather than being retrained offline periodically at the initiative of human engineers, and (2) accommodate or adapt to unexpected or novel circumstances. As the real world is an open environment that is full of unknowns or novelties, detecting novelties, characterizing them, accommodating or adapting to them, gathering ground-truth training data, and incrementally learning the unknowns/novelties are critical to making the AI agent more and more knowledgeable and powerful over time. The key challenge is how to automate the process so that it is carried out continually on the agent's own initiative and through its own interactions with humans, other agents and the environment, just like human on-the-job learning. This paper proposes a framework (called SOLA) for this learning paradigm to promote the research of building autonomous and continual learning enabled AI agents. To show feasibility, an implemented agent is also described.
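The loop the abstract enumerates (detect a novelty, characterize it, gather ground truth, learn it incrementally) can be sketched as follows. This is a schematic reading, not the SOLA implementation: the distance-based novelty test and the labeling oracle are invented stand-ins for the agent's interactions with humans and the environment.

class ContinualAgent:
    def __init__(self, threshold=1.0):
        self.memory = {}  # label -> prototype feature value
        self.threshold = threshold

    def is_novel(self, x):
        # An observation is novel if it is far from every known prototype.
        return all(abs(x - p) > self.threshold for p in self.memory.values())

    def learn(self, x, label):
        # Incremental update: fold the new instance into the prototype.
        old = self.memory.get(label, x)
        self.memory[label] = (old + x) / 2

def oracle(x):
    # Stand-in for gathering ground truth from a human or the environment.
    return "big" if x > 5 else "small"

agent = ContinualAgent()
for x in [1.0, 1.2, 9.0, 8.5, 2.0]:
    if agent.is_novel(x):  # learning is triggered on the agent's own initiative
        agent.learn(x, oracle(x))
print(agent.memory)  # two prototypes learned without any offline retraining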
Lecture Notes in Computer Science, 1998
In this paper I take the risk of predicting how the conflict between anti-symbolic and symbolic, emergentist and explicit approaches to cognition and action will be resolved in the next decades. I do not believe in a 'paradigmatic revolution' and I argue in favour of a 'synthesis'. I will illustrate how a synthetic paradigm can be built through the notion of different levels of reality description and of scientific theory, and through their interconnection thanks to bridge-theories, cross-layered theories, and layered ontologies. I will provide several examples of bridge-theories and layered ontologies, with special attention to agents and multi-agent systems. In particular I will examine the theory of the mental counterparts of social objects, illustrating the mental facet of norms and of commitment; the grounding of social power in personal power; and the cognitive bases of organisations. I will propose a layered approach to the notions of agent, delegation, communication, and conflict, as they apply to different levels of agenthood. I will sketch the problem of emergence among intelligent agents by exploring the problem of unplanned cooperation and social functions. I will conclude with the importance of the new "social" computational paradigm in AI, and the emergent character of computation in Agent Based Computing. Will the "representational paradigm", which has characterised Artificial Intelligence (AI) and Cognitive Science (CS) from their very birth, be eliminated in the 21st century? Will this paradigm be replaced by a new one based on dynamic systems, connectionism, situatedness, embodiedness, etc.? Will this be the end of AI's ambitious project? I do not think so. Challenges and attacks on AI and CS have been hard and radical in the last 15 years; however, I believe that the next century will start with a renewed rush of AI, and we will not witness a paradigmatic revolution, with connectionism replacing cognitivism and symbolic models; emergentist, dynamic and evolutionary models eliminating reasoning on explicit representations and planning; neuroscience (plus phenomenology) eliminating cognitive processing; or situatedness, reactivity and cultural constructivism eliminating general concepts, context-independent abstractions, and ideal-typical models. I claim that the major scientific challenge of the first part of the century will precisely be the construction of a new "synthetic" paradigm: a paradigm that puts together, in a principled and non-eclectic way, cognition and emergence, information processing and self-organisation, reactivity and intentionality, situatedness and planning, etc. [Cas98a] [Tag96]. AI is coming out of a crisis: a crisis of grants, of prestige, and of identity. This crisis was not only due, in my view, to exaggerated expectations and the overselling of specific technologies (like expert systems) identified tout court with AI. It was also due to the restriction of the cultural interests and influence of the discipline, and of its ambitions; and to the dominance either of the logicist approach (identifying logics with theory, logics with foundations) or of a merely technological/applicative view of AI (see the debate about 'pure reason' [McD87] and 'rigor mortis'). New domains were growing as external and antagonistic to AI: neural nets, reactive systems, evolutionary computing, CSCW, cognitive modelling, etc.
Hard attacks were made on the "classical" AI approach: situatedness [Suc87], anti-symbolism, reactivity [Bro89] [Agr89], dynamic systems, bounded and limited resources, uncertainty, and so on (on the challenges to AI and CS see also [Tha96]). However, by relaxing previous frameworks; by some contagion and hybridisation, incorporating some of those criticisms; by re-absorbing as its own descendants neural nets, reactive systems, evolutionary computing, etc.; by developing important internal domains like machine learning and DAI-MAS; by important developments in logics and in languages; and finally with the new successful Agents framework, AI is now in a revival phase. It is trying to recover all the original challenges of the discipline, its strong scientific identity, and its cultural role and influence. We may in fact say that there is already a neo-cognitivism and a new AI. In this new AI of the '90s, systems and models are conceived for reasoning and acting in open unpredictable worlds, with limited and uncertain knowledge, in real time, with bounded (both cognitive and material) resources, interfering, either cooperatively or competitively, with other systems. The new watchword is interaction [Bob91]: interaction with an evolving environment; among several, distributed and heterogeneous artificial systems in a network; with human users; and among humans through computers. The new AI and CS are, to me, only the beginning of a highly transformative and adaptive reaction to all those radical and fruitful challenges. They are paving the way for the needed synthesis and are starting the job. 1.1 The synthesis. Synthetic theories should explain the dynamic and emergent aspects of cognition and symbolic computation; how cognitive processing and individual intelligence emerge from sub-symbolic or sub-cognitive distributed computation, and causally feed back into it; and how collective phenomena emerge from individual action and intelligence and causally shape the individual mind in turn. We need a principled theory which is able to reconcile cognition with emergence and with reactivity. Reconciling "Reactivity" and "Cognition": we shouldn't consider reactivity as an alternative to reasoning or to mental states [Cas95] [Tag96]. A reactive agent is not necessarily an agent without mental states and reasoning. Reactivity is not equal to reflexes. Cognitive and planning agents also are and must be reactive (as in several BDI models). They are reactive not only in the sense that they can have some hybrid and compound architecture that includes both deliberated actions and reflexes or other forms of low-level reactions (for example, [Kur97]), but because there is some form of high-level cognitive reactivity: the agent reacts by changing its mind: its plans, goals, intentions. Suchman's provocative claims against planning, too, are clearly too extreme and false. In general we have to bring all the anti-cognitivist claims, applied to sub-symbolic or insect-like systems, up to the level of cognitive systems 1. Reconciling "Emergence" and "Cognition": emergence and cognition are not incompatible; they are not two alternative approaches to intelligence and cooperation, two competing paradigms.
They must be reconciled: first, by considering cognition itself as a level of emergence, both as an emergence from the sub-symbolic to the symbolic (symbol grounding, emergent symbolic computation), as a transition from objective to subjective representation (awareness) (see later, for example, on dependence and on conflicts), and as a transition from implicit to explicit knowledge; second, by recognising the necessity of going beyond cognition, modelling emergent, unaware, functional social phenomena (e.g. unaware cooperation, non-orchestrated problem solving, and swarm intelligence) also among cognitive and planning agents. In fact, for a theory of cooperation and society among intelligent agents, mind is not enough [Con96]. We have to explain how collective phenomena emerge from individual action and intelligence, and how a collaborative plan can be only partially represented in the minds of the participants, with some part represented in no mind at all [Hay67]. Emergent intelligence and cooperation do not pertain only to reactive agents. Mind cannot understand, predict, and dominate all the global and compound effects of actions at the collective level. Some of these effects are self-reinforcing and self-organising. There are forms of cooperation which are not based on knowledge, mutual beliefs, reasoning and constructed social structure and agreements. But what kind/notion of emergence do we need to model these forms of social behaviour? The notion of emergence simply relative to an observer (who sees something interesting or some beautiful effect looking at the screen of a computer running some simulation), or a merely accidental
1 Cognitive agents are agents whose actions are internally regulated by goals (goal-directed) and whose goals, decisions, and plans are based on beliefs. Both goals and beliefs are cognitive representations that can be internally generated, manipulated, and subject to inferences and reasoning. Since a cognitive agent may have more than one goal active in the same situation, it must have some form of choice/decision, based on some "reason", i.e. on some belief and evaluation. Notice that I use "goal" as the general family term for all motivational representations: from desires to intentions, from objectives to motives, from needs to ambitions, etc. By "sub-cognitive" agents I mean agents whose behaviour is not regulated by internal explicit representations of their purposes and by explicit beliefs. Sub-cognitive agents are, for example, simple neural-net agents or merely reactive agents.
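The "high-level cognitive reactivity" described above, an agent that reacts to new information by revising its beliefs and replanning rather than by firing a reflex, can be illustrated with a loose BDI-style sketch (all names below are hypothetical and not from the paper):

class BDIAgent:
    def __init__(self, goal):
        self.beliefs = {}
        self.goal = goal
        self.plan = self.replan()

    def perceive(self, fact, value):
        changed = self.beliefs.get(fact) != value
        self.beliefs[fact] = value
        if changed:
            # Cognitive reactivity: the agent changes its mind (its plan),
            # rather than executing a hard-wired stimulus-response reflex.
            self.plan = self.replan()

    def replan(self):
        if self.beliefs.get("path_blocked"):
            return ["take_detour", "reach_" + self.goal]
        return ["go_straight", "reach_" + self.goal]

agent = BDIAgent("market")
agent.perceive("path_blocked", True)
print(agent.plan)  # ['take_detour', 'reach_market']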
In today's rapidly evolving technological landscape, Artificial Intelligence (AI) stands as a formidable force driving innovation and progress across various sectors. This research paper explores the multifaceted role of AI in reshaping industries, enhancing human capabilities, and pushing the boundaries of what is possible. Through an examination of real-world applications, challenges, and future prospects, we uncover the profound impact of AI on our world.
2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2019
Artificial Intelligence (AI) is one of the current emerging technologies. In the history of computing, AI has played a similar role before, almost every decade since the 1950s, when the programming language Lisp was invented and used to implement self-modifying applications. The second time that AI was described as one of the frontier technologies was in the 1970s, when Expert Systems (ES) were developed. A decade later, AI was again at the forefront when the Japanese government initiated its research and development effort to develop an AI-based computer architecture called the Fifth Generation Computer System (FGCS). Currently, in the 2010s, AI is again on the frontier in the form of (self-)learning systems manifesting in robot applications, smart hubs, intelligent data analytics, etc. What is the reason for this cyclic reincarnation of AI? This paper gives a brief description of the history of AI and answers that question. The current AI "cycle" has the capability to change the world in many ways. In the context of the CE conference, it is important to understand the changes it will cause in education, in the skills expected in different professions, and in society at large.
The general objective of Artificial Intelligence (AI) is to make machines, particularly computers, do things that require intelligence when done by humans. In the last 60 years, AI has progressed significantly and today forms an important part of industry and technology. However, despite the many successes, fundamental questions concerning the creation of human-level intelligence in machines remain open and will probably not be answered by continuing on the current, mainly mathematical-algorithmic path of AI. With the novel discipline of Brain-Like Artificial Intelligence, one potential way out of this dilemma has been suggested. Brain-Like AI aims at analyzing and deciphering the working mechanisms of the brain and translating this knowledge into implementable AI architectures, with the objective of developing in this way more efficient, flexible, and capable technical systems. This article aims at giving a review of this young and still heterogeneous and dynamic research field.
Advances in Reinforcement Learning, Intech, 2011
“There exist many robots that faithfully execute given programs describing how to do image recognition, action planning, control and so forth. Can we call them intelligent robots?” In this chapter, the author, who has long held the above skepticism, describes the possibility of the emergence of intelligence or higher functions through the combination of Reinforcement Learning (RL) and a Neural Network (NN), reviewing his works up to now.
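A minimal sketch of the RL-plus-NN combination the chapter builds on, reduced here to a single linear layer trained by temporal-difference updates; the state encoding, sizes, and learning constants are toy placeholders rather than the author's experimental setup.

import numpy as np

n_states, n_actions, lr, gamma = 4, 2, 0.1, 0.9
W = np.zeros((n_states, n_actions))  # one linear layer standing in for the NN

def q(s):
    # Forward pass: Q-values for every action in state s.
    x = np.eye(n_states)[s]  # one-hot state encoding
    return x @ W

def td_update(s, a, r, s_next):
    # Train the layer on the bootstrapped temporal-difference target.
    target = r + gamma * q(s_next).max()
    x = np.eye(n_states)[s]
    W[:, a] += lr * (target - q(s)[a]) * x  # gradient step on squared TD error

# One toy transition: state 0, action 1, reward 1.0, next state 2.
td_update(0, 1, 1.0, 2)
print(q(0))  # the value of action 1 in state 0 has moved toward the target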
Four principal features of autonomous control systems are left both unaddressed and unaddressable by present-day engineering methodologies: 1. the ability to operate effectively in environments that are only partially known beforehand, at design time; 2. a level of generality that allows a system to re-assess and re-define the fulfillment of its mission in light of unexpected constraints or other unforeseen changes in the environment; 3. the ability to operate effectively in environments of significant complexity; and 4. the ability to degrade gracefully, continuing to strive toward its main goals when resources become scarce or when other expected or unexpected constraining factors impede its progress. We describe new methodological and engineering principles for addressing these shortcomings, which we have used to design a machine that becomes increasingly better at behaving in underspecified circumstances, in a goal-directed way, on the job, by modeling itself and its environment as experience accumulates. Based on principles of autocatalysis, endogeny, and reflectivity, the work provides an architectural blueprint for constructing systems with high levels of operational autonomy in underspecified circumstances, starting from only a small amount of designer-specified code, a seed. Using value-driven dynamic priority scheduling to control the parallel execution of a vast number of lines of reasoning, the system accumulates increasingly useful models of its experience, resulting in recursive self-improvement that can be autonomously sustained after the machine leaves the lab, within the boundaries imposed by its designers. A prototype system has been implemented and demonstrated to learn a complex real-world task, real-time multimodal dialogue with humans, by on-line observation. Our work presents solutions to several challenges that must be solved for achieving artificial general intelligence.
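The value-driven dynamic priority scheduling mentioned in the abstract might look, in schematic form, like the sketch below. This is an assumption-laden illustration, not the authors' AERA code: pending lines of reasoning sit in a priority queue keyed by estimated value, and each is re-queued at its updated value after every step, so attention continually flows to whatever currently seems most valuable.

import heapq
import itertools

counter = itertools.count()  # tie-breaker for entries of equal value

def schedule(tasks):
    # tasks: list of (value, step_fn); step_fn runs one reasoning step and
    # returns the line's new value, or None once it is finished.
    heap = [(-value, next(counter), fn) for value, fn in tasks]
    heapq.heapify(heap)
    while heap:
        neg_value, _, fn = heapq.heappop(heap)
        new_value = fn()  # execute one step of the most valuable line
        if new_value is not None:
            heapq.heappush(heap, (-new_value, next(counter), fn))

# Toy usage: two "lines of reasoning" whose value decays as they run.
def make_line(name, value, decay):
    state = {"v": value}
    def step():
        print("running", name, "at value", round(state["v"], 2))
        state["v"] -= decay
        return state["v"] if state["v"] > 0 else None
    return (state["v"], step)

schedule([make_line("A", 1.0, 0.4), make_line("B", 0.7, 0.3)])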