Summary
The basic principles guiding sensing, perception and action in biological systems appear to rely on highly organised spatio-temporal dynamics. In fact, all biological senses (vision, hearing, touch, etc.) process signals coming from different parts distributed in space and also show a complex time evolution. For example, the mammalian retina performs a parallel representation of the visual world, embodied in layers, each of which represents a particular detail of the scene. These results clearly indicate that visual perception begins at the level of the retina and is not related solely to the higher brain centres. Although vision remains the most useful sense for guiding everyday actions, the other senses, first of all hearing but also touch, become essential particularly in cluttered conditions, where visual percepts are obscured by the environment. Efficient use of hearing can be learnt from acoustic perception in animals and insects, such as crickets, which use this ancient sense more than all the others to perform a vital function such as mating.
Table of Contents
Part I Systems | |
1 Perception for Action in Insects | p. 3 |
1.1 Introduction | p. 3 |
1.2 The Traditional View | p. 3 |
1.3 Perception as Transformation | p. 6 |
1.4 Closing the Loop | p. 8 |
1.4.1 Active Perception | p. 8 |
1.4.2 Dynamical Systems Theory and Perception | p. 10 |
1.4.3 Dynamics and Networks | p. 11 |
1.4.4 Further Bio-inspired Architectures for Perception-Action | p. 14 |
1.5 Predictive Loops | p. 16 |
1.6 Perception for Action in Insects | p. 17 |
1.7 Basic Physiology and the Central Nervous System | p. 19 |
1.8 Higher Brain Centres in Insects | p. 21 |
1.8.1 The Mushroom Bodies (Corpora Pedunculata) | p. 21 |
1.8.2 The Central Complex | p. 27 |
1.9 Towards 'Insect Brain' Control Architectures | p. 30 |
1.10 Conclusion | p. 33 |
References | p. 35 |
2 Principles of Insect Locomotion | p. 43 |
2.1 Introduction | p. 43 |
2.2 Biological Systems | p. 44 |
2.3 Sensors | p. 48 |
2.3.1 Mechanosensors | p. 49 |
2.3.2 Environmental Sensors | p. 51 |
2.4 Leg Controller | p. 52 |
2.4.1 Swing Movement | p. 52 |
2.4.2 Stance Movement | p. 57 |
2.5 Coordination of Different Legs | p. 65 |
2.6 Insect Antennae as Models for Active Tactile Sensors in Legged Locomotion | p. 75 |
2.7 Central Oscillators | p. 78 |
2.8 Actuators | p. 83 |
2.9 Conclusion | p. 85 |
References | p. 86 |
3 Low Level Approaches to Cognitive Control | p. 97 |
3.1 Introduction | p. 97 |
3.2 Sensory Systems and Simple Behaviours | p. 98 |
3.2.1 Mechanosensory Systems | p. 98 |
3.2.2 Olfactory Systems | p. 100 |
3.2.3 Visual Systems | p. 102 |
3.2.4 Audition | p. 115 |
3.2.5 Audition and Vision | p. 123 |
3.3 Navigation | p. 129 |
3.3.1 Path Integration | p. 129 |
3.3.2 Visual Homing | p. 137 |
3.3.3 Robot Implementation and Results | p. 143 |
3.4 Learning | p. 156 |
3.4.1 Neural Model and STDP | p. 157 |
3.4.2 Non-elemental Associations | p. 158 |
3.4.3 Associating Auditory and Visual Cues | p. 163 |
3.5 Conclusion | p. 166 |
References | p. 167 |
Part II Cognitive Models | |
4 A Bottom-Up Approach for Cognitive Control | p. 179 |
4.1 Introduction | p. 180 |
4.2 Behavior-Based Approaches | p. 181 |
4.3 A Bottom-Up Approach for Cognitive Control | p. 185 |
4.4 Representation by Situation Models | p. 188 |
4.4.1 Basic Principles of Brain Function | p. 190 |
4.4.2 Recurrent Neural Networks | p. 192 |
4.4.3 Memory Systems | p. 192 |
4.4.4 Recurrent Neural Networks | p. 195 |
4.4.5 Applications | p. 199 |
4.4.6 Learning | p. 203 |
4.5 Towards Cognition, an Extension of Walknet | p. 207 |
4.5.1 The Reactive and Adaptive Layer | p. 208 |
4.5.2 Cognitive Level | p. 209 |
4.6 Conclusions | p. 215 |
References | p. 216 |
5 Mathematical Approach to Sensory Motor Control and Memory | p. 219 |
5.1 Theory of Recurrent Neural Networks Used to Form Situation Models | p. 219 |
5.1.1 RNNs as a Part of a General Memory Structure | p. 219 |
5.1.2 Input Compensation (IC) Units and RNNs | p. 220 |
5.1.3 Learning Static Situations | p. 223 |
5.1.4 Dynamic Situations: Convergence of the Network Training Procedure | p. 229 |
5.1.5 Dynamic Situations: Response of Trained IC-Unit Networks to a Novel External Stimulus | p. 236 |
5.1.6 IC-Networks with Nonlinear Recurrent Coupling | p. 242 |
5.1.7 Discussion | p. 246 |
5.2 Probabilistic Target Searching | p. 249 |
5.2.1 Introduction | p. 249 |
5.2.2 The Robot Probabilistic Sensory-Motor Layers | p. 250 |
5.2.3 Obstacles, Path Complexity and the Robot IQ Test | p. 253 |
5.2.4 First Neuron: Memory Skill | p. 254 |
5.2.5 Second Neuron: Action Planning | p. 257 |
5.2.6 Conclusions | p. 259 |
5.3 Memotaxis Versus Chemotaxis | p. 260 |
5.3.1 Introduction | p. 260 |
5.3.2 Robot Model | p. 261 |
5.3.3 Conclusions | p. 265 |
References | p. 266 |
6 From Low to High Level Approach to Cognitive Control | p. 269 |
6.1 Introduction | p. 269 |
6.2 Weak Chaos Control for the Generation of Reflexive Behaviours | p. 270 |
6.2.1 The Chaotic Multiscroll System | p. 272 |
6.2.2 Control of the Multiscroll System | p. 272 |
6.2.3 Multiscroll Control for Robot Navigation Control | p. 275 |
6.2.4 Robot Navigation | p. 276 |
6.2.5 Simulation Results | p. 278 |
6.3 Learning Anticipation in Spiking Networks | p. 279 |
6.3.1 The Spiking Network Model | p. 281 |
6.3.2 Robot Simulation and Controller Structure | p. 284 |
6.3.3 Spiking Network for Obstacle Avoidance | p. 286 |
6.3.4 Spiking Network for Target Approaching | p. 289 |
6.3.5 Navigation with Visual Cues | p. 293 |
6.4 Application to Landmark Navigation | p. 295 |
6.4.1 The Spiking Network for Landmark Identification | p. 297 |
6.4.2 The Recurrent Neural Network for Landmark Navigation | p. 298 |
6.4.3 Simulation Results | p. 301 |
6.5 Conclusions | p. 305 |
References | p. 306 |
7 Complex Systems and Perception | p. 309 |
7.1 Introduction | p. 309 |
7.2 Reaction-Diffusion Cellular Nonlinear Networks and Perceptual States | p. 311 |
7.3 The Representation Layer | p. 312 |
7.3.1 The Preprocessing Block | p. 313 |
7.3.2 The Perception Block | p. 313 |
7.3.3 The Action Selection Network and the DRF Block | p. 320 |
7.3.4 Unsupervised Learning in the Preprocessing Block | p. 321 |
7.3.5 The Memory Block | p. 323 |
7.4 Strategy Implementation and Results | p. 325 |
7.5 SPARK Cognitive Architecture | p. 330 |
7.6 Behaviour Modulation | p. 333 |
7.6.1 Basic Behaviors | p. 333 |
7.6.2 Representation Layer | p. 334 |
7.7 Behaviour Modulation: Simulation Results | p. 334 |
7.7.1 Simulation Setup | p. 334 |
7.7.2 Learning Phase | p. 336 |
7.7.3 Testing Phase | p. 336 |
7.8 Conclusions | p. 337 |
References | p. 338 |
Appendix I CNNs and Turing Patterns | p. 340 |
Appendix II From Motor Maps to the Action Selection Network | p. 344 |
Part III Software/Hardware Cognitive Architecture and Experiments | |
8 New Visual Sensors and Processors | p. 351 |
8.1 Introduction | p. 351 |
8.2 The Eye-RIS Vision System Concept | p. 353 |
8.3 The Retina-Like Front-End: From ACE Chips to Q-Eye | p. 356 |
8.4 The Q-Eye Chip | p. 358 |
8.5 Eye-RIS v1.1 Description (ACE16K Based) | p. 362 |
8.5.1 Interrupts | p. 364 |
8.6 Eye-RIS v1.2 Description (Q-Eye Based) | p. 364 |
8.6.1 Digital Input/Output Ports | p. 366 |
8.7 NIOS II Processor | p. 367 |
8.7.1 NIOS II Processor Basics | p. 368 |
8.8 Conclusion | p. 368 |
References | p. 368 |
9 Visual Algorithms for Cognition | p. 371 |
9.1 Global Displacement Calculation | p. 371 |
9.2 Foreground-Background Separation Based Segmentation | p. 374 |
9.2.1 Temporal Foreground-Background Separation | p. 375 |
9.2.2 Spatial-Temporal Foreground-Background Separation | p. 376 |
9.3 Active Contour Algorithm | p. 377 |
9.4 Multi-target Tracking | p. 380 |
9.5 Conclusions | p. 383 |
References | p. 383 |
10 SPARK Hardware | p. 385 |
10.1 Introduction | p. 385 |
10.2 Multi-sensory Architecture | p. 386 |
10.2.1 Spark Main Board | p. 387 |
10.2.2 Analog Sensory Board | p. 388 |
10.3 Sensory System | p. 391 |
10.4 Conclusion | p. 396 |
References | p. 397 |
11 Robotic Platforms and Experiments | p. 399 |
11.1 Introduction | p. 399 |
11.2 Robotic Test Beds: Roving Robots | p. 400 |
11.2.1 Rover I | p. 400 |
11.2.2 Rover II | p. 402 |
11.3 Robotic Test Beds: Legged Robots | p. 402 |
11.3.1 MiniHex | p. 402 |
11.3.2 Gregor III | p. 404 |
11.4 Experiments and Results | p. 405 |
11.4.1 Visual Homing and Hearing Targeting | p. 405 |
11.4.2 Reflex-Based Locomotion Control with Sensory Fusion | p. 408 |
11.4.3 Visual Perception and Target Following | p. 409 |
11.4.4 Reflex-Based Navigation Based on WCC | p. 411 |
11.4.5 Learning Anticipation via Spiking Networks | p. 413 |
11.4.6 Landmark Navigation | p. 415 |
11.4.7 Turing Pattern Approach to Perception | p. 416 |
11.4.8 Representation Layer for Behaviour Modulation | p. 420 |
11.5 Conclusion | p. 422 |
References | p. 422 |
Index | p. 423 |
Author Index | p. 425 |