AFRL/ACT3 - QuEST Talk Archives

AUTONOMY CAPABILITY TEAM (ACT3) QuEST MEETING ARCHIVES

Qualia Exploitation of Sensor Technology

QuEST is focused on addressing the limitations of computational intelligence. Despite more than half a century of effort, computational intelligence has not produced anything resembling a general-purpose solution. All current approaches are built on objective representations of the world. Objective representations are characterized by their attempt to capture attributes of the environment under consideration as accurately, in a physics sense, as possible. The results of the physical attribute sensing steps are then usually fed into exploitation algorithms, or displayed for human operators, where decisions are made.

Several aspects of this approach are flawed. First, we have to assume that the sensors were perfectly manufactured and are all identical. Any variation in sensor manufacture will lead to differences in how the world is observed and, in conventional approaches, in how it is represented. Second, we have to have complete knowledge of what our system will encounter, captured either in the data sets or in the models used for development, as well as knowledge of how this data might vary across the complete range of possible operating conditions. Slight changes in operating conditions can lead to completely different conclusions being drawn by fragile exploitation algorithms operating on an objective representation.

Nature has adopted an approach that deals with sensor imperfections and dramatic differences in input reliably and effectively. These physiological solutions are based on the generation of a subjective representation that can adapt to ever-changing sensors, to the unique information experienced by the sensors as a result of their unique embedding, and to information associated with the exploitation approach to be used for the sensor data. Subjective representations that are not tied to maintaining fidelity with physics-based reality are a critical component of robust decision making.

The aspects of your subjective representation that you are aware of are called qualia. The color you see when looking at a scene or the pain you feel when you stub your toe are examples of qualia. Qualia are completely internal and completely individualized. They form the basis set from which the subjective world model is constructed. QuEST (Qualia Exploitation of Sensor Technology) seeks to develop a general-purpose computational intelligence system that captures the beneficial engineering aspects of qualia-based solutions. A QuEST system will have the ability to detect, distinguish, and characterize entities in the environment, including a representation of itself. It will also be able to construct a Theory of Mind for other entities in the environment, enabling conclusions to be drawn about the internal feelings of sentient entities, such as sentiment and intent.

QuEST MEETINGS

Click below for past QuEST guest speakers, topics and talks.
This week we will begin to synthesize and distill the 2024 QuEST Annals
 
Abstract:
We will summarize the QuEST 2024 presentations in an effort to give post-hoc structure to the year's content: Department of the Air Force operational demands, philosophical transactions, and scientific trends.
 
We will explore ideas of similarity models of human consciousness by highlighting the really cool QuEST interactions that we've had over the year, to figure out how to work S3Q.
 
Why should systems be categorized as conscious or not? Thinking about the binary classifications we try to make, we're often better off thinking in terms of gradations rather than in a binary setting. Perhaps models of consciousness could be better framed around the idea that all such models come down to some notion of similarity to human cognition; we can then distinguish models that aim for behavioral/functional similarity from those that aim for representational/mechanistic similarity, and explore further breakdowns within each.
We will start with some highlights from the fabled Kabrisky Lectures to expose those points on which people want to focus.
 
Abstract: Every January the QuEST group uses the first meeting of the calendar year to present a ‘state of QuEST’ lecture in honor of a founding member of the QuEST group, Dr. Matthew “Special K” Kabrisky. The "Kabrisky Memorial Lecture" is designed to bring anyone up to speed on how we use terms and to communicate what we seek to accomplish.

Steve "Cap" Rogers made a call for content to add to the "Kabrisky Memorial Lecture". Please review the outline in the image and respond with any content that you would like to see added to this year's lecture. 

  • What is the goal of the Kabrisky lecture: What is the QuEST Story?
  • What is consciousness? What are Qualia?
  • What is the 'cash value' of qualia?
    • Qualia cash value is associated with 'intelligence'. What is intelligence? What is knowledge and how is it created?
    • Insight into qualia can be exploited by AI, possibly creating a new generation of flexible AI (which implies we need a view of where the field of AI stands)
  • What are the insights, tenets into qualia? A theory of qualia. (physiological basis / theories)
  • How do we use this theory to create that next generation of AI?
    • Four somewhat orthogonal problems arise in AI:
      • choosing a representation language ~ making that qualia-like, implications to Generative AI
      • encoding a model in that language ~ making that qualia-like, implications to Generative AI
      • performing inference using the model ~ in the current QuEST view, qualia do not play a role here beyond updating the language/model and 'experiencing' inference
  • "Sugar" Ray insertion - how do we determine if a representation is Qualia compliant?
    • provide examples of representations that are and are not compliant
This week Robert P. will lead a discussion on his work in flexible cognition

Abstract: Last meeting we discussed the (in)flexibilities of conscious and nonconscious learning and decision making. Join us this week as we continue by featuring Air Force work on the topic with Dr. Robert Patterson, Senior Psychologist.

Reading: Journal of Cognitive Engineering and Decision Making 2017, Volume 11, Number 1, March 2017, pp. 5–22 DOI: 10.1177/1555343416686476

Patterson, R. E., Pierce, B. J., Boydstun, A. S., Ramsey, L. M., Shannan, J., Tripp, L., & Bell, H. (2013). Training intuitive decision making in a simulated real-world environment. Human factors, 55(2), 333–345. https://doi.org/10.1177/0018720812454432
This week we continue our conversation on clinical tools for consciousness

Abstract: Perturbational Complexity Index is proposed as an objective measure for the determination of consciousness in the clinic. This measure is derived from theoretically grounded information integration theory. Consciousness is a structure (not a function) that we can understand through systematically probing the brain and mind, yielding clinical tools as the research progresses.

Reading: Casali, A. G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K. R., ... & Massimini, M. (2013). A theoretically based index of consciousness independent of sensory processing and behavior. Science translational medicine, 5(198), 198ra105-198ra105.
This week we discuss field measures for disorders of consciousness relevant to the US Department of the Air Force
 
Abstract: Our colleague Josh Stierwalt, Chief En Route Care Mission Informatics Officer at the United States Air Force School of Aerospace Medicine, will join us to discuss field measures of consciousness, including the Glasgow Coma Scale. Consciousness is real, and we owe our military service members the best QuEST has to offer for treatment, to bring our Heroes Home.
From metaphysical foundations for the asymptotes of neuroscience to transformational capabilities for aerospace medicine.
 
Abstract: The QuEST model asserts the fundamental computational unit of the conscious experience is of a relational structure within a simulated world model, situated in cognitively decoupled coherence with the real environment.
 
This week we start with the metaphysical philosophy required in consciousness research, then move to actionable insights for aeromedical practitioners. Qualia are real, and the Department of the Air Force is aggressively pursuing a fundamental understanding of these phenomena to build transformational capabilities for Airmen and machines.
This week we will discuss coma and consciousness.
 
Abstract:
The medical field has a practical need for consciousness studies. This week we transition to discussing how researching disorders of consciousness, such as coma and persistent vegetative state, can help progress our understanding of qualia, with dual-use warfighter applications.
 
Bios:
Nitish Thakor, Ph.D. - I have been a Professor of Biomedical Engineering and Electrical and Computer Engineering at Johns Hopkins University since 1983. My expertise is medical instrumentation and neuroengineering. I have published over 450 refereed journal papers (h-index > 100), obtained 18 US and international patents, and co-founded 3 active companies. I was previously the Editor-in-Chief of IEEE-TNSRE. I am a recipient of the Technical Achievement Award (Neuroengineering) as well as the Academic Career Award from the IEEE Engineering in Medicine and Biology Society. I am a Life Fellow of the IEEE; a Fellow of AIMBE, BMES, and IAMBE; and a Fellow of the National Academy of Inventors. I published the Handbook of Neuroengineering (Springer, 2023), a potential overarching reference source:
https://link.springer.com/referencework/10.1007/978-981-16-5540-1

Romer Geocadin, M.D. - I am a Professor of Neurology, Anesthesiology and Critical Care Medicine, and a practicing neurointensivist. My translational and clinical research focus has been on disorders of consciousness, mainly coma and coma recovery. In this area our research seeks to 1) understand mechanisms related to arousal from coma, 2) investigate thalamic-cortical connectivity measures as a metric of clinical improvement, and 3) help develop new therapeutic interventions for coma recovery. My work in brain injury and cardiac arrest has led me to serve on several American Heart Association scientific panels, and I have been a panelist and speaker for the Institute of Medicine (IOM) of the National Academy of Sciences Committee on Treatment of Cardiac Arrest: Current Status and Future Directions.
This week our colleague Patryk L. will discuss visual perception for autonomous artificial agents
 
Abstract:
Autonomous artificial systems need to perceive objects and events that are mired in everyday variability.  Despite recent progress in computer vision — bolstered largely by deep learning — current methods do not handle this variability well enough for robust autonomous artificial perception. Even "simple" visual tasks, like tracking a green ball as it rolls around a room or outdoors, are surprisingly difficult for robots with cameras. Whereas "low dimensional" objects, paths, and processes underpin physical scenes, the signals reaching a camera are convolved with the effects of numerous dynamical, natural physical regularities that affect how things appear. These effects are far too numerous for experts to hard-code into a system. These effects also cannot be learned by contemporary feedforward deep learning networks, because they arise from factors playing out across space and time, at multiple scales*. The consequence is that small naturally occurring variations can cause deep learning networks to fail.
 
In this presentation I describe a one-year DARPA-funded project aimed at tackling this perceptual challenge head-on (work performed at Brain Corporation; Piekniewski, Laurent, et al. 2016). Taking inspiration from neuroscience and the goal of learning to decompose a scene into its underlying dynamics, the authors architected a minimal neural network that was recurrent (with extensive feedback connectivity), scalable (self-similar), locally predictive (eliminating vanishing gradients), and un-(or self-)supervised.  After training, the authors analyzed learning performance, benchmarked the network's capabilities as a tracker, and then finally tested it by embodying it in both virtual and physical robots.
 
* Examples of these effects: all the shadows in a scene are pointing in the same direction, specular reflections, shifts in apparent hue due to ambient lighting, motion blur, shifts in robot camera sensor CCD color sensitivity due to heat buildup — to name just a few that we recognize and have names for.
 
Bio:
Patryk Laurent, Ph.D., is the chief science officer of Scenera, Inc — a Palo Alto-based startup partnered with Sony and Microsoft. Scenera enables facilities managers to deploy AI to their edge and customize the AI based on their own data and environment, ensuring privacy, low latency, and high accuracy. Patryk leads the company's data science, data management, and AI development efforts.  Prior to Scenera he spent a decade in industry working on AI/ML for robotics and IoT, and spent several years running a center of excellence for data science at a private equity firm.
 
Patryk has a Ph.D. in neuroscience from the University of Pittsburgh, a training certificate from the Center for the Neural Basis of Cognition (University of Pittsburgh and Carnegie Mellon University), and completed postdoctoral research at the Johns Hopkins University. His academic work focused on brain learning mechanisms, primarily on the reinforcement learning of physical/motor versus cognitive actions (covert attention, task switching) in models and in humans using behavioral and fMRI methods. 
 
Reference:
Piekniewski, F., Laurent, P., Petre, C., Richert, M., Fisher, D., & Hylton, T. (2016). Unsupervised learning from continuous video in a scalable predictive recurrent network. arXiv preprint arXiv:1607.06854. (https://arxiv.org/abs/1607.06854)
This week we will discuss topological features in large language models


Title: Topological Features in Large Language Models (and beyond?)
 
Abstract: Large language models (LLMs) have received attention because they can synthesize "human-like" text. Besides the obvious philosophical questions about their semantics, the internal workings of LLMs are a bit inscrutable. However, since LLMs use "transformers" built upon deep neural networks (DNNs), their internal states are completely determined by high-dimensional real vectors. Most of the theoretical literature on DNNs assumes that the space of possible vectors is a manifold... but it seems that no one has really checked to see if this is true! If we look at slices of the space of these vectors, much can be learned, including clear evidence that the space is not a manifold! Discuss!
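One way such a check can proceed is local dimension estimation: on a manifold, the intrinsic dimension estimated from small neighborhoods is the same everywhere, so finding points with different local dimensions is evidence against the manifold assumption. The sketch below (synthetic data standing in for LLM activation vectors; the function name and thresholds are illustrative, not from the talk) uses local PCA:

```python
import numpy as np

def local_pca_dim(points, idx, k=20, var_threshold=0.95):
    """Estimate intrinsic dimension at points[idx]: the number of principal
    components needed to explain var_threshold of the variance of the
    k nearest neighbours."""
    dists = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(dists)[1:k + 1]]        # skip the point itself
    centered = nbrs - nbrs.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)    # local principal axes
    var = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(var), var_threshold) + 1)

rng = np.random.default_rng(0)
# Synthetic "activation cloud": a 1-D curve plus a 2-D sheet in R^3.
# On a manifold of one dimension, the local estimate would agree everywhere.
t = rng.uniform(-1, 1, 300)
curve = np.c_[t, t**2, np.zeros_like(t)]
sheet = np.c_[rng.uniform(2, 4, 300), rng.uniform(-1, 1, 300), np.zeros(300)]
cloud = np.vstack([curve, sheet])

d_curve = local_pca_dim(cloud, 0)    # a point on the curve
d_sheet = local_pca_dim(cloud, 400)  # a point on the sheet
print(d_curve, d_sheet)              # different local dimensions
```

Real LLM activations would replace `cloud` with sampled hidden states; the same procedure applies.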
 
Bio: Michael Robinson is Professor of Mathematics and Statistics at American University. He is interested in applied mathematics of all sorts, especially the application of topology and category theory to practical systems. He prefers to study systems as they are actually used, not just simplified models. He's always up for an interesting discussion about unusual properties of engineered systems, and how they might be captured mathematically.

This week we discuss the implications of quantum probability theory for use in consciousness research.
 
Abstract:
We will synthesize and distill recent content on quantum cognition models by posing Jerome B.'s future directions questions: What are the implications of quantum cognitive models for Large Language Models embedded in vector spaces? How can quantum cognition leverage the advent of quantum computers? Can we use them to compute model predictions? Can quantum computers become conscious? What can quantum cognition contribute to understanding consciousness? What is the relation to quantum brain theories? How is this related to the quantum measurement problem? Please, consider your thoughts and come prepared with answers, or Bring Your Own Question.
 
Reading:
Pothos, E. M., & Busemeyer, J. R. (2022). Quantum Cognition. Annual review of psychology, 73, 749–778.
https://www.annualreviews.org/content/journals/10.1146/annurev-psych-033020-123501

Bruza, P. D., Busemeyer, J. R. (2015) Quantum cognition: A new theoretical approach to psychology. Trends in Cognitive Science, 19 (7), 383-393
https://www.sciencedirect.com/science/article/abs/pii/S1364661315000996
Quantum models of cognition and decision

Abstract:
What type of probability theory best describes the way humans make judgments under uncertainty and decisions under conflict? I will present general overviews along with concrete examples of three different kinds of applications of quantum cognition: one application to probabilistic reasoning, a second to dynamics of evidence accumulation, and a third to heuristic information processing.
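The signature departure of quantum cognition from classical probability is that sequential judgments can depend on order. A toy calculation makes this concrete: when two "yes" answers correspond to non-commuting projectors, the probability of affirming A then B differs from B then A (the belief state and angles below are hypothetical, chosen only to illustrate the effect):

```python
import numpy as np

def projector(theta):
    """Rank-1 projector onto the unit vector at angle theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])    # initial belief state (unit vector)
P_a = projector(np.pi / 6)    # "yes" subspace for question A
P_b = projector(np.pi / 3)    # "yes" subspace for question B

# Sequential 'yes' probabilities (Lueders' rule): p(A then B) = ||P_b P_a psi||^2.
p_ab = np.linalg.norm(P_b @ P_a @ psi) ** 2
p_ba = np.linalg.norm(P_a @ P_b @ psi) ** 2
print(p_ab, p_ba)  # roughly 0.5625 vs 0.1875: question order matters
```

In a classical (commutative) probability model the two sequential probabilities would coincide, which is why order effects in survey data are taken as evidence for the quantum account.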

Bio:
Jerome Busemeyer was a Full Professor at Purdue University until 1997, and is now Distinguished Professor in Psychological and Brain Sciences, Cognitive Science, and Statistics at Indiana University-Bloomington. His research has been funded by the National Science Foundation and the National Institute of Mental Health. He was the Manager of the Cognition and Decision Program at the Air Force Office of Scientific Research in 2005-2007. He has published five books in decision and cognition, and over 150 journal articles across disciplines. He served as the Chief Editor of the Journal of Mathematical Psychology and Associate Editor of Psychological Review, and he was the founding Chief Editor of Decision. He is a fellow of the Society of Experimental Psychologists, and he won that society's prestigious Warren Medal in 2015. He became a fellow of the Cognitive Science Society and a fellow of the American Academy of Arts and Sciences in 2017. He received an Honorary Doctorate from the University of Basel in 2019. During his early career, he became well known for the development of a dynamic and stochastic model of human decision making called decision field theory. Later, he was one of the pioneers of a new approach to cognition based on principles from quantum theory. In 2012, Cambridge University Press published his book with Peter Bruza introducing this new theory, applying quantum probability to model human judgment and decision making. The second edition of this book will appear later this year with Cambridge University Press.
 
Reading:
Pothos, E. M., & Busemeyer, J. R. (2022). Quantum Cognition. Annual review of psychology, 73, 749–778.
https://www.annualreviews.org/content/journals/10.1146/annurev-psych-033020-123501
 
Bruza, P. D., Busemeyer, J. R. (2015) Quantum cognition: A new theoretical approach to psychology. Trends in Cognitive Science, 19 (7), 383-393
https://www.sciencedirect.com/science/article/abs/pii/S1364661315000996
This week we continue the discussion on model-free versus model-based (reinforcement) learning
 
Abstract: Paraphrasing Patryk's words, we will frame the discussion around the concepts of qualia/knowledge in the QuEST framework to help us define intelligence via knowledge acquisition and S3Q. Could a model-free agent (complete with sensory, motor, etc. systems) ever support anything like consciousness? Is the new-knowledge integration supported by consciousness akin to bringing new information into an agent's "model"? How simple could a "model" be and yet admit anything like qualia or a consciousness-mediated acquisition of knowledge? Join us as we ramp up the simulation axis to frame this week's discussion
 
Reading: Bennett, M. S. (2021). Five breakthroughs: a first approximation of brain evolution from early bilaterians to humans. Frontiers in Neuroanatomy, 15, 693346.
https://www.frontiersin.org/journals/neuroanatomy/articles/10.3389/fnana.2021.693346/full
This week we evolve air combat leveraging reinforcement learning.

Abstract:
We discussed the evolutionary transition from radial to bilateral symmetry, which required left/right, anterior/posterior, and dorsal/ventral architectural specifications. This development enabled a collection of sensory and neural elements at the head end of the organism, vectoring the beginning of an evolutionary pathway culminating in a brain. The next breakthrough we will discuss is the development of reinforcement learning (RL). There are two main classes of algorithms for reinforcement learning. The first class forgoes learning a world model and instead learns a table of long-run state-action values directly from experience. The discovery of algorithms for accomplishing such model-free RL was a major advance in machine learning that continues to provide the foundation for modern applications. The second approach to RL is based on estimating the one-step reward and state-transition distributions. This approach is known as model-based learning due to its reliance on an internal model. We will highlight the shared features of the neocortex in the brain and a class of machine learning models called "generative models," which learn a "latent representation" (also called a "model" or an "explanation") of their input. Please join us as we continue this tour of biology, moving from animalness to vertebrates to hominization.
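The model-free class described above can be sketched with tabular Q-learning on a toy chain environment: the agent never estimates a transition model, only a table of long-run state-action values (environment and hyperparameters are illustrative, not from the readings):

```python
import numpy as np

n_states, n_actions = 5, 2           # 5-state chain; actions: 0 = left, 1 = right
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))  # table of long-run state-action values
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(s, a):
    """Deterministic chain: reward 1 only for reaching the last state."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1), s2 == n_states - 1

def greedy(q_row):
    """Argmax with random tie-breaking, so early exploration isn't stuck."""
    return int(rng.choice(np.flatnonzero(q_row == q_row.max())))

for _ in range(500):
    s, done = 0, False
    while not done:
        a = rng.integers(n_actions) if rng.random() < eps else greedy(Q[s])
        s2, r, done = step(s, a)
        # Temporal-difference update: no transition model is ever
        # estimated; values are learned directly from sampled experience.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2

print(np.argmax(Q[:4], axis=1))  # learned greedy policy: move right
```

A model-based learner would instead fit the one-step reward and state-transition distributions of `step` and plan against that internal model.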

Reading:
Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine learning, 3, 9-44.
https://link.springer.com/article/10.1007/bf00115009

Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... & Hassabis, D. (2015). Human-level control through deep reinforcement learning. nature, 518(7540), 529-533.
https://www.nature.com/articles/nature14236

Bennett, M. S. (2021). Five breakthroughs: a first approximation of brain evolution from early bilaterians to humans. Frontiers in Neuroanatomy, 15, 693346.
https://www.frontiersin.org/journals/neuroanatomy/articles/10.3389/fnana.2021.693346/full
This week we continue discussion of evolutionary perspectives on intelligence and consciousness.
 
Abstract: Our colleague Jason P. mentioned five evolutionary mechanistic breakthroughs hypothesized by Max Bennett to lead to human brain functions and features of human behavior. Discussion will be prompted by our colleague Bogdan U., who will first describe some of these breakthroughs; as a group we will then discuss how we can model these systems in artificial agents.
 
Reading: https://www.frontiersin.org/journals/neuroanatomy/articles/10.3389/fnana.2021.693346/full
Please join us to close out our annual talent pipeline presentations featuring Department of the Air Force rising superstars in diffusion and neural network modeling.


ABSTRACT: A perfect autopilot can perform any mission in any environment over a long time horizon. Latent diffusion models are the state-of-the-art in compressing noisy input data, extracting critical features, and denoising and generating structured output action sequences. Our development code, cycle-agent, realizes these capabilities in an aircraft-proxy environment, but was developed without accounting for other environments. To enable training our model on a broad range of environments and behaviors, we developed Chopper, an image-observation gymnasium environment, and validated the cycle-agent pipeline with an environment entirely different from the initial proxy environment.
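For readers new to diffusion models, the forward (noising) half that the generative half learns to reverse has a simple closed form. The sketch below is a generic illustration of that forward process, not the cycle-agent or Chopper code; the schedule and signal are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # a common linear variance schedule
abar = np.cumprod(1.0 - betas)       # cumulative signal-retention factor

def noise(x0, t):
    """Closed-form forward process: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps

x0 = np.sin(np.linspace(0, 2 * np.pi, 64))  # a structured "action sequence"
x_early, x_late = noise(x0, 10), noise(x0, T - 1)
# Early steps barely perturb the structure; by the last step the sample is
# essentially pure Gaussian noise, which the learned reverse process must
# turn back into structured output.
print(round(abs(np.corrcoef(x0, x_early)[0, 1]), 3),
      round(abs(np.corrcoef(x0, x_late)[0, 1]), 3))
```

Training a denoiser to invert this corruption, conditioned on observations, is what lets a latent diffusion policy generate structured action sequences from noise.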

BIO: Sam Vinu-Srivatsan is a second-year undergraduate at MIT majoring in Electrical Engineering & Computer Science and minoring in Political Science. This is her third year interning with ACT3! She loves all things coffee and tea, running, and reading science fiction. 
 
ABSTRACT: Modern artificial neural networks neglect temporal dynamics in favor of more easily computed stateless neuron models. As a result, these AI models are often specialized for a single task and require extensive work from data scientists to generalize well to out-of-distribution data. In contrast, human behavior is incredibly dynamic and flexible to environmental changes despite noisy sensory receptors and frequent synaptic failures. Every behavior is driven by a complex set of deeply personalized and constantly updated motivations. Leabra, a biologically motivated neural network framework, includes a model of reward learning based on primary values and learned values similar to the Pavlovian paradigm of conditioned and unconditioned stimuli. The model demonstrates how neural networks with stateful dynamics may be able to implement goal-directed behaviors in a more human-like manner. In this presentation, I will briefly outline the importance of stateful neuron models for implementing these mechanisms and discuss how the PVLV model might close the gap between artificial and general intelligence.
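The contrast between stateless neuron models and the stateful temporal dynamics the abstract advocates can be made concrete with a toy rate-coded leaky integrator (a generic illustration with assumed parameters, not the Leabra/PVLV implementation):

```python
import numpy as np

def stateless(xs):
    """Memoryless unit (a ReLU): output depends only on the current input."""
    return np.maximum(0.0, xs)

def leaky_integrator(xs, tau=5.0):
    """Stateful unit: a membrane potential v integrates input over time."""
    v, out = 0.0, []
    for x in xs:
        v += (x - v) / tau          # v decays toward the current input
        out.append(max(0.0, v))
    return np.array(out)

pulse = np.array([1.0] + [0.0] * 9)  # brief input pulse
out_stateless = stateless(pulse)     # response vanishes with the input
out_leaky = leaky_integrator(pulse)  # response persists and decays
print(out_stateless)
print(out_leaky.round(3))
```

The persistence of the leaky unit's response after the input ends is the kind of internal state that mechanisms like reward prediction and goal maintenance can build on.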

BIO: Luis El Srouji is a PhD candidate in the Electrical and Computer Engineering department at the University of California, Davis. His thesis includes work on the design and fabrication of biologically-inspired optoelectronic neurons, the implementation of local learning rules on Mach-Zehnder interferometry meshes, and development of software for simulating photonic and electronic systems for emulating neural networks. This summer, Luis is developing a CUDA-accelerated implementation of Leabra and exploring how Leabra models can be scaled and applied to new workloads.

This week we start wrapping up our internship program while transitioning back to qualia-focused artificial consciousness discussions


Talk Abstract: Guaranteeing safety of complex autonomous systems presents significant challenges due to the inherent presence of unmodeled disturbances in real-world scenarios. This paper presents a novel safety-critical control framework which assures safety for a wide range of nonlinear systems under unknown bounded disturbances. To address the problem of obtaining a controlled invariant safe set, we leverage the advantages offered by the backup set method to produce such sets online in a computationally tractable manner. For disturbance-robustness, we provide sufficient conditions for forward invariance of such sets in the presence of bounded disturbances. We show that by appropriately tightening constraints on the nominal trajectory computed via the backup set method, we can guarantee safety for the true, disturbed trajectory via our Disturbance-Robust Backup CBF (DR-BaCBF) solution. Finally, the efficacy of the proposed framework is demonstrated in simulation, applied to a simple double integrator problem and a rigid body spacecraft rotation problem with rate constraints.
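A minimal sketch of the control-barrier-function idea on the double integrator mentioned in the abstract: with a single affine constraint, the usual CBF quadratic program collapses to a `min()`. This is illustrative only (assumed gains and constraint); the talk's DR-BaCBF method additionally constructs backup sets online and tightens constraints against bounded disturbances:

```python
# Double integrator x = (p, v) with acceleration input u.
# Safety set: h(x) = v_max - v >= 0.  The CBF condition
#     dh/dt >= -alpha * h(x)   i.e.   -u >= -alpha * (v_max - v)
# gives a closed-form, least-restrictive safety filter on u.
v_max, alpha, dt = 2.0, 5.0, 0.01

def safety_filter(u_nom, v):
    """Minimally modify the nominal control to satisfy the CBF condition."""
    return min(u_nom, alpha * (v_max - v))

p, v = 0.0, 0.0
for _ in range(1000):                 # 10 s of simulated time
    u_nom = 10.0                      # nominal controller: full throttle
    u = safety_filter(u_nom, v)
    p, v = p + v * dt, v + u * dt     # Euler step of the double integrator

print(round(v, 3))                    # velocity approaches, never exceeds, v_max
```

The filter leaves the nominal controller untouched whenever it is already safe and intervenes only near the boundary of the safe set, which is the defining property of CBF-based safety filters.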

Speaker Bio: David van Wijk is an intern with the Safe Autonomy team, and a third year PhD student in the Aerospace Engineering Department at Texas A&M University in the Land, Air and Space Robotics Laboratory. He completed his BS in Mechanical and Aerospace Engineering at Cornell University in 2021. His main research interest is in safe autonomy using control barrier functions, especially for spacecraft and for robotics applications. He has done work in safe autonomy for autonomous rendezvous, proximity operations, and docking (ARPOD) scenarios, in tandem with reinforcement learning.

This month we feature the outstanding work of our summer talent development program


Abstract: The rising popularity of diffusion models has marked a significant breakthrough in artificial intelligence research, offering robust generative capabilities with broad applications. One such capability is learning a reverse process that denoises a cluttered data set and recovers a probability distribution over the original state. This has the potential to be a great tool for autonomous robotics applications in which finding dynamic models of the systems is challenging and data-driven approaches to learning such models are better suited. By leveraging the stochastic and generative nature of diffusion processes, this study aims to analyze algorithms that allow robots to navigate complex environments adaptively and efficiently. The integration of diffusion models into robotics could not only improve the precision and flexibility of robotic systems but also reduce the computational budget of the path-planning process while still finding near-optimal feasible solutions for a generated plan and control sequence.

Bio: Selvin is a current intern who recently graduated from Carnegie Mellon University with a Master's Degree in Mechanical Engineering, focusing on research and applications in robotics, control, and autonomous systems, advised by Dr. Aaron Johnson. His main interest is working on autonomous applications that leverage cutting-edge AI technology to improve real-world performance while being safely integrated among the human population (e.g., autonomous cars, assistive robots). He is currently seeking full-time positions in related fields.

Abstract: In order to utilize autonomous systems, safety of the system must be guaranteed. Run Time Assurance (RTA) is a safety assurance filter that monitors system behavior and the output of the primary controller to enforce satisfaction of safety constraints. In application, due to modeling errors or limitations, the system dynamics may not be known accurately. Adaptive controllers are able to control a system even in the presence of model uncertainty. This work investigates the impacts of the RTA safety technique on the learning of parameters within a model reference adaptive controller. A preliminary investigation is presented for a mass-spring-damper with gradient, Concurrent Learning (CL), and Integral Concurrent Learning (ICL) adaptive controllers.
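The adaptive-control side of this work can be sketched with a gradient-based model reference adaptive controller (MRAC). The toy below uses a first-order plant as a deliberate simplification of the mass-spring-damper in the abstract; gains, signals, and the single unknown parameter are illustrative assumptions, and no RTA filter is included:

```python
# Plant:      x' = a*x + u,         with a unknown to the controller
# Reference:  xm' = -am*xm + am*r
# Control u = -(ahat + am)*x + am*r matches the reference model exactly
# when ahat = a; the error dynamics e' = (a - ahat)*x - am*e motivate
# the Lyapunov-based gradient update ahat' = gamma*e*x.
a_true, am, gamma, dt = 1.5, 2.0, 10.0, 0.001
x, xm, ahat = 0.0, 0.0, 0.0

for k in range(20000):                       # 20 s of simulated time
    r = 1.0 if (k * dt) % 4 < 2 else -1.0    # square wave keeps the input exciting
    e = x - xm                               # tracking error
    u = -(ahat + am) * x + am * r
    ahat += gamma * e * x * dt               # adaptation driven by tracking error
    x += (a_true * x + u) * dt
    xm += (-am * xm + am * r) * dt

print(round(ahat, 2))                        # estimate drifts toward a_true
```

The persistent excitation supplied by the square-wave reference is what drives the parameter estimate toward its true value; the talk's question is how an RTA filter that overrides `u` interacts with exactly this learning signal.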

Bio: Cassie-Kay McQuinn is a PhD student in Aerospace Engineering at Texas A&M University. She received her B.S. and M.S. in Aerospace Engineering from the same institution in 2021 and 2024, respectively. At Texas A&M she is a graduate research assistant in the Vehicle Systems and Control Laboratory (VSCL), where her previous work focused on aircraft system identification from flight test data. Her current research interests are aerospace vehicle control and dynamics, the incorporation of safety assurance with autonomy, and the implementation of this work through flight testing on unmanned air vehicles (UAVs).

This month we feature the excellent work developed through the ACT3 summer internship program


Abstract: As we all know, artificial intelligence is being utilized everywhere we look. During the school year, college students become "inspired" by ChatGPT, cell phones use facial recognition software as a key for unlocking, and Google Maps allows a directionally challenged person to get from one place to another. Despite the overwhelming benefits brought by technological advancements, the process by which AI is developed and employed, ethically and responsibly, must be reconsidered. This is particularly crucial for decision-making programs that may have a significant impact on a person's life, such as in the medical and military fields. Many leaders define the solution as keeping an "ethical" human in-the-loop, but oftentimes this strategy fails to overcome the moral gap. Not only does human processing time run at a snail's pace compared to AI, but most surprisingly, studies suggest that the conscious experience of a decision may not occur until up to 10 seconds after the decision is made. This suggests that a consciously ethical human is an unattainable stipulation, making a human in-the-loop an ineffective solution to the problem. Alternatively, more effort should be allocated to analyzing the data used for training AI, educating those who will be applying the software before it is implemented, and maintaining the currency of the human-machine team.

Bio: Madie Justice, a Dayton native, is an incoming fourth-year undergraduate at The Ohio State University, where she is majoring in biology and minoring in anatomy. After conquering the MCAT earlier in the summer, she hopes to attend medical school after a much-needed gap year. Following her first year of measuring the capacity of consciousness, her second year with ACT3 focused on the ethical employment of AI.
Abstract: Latent diffusion models represent the state of the art in text-conditioned image, video, and music generation, and notably in robotics motion planning.  We explore using these generative models to prototype part of a flexible autopilot architecture utilizing a proxy race car environment, where instead of altitude and bearing, we quantitatively analyze a car’s steering direction, location on the road, and speed, along with the direction of the track. Then in order to flexibly specify a desired future state for an action planning agent, we fine-tune a stable-diffusion model that allows users to modify specific components of race car driving scene images using a textual description of a desired state.  Finally, we explore the space of inference parameters that most successfully alter the desired parts of the image while keeping the rest of it consistent.

Bio: Jay Bhan is a rising sophomore at MIT studying Artificial Intelligence and Mathematics, currently interning at ACT3. He has a particular interest in exploring the intersection of STEM and education, which he developed from instructing math courses at his high school. He enjoys going on hikes, playing board games, solving puzzles, and spending time with friends and family.
Abstract: The rise of artificial intelligence (AI) has led to the increased use of these technologies across a wide range of fields and occupations. Because of this, how we incorporate AI into the workplace is an important concept we must consider. Not only is how we utilize AI critical, but so is how we effectively integrate AI into formerly human-only teams in the workplace. With this in mind, we need to explore the intelligence of these human-autonomous groups and see how we can construct the most efficient human-machine teams (HMTs). This presentation strives to define intelligence and identify possible ways to evaluate and measure the intelligence of human-machine teams.

Bio: Keri Kolker is a rising senior at the University of Dayton who is majoring in psychology. She is an executive officer for UD’s chapter of the psychology fraternity Psi Chi, and she is a registered behavior technician for the ABA clinic Key Behavior Services in Kettering, Ohio. This summer she has been working with Dr. Steven (Cap) Rogers at ACT3 looking at the definition and the testing of intelligence of human-machine teams.
Abstract: Prostheses have been utilized globally for over 3,000 years. Over the course of time, devices have evolved from basic leather and wood structures designed to replace missing limbs into advanced models that can mimic the structure and function of a normal limb, even having the ability to connect to still-existing bones, muscles, and nerves. This research explores the implications of prosthetic integration and whether we desire prostheses to be fully a part of one’s personal self.

Bio: Natalie is a rising junior at Ohio Wesleyan University majoring in exercise science, planning to follow this degree with a master’s in biomedical engineering. This is her third summer with ACT3. Her primary research focus with ACT3 has been the world of prostheses and how the devices have been developed and produced for more effective and efficient movement.

Abstract: While researching AI models such as LEABRA (Local, Error-driven and Associative, Biologically Realistic Algorithm), a neural network learning algorithm which mirrors human thinking, I am compelled to draft an understanding of how such technology can use pattern recognition to mimic the functions of the human brain. Models such as LEABRA aid AI in both processing and interpreting complex environments in order to make decisions–in theory, similar to a human. LEABRA uses invariant object recognition–a concept from human psychology and neuroscience where an object moves up through a visual hierarchy to be increasingly distinguished, regardless of variations in location, size, angle, etc. Through invariant object recognition, AI models can mimic this aspect of human perception. Tools such as convolutional neural networks (CNNs) help AI models to recognize patterns and features in data similar to hierarchical human perception. Thus, LEABRA is enabled to engage in many stages of human information processing–however, there are some limitations. Because models like LEABRA are incapable of phenomenal consciousness, they are not self-aware and have no experience of the world itself, so LEABRA cannot perfectly replicate human information processing. What are the implications of humans and machines perceiving differently for the ethical employment of AI?

Bio: Deshna is a rising freshman undergraduate at the University of Florida studying Computer Science in Engineering. This is her second summer interning with ACT3 as she researches the AI algorithm LEABRA and draws insights applicable to the human brain and theories of phenomenal consciousness.

This month QuEST highlights our in-house rising talent! 


Abstract: The complex condition of the airspace during dogfights results in a need for an improved refueling system, which leads to the goal of the CRONUS project. Developing a scenario builder to feed plane configurations into a simulation will provide a visualization aspect of such a system. Essential to this effort is the presence of a graphic interface component that allows the user to customize the entities that are spawned into the interactive world, and a menu developed through Bevy and Rust has the potential to provide that necessary customization by allowing users to select specific aircraft types and configurations.
 
Bio: Anna Kuang is an intern working under Rebecca Servaites. She is a recent graduate from Beavercreek High School and will be attending her first year at Cornell University as a Meinig Family National Scholar and a computer science major this fall. Through her time working with ACT3 and through future experiences, she hopes to explore deeper into the diverse fields of computer science. 
Abstract: Catastrophic interference is the fundamental tendency for a neural network to abruptly and drastically forget some of what it has previously learned upon learning new information. Humans don’t experience memory interference to the extent that artificial neural networks do due to conscious and nonconscious memory systems working together in the brain. Declarative memory, the conscious memory system, is dependent on the Medial Temporal Lobe (MTL) which rapidly encodes memories after just one episode using sparse activation patterns to be able to distinguish similar memories. These memories are then consolidated into the nonconscious memory system, nondeclarative memory, which utilizes other parts of the brain such as the neocortex. Consolidation includes extracting statistical generalities and encoding memories with dense overlapping activation patterns allowing connections to be made between similar memories. These complementary learning systems are what allow memories to be retained by humans even after learning new similar memories, unlike artificial neural networks. The Local Error-Driven Associative Biologically Realistic Algorithm (LEABRA) incorporates some of these conscious memory mechanisms such as flexible inhibition levels and a combination of both error-driven and self-organized learning. This summer I looked at how these mechanisms affect catastrophic interference in artificial neural networks.
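The forgetting dynamic described above can be sketched with a toy model. This is an illustrative example, not LEABRA itself: a single linear unit trained with the delta rule, where overlapping input patterns cause training on a second task to overwrite the weights learned for the first. All patterns and parameters here are made-up assumptions.

```python
# Toy demonstration of catastrophic interference: a single linear unit
# trained with the delta rule forgets task A after training on task B,
# because the two tasks share an input feature (overlapping activations).

def train(w, x, target, lr=0.1, epochs=100):
    for _ in range(epochs):
        error = target - sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

w = [0.0, 0.0]
task_a = ([1.0, 1.0], 1.0)   # task A uses both input features
task_b = ([1.0, 0.0], 0.0)   # task B reuses (and overwrites) the first feature

w = train(w, *task_a)
after_a = predict(w, task_a[0])   # close to 1.0: task A learned

w = train(w, *task_b)
after_b = predict(w, task_a[0])   # close to 0.5: task A partially forgotten
```

Sparse, non-overlapping activation patterns of the kind the MTL uses would assign the two tasks different weights, which is exactly why interference disappears in that regime.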
 
Bio: Amber Joneleit is a rising junior at Purdue University majoring in Artificial Intelligence and Computer Science with a concentration in machine learning. This is Amber’s second summer interning with ACT3. This summer Amber researched reducing catastrophic interference in artificial neural networks using the Local Error-Driven Associative Biologically Realistic Algorithm (LEABRA).
Abstract: The aerospace industry is a safety and mission-critical domain with high consequence, complex operations that require precise actions, decision-making under uncertainty, and the ability to adapt to unexpected events. The integration of artificial intelligence (AI) into aerospace operations typically performed by humans offers the potential for increased efficiency, safety, and decision-making capabilities. However, the success of such systems hinges on the effectiveness of the human-AI team, which intersects at the human-AI interface (HAI). This research develops an evaluation framework for the usability, situational awareness, and workload for human-AI teams in aerospace.

Bio: Britney Rogers is a current Master's student in Aerospace Engineering with a focus in controls at the University of Houston working within the UH Lab for Advanced Learning, Artificial Intelligence, and Control. She previously completed her BS in Mechanical Engineering with a minor in Math at UH and in the fall she will be attending Texas A&M for an Aerospace PhD. 
Abstract: In order to test the effectiveness of a sim-to-sim transfer of a reinforcement learning (RL) agent for satellite control from a lower fidelity simulator to a higher fidelity simulator (Kerbal Space Program), a team of three interns from the Safe Autonomy Team will be entering the AIAA Capture the Satellite Challenge competition, where “participants develop autonomous agents for maneuvering satellites engaged in non-cooperative space operations [in Kerbal Space Program].” The team has built off of the simulation backend used by the Safe Autonomy Team to construct a new training environment using gymnasium that aligns with the goal of the competition. This presentation will show the current progress of this project and where it is headed. Competition information: https://www.aiaa.org/SciTech/program/2025-Capture-the-Satellite-Challenge
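For readers unfamiliar with what a gymnasium-style training environment looks like, here is a minimal sketch following the reset/step interface convention (observation, reward, terminated, truncated, info). The 1-D docking dynamics, the class name, and the reward are all illustrative assumptions; this is not the team's actual simulation backend or the competition environment.

```python
class DockingEnv:
    """Toy 1-D docking task written in the shape of a gymnasium environment.

    State is [position, velocity] of a chaser relative to a target; the
    discrete action chooses thrust in {-1, 0, +1}. All dynamics are made up
    for illustration.
    """

    def __init__(self, dt=0.1):
        self.dt = dt
        self.state = None

    def reset(self, seed=None):
        self.state = [5.0, 0.0]          # start 5 m away, at rest
        return list(self.state), {}      # (observation, info)

    def step(self, action):
        pos, vel = self.state
        thrust = {0: -1.0, 1: 0.0, 2: 1.0}[action]
        vel += thrust * self.dt
        pos += vel * self.dt
        self.state = [pos, vel]
        terminated = abs(pos) < 0.1 and abs(vel) < 0.1   # docked
        reward = -abs(pos)               # shaped reward: get close
        return list(self.state), reward, terminated, False, {}

env = DockingEnv()
obs, info = env.reset()                  # obs = [5.0, 0.0]
```

A real environment built on gymnasium would subclass `gymnasium.Env` and declare observation and action spaces, but the reset/step contract sketched here is the part an RL training loop actually exercises.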

Bio: Arturo de la Barcena III is an intern working under Dr. Kerianne Hobbs and a mechanical engineering master’s student at the University of Houston. His primary research interests are in air and space vehicle control and autonomy, and he has been involved with the Advanced Learning, Artificial Intelligence and Control lab at the University of Houston for over three years.

Abstract: Many people have been exploring the integration of artificial intelligence in American Sign Language interpretation. The use of AI in this way has the potential to enhance communication accessibility for the Deaf and hard-of-hearing communities. By utilizing machine learning algorithms and computer vision techniques, AI systems can recognize and translate ASL gestures into spoken or written language in real-time. This summer, I have been examining existing AI models, relevant datasets, and challenges faced in accurately capturing the nuances of ASL. Additionally, there are ethical implications of AI in this field, including the importance of maintaining cultural sensitivity and ensuring user privacy. Ultimately, in my research, I strive to identify pathways for improving AI-driven ASL interpretation technologies to foster greater inclusivity and break down communication barriers.

Bio: Raelee just finished her second year at Purdue University studying Computer Science. Even though most of her time is spent on campus in Indiana, she loves coming home to Huber Heights, Ohio. While at Purdue, she is a member of the John Martinson Honors College and The Data Mine. In addition to her academic pursuits, she plays the mellophone in the Purdue All-American Marching Band, for which she is excited to take on the role of a student leader this upcoming school year. This summer she has been working under the mentorship of Dr. Michael Mendenhall at ACT3 through the Premier College Intern Program.

This week we move from neural correlates to computational mechanisms of consciousness.
 
Abstract: Neural correlates of consciousness are useful. They are making their way into the clinic to test for residual consciousness in brain-damaged patients. These measures, though, are limited and asymmetrical: if the test gives a positive answer, we are almost sure that the patient is conscious, but if it gives a negative answer, we cannot use it to conclude that a patient is not conscious. What are the implications of this for engineering Artificial Consciousness? We need to move from the neural correlates to the computational mechanisms of consciousness to better understand the underlying processes. This, we suggest, will get us closer to the How and Why we have subjective qualitative experience. We are making progress as a field; please join us this week to continue the discussion.
 
Reading: Storm, J. F., Klink, P. C., Aru, J., Senn, W., Goebel, R., Pigorini, A., Avanzini, P., Vanduffel, W., Roelfsema, P. R., Massimini, M., Larkum, M. E., & Pennartz, C. M. A. (2024). An integrative, multiscale view on neural theories of consciousness. Neuron, 112(10), 1531–1552. https://doi.org/10.1016/j.neuron.2024.02.004
This month we move from neural correlates to computational mechanisms of consciousness.
 
Abstract:
Neural correlates of consciousness are useful. They are making their way into the clinic to test for residual consciousness in brain-damaged patients. These measures, though, are limited and asymmetrical: if the test gives a positive answer, we are almost sure that the patient is conscious, but if it gives a negative answer, we cannot use it to conclude that a patient is not conscious. What are the implications of this for engineering Artificial Consciousness? We need to cover the neural correlates and then move to the computational mechanisms of consciousness to better understand the underlying processes. This, we suggest, will get us closer to the How and Why we have subjective qualitative experience. We are making progress as a field; please join us this week to continue the discussion, starting with the comparative neuroanatomy of relevant neural architectures and mechanisms.
 
Readings:

  • Butler, A. B. (2008). Evolution of the thalamus: a morphological and functional review. Thalamus & Related Systems, 4(1), 35-58.
  • Butler, A. B. (2012). Hallmarks of consciousness. Sensing in Nature, 739, 291-309.
  • Butler, A. B. (2008). Evolution of brains, cognition, and consciousness. Brain Research Bulletin, 75(2-4), 442-449.
We will begin this week with BYOQ, followed by some research updates from our colleague Maui.

Abstract: I haven’t been idle on the neuroscience front, and I have started formalizing some of the ideas that I presented about a year ago on simulating the behavior of simple critters. As a reminder, I am interested in emergent behavior such as that observed in a colony of ants or a hive of honeybees... To my mind, the relationship with qualia is to engage with the question “what is the simplest (least complex) quale that allows one to build much more complex ones?” Pretty much the old story of the whole being larger than the sum of the parts. Also, I have been preparing a set of Jupyter notebooks for background on neuromorphic compute and simulations that we can go over briefly to introduce the background to the specific modeling I am employing.

Bio: Dr. Bogdan Udrea has been fascinated by relative motion in orbit ever since he found out, at a very young age, that gravity is inversely proportional to the square of the distance. He has experience working for companies small and large, government outfits, and as a university professor. In 2014 he founded VisSidus to develop technologies for cooperating spacecraft and spacecraft onboard autonomy. He likes long walks on the beach, and in his spare time is trying to figure out the nature of consciousness in creatures small to large.
This week we are hosting James Hubbard (Texas A&M) and Zhao Sun (Hampton U.) to feature collaboration through our Research Institute in Tactical Autonomy.
 
ABSTRACT: The Music of the Mind: A modal approach to mapping and understanding the space time dynamics of human cognition
 
The long-term goal of this work is to develop and refine a rigorous, canonical modelling approach for mapping and analyzing spatial-temporal brain wave dynamics, in near real-time, using contemporary biomarkers such as the electroencephalogram (EEG). These mappings can then be correlated with human cognitive states like emotions, attention, decision making, etc. More specifically, the approach involves the use of modern output-only system identification techniques to resolve a true state space model, with the brain mapping produced as brain wave modal images for analysis. The nonlinear, nonstationary behavior of the associated brain wave measures and the general uncertainty associated with the brain make it difficult to apply modern system identification techniques to such systems. In preliminary work, an adaptive state estimator was introduced to resolve this difficulty and has been shown to produce high fidelity models with less than 1% error when compared to the concomitant EEG output. While there is a substantial amount of literature on the use of stationary analyses for brain waves, relatively less work has considered the real-time estimation and imaging of brain waves from non-invasive measurements. This work addresses the issue of modelling and imaging brain waves and biomarkers generally, treating the nonlinear and nonstationary dynamics in near real-time. This modal state-space formulation leads to intuitive, physically significant models, which have broad applicability for analysis, classification, and diagnosis. This research falls under the general category of Engineering Medicine (EnMed), an emerging field of study that can offer new, innovative, and effective solutions to modern healthcare challenges. This requires novel approaches that integrate all of science and engineering. The long-term benefits to DoD are broadly in the areas of Human Machine Interaction and Teaming, Brain Computer Interfaces, and intelligent machine agents.
This work will allow these agents to seamlessly integrate with humans while conducting DoD missions.
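As a much-simplified sketch of the output-only identification idea, the example below fits a second-order autoregressive model to a measured signal by least squares and recovers the underlying dynamics from the output alone. The real EEG work described above uses adaptive state estimators and full state-space models; every number and name here is an illustrative assumption on a synthetic, noiseless signal.

```python
# Toy "output-only" system identification: recover the coefficients of a
# second-order autoregressive (AR(2)) process from its output alone by
# solving the 2x2 least-squares normal equations.

# Simulate a damped oscillation y[t] = a1*y[t-1] + a2*y[t-2]
a1_true, a2_true = 1.8, -0.9
y = [1.0, 1.0]
for t in range(2, 200):
    y.append(a1_true * y[-1] + a2_true * y[-2])

# Normal equations for the least-squares fit of y[t] on (y[t-1], y[t-2])
s11 = sum(y[t-1] * y[t-1] for t in range(2, len(y)))
s12 = sum(y[t-1] * y[t-2] for t in range(2, len(y)))
s22 = sum(y[t-2] * y[t-2] for t in range(2, len(y)))
b1 = sum(y[t] * y[t-1] for t in range(2, len(y)))
b2 = sum(y[t] * y[t-2] for t in range(2, len(y)))

det = s11 * s22 - s12 * s12
a1_hat = (b1 * s22 - b2 * s12) / det   # recovers ~1.8 on noiseless data
a2_hat = (s11 * b2 - s12 * b1) / det   # recovers ~-0.9
```

The gap between this sketch and the talk's subject is exactly the hard part: real brain signals are noisy, nonlinear, and nonstationary, which is why an adaptive state estimator is needed rather than a one-shot batch fit.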

BIOS: Dr. James E. Hubbard, Jr. (hubbard@tamu.edu) began his engineering career as an engineering officer in the U.S. Merchant Marine. In this role he served in Vietnam from 1970-1971 under contract to the Military Sea Transport Service (MSTS). He received his B.S., M.S., and Ph.D. from the Massachusetts Institute of Technology and upon graduation joined the Mechanical Engineering faculty there. He has the distinction of being the first African American to receive the Ph.D. in Mechanical Engineering from MIT. He is currently the Oscar S. Wyatt, Jr. ’45 Chair Professor at Texas A&M University and a University Distinguished Professor. University Distinguished Professors at Texas A&M represent the highest level of achievement for its faculty. He is internationally known for his innovative work in the control of adaptive structures and spatially distributed systems for real-time control. He is widely viewed as a founding father of the field of Smart or Adaptive Structures. He is the recipient of the ASME Adaptive Structures and Material Systems Award, the SPIE Lifetime Achievement Award, and the SPIE Innovative Product of the Year Award. He is a Permanent Fellow of the Hagler Institute for Advanced Studies. The Hagler Institute selects its fellows from among the top scholars in the world who have distinguished themselves through outstanding professional accomplishments and significant recognition. Only National Academy Members and Nobel Laureates are considered for Fellow induction. He holds 24 patents and is the co-founder of 4 companies. He has published 4 books and more than 100 technical articles. Hubbard is also a Fellow of the AIAA, SPIE, ASME, and the National Academy of Inventors, and a member of the National Academy of Engineering and The Academy of Medicine, Engineering and Science of Texas (TAMEST). He has a passion for teaching, mentoring, and the generation and motivation of scholarship in his students.

Dr. Zhao (Joy) Sun (ZHAO.SUN@HAMPTONU.EDU) is an associate professor in the Electrical and Computer Engineering Department at Hampton University and her research background is modeling, simulation, and control of complex systems. Before joining Hampton University, she was a research scientist at the National Institute of Aerospace working on intelligent methods and adaptive fault tolerant control strategies. She also has worked as a Summer Visiting Faculty at National Lawrence Berkeley Laboratory on Quantum Computing and Quantum Control and at Stanford University on Safe/Tactical Autonomy. Her research experience includes serving as PI for NSF EIR grant “Integrated Sensor-Robot Networks for Real-time Environmental Monitoring and Marine Ecosystem Restoration in the Hampton River” and IBM-HBCU Quantum Center project “Machine Learning Methodology for Robust Control Design of Quantum Systems”; serving as Co-PI for the NASA ULI project “Safe Aviation Autonomy with Learning-enabled Components in the Loop: from Formal Assurances to Trusted Recovery Methods”, which is led by Stanford University. Collaborating with professors from other universities, she is also working for a DoD UARC (Howard University) task related to computational cognition modeling, which is led by Kevin Schmidt at Air Force Research Laboratory.
This week we will continue our discussion on the Types of Qualia.
 
Abstract: We will begin with BYOQ on the types of qualitative experience that are possible (think "My Octopus Teacher" and what the mollusk experiences versus us humans...). We will then continue with the lecture on the topic from Cap Rogers to continue our transition to better understanding the Types of Qualia. The discernible aspects of our experience are called Qualia, like the color red someone might experience when a 640nm wavelength of light hits their retina. What are all the types of qualia humans are capable of experiencing? What about a bat, or an octopus? This week we will undoubtedly solve the mysteries of qualitative experience and how artificial systems too can generate these qualia...
This week we are hosting Dr. Mary Kinsella to discuss career advancement strategies.
 
ABSTRACT: You’re looking forward to your future career in science and engineering and all it has in store for you. But you wonder if it will align with your talents and interests. If you can balance it with your lifestyle. And if it will bring you joy and fulfillment. These things matter - whatever stage of your career you are in. I have lessons to share that are key to addressing your concerns. We’ll talk about owning your career, keeping yourself on the right career path, and leveraging it to make your unique contribution. Join me for an interactive discussion about your career. And come away with new insights and tools for making it all you want it to be!
 
BIO: Dr. Mary E. Kinsella is CEO at Her Engineering Career, helping engineers and scientists enjoy impactful careers with ease and confidence. She specializes in career strategy for women in engineering, and helps STEM employers improve their work environments to optimize diversity for enhanced innovation. She has 30+ years' experience with AFRL as a project engineer in materials and manufacturing technology and is the founder of AFWiSE. She serves as board secretary and Diversity & Inclusion chair for the Entrepreneurs’ Center in Dayton and is a fellow of the Society of Women Engineers. Find out more about Dr. Kinsella and listen to her podcast at HerEngineeringCareer.com.
Abstract: Consciousness is an important but mysterious aspect of human intelligence.  As AI researchers work toward building computational intelligent systems, what role should consciousness play in such systems?
 
Most of this talk describes steps toward a cognitive architecture to explain the benefits of consciousness to an intelligent agent, specifically in helping it cope with the overwhelming information content of its sensorimotor interaction with the physical world.  First, a relatively small number of dynamic trackers provide continuous access to changing entities within the sensory field, simultaneously providing concise descriptions for symbolic reasoning and access to rich sensory input about those entities in the world.  Second, a coherent sequential narrative is constructed, about 300-500 milliseconds after the fact, attempting to explain experienced events in terms of interactions among the tracked entities.  The trackers and the narrative together provide a concise summary of the overwhelming complexity of sensorimotor interaction, with which the agent can reason and plan.
 
This cognitive architecture proposal addresses what is sometimes called the “Easy Problem” of consciousness.  The “Hard Problem” is:  “Why does consciousness feel like anything at all?”  I offer some observations on perceptual vividness, and a few other philosophical matters related to consciousness.
 
Many of these ideas are introduced in “Drinking From the Firehose of Experience”:  https://web.eecs.umich.edu/~kuipers/papers/Kuipers-aim-08.pdf
 
Short Bio: Benjamin Kuipers has been a Professor of Computer Science and Engineering at the University of Michigan for the last fifteen years. He was previously an endowed Professor in Computer Sciences at the University of Texas at Austin, where he served as Department Chair. He received his B.A. from Swarthmore College, his Ph.D. from MIT, and he is a Fellow of AAAI, IEEE, and AAAS.  His research in artificial intelligence and robotics focuses on the representation, learning, and use of foundational domains of knowledge, including knowledge of space, dynamical change, objects, and actions. He is currently investigating ethics as a foundational domain of knowledge for robots and other AIs that may act as members of human society.
 
Reading: Kuipers, B. (2008). Drinking from the firehose of experience. Artificial Intelligence in Medicine, 44(2), 155–170. https://doi.org/10.1016/j.artmed.2008.07.010 (PubMed: https://pubmed.ncbi.nlm.nih.gov/18774281/)
This week we will dive deep into our (mis)understandings of the Types of Qualia.
 
Abstract: We will begin with BYOQ on the high ground recently gained from renewed understandings of mathematical psychology. A lecture will follow from Cap Rogers to begin our transition to better understanding the Types of Qualia. The discernible aspects of our experience are called Qualia, like the color red someone might experience when a 640nm wavelength of light hits their retina. What are all the types of qualia humans are capable of experiencing? What about a bat, or an octopus? This week we will undoubtedly solve the mysteries of qualitative experience and how artificial systems too can generate these qualia...
This week we are joined by Joe Houpt to discuss cognitive modeling and architectures

Abstract: I will give an overview of systems factorial technology (SFT) in the context of the broader motivations of mathematical psychology. A core principle of mathematical psychology is that the measurement of psychological phenomena is theory-laden and hence our methodology should be explicitly driven by the theory. SFT is a theory-driven methodology focused on measuring how cognitive systems combine distinct sources of information for use. The approach includes formal statements of four logically distinct properties that can distinguish cognitive systems, statistical models to analyze those properties, and guidelines for designing experiments to allow those statistical models to be applied. I will give examples of SFT application to testing interfaces for AI recommender systems and an example of SFT as a meta-theoretical tool applied to testing the ACT-R modeling framework.
 
Bio: Joseph Houpt, Ph.D., is an Associate Professor in the Department of Psychology with a joint appointment in the Departments of Computer Science and Neuroscience at the University of Texas at San Antonio. He is a member of the MATRIX AI Consortium for Human Well-Being and the Cyber Center for Security and Analytics at UTSA. His research interests are in investigating human performance, perception, and cognition through mathematical modeling. He has contributed to several projects analyzing complex, variable processes, including examining causal influences in temporal data. He has published nearly fifty papers, including many in top methods journals in psychology, another thirty papers in books or proceedings, and edited a two-volume book on mathematical psychology. His work has been funded by the NSF, AFOSR, Meta Reality Labs, RTX, and the State of Texas Trauma Research and Combat Care Collaborative. He recently served as the President of the Society for Mathematical Psychology and continues to serve on the editorial boards of the Journal of Mathematical Psychology and Behavior Research Methods.
This week we will synthesize and distill recent conversations into the S3Q framework of consciousness.
 
Consciousness is a Simulated, Situated, and Structurally Coherent representation that suitably balances Stability, Consistency, and Usefulness to generate flexible knowledge. Recent conversations on episodic memory and computational neuroscience approaches to qualia offer novel insights into artificially intelligent systems that can leverage implementations of the described mechanisms. QuEST will summarize the ground gained through mapping recent discussions onto the QuEST S3Q framework for autonomous systems.
 
Reading: Zacharias, G. L. (2019). Autonomous horizons: the way forward (p. xxii). Maxwell Air Force Base, AL: Air University Press.
This week we will have an extended session with Niko Kriegeskorte on geometric analyses of brain representations

Abstract: Comparing task-performing models by their predictions of representational geometries and topologies
Understanding the brain-computational mechanisms underlying cognitive functions requires that we implement our theories in task-performing models and adjudicate among these models on the basis of their predictions of brain representations and behavioral responses. Previous studies have characterized brain representations by their representational geometry, which is defined by the representational dissimilarity matrix (RDM), a summary statistic that abstracts from the roles of individual neurons (or response channels) and characterizes the discriminability of stimuli. The talk will cover (1) recent methodological advances implemented in Python in the open-source RSA3 toolbox that support unbiased estimation of representational distances and model-comparative statistical inference that generalizes simultaneously to the populations of subjects and stimuli from which the experimental subjects and stimuli have been sampled, and (2) topological representational similarity analysis (tRSA), an extension of representational similarity analysis (RSA) that uses a family of geo-topological summary statistics that generalizes the RDM to characterize the topology while de-emphasizing the geometry. Results show that topology-sensitive characterizations of population codes are robust to noise and interindividual variability and maintain excellent sensitivity to the unique representational signatures of different neural network layers and brain regions.
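The RDM summary statistic described above can be sketched in a few lines: for each pair of experimental conditions, compute a dissimilarity between their response patterns. Correlation distance (1 - Pearson r) is one common choice; the unbiased cross-validated estimators the RSA3 toolbox provides are deliberately omitted here, and the example response patterns are made up for illustration.

```python
# Compute a representational dissimilarity matrix (RDM): pairwise
# correlation distances (1 - Pearson r) between condition response patterns.

def corr_distance(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    du = [x - mu for x in u]
    dv = [x - mv for x in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = (sum(a * a for a in du) * sum(b * b for b in dv)) ** 0.5
    return 1.0 - num / den

def rdm(patterns):
    k = len(patterns)
    return [[corr_distance(patterns[i], patterns[j]) for j in range(k)]
            for i in range(k)]

# Hypothetical response patterns: 3 conditions x 4 response channels
patterns = [[1.0, 2.0, 3.0, 4.0],
            [2.0, 1.0, 4.0, 3.0],
            [4.0, 3.0, 2.0, 1.0]]
m = rdm(patterns)   # diagonal ~0; anticorrelated patterns get distance 2
```

Because the RDM only records pairwise dissimilarities, it abstracts away which individual channel did what, which is exactly the property that lets RDMs from models, fMRI voxels, and electrode arrays be compared directly.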

Bio: Nikolaus Kriegeskorte is a computational neuroscientist who studies how our brains enable us to see and understand the world around us. He is a Professor of Psychology and Neuroscience at Columbia University and an affiliated member of the Department of Electrical Engineering. He is a Principal Investigator and Director of Cognitive Imaging at the Zuckerman Mind Brain Behavior Institute at Columbia University. He received his PhD in cognitive neuroscience from Maastricht University, held postdoctoral positions at the University of Minnesota and the U.S. National Institute of Mental Health in Bethesda, and was a Programme Leader at the U.K. Medical Research Council Cognition and Brain Sciences Unit at the University of Cambridge, UK. Kriegeskorte is a co-founder of the conference “Cognitive Computational Neuroscience”.

Reading: Kriegeskorte, N., & Diedrichsen, J. (2019). Peeling the Onion of Brain Representations. Annual review of neuroscience, 42, 407–432. https://doi.org/10.1146/annurev-neuro-080317-061906
​​
This week we will host Dr. Lila Davachi to talk about how sequential event representations are formed.

Lila Davachi is currently a Professor of Psychology at Columbia University. She received her bachelor’s degree in psychology from Barnard College and her Ph.D. in Neurobiology from Yale University. She then conducted her post-doctoral research at the Massachusetts Institute of Technology in the Brain and Cognitive Sciences department. She started her research group at New York University in 2004, where she was Professor of Psychology and Neuroscience and served as the Director of the Center for Learning, Memory and Emotion before moving to Columbia University in 2017. Her scientific contributions have shed light on how dynamic experiences are transformed into lasting memories and how they update knowledge. She places an emphasis on behavioral and neuroimaging investigations into how humans encode and consolidate their experiences, and her work has led to several discoveries, including in the area of sequential event representations and the impact of post-encoding neural activity on memory. Lila received the prestigious Young Investigator Award from the Cognitive Neuroscience Society in 2009 and Columbia University’s Lenfest Distinguished Faculty Award; she is a Provost’s Senior Faculty Teaching Scholar and an elected member of the Society of Experimental Psychologists (SEP) and the Association for Psychological Science (APS).

"I will talk about how sequential event representations are formed de novo, covering our work in this area since our seminal paper in 2011, 'What is an episode in episodic memory?'"

Reading: Ezzyat, Y., & Davachi, L. (2011). What constitutes an episode in episodic memory?. Psychological science, 22(2), 243–252. https://doi.org/10.1177/0956797610393742
​​
Colleagues,
 
This week we will prepare for a deep dive into Event Boundaries in preparation for Dr. Lila Davachi's QuEST seminar next week.
 
We have previously explored Dr. Davachi's recent work on the differential effects of consciously vs unconsciously reactivated memory (https://www.pnas.org/doi/abs/10.1073/pnas.2313604121).
 
This week we will focus on how conscious memories are formed. "We are seeking an understanding of how our perception of event structure (i.e. segmentation) modulates both how those events become organized in memory and the [mechanisms] used to bind information within and across events. Perception, attention, working memory and prediction all interact with encoding processes to determine what will be remembered and how it will be linked [(situated)] with other aspects of our ongoing conscious experience." -- L.D.
 
Reading:
Radvansky, G. A., & Zacks, J. M. (2017). Event Boundaries in Memory and Cognition. Current opinion in behavioral sciences, 17, 133–140. https://doi.org/10.1016/j.cobeha.2017.08.006
​​
This week we will continue the conversation on qualia. What was your subjective experience during encoding and retrieval in the experiment last week?
 
Topic: Last week Katrina S. actively demo'd the multiple memory systems through experimental psychology methods. Briefly, there was an interaction between encoding strategy (Method of Loci or vowel counting) and the type of retrieval (free recall or stem completion).
 
Introspecting on your participation in the study, what did you experience during the learning? What about during retrieval? (i.e., Jocelyn S. asks "what about our reaction")
 
The QuEST framework employs the idea of qualia to drive our research questions. What does it feel like to encode and retrieve an episodic memory? Where do these feelings come from? Are they a part of particular computations? How can we use these qualitative properties of the different memory systems to drive an artificial implementation of biological consciousness?
 
This week we will continue the conversation on qualia by articulating the differences between nonconscious memory retrieval, vague feelings of familiarity with a stimulus, and full-blown conscious recognition with contextual reinstatement.
 
Reading: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4385384/
 
Voss, J. L., Lucas, H. D., & Paller, K. A. (2012). More than a feeling: Pervasive influences of memory without awareness of retrieval. Cognitive Neuroscience, 3(3–4), 193–207. https://doi.org/10.1080/17588928.2012.674935
This week we will have a BYOQ discussion continuing on the topic of complementary learning systems (CLS) theory and its implications for S3Q and intelligent agents. 
 
Topic: We will focus on a well-known observed pattern of neural activity known as repetition suppression (see Barron et al. 2016), a reduction in measured fMRI BOLD signal in neocortex for repeated vs. novel stimuli that occurs independent of conscious awareness - i.e., an implicit memory effect. This effect takes place independently of the hippocampus, in what CLS theory terms the slow-learning, neocortical subsystem. Repetition suppression provides evidence for implicit memory at a meso-scale, representational level in the brain, and supplements macro-scale evidence for the separability of explicit and implicit memory based on data from patients with medial-temporal lobe damage like H.M. Repetition suppression indicates that neocortical long-term knowledge representations may be updated outside of conscious awareness. A main question for our discussion is how the tenets of S3Q can best be aligned with observed implicit memory effects, especially as we put forth the notion that the "cash value" of consciousness is knowledge creation. How might we design experiments using the repetition suppression paradigm to test S3Q?
 
Reading: Barron, H. C., Garvert, M. M., & Behrens, T. E. (2016). Repetition suppression: a means to index neural representations using BOLD?. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 371(1705), 20150355. https://doi.org/10.1098/rstb.2015.0355
​​
Title: A simulated rat pauses to contemplate its future

Abstract: There is now overwhelming evidence that rats and other mammals make their decisions at the start of task trials, and spend relatively little time thereafter. This is consistent with the Rubicon theory (Heckhausen and Gollwitzer, 1987), which postulates two qualitatively distinct phases of mental life: goal selection vs. goal engaged. Among the many other implications of this theory, we procrastinate because once goal-engaged, we will be committed to finishing, or else suffer disappointment. The goal selection process is therefore especially conservative, and it is thus relatively difficult to actually cross the Rubicon into the goal engaged state. I have been developing a large-scale, systems-neuroscience computational model of the mechanisms that drive this and related goal-driven dynamics in the brain, which can shape learning and online processing in powerful ways. It requires the coordinated activity of many brain systems and neuromodulatory pathways to establish this strong future-oriented bias in learning and processing, and it is thus unlikely that these dynamics could simply emerge from generic error-driven learning mechanisms. Indeed, current deep neural networks powered by such mechanisms notably lack evidence of goal-driven, self-motivated behavior. Thus, it may be necessary to reverse-engineer the millions of years of evolution that shaped the mammalian brain to understand how goal-driven learning works, and how it all-too-often breaks down in a wide range of mental disorders that plague humanity.

Bio: Randall O’Reilly is internationally recognized as a founder of the field of Computational Cognitive Neuroscience, publishing a widely-cited textbook (O’Reilly & Munakata, 2000; 2014) and a number of influential papers in this field. He develops large-scale systems-neuroscience computational models of learning, memory, and motivated cognitive control, to learn how neurons give rise to human cognitive function and to inform our understanding of brain-based disorders such as schizophrenia and Parkinson’s disease. Longer bio info avail here: https://ccnlab.org/people/oreilly/bio/
 
Reading: This paper goes along with the talk: https://ccnlab.org/papers/OReilly20.pdf -- this is a somewhat different set of issues from the complementary learning systems work, but the models use deep predictive learning and the hippocampus.
Why are there complementary learning systems in the brain?
 
Half a century of neuroscience research suggests we have multiple forms of learning and memory mechanisms in the brain. Classically, there is a medial temporal lobe system responsible for rapidly encoding facts and events from our lives (e.g., remembering the sweet, sweet taste of Skyline Chili on your birthday last year), while a number of systems, independent of the hippocampus, operate unconsciously and outside of awareness. Neuroscience, psychology, and psychophysics can tell us how these memories are encoded and retrieved, what these memories feel like, and the role they play in our behavior, but it is only through computational modeling that we truly understand why. Why has nature converged on multiple memory stores?
 
This week we will synthesize and distill our questions, in preparation for Randy O'Reilly next week, by reading the classic '95 paper on Why There Are Complementary Learning Systems in the Hippocampus and Neocortex...
 
https://pubmed.ncbi.nlm.nih.gov/7624455/
 
McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological review, 102(3), 419–457. https://doi.org/10.1037/0033-295X.102.3.419
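The core computational argument of the CLS paper can be illustrated with a toy sketch (all sizes, learning rates, and data here are invented for illustration, not taken from the paper's models): a single delta-rule learner trained sequentially on new associations overwrites old ones, while interleaved training on old and new items together, analogous to hippocampally driven replay into neocortex, preserves both.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two sets of random pattern -> target associations ("old" and "new" memories).
X_old, X_new = rng.normal(size=(20, 50)), rng.normal(size=(20, 50))
y_old, y_new = rng.normal(size=20), rng.normal(size=20)

def train(w, X, y, lr=0.01, epochs=200):
    """Delta-rule (LMS) training of a single linear output unit."""
    for _ in range(epochs):
        for x, t in zip(X, y):
            w += lr * (t - w @ x) * x
    return w

def error(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Sequential, focused learning: training on new items degrades the old ones.
w = train(np.zeros(50), X_old, y_old)
w = train(w, X_new, y_new)
seq_err = error(w, X_old, y_old)

# Interleaved (replay-like) learning: old and new items trained together.
X_all, y_all = np.vstack([X_old, X_new]), np.concatenate([y_old, y_new])
int_err = error(train(np.zeros(50), X_all, y_all), X_old, y_old)
```

Catastrophic interference in the sequential regime (seq_err well above int_err) is the failure mode that, on the CLS account, motivates a fast hippocampal store feeding slow, interleaved neocortical learning.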
Title: Seeking the principles of unsupervised representation learning
 
Abstract: Humans and other animals exhibit learning abilities and understanding of the world that are far beyond the capabilities of current AI and machine learning systems. Such capabilities are largely driven by intrinsic objectives without external supervision. Unsupervised representation learning (aka self-supervised representation learning, SSL) aims to build models that find patterns in data automatically and reveal the patterns underlying data explicitly with a representation. Two fundamental goals in unsupervised representation learning are to model natural signal statistics and to model biological sensory systems. These are intertwined because many properties of the sensory system are adapted to the statistical structure of natural signals.

On the one hand, I will show how we can formulate unsupervised representation learning from neural and statistical principles: 1) sparse coding, which provides a good account of neural response properties at the early stages of sensory processing 2) low-rank spectral embedding, which can model the essential degrees of freedom of high-dimensional data. This approach leads to the sparse manifold transform (SMT) and shows how to exploit the structure in a sparse code to straighten the natural non-linear transformations and learn higher-order structure at later stages of processing. SMT has been recently supported by human perceptual experiments and population coding from the primary visual cortex.

On the other hand, we can use reductionism to show that the success of the state-of-the-art (SOTA) joint-embedding self-supervised learning methods can be unified and explained by a distributed representation of image patches. These two seemingly distant approaches have a surprising convergence --- we can show that they share precisely the same learning objective, and the gap in their benchmark performance can also be closed significantly. The evidence outlines an exciting direction for building a theory of unsupervised hierarchical representation and explains how the visual cortex performs hierarchical manifold disentanglement. The tremendous advancement in this field also provides new tools for modeling and analyzing high-dimensional neuroscience data and other emerging signal modalities. However, these innovations are only steps on the path to building an autonomous machine intelligence that can learn as efficiently as humans and animals. In order to achieve this grand goal, we must venture far beyond classical notions of representation learning.
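For readers unfamiliar with the sparse coding principle the abstract starts from, here is a minimal sketch (not the SMT itself; dictionary, signal, and parameters are invented for illustration) of inferring a sparse code by iterative shrinkage-thresholding (ISTA):

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, steps=200):
    """Infer a code a minimizing 0.5 * ||x - D a||^2 + lam * ||a||_1 via ISTA."""
    L = np.linalg.norm(D, ord=2) ** 2  # Lipschitz constant of the smooth term
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a = a - (D.T @ (D @ a - x)) / L                        # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))      # overcomplete dictionary, 256 atoms
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
x = D[:, :3] @ np.array([1.0, -2.0, 1.5])  # signal built from 3 atoms
a = ista_sparse_code(x, D)
```

The L1 penalty drives most coefficients exactly to zero, mirroring the sparse, selective responses observed in early sensory neurons; the SMT then exploits structure in such codes at later stages.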
 
Bio: Yubei Chen is an assistant professor in the ECE department at UC Davis. He worked with Professor Yann LeCun at Meta AI and the NYU Center for Data Science as a postdoctoral researcher. Yubei received his MS/PhD in Electrical Engineering and Computer Sciences and MA in Mathematics at UC Berkeley under Professor Bruno Olshausen. His research interests span multiple aspects of representation learning. He explores the intersection of computational neuroscience and deep unsupervised learning, with the goal of improving our understanding of the computational principles governing unsupervised representation learning in both brains and machines, and reshaping our insights into natural signal statistics. He is a recipient of the NSF Graduate Fellowship and an ICLR Outstanding Paper Honorable Mention Award.

Learn more about him at https://yubeichen.com.
 
References for this talk:
1. The sparse manifold transform (NeurIPS18): https://arxiv.org/abs/1806.08887
2. Minimalistic Unsupervised Representation Learning with the Sparse Manifold Transform (ICLR23): https://arxiv.org/abs/2209.15261, https://iclr.cc/media/iclr-2023/Slides/12555.pdf
3. Bag of Image Patch Embedding Behind the Success of Self-Supervised Learning (TMLR23): https://arxiv.org/abs/2206.08954
Mission Command, this is Nagel, do you read me? This week we will read Lila Davachi's recent paper on memory reactivation (conscious / nonconscious, awake / asleep).
 
This paper naturally follows conversations last week with Anna Schapiro on subjective experience during memory retrieval, privileges of a sleeping brain, and tackling head-on the consciousness business (qualia?).
 
We will synthesize and distill our questions in preparation for Dr. Davachi's visit 22 March to discuss Event Boundaries, the structure of episodic memory, and the re-activation of experienced content (i.e., memory?).
 
PS, it might be easiest to access the article preprint on bioRxiv.
This week we are pleased to welcome Dr. Anna Schapiro to discuss learning representations of specifics and generalities over time.

Abstract: There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days, months, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we learn and represent novel information of specific and generalized types, which we test across statistical learning, inference, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems, with empirical and modeling investigations into how the hippocampus shapes neocortical representations during sleep. Together, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.

Biography: Dr. Anna Schapiro received her B.S. from Stanford University in Symbolic Systems and her Ph.D. from Princeton University in Psychology and Neuroscience. She did a postdoctoral fellowship at Harvard Medical School studying sleep and memory. She is currently an Assistant Professor in the Department of Psychology at the University of Pennsylvania. Her research draws on neuroimaging, behavioral, and computational modeling techniques to investigate how humans learn and consolidate information across time.

image of Dr. Anna Schapiro

Readings:

  • Singh, D., Norman, K. A., & Schapiro, A. C. (2022). A model of autonomous interactions between hippocampus and neocortex driving sleep-dependent memory consolidation. Proceedings of the National Academy of Sciences of the United States of America, 119(44), Article e2123432119. Link to paper
  • Jelena Sučević, Anna C. Schapiro (2023) A neural network model of hippocampal contributions to category learning eLife 12:e77185. Link to paper
  • Zhou, Z., Singh, D., Tandoc, M. C., & Schapiro, A. C. (2023, May 25). Building Integrated Representations Through Interleaved Learning. Journal of Experimental Psychology: General. Advance online publication. Link to paper

Colleagues, this week we will synthesize and distill our questions and comments from the below readings in preparation for computational cognitive neuroscientist Dr. Anna Schapiro's visit next week. Briefly, "there is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future."..."We will be exploring the proposed Complementary Learning Systems framework with broad implications for how we learn and represent novel information of specific and generalized types, to help us understand how structured information in our environment is initially encoded and how it then transforms over time." -- Anna Schapiro
 
Reading:
 
Singh, D., Norman, K. A., & Schapiro, A. C. (2022). A model of autonomous interactions between hippocampus and neocortex driving sleep-dependent memory consolidation. Proceedings of the National Academy of Sciences of the United States of America, 119(44), Article e2123432119. https://doi.org/10.1073/pnas.2123432119
 
Sučević, J., & Schapiro, A. C. (2023). A neural network model of hippocampal contributions to category learning. eLife, 12, e77185. https://doi.org/10.7554/eLife.77185
 
Zhou, Z., Singh, D., Tandoc, M. C., & Schapiro, A. C. (2023, May 25). Building Integrated Representations Through Interleaved Learning. Journal of Experimental Psychology: General. Advance online publication. https://dx.doi.org/10.1037/xge0001415
Topic:  
This week we will hear from Dr. Christopher Baldassano, who will be discussing his work on event memory and the Method of Loci technique.
 
Abstract:
Current models of human memory have been developed primarily based on experiments in which participants are asked to memorize lists of unrelated words or pictures. These models, however, are missing a primary feature of typical, realistic experiences: in the real world, episodic memories are layered on top of "cognitive maps" that capture our general knowledge about the structure of familiar environments. A dinnertime conversation with a friend, for example, could be situated within a spatial map, a social network, or a temporal "restaurant" script. In this project we describe an experimental paradigm that can serve as a testbed for investigating memory models that can strategically use cognitive maps. This novel approach makes use of a unique and understudied subject population of "memory experts" who have spent years or decades optimizing their ability to bind arbitrary information to an internal cognitive map. In a preliminary analysis of novice (n=25) and expert (n=5) users of the Method of Loci technique, fMRI brain imaging shows evidence for the creation of conjunctive codes during encoding that are reinstated during memory retrieval. Overall, this project provides a roadmap to advance the current state-of-the-art in theories of episodic memory and in fMRI experimental methods.
 
Speaker Biography:
Christopher Baldassano is an Assistant Professor in the Psychology Department at Columbia University. He was an undergraduate in Electrical Engineering at Princeton University, received his PhD in Computer Science at Stanford University, and was a postdoc at the Princeton Neuroscience Institute. His lab's research focuses on how knowledge about the world - including semantic knowledge, temporal structure, spatial maps, or schematic scripts - is used to understand and remember complex naturalistic experiences. By applying machine learning techniques to data from behavioral and neuroimaging experiments, his work aims to uncover how dynamic representations in the mind and brain during perception lead to the formation of event memories.
 
Reading:

  • Bird, C. M. (2020). How do we remember events? Current Opinion in Behavioral Sciences, 32, 120–125. Link to reading
Summary: This week, Dr. Othalia Larue will describe a cognitive model of the hypothesized psychological processes that memory athletes competing in Speed Cards events leverage to instantiate a Memory Palace (following the Person-Action-Object Dominic System). More specifically, we will look at which cognitive mechanisms support overlearning and the differences between novice and expert memory athletes.
 
Bio: Dr Larue is a research scientist at Parallax Advanced Research. She obtained her Ph.D. in Cognitive AI from the University of Quebec in Montreal. Her research interests include cognitive architectures, cognitive modeling of emotions, dual-process theories (co-existence of heuristic and analytic behaviors), metacognition (including metacognitive trigger mechanisms), and the modeling of individual differences in memory and reasoning, as well as how those models can inform the design of adaptive and autonomous intelligent agents.

I'll use the memory athlete to go over different mechanisms of ACT-R that are useful to explain how procedures become implicit. And if I have time - I'll go over either analogy work or trust in automation work (deciding today).

Reading: I will stick to the paper you sent previously (memory athlete), but if you can add a second, this one, which goes over another implementation of the Feeling of Rightness (as a bonus maybe - it's short):
Larue, O., Hough, A., & Juvina, I. (2018). A Cognitive Model of Switching Between Reflective and Reactive Decision Making in The Wason Task. In Proceedings of the Sixteenth International Conference on Cognitive Modeling (pp. 55-60). https://www.researchgate.net/publication/328280252_A_cognitive_model_of_switching_between_reflective_and_reactive_decision_making_in_the_Wason_task

And this one would be another bonus about cog architectures in general if people want to learn more about cognitive architectures (it's a textbook chapter), not just ACT-R:
Larue, O., Bourdon, J. N., Legault, M., & Poirier, P. (2022). Mental Architecture—Computational Models of Mind. In Mind, Cognition, and Neuroscience (pp. 164-182). Routledge.
This week we will describe the Memory Palace and Person-Action-Object Dominic System for mental athletes competing in Speed Cards events. A cognitive model of these psychological processes will be presented to harden our understanding of the underlying mechanisms, and experiments will then be hypothesized to further this knowledge base.
 
Reading: https://arxiv.org/abs/2303.11944
Following our discussion last week with the organizers of the USA Memory Championships Tony and Michael D., this week we will fine-grain our description of the methods used to prepare for and compete in the Speed Cards event in memory competitions, based on the subjective report of World Memory Championship Grandmaster Nelson Dellis.
 
We will begin with a review of the relevant QuEST lecture material (e.g., link game, chunking, etc.), and then continue to explore the reported experiences of mental athletes through the Simulated, Situated, and Structurally coherent Qualia (S3Q) framework of artificial consciousness, in order to propose a set of neuroscience and cognitive modeling experiments to help further understand the boundaries of expert memory performance.
 
With the psychology and neuroscience mechanisms successfully mapped onto Speed Cards and the S3Q theory, we will then describe approaches to computational modeling of these phenomena for use in implementing novel artificially intelligent agents. 
 
Covering these topics in-depth over several weeks, the conversation could naturally transition to discussing the "types of qualia," as well as a potential thread on discussing "basal cognition..."
 
BONUS: check out the card trick at 47 minutes and prepare your thoughts: https://www.youtube.com/live/kbfSR6TfJ7Y?si=B4E330SLqD6H2FHf
 
Reading: 

  • Schmidt, K., Larue, O., Kulhanek, R., Flaute, D., Veliche, R., Manasseh, C., ... & Rogers, S. (2023). Representational Tenets for Memory Athletics. arXiv preprint arXiv:2303.11944.
This week we will describe the current state of world-class memory competitions, including the methods used to prepare for and compete in memory competitions, based on the subjective report of World Memory Championship Grandmaster Nelson Dellis. We will then explore the reported experiences through the lens of the Simulated, Situated, and Structurally coherent Qualia (S3Q) theory of consciousness, in order to propose a set of experiments to help further understand the boundaries of expert memory performance.

Reading: https://arxiv.org/abs/2303.11944
Schmidt, K., Larue, O., Kulhanek, R., Flaute, D., Veliche, R., Manasseh, C., ... & Rogers, S. (2023). Representational Tenets for Memory Athletics. arXiv preprint arXiv:2303.11944.
This week we will continue our discussion on the creation of flexible memory with the structure of qualia.
 
We will use the QuEST work with USA Memory Champion Nelson Dellis to provide a framework for qualia-based memory encoding. Briefly, a well-practiced mnemonic strategy (e.g., The Person-Action-Object Dominic System) can be leveraged to rapidly place a narrative (representing 3 playing cards) into a location in a Memory Palace, with the world record holder memorizing a 52-card deck in 12 seconds!
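The compression arithmetic behind that feat can be sketched in a few lines (the person/action/object mappings below are invented placeholders, not any athlete's actual system): each consecutive triple of cards becomes one composite image placed at one locus of the Memory Palace.

```python
# Toy sketch of Person-Action-Object (PAO) encoding for a 52-card deck.
# Each card has a fixed person, action, and object; three cards in sequence
# combine into a single composite image stored at one palace locus.
deck = [f"{rank}{suit}" for suit in "SHDC"
        for rank in ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]]

# Placeholder mappings; a real athlete memorizes 52 vivid entries per table.
person = {card: f"person({card})" for card in deck}
action = {card: f"action({card})" for card in deck}
obj = {card: f"object({card})" for card in deck}

def encode_deck(cards, loci):
    """Map consecutive card triples to composite PAO images at palace loci."""
    images = []
    for i in range(0, len(cards) - 2, 3):
        p, a, o = cards[i], cards[i + 1], cards[i + 2]
        images.append((loci[i // 3], person[p], action[a], obj[o]))
    return images

loci = [f"locus_{n}" for n in range(18)]
images = encode_deck(deck, loci)  # 52 cards -> 17 triples (+ 1 leftover card)
```

The point of the sketch is the chunking: 52 arbitrary items are recoded into roughly 18 richly structured images, each bound to a pre-learned spatial context, which is exactly the kind of situated, simulated representation S3Q is interested in.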
 
What can we glean about learning and memory systems by characterizing the experience of these competitive memory athletes?
 
We will then transition to discussing the Types of Qualia. What is a quale (examples)? Is there a logical parsing for Types of Qualia?
 
Reading: Schmidt, K., Larue, O., Kulhanek, R., Flaute, D., Veliche, R., Manasseh, C., ... & Rogers, S. (2023). Representational Tenets for Memory Athletics. arXiv preprint arXiv:2303.11944.
 
https://arxiv.org/abs/2303.11944
This week we will discuss operating characteristics and mechanisms of conscious and nonconscious learning and memory.
 
What are the roles of qualia in generating flexible representations?
 
Much prior research on memory systems has focused on establishing dissociations between different types of memory based on behavior, subjective experience, and the brain: explicit memory depends on the medial temporal lobe and is thought to operate consciously through a relatively slower processing bottleneck, while implicit memory is a term for all other learning operating outside subjective awareness and not dependent on the medial temporal lobe. These implicit, nonconscious learning and memory processes are acquired incrementally and exhibit a hallmark inflexibility, characteristic of mainstream machine learning approaches (i.e., taking hundreds of trials to mature a representation that is brittle when inferencing outside the training distribution). Conscious learning and memory, in contrast, appears to flexibly leverage a relational knowledgebase, allowing for transfer of learning to novel contexts (i.e., unexpected queries).
 
This week we will leverage mainstream cognitive neuroscience to explore some operating characteristics and mechanisms of conscious learning and memory.
 
Readings:
Scoville, W. B., & Milner, B. (1957). Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery & Psychiatry, 20, 11–21. https://doi.org/10.1136/jnnp.20.1.11
 
Reber, A. S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning & Verbal Behavior, 6(6), 855–863. https://doi.org/10.1016/S0022-5371(67)80149-X.
Kabrisky Lecture 2024

Every January the QuEST group uses the first meeting of the calendar year to present a ‘state of QuEST’ lecture in honor of a founding member of the QuEST meetings, Dr. Matthew “Special K” Kabrisky. This lecture is designed to bring anyone up to speed on how we use terms and to communicate what we seek to accomplish.

  • QuEST is an analytical and software development approach to improve human-machine team decision quality over a wide range of stimuli, handling unexpected queries and contextual adaptation. QuEST is focused on creating computer-based decision aids and also decision engines that may be embedded in platforms interacting with the world. QuEST seeks to engineer solutions that provide the advantages commonly associated with quick, reflexive intuitive reasoning, as well as the advantages often associated with “conscious,” context-sensitive thinking.
  • QuEST also seeks to provide a mathematical framework to understand what can be known by a group of people and their computer-based decision aids about situations, to facilitate prediction of when more or different people (different training) or more / different computer aids are necessary to acceptably make a particular decision. Can a given situational complexity be represented acceptably by the representational capacity of the group of people and computer agents available?
  • For 2024 we will also explore the idea that there is only one representation: how knowledge is structured and the processes that are used to create, maintain, and access that knowledge. The ideas of Sys1 and Sys2 are just models of ways to think about different cognitive capabilities and challenges. We posit that these capabilities / challenges can be addressed using the same representation. Qualia provide insight into the process used by nature to create, maintain, and exploit a vocabulary of cognition, the one representation. That representation can be used in multiple ways, using multiple processes that can be modeled as sys1 / sys2 cognition. By studying qualia, specifically the ‘what it is like’ to have a conscious experience, we can get insight into the vocabulary of cognition, the representation, and thus gain insights applicable to the QuEST goal of creating modern decision aids.
  • Qualia can be studied by investigating the neural basis, the behavioral/functional attributes, and/or the phenomenology of the experience. QuEST has focused on examining the phenomenology of the experience, the qualia, and using that information to advance the S3Q Theory of Consciousness.

Dr. Matthew Kabrisky was an Air Force pioneer and innovator. From Air Force aviator in the 1950s to professor, mentor, and researcher, his discoveries paved the way for many modern technological advancements. He developed theories of how the human brain processes information to recognize visual objects. This work directly led to the innovation of implanted electrodes for those afflicted with diseases such as epilepsy and injuries that resulted in paralysis. He was the leading international expert on the physiological symptoms of space adaptation sickness, i.e., motion sickness. His research led NASA to a better understanding of, and an approach to mitigating, the effects of space environments on astronauts. His research in the area of robust speech recognition laid critical foundations for the development of DoD and private industry products ranging from voice-activated controls in advanced tactical aircraft, to aids for the disabled, to industrial process control. In the 1990s, he helped lead a team of engineers that developed the world’s most accurate breast cancer detection system. This highly successful product has helped detect thousands of breast cancers earlier than would otherwise have been possible. Dr. Kabrisky’s pioneering efforts paved the way for current innovations across the Air Force and the Nation.

Click below for past QuEST guest speakers, topics and talks.
Sleep, Memory and Dreams:  A Unified View
Dr. Robert Stickgold, PhD
Harvard Medical School and Beth Israel Deaconess Medical Center, Boston MA USA
 
The benefits that sleep confers on memory are surprisingly widespread.  For simple procedural skills – how to ride a bicycle or distinguish different coins in one’s pocket – a night of sleep or an afternoon nap following learning leads to an absolute and dramatic improvement in performance. Sleep also stabilizes verbal memories, reducing their susceptibility to interference and decay, processes that all too easily lead to forgetting. 

But the action of sleep can be more sophisticated than simply strengthening and stabilizing memories.  It can lead to the selective retention of emotional memories, or even of emotional components of a scene, while allowing other memories and parts of scenes to be forgotten. It can extract the gist from a list of words, or the rules governing a complex probabilistic game. It can lead to insights ranging from finding the single word that logically connects three apparently unrelated words, to discovering an unexpected rule that allows for the more efficient solving of mathematical problems. It can facilitate the integration of new information into existing networks of related information and help infants learn artificial grammars. Disruptions of normal sleep in neurologic and psychiatric disorders can lead to a failure of these processes.

Dreams appear to be part of this ongoing memory processing, and can predict subsequent memory improvement. The NEXTUP (Network Exploration to Understand Possibilities) model of dreaming proposes that dreaming aids complex problem solving by supporting divergent creativity, acting more by exploring a problem's "solution space" than by searching for the solution, itself.

SPEAKER BIO: Robert Stickgold is a professor of psychiatry at Beth Israel Deaconess Medical Center and Harvard Medical School, and is a visiting professor at MIT’s Media Lab.  He has published over 100 scientific publications, including papers in Science, Nature, and Nature Neuroscience.  His work has been written up in Time, Newsweek, The New York Times, The Boston Globe Magazine, and Seed Magazine, and he has been a guest on The NewsHour with Jim Lehrer and NPR’s Science Friday with Ira Flatow several times, extolling the importance of sleep. He has spoken at the Boston Museum of Science, the American Museum of Natural History in New York, and NEMO, the Amsterdam museum of science. His current work looks at the nature and function of sleep and dreams from a cognitive neuroscience perspective, with an emphasis on the role of sleep and dreams in memory consolidation and integration.  In addition to studying the normal functioning of sleep, he is currently investigating alterations in sleep-dependent memory consolidation in patients with schizophrenia, autism spectrum disorder, and PTSD.  His work is currently funded by NIMH. He is coauthor, with Antonio Zadra, of the new book When Brains Dream.
Why do we dream?
 
I (Kevin) am awake and conscious as I am writing this, vividly experiencing the world around me (e.g., 640 nm wavelengths of light currently appear red to me). At night, I fall asleep, become unconscious, and these qualia go away, but as my sleep stages progress I enter Rapid Eye Movement sleep, and I start to dream. I am asleep yet am vividly experiencing a virtual reality of thoughts, feelings, emotions, etc. (i.e., the qualia come back).
 
What is the function of this qualitative experience while we are asleep?
 
Over the next several weeks we will be exploring the function of dreams as we prepare for guest speaker Dr. Robert Stickgold's presentation 22 December on his model of dream function, NEXTUP (Network Exploration to Understand Possibilities).
We're continuing the conversation. The first QuEST meeting of any calendar year is the Kabrisky Memorial Lecture. It is meant to provide an introduction for those who have not been on this journey with us to get the basic ideas we are pursuing. That is always a very difficult thing to do in a single talk or a couple of talks. What we have done traditionally is use some of the December meetings to capture some key ideas we think are critical to insert into that Kabrisky Lecture, often focusing on what is new to our discussions for that year.
 
So come to QuEST this Friday with any topic. For example, something we’ve covered this year that has impacted your thinking about consciousness; we will discuss where and how it needs to be assimilated.
This week, our own Dr. Bogdan "Maui" Udrea will lead an informal discussion on some non-human animal cognition questions that he has been considering. 
 
In the 2022 Air Force Chief Scientist Workshop on AI Dr. Yann LeCun mentioned that human cognition is mostly based on observation.

Simple animals, such as flies, seem to spend a relatively small amount of time performing observation to enable functions necessary for survival and reproduction. Does this mean that flies are “born ready” with certain necessarily simple representations of the environment in which they function?

Studies of more complex insects, such as honeybees (with a nervous system of about 900,000 neurons), have shown that they possess “numerical cognition” that allows them to perform addition and subtraction. Howard et al. [1] state that, in order to perform addition and subtraction, honeybees “acquire long-term rules and use short-term memory,” which the authors trained during the experiments. Moreover, the authors state that the results of their study “suggest the possibility that honeybees and other nonhuman animals may be biologically tuned for complex numerical tasks.” Have honeybees evolved the ability to perform complex numerical tasks for survival alone, or do their social skills and collective decision making [2] have something to do with it?
 ---
[1] Scarlett R. Howard et al., “Numerical cognition in honeybees enables addition and subtraction,” Sci. Adv. 5, eaav0961 (2019).
[2] Seeley, T. D. (2010). Honeybee Democracy. Princeton University Press.

Safe and Stable Learning for Agile Robotics

Abstract: My research group at Caltech (https://aerospacerobotics.caltech.edu/) is working to systematically leverage AI and Machine Learning techniques towards achieving safe and stable autonomy of safety-critical robotic systems, such as robot swarms and autonomous flying cars. Another example is LEONARDO, the first bipedal robot that can walk, fly, slackline, and skateboard. Stability and safety are often research problems of control theory, while conventional black-box AI approaches lack much-needed robustness, scalability, and interpretability, which are indispensable to designing control and autonomy engines for safety-critical aerospace and robotic systems. I will present some recent results using contraction-based incremental stability tools for deriving formal robustness and stability guarantees of various learning-based and data-driven control problems, with some illustrative examples including learning-to-fly control with adaptive meta learning, learning-based swarm control and planning synthesis, and optimal motion planning with stochastic nonlinear dynamics and chance constraints. Recent results on neural-network-based contraction metrics (NCMs) as a stability certificate for safe motion planning and control will also be discussed.

Bio: Soon-Jo Chung is the Bren Professor (a named professorship) of Control and Dynamical Systems at the California Institute of Technology.  Prof. Chung is also a Senior Research Scientist at the NASA Jet Propulsion Laboratory. Prof. Chung received the S.M. degree in Aeronautics and Astronautics and the Sc.D. degree in Estimation and Control with a minor in Optics from MIT in 2002 and 2007, respectively. He received the B.S. degree in Aerospace Engineering from KAIST in 1998. He is the recipient of the UIUC Engineering Dean's Award for Excellence in Research, the Arnold Beckman Faculty Fellowship of the University of Illinois Center for Advanced Study, the AFOSR Young Investigator Program (YIP) award, the NSF CAREER award, a 2020 Honorable Mention for the IEEE Robotics and Automation Letters Best Paper Award, three best conference paper awards (2015 AIAA GNC, 2009 AIAA Infotech, 2008 IEEE EIT), and five best student paper awards. Prof. Chung is an Associate Editor of the IEEE Transactions on Automatic Control and the AIAA Journal of Guidance, Control, and Dynamics.  He was an Associate Editor of the IEEE Transactions on Robotics, and the Guest Editor of a Special Section on Aerial Swarm Robotics published in the IEEE Transactions on Robotics.

Key reference papers:

  1. M. O’Connell*, G. Shi*, X. Shi, K. Azizzadenesheli, A. Anandkumar, Y. Yue, and S.-J. Chung, “Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds,” Science Robotics, vol. 7, no. 66, May 4, 2022. (Paper) (Caltech press release) (YouTube video 1) (YouTube video 2)
  2. H. Tsukamoto, S.-J. Chung, and J.-J. E. Slotine, “Contraction Theory for Nonlinear Stability Analysis and Learning-based Control: A Tutorial Overview,” Annual Reviews in Control, vol. 52, 2021, pp. 135-169. (PDF)
  3. Hiroyasu Tsukamoto, Benjamin Rivière, Changrak Choi, Amir Rahmani, Soon-Jo Chung, “CaRT: Certified Safety and Robust Tracking in Learning-based Motion Planning for Multi-Agent Systems,” IEEE Conference on Decision and Control (CDC), Singapore, December 2023. (PDF)
  4. Y. K. Nakka, A. Liu, G. Shi, A Anandkumar, Y. Yue, and S.-J. Chung, “Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems,” IEEE Robotics and Automation Letters, vol. 6, no. 2, April 2021, pp. 389-396. (PDF)
  5. Y. K. Nakka and S.-J. Chung, “Trajectory Optimization of Chance-Constrained Nonlinear Stochastic Systems for Motion Planning Under Uncertainty,” IEEE Transactions on Robotics, vol. 39, no. 1, Feb 2023, pp. 203-222. (PDF) (Youtube Video)
  6. B. Rivière, W. Hoenig, Y. Yue, and S.-J. Chung, “GLAS: Global-to-Local Safe Autonomy Synthesis for Multi-Robot Motion Planning with End-to-End Learning,” IEEE Robotics and Automation Letters, vol. 5, no. 3, July 2020, pp. 4249-4256. Honorable Mention, IEEE RA-L Best Paper Award. (PDF) (YouTube Video)
  7. More papers can be found at Publications — Autonomous Robotics and Control Lab at Caltech.

This week we will be discussing the phenomenon of Choice Blindness with Dr. Robert Patterson, Senior Psychologist at the Air Force Research Laboratory.
 
Robert Patterson received the Ph.D. degree in Experimental Psychology from Vanderbilt University in 1984. He was a Post-Doctoral Research Fellow in Neuroscience with Northwestern University from 1985 to 1987. From 1991 to 2010, he was an Assistant, Associate (tenured), and Full Professor of Experimental Psychology and Neuroscience with Washington State University. In 2010, he willingly resigned his faculty position and took a full-time position with the Air Force Research Laboratory. He is currently a Senior Psychologist with the 711 Human Performance Wing, Air Force Research Laboratory. His expertise is in human visual perception and decision making. Dr. Patterson is a Member of the IEEE, the IEEE Computational Intelligence Society, the Human Factors and Ergonomics Society, and the System Dynamics Society. He was a recipient of the 2012 Harry G. Armstrong Scientific Excellence Award of the Air Force Research Laboratory.
 
Suggested readings:

https://www.sciencedirect.com/science/article/abs/pii/S0010027710001381

https://www.science.org/doi/10.1126/science.1111709

https://www.frontiersin.org/articles/10.3389/fnhum.2014.00166/full

https://psycnet.apa.org/record/1978-00295-001

TITLE:  From Human to Neuromorphic HDR Recognition 
Speaker:  Dr. Chou P. Hung (chou.p.hung.civ@army.mil)
 
ABSTRACT:  The Army Research Office supports many topics in fundamental research to advance science and technology for the future Army. The Neurophysiology of Cognition program supports non-medically-oriented, high-risk, high-reward basic research that will enable discovery of the appropriate molecular, cellular, systems, and behavioral-level codes underlying cognition and performance across multiple time scales. An overarching goal of the program is to foster advances in a broad range of experimental, computational, and theoretical approaches applied to animal models, humans, and data. Basic research opportunities are sought in two primary research thrusts within this program: (i) Evolutionary and Revolutionary Interactions (with Real and Mixed Worlds) and (ii) Neural Computation, Information Coding, and Translation.

This talk will describe an ongoing basic research project at the intersection of these two thrusts, leveraging previous neuroscience efforts to understand visual processing to develop neuromorphic approaches for real-world resilient autonomous sensing and navigation. The brain has specialized circuits and computations for visual processing, and one of the challenges is the high dynamic range (HDR) luminance of real-world scenes. Previous animal and human research uncovered a putative circuitry for how the brain integrates contextual luminance and shape cues to enable rapid visual recognition. In a project jointly funded by ARO, ITC-IPAC, and ONRG, Prof. Lo’s team at National Tsing-Hua University (Taiwan) has been developing a neuromorphic algorithm to reproduce human behavior in HDR perception and testing this circuit as a pre-processor for a DNN visual algorithm (Detectron2). Initial results show improvements in the system’s localization performance under natural occlusion in a dense foliage environment. 
 
BIOGRAPHY:   Dr. Chou Hung is the Program Manager for Neurophysiology of Cognition at the US Army Research Office and has been a researcher at the Army Research Laboratory since 2015 in the areas of human cognition and bio-inspired novel AI development. Previously, he was a professor of neuroscience at Georgetown University and at National Yang-Ming University in Taiwan, where he led research to discover neural circuits and representations underlying visual perception. Dr. Hung’s research interests span from living neurons, circuits, mechanisms, and behaviors underlying real and augmented perception, to biological and AI-aided learning and decision-making, to brain-inspired computational principles for novel AIs for complex reasoning. Dr. Hung was trained as a systems neurophysiologist and received his PhD from Yale University in 2002. 

READING: https://dl.acm.org/doi/10.1145/3589737.3605990
“Neuromorphic luminance-edge contextual preprocessing of naturally obscured targets,” White et al., 2023

In this talk I will discuss promising synergies between consciousness and meditation research emerging from our recent work combining high-density EEG with neurophenomenological approaches. I will first briefly mention some exciting recent progress in the field of consciousness science as a whole, then discuss bidirectional interactions between the consciousness and contemplative sciences: for example, how meditative states allow neuroscientists to challenge theoretical assumptions about the physical substrate of consciousness in the human brain, and how neurophenomenological studies can help interpret the changes in brain activity consistently observed across a range of traditions as a result of meditation practice.

Dr. Melanie Boly is a neurologist and neuroscientist with a joint appointment in Neurology and Psychiatry at UW-Madison. For more than twenty years she has been studying altered states of consciousness such as vegetative state, sleep, anesthesia, seizures, and more recently meditation, working under the mentorship of Profs. Steven Laureys, Pierre Maquet, Adrian Owen, Marcello Massimini, Karl Friston, Giulio Tononi, Hal Blumenfeld and Catherine Schevon. Her research, which combines neuroimaging techniques (e.g., PET, functional MRI, TMS-EEG, high-density EEG and intracranial recordings) with the theoretical framework of the Integrated Information Theory, aims to uncover the neural mechanisms of the level and contents of consciousness in healthy subjects and neurological patients. 

Dr. Boly’s work has led to numerous publications in international peer-reviewed journals (>180 Pubmed-indexed articles, current Google Scholar H-index 88), as well as invited talks at international conferences. She is board certified in neurology in both Europe and the US and is currently performing 75% FTE research and 25% clinical work as an epileptologist.

Recommended readings:

This week we will discuss articles circulating with respect to nonhuman animal behavioral paradigms of learning, memory, and consciousness.
 
Our colleague Bert P. posted an article on associative learning in jellyfish, while our colleague Bogdan U. sent some articles on flexible statistical inference in crows.

We are asked to address specifically what in these experiments could relate to consciousness research, in particular the S3Q representation in the birds, and how the behavioral experiments might be modified to tease out S3Q (https://arxiv.org/abs/2103.12638).
 
We will also prepare for Dr. Melanie Boly's visit next week, perhaps to discuss electroencephalographic experiments on expert meditation practitioners, and Integrated Information Theory.
 

Biography: Dr. Grace Hwang is a Program Director at the National Institute of Neurological Disorders and Stroke where she manages projects in the Technologies for Neural Recording and Modulation portfolio as part of the BRAIN Initiative. Prior to joining the NIH, Dr. Hwang was a Program Director at the National Science Foundation while based at her home institution, Johns Hopkins University, with appointments in both the Applied Physics Laboratory and the Kavli Neuroscience Discovery Institute. At NSF, she managed the Disability and Rehabilitation Engineering program while also spearheading cross-agency initiatives including the Emerging Frontiers in Research and Innovation's Brain-Inspired Dynamics for Engineering Energy-Efficient Circuits and Artificial Intelligence (BRAID) topic. Her research career at Johns Hopkins spanned neuroscience, artificial intelligence, dynamical systems analysis, neuromodulation, brain-machine interface, and robotics. She served as a Principal Investigator on an NIH BRAIN award to investigate neural stimulation using sonogenetics and on an NSF award to develop a brain-inspired algorithm for multi-agent robotic control.
 
Abstract: Computing demands are vastly outpacing the improvements made through Moore’s-law scaling; transistors are reaching their physical limits. Modern computing is on a trajectory to consume far too much energy and to require even more data. Yet the current paradigm of artificial intelligence and machine learning casts intelligence as a problem of representation learning and weak compute. These drawbacks have ignited interest in non-von Neumann architectures for compute and new types of brain-informed/inspired learning. This talk will highlight recent innovations in neuromorphic hardware and algorithms, and explore emerging synergies between neuromorphic engineering and engineered organoid intelligence, a nascent field that we refer to as convergence intelligence. The talk will also build on Joseph Monaco’s April 2023 QuEST talk, “Neurodynamical Articulation: Decoupling Intelligence from the Experiencing Self,” to describe the importance of dynamics for achieving convergence intelligence. Relevant federal funding opportunities and strategies will be presented, along with the presenter’s personal outlook for applying convergence intelligence to several application domains, including brain/body interface technologies for improving health.
 
Suggested Reading Materials:

Optional Reading:

Abstract: The concept of the self is intimately related to notions of phenomenal consciousness. The self is discussed through viewpoints from western philosophy, eastern philosophy, cognitive psychology, and neuroscience. A tentative hypothesis is advanced that framing the problem of conscious AI as a problem of the self (and self as a memory system) might suggest implementation approaches that lead us closer towards AI with self-referential capabilities and improvements in human-machine teaming.
Light reading: pages 28-37 of Chapter 3 in Mace, John, ed., The Organization and Structure of Autobiographical Memory. Oxford University Press, 2019.
Slightly less-light reading: Conway, M. A. (2005). Memory and the self. Journal of Memory and Language, 53(4), 594-628.
Katrina Schleisman: Biography
This week will feature Raj Sharma and Aaron Craig with the Safe Autonomy Team in ACT3.
This week QuEST will informally discuss the Air Force's strategic vision for conscious machines: 
  • Autonomous Horizons vol. 2: The Way Forward (Zacharias, 2019)
  • Bring Your Own Question: topics could include desired peer, cognitive, and task flexibilities, human-machine alignment, etc. 
  • The goal is to communicate the Air Force's documented needs for exploiting qualia for sensor technology.