AFRL/ACT3 - QuEST Talk Archives


Qualia Exploitation of Sensor Technology

QuEST is focused on addressing the limitations of computational intelligence. Despite more than half a century of effort, computational intelligence has not produced anything resembling a general-purpose solution. All current approaches are built on objective representations of the world. Objective representations are characterized by their attempt to capture attributes of the environment under consideration as accurately as possible in a physical sense. The results of these physical-attribute sensing steps are then usually fed into exploitation algorithms or displayed to human operators, who make the decisions.

There are several flaws in this approach. First, we must assume that the sensors were perfectly manufactured and are all identical. Any variation in sensor manufacture will lead to differences in how the world is observed and, in conventional approaches, in how it is represented. Second, we must have complete knowledge of what our system will encounter, captured either in the data set or in the models used for development, along with knowledge of how this data might vary across the complete range of possible operating conditions. Slight changes in operating conditions can lead fragile exploitation algorithms operating on an objective representation to completely different conclusions.

Nature has adopted an approach that deals with sensor imperfections and dramatic differences in input reliably and effectively. These physiological solutions are based on the generation of a subjective representation that can adapt to ever-changing sensors, to the unique information each sensor experiences as a result of its unique embedding, and to information associated with the exploitation approach to be applied to the sensor data. Subjective representations that are not tied to maintaining fidelity with physics-based reality are a critical component of robust decision making.

The aspects of your subjective representation that you are aware of are called qualia. The color you see when looking at a scene or the pain you feel when you stub your toe are examples of qualia. Qualia are completely internal and completely individualized. They form the basis set from which the subjective world model is constructed. QuEST (Qualia Exploitation of Sensor Technology) seeks to develop a general-purpose computational intelligence system that captures the beneficial engineering aspects of qualia-based solutions. A QuEST system will have the ability to detect, distinguish, and characterize entities in the environment, including a representation of itself. It will also be able to construct a Theory of Mind for other entities in the environment, enabling conclusions to be drawn about the internal feelings of sentient entities, such as sentiment and intent.


Click below for past QuEST guest speakers, topics and talks.
This week we will hear from Dr. Christopher Baldassano, who will be discussing his work on event memory and the Method of Loci technique.
Current models of human memory have been developed primarily based on experiments in which participants are asked to memorize lists of unrelated words or pictures. These models, however, are missing a primary feature of typical, realistic experiences: in the real world, episodic memories are layered on top of "cognitive maps" that capture our general knowledge about the structure of familiar environments. A dinnertime conversation with a friend, for example, could be situated within a spatial map, a social network, or a temporal "restaurant" script. In this project we describe an experimental paradigm that can serve as a testbed for investigating memory models that can strategically use cognitive maps. This novel approach makes use of a unique and understudied subject population of "memory experts" who have spent years or decades optimizing their ability to bind arbitrary information to an internal cognitive map. In a preliminary analysis of novice (n=25) and expert (n=5) users of the Method of Loci technique, fMRI brain imaging shows evidence for the creation of conjunctive codes during encoding that are reinstated during memory retrieval. Overall, this project provides a roadmap to advance the current state-of-the-art in theories of episodic memory and in fMRI experimental methods.
Speaker Biography:
Christopher Baldassano is an Assistant Professor in the Psychology Department at Columbia University. He was an undergraduate in Electrical Engineering at Princeton University, received his PhD in Computer Science at Stanford University, and was a postdoc at the Princeton Neuroscience Institute. His lab's research focuses on how knowledge about the world - including semantic knowledge, temporal structure, spatial maps, or schematic scripts - is used to understand and remember complex naturalistic experiences. By applying machine learning techniques to data from behavioral and neuroimaging experiments, his work aims to uncover how dynamic representations in the mind and brain during perception lead to the formation of event memories.

  • Bird, C. M. (2020). How do we remember events? Current Opinion in Behavioral Sciences, 32, 120-125.
Summary: This week, Dr. Othalia Larue will describe a cognitive model of the hypothesized psychological processes that memory athletes competing in Speed Cards events leverage to instantiate a Memory Palace (following the Person-Action-Object Dominic System). More specifically, we will look at which cognitive mechanisms support overlearning and at the differences between novice and expert memory athletes.
Bio: Dr. Larue is a research scientist at Parallax Advanced Research. She obtained her Ph.D. in Cognitive AI from the University of Quebec in Montreal. Her research interests include cognitive architectures, cognitive modeling of emotions, dual-process theories (co-existence of heuristic and analytic behaviors), metacognition (including metacognitive trigger mechanisms), and the modeling of individual differences in memory and reasoning, as well as how those models can inform the design of adaptive and autonomous intelligent agents.

I'll use the memory athlete model to go over different mechanisms of ACT-R that are useful for explaining how procedures become implicit. And if I have time, I'll go over either the analogy work or the trust-in-automation work (deciding today).

Reading: I will stick to the paper you sent previously (memory athlete), but if you can add a second, this one, which goes over an implementation of the Feeling of Rightness (as a bonus maybe - it's short):
Larue, O., Hough, A., & Juvina, I. (2018). A Cognitive Model of Switching Between Reflective and Reactive Decision Making in The Wason Task. In Proceedings of the Sixteenth International Conference on Cognitive Modeling (pp. 55-60).

And this one would be another bonus if people want to learn more about cognitive architectures in general (it's a textbook chapter), not just ACT-R:
Larue, O., Bourdon, J. N., Legault, M., & Poirier, P. (2022). Mental Architecture—Computational Models of Mind. In Mind, Cognition, and Neuroscience (pp. 164-182). Routledge.
This week we will describe the Memory Palace and Person-Action-Object Dominic System for mental athletes competing in Speed Cards events. A cognitive model of these psychological processes will be presented to harden our understanding of the underlying mechanisms, and experiments will then be hypothesized to further this knowledge base.
Following our discussion last week with the organizers of the USA Memory Championships, Tony and Michael D., this week we will refine our description of the methods used to prepare for and compete in the Speed Cards event in memory competitions, based on the subjective report of World Memory Championship Grandmaster Nelson Dellis.
We will begin with a review of the relevant QuEST lecture material (e.g., link game, chunking, etc.), and then continue to explore the reported experiences of mental athletes through the Simulated, Situated, and Structurally coherent Qualia (S3Q) framework of artificial consciousness, in order to propose a set of neuroscience and cognitive modeling experiments to help further understand the boundaries of expert memory performance.
With the psychology and neuroscience mechanisms successfully mapped onto Speed Cards and the S3Q theory, we will then describe approaches to computational modeling of these phenomena for use in implementing novel artificially intelligent agents. 
Covering these topics in-depth over several weeks, the conversation could naturally transition to discussing the "types of qualia," as well as a potential thread on discussing "basal cognition..."
BONUS: check out the card trick at 47 minutes and prepare your thoughts:

  • Schmidt, K., Larue, O., Kulhanek, R., Flaute, D., Veliche, R., Manasseh, C., ... & Rogers, S. (2023). Representational Tenets for Memory Athletics. arXiv preprint arXiv:2303.11944.
This week we will describe the current state of world-class memory competitions, including the methods used to prepare for and compete in memory competitions, based on the subjective report of World Memory Championship Grandmaster Nelson Dellis. We will then explore the reported experiences through the lens of the Simulated, Situated, and Structurally coherent Qualia (S3Q) theory of consciousness, in order to propose a set of experiments to help further understand the boundaries of expert memory performance.

Schmidt, K., Larue, O., Kulhanek, R., Flaute, D., Veliche, R., Manasseh, C., ... & Rogers, S. (2023). Representational Tenets for Memory Athletics. arXiv preprint arXiv:2303.11944.
This week we will continue our discussion on the creation of flexible memory with the structure of qualia.
We will use the QuEST work with USA Memory Champion Nelson Dellis to provide a framework for qualia-based memory encoding. Briefly, a well-practiced mnemonic strategy (e.g., The Person-Action-Object Dominic System) can be leveraged to rapidly place a narrative (representing 3 playing cards) into a location in a Memory Palace, with the world record holder memorizing a 52-card deck in 12 seconds!
What can we glean about learning and memory systems by characterizing the experience of these competitive memory athletes?
We will then transition to discussing the Types of Qualia. What is a quale (examples)? Is there a logical parsing for Types of Qualia?
Reading: Schmidt, K., Larue, O., Kulhanek, R., Flaute, D., Veliche, R., Manasseh, C., ... & Rogers, S. (2023). Representational Tenets for Memory Athletics. arXiv preprint arXiv:2303.11944.
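The Person-Action-Object encoding described above can be sketched in code. The card-to-image assignments below are invented for illustration; actual competitors memorize a personal 52-entry table, one (person, action, object) triple per card:

```python
# Illustrative sketch of Person-Action-Object (PAO) encoding, as used in the
# Dominic System. The card-to-PAO assignments here are hypothetical; real
# memory athletes memorize their own 52-entry tables.

# Each card maps to a (person, action, object) triple.
PAO = {
    "AS": ("Einstein", "writing on", "a chalkboard"),
    "KH": ("Elvis", "strumming", "a guitar"),
    "7D": ("Serena", "serving", "a tennis ball"),
}

def encode_triplet(cards, locus):
    """Fuse three cards into one composite image: person from card 1,
    action from card 2, object from card 3, placed at a palace locus."""
    person = PAO[cards[0]][0]
    action = PAO[cards[1]][1]
    obj = PAO[cards[2]][2]
    return f"At the {locus}: {person} {action} {obj}"

def decode_triplet(image_person, image_action, image_obj):
    """Recover the three cards from the components of the composite image."""
    p = next(c for c, (person, _, _) in PAO.items() if person == image_person)
    a = next(c for c, (_, action, _) in PAO.items() if action == image_action)
    o = next(c for c, (_, _, obj) in PAO.items() if obj == image_obj)
    return [p, a, o]

print(encode_triplet(["AS", "KH", "7D"], "front door"))
print(decode_triplet("Einstein", "strumming", "a tennis ball"))
```

Because each composite image carries three cards, a 52-card deck requires only about 18 loci in the Memory Palace rather than 52, which is part of why the technique scales to world-record speeds.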
This week we will discuss operating characteristics and mechanisms of conscious and nonconscious learning and memory.
What are the roles of qualia in generating flexible representations?
Much prior research on memory systems has focused on establishing dissociations between different types of memory based on behavior, subjective experience, and the brain: explicit memory depends on the medial temporal lobe and is thought to operate consciously through a relatively slow processing bottleneck, while implicit memory is a term for all other learning that operates outside subjective awareness and does not depend on the medial temporal lobe. These implicit, nonconscious learning and memory processes are acquired incrementally and exhibit a hallmark inflexibility, characteristic of mainstream machine learning approaches (i.e., taking hundreds of trials to mature a representation that is brittle when inferencing outside the training distribution). Conscious learning and memory, in contrast, appears to flexibly leverage a relational knowledge base, allowing for transfer of learning to novel contexts (i.e., unexpected queries).
This week we will leverage mainstream cognitive neuroscience to explore some operating characteristics and mechanisms of conscious learning and memory.
Scoville, W. B., & Milner, B. (1957). Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery & Psychiatry, 20, 11–21.
Reber, A. S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning & Verbal Behavior, 6(6), 855–863.
Kabrisky Lecture 2024

Every January the QuEST group uses the first meeting of the calendar year to present a ‘state of QuEST’ lecture in honor of a founding member of the QuEST meetings, Dr. Matthew “Special K” Kabrisky. This lecture is designed to bring anyone up to speed on how we use terms and to communicate what we seek to accomplish.

  • QuEST is an analytical and software development approach to improving human-machine team decision quality over a wide range of stimuli, including handling unexpected queries and adapting to context. QuEST is focused on creating computer-based decision aids, as well as decision engines that may be embedded in platforms interacting with the world. QuEST seeks to engineer solutions that provide the advantages commonly associated with quick, reflexive, intuitive reasoning, as well as the advantages often associated with “conscious,” context-sensitive thinking.
  • QuEST also seeks to provide a mathematical framework to understand what can be known by a group of people and their computer-based decision aids about situations, to facilitate prediction of when more or different people (different training) or more or different computer aids are necessary to acceptably make a particular decision. Can a given situational complexity be represented acceptably by the representational capacity of the group of people and computer agents available?
  • For 2024 we will also explore the idea that there is only one representation: how knowledge is structured, and the processes that are used to create, maintain, and access that knowledge. Sys1 and Sys2 are just a model, one way to think about different cognitive capabilities and challenges. We posit that these capabilities and challenges can be addressed using the same representation. Qualia provide insight into the process used by nature to create, maintain, and exploit a vocabulary of cognition, the one representation. That representation can be used in multiple ways, via multiple processes that can be modeled as Sys1/Sys2 cognition. By studying qualia, specifically the ‘what it is like’ of having a conscious experience, we can gain insight into the vocabulary of cognition, the representation, and thus gain insights applicable to the QuEST goal of creating modern decision aids.
  • Qualia can be studied by investigating the neural basis, the behavioral/functional attributes, and/or the phenomenology of the experience. QuEST has focused on examining the phenomenology of the experience, qualia, and using that information to advance the S3Q Theory of Consciousness.

Dr. Matthew Kabrisky was an Air Force pioneer and innovator. From Air Force aviator in the 1950s to professor, mentor, and researcher, his discoveries paved the way for many modern technological advancements. He developed theories of how the human brain processes information to recognize visual objects. This work directly led to the innovation of implanted electrodes for those afflicted with diseases such as epilepsy and injuries that resulted in paralysis. He was the leading international expert on the physiological symptoms of space adaptation sickness, i.e., motion sickness. His research led NASA to a better understanding of, and an approach to mitigating, the effects of space environments on astronauts. His research in the area of robust speech recognition laid critical foundations for the development of DoD and private industry products ranging from voice-activated controls in advanced tactical aircraft, to aids for the disabled, to industrial process control. In the 1990s, he helped lead a team of engineers that developed the world’s most accurate breast cancer detection system. This highly successful product has helped in the detection of thousands of breast cancers before they would otherwise have been detected. Dr. Kabrisky’s pioneering efforts paved the way for current innovations across the Air Force and the Nation.

Sleep, Memory and Dreams:  A Unified View
Dr. Robert Stickgold, PhD
Harvard Medical School and Beth Israel Deaconess Medical Center, Boston MA USA
The benefits that sleep confers on memory are surprisingly widespread.  For simple procedural skills – how to ride a bicycle or distinguish different coins in one’s pocket – a night of sleep or an afternoon nap following learning leads to an absolute and dramatic improvement in performance. Sleep also stabilizes verbal memories, reducing their susceptibility to interference and decay, processes that all too easily lead to forgetting. 

But the action of sleep can be more sophisticated than simply strengthening and stabilizing memories.  It can lead to the selective retention of emotional memories, or even of emotional components of a scene, while allowing other memories and parts of scenes to be forgotten. It can extract the gist from a list of words, or the rules governing a complex probabilistic game. It can lead to insights ranging from finding the single word that logically connects three apparently unrelated words, to discovering an unexpected rule that allows for the more efficient solving of mathematical problems. It can facilitate the integration of new information into existing networks of related information and help infants learn artificial grammars. Disruptions of normal sleep in neurologic and psychiatric disorders can lead to a failure of these processes.

Dreams appear to be part of this ongoing memory processing, and can predict subsequent memory improvement. The NEXTUP (Network Exploration to Understand Possibilities) model of dreaming proposes that dreaming aids complex problem solving by supporting divergent creativity, acting more by exploring a problem's "solution space" than by searching for the solution, itself.

SPEAKER BIO: Robert Stickgold is a professor of psychiatry at Beth Israel Deaconess Medical Center and Harvard Medical School, and is a visiting professor at MIT’s Media Lab. He has published over 100 scientific papers, including papers in Science, Nature, and Nature Neuroscience. His work has been written up in Time, Newsweek, The New York Times, The Boston Globe Magazine, and Seed Magazine, and he has been a guest on The NewsHour with Jim Lehrer and NPR’s Science Friday with Ira Flatow several times, extolling the importance of sleep. He has spoken at the Boston Museum of Science, the American Museum of Natural History in New York, and NEMO, the Amsterdam museum of science. His current work looks at the nature and function of sleep and dreams from a cognitive neuroscience perspective, with an emphasis on the role of sleep and dreams in memory consolidation and integration. In addition to studying the normal functioning of sleep, he is currently investigating alterations in sleep-dependent memory consolidation in patients with schizophrenia, autism spectrum disorder, and PTSD. His work is currently funded by NIMH. He is coauthor, with Antonio Zadra, of the new book When Brains Dream.
Why do we dream?
I (Kevin) am awake and conscious as I am writing this, vividly experiencing the world around me (e.g., 640 nm wavelengths of light currently appear red to me). At night, I fall asleep, become unconscious, and these qualia go away, but as my sleep stages progress I enter Rapid Eye Movement sleep, and I start to dream. I am asleep yet am vividly experiencing a virtual reality of thoughts, feelings, emotions, etc. (i.e., the qualia come back).
What is the function of this qualitative experience while we are asleep?
Over the next several weeks we will be exploring the function of dreams as we prepare for guest speaker Dr. Robert Stickgold's 22 December presentation on his model of dream function, NEXTUP (Network Exploration to Understand Possibilities).
We're continuing the conversation. The first QuEST meeting of any calendar year is the Kabrisky Memorial Lecture. It is meant to provide an introduction for those who have not been on this journey with us to get the basic ideas we are pursuing. That is always a very difficult thing to do in a single talk or a couple of talks. What we have done traditionally is use some of the December meetings to capture some key ideas we think are critical to insert into that Kabrisky Lecture, often focusing on what is new to our discussions for that year.
So come to QuEST this Friday with any topic, for example, something we’ve covered this year that has impacted your thinking about consciousness, and we will discuss where and how it should be assimilated.
This week, our own Dr. Bogdan "Maui" Udrea will lead an informal discussion on some non-human animal cognition questions that he has been considering. 
At the 2022 Air Force Chief Scientist Workshop on AI, Dr. Yann LeCun mentioned that human cognition is mostly based on observation.

Simple animals, such as flies, seem to spend a relatively small amount of time performing observation to enable the functions necessary for survival and reproduction. Does this mean that flies are “born ready” with certain necessarily simple representations of the environment in which they function?

Studies of more complex insects, such as honeybees (with a nervous system of about 900,000 neurons), have shown that they possess “numerical cognition” that allows them to perform addition and subtraction. Howard et al. [1] state that, in order to perform addition and subtraction, honeybees “acquire long-term rules and use short-term memory,” which the authors trained during the experiments. Moreover, the authors state that the results of their study “suggest the possibility that honeybees and other nonhuman animals may be biologically tuned for complex numerical tasks.” Have honeybees evolved the ability to perform complex numerical tasks for survival alone, or do their social skills and collective decision making [2] have something to do with it?
[1] Howard, S. R., et al. (2019). Numerical cognition in honeybees enables addition and subtraction. Science Advances, 5, eaav0961.
[2] Seeley, T. D. (2010). Honeybee Democracy. Princeton University Press.

Safe and Stable Learning for Agile Robotics

Abstract: My research group at Caltech is working to systematically leverage AI and machine learning techniques toward achieving safe and stable autonomy of safety-critical robotic systems, such as robot swarms and autonomous flying cars. Another example is LEONARDO, the first bipedal robot that can walk, fly, slackline, and skateboard. Stability and safety are traditionally research problems of control theory, while conventional black-box AI approaches lack the robustness, scalability, and interpretability that are indispensable to designing control and autonomy engines for safety-critical aerospace and robotic systems. I will present some recent results using contraction-based incremental stability tools for deriving formal robustness and stability guarantees of various learning-based and data-driven control problems, with illustrative examples including learning-to-fly control with adaptive meta-learning, learning-based swarm control and planning synthesis, and optimal motion planning with stochastic nonlinear dynamics and chance constraints. Recent results on neural-network-based contraction metrics (NCMs) as a stability certificate for safe motion planning and control will also be discussed.

Bio: Soon-Jo Chung is the Bren Professor of Control and Dynamical Systems at the California Institute of Technology. Prof. Chung is also a Senior Research Scientist at the NASA Jet Propulsion Laboratory. Prof. Chung received the S.M. degree in Aeronautics and Astronautics and the Sc.D. degree in Estimation and Control with a minor in Optics from MIT in 2002 and 2007, respectively. He received the B.S. degree in Aerospace Engineering from KAIST in 1998. He is the recipient of the UIUC Engineering Dean's Award for Excellence in Research, the Arnold Beckman Faculty Fellowship of the University of Illinois Center for Advanced Study, the AFOSR Young Investigator Program (YIP) award, the NSF CAREER award, a 2020 Honorable Mention for the IEEE Robotics and Automation Letters Best Paper Award, three best conference paper awards (2015 AIAA GNC, 2009 AIAA Infotech, 2008 IEEE EIT), and five best student paper awards. Prof. Chung is an Associate Editor of the IEEE Transactions on Automatic Control and the AIAA Journal of Guidance, Control, and Dynamics. He was an Associate Editor of the IEEE Transactions on Robotics, and the Guest Editor of a Special Section on Aerial Swarm Robotics published in the IEEE Transactions on Robotics.

Key reference papers:

  1. M. O’Connell*, G. Shi*, X. Shi, K. Azizzadenesheli, A. Anandkumar, Y. Yue, and S.-J. Chung, “Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds,” Science Robotics, vol. 7, no. 66, May 4, 2022.
  2. H. Tsukamoto, S.-J. Chung, and J.-J. E. Slotine, “Contraction Theory for Nonlinear Stability Analysis and Learning-based Control: A Tutorial Overview,” Annual Reviews in Control, vol. 52, 2021, pp. 135-169.
  3. H. Tsukamoto, B. Rivière, C. Choi, A. Rahmani, and S.-J. Chung, “CaRT: Certified Safety and Robust Tracking in Learning-based Motion Planning for Multi-Agent Systems,” IEEE Conference on Decision and Control (CDC), Singapore, December 2023.
  4. Y. K. Nakka, A. Liu, G. Shi, A. Anandkumar, Y. Yue, and S.-J. Chung, “Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems,” IEEE Robotics and Automation Letters, vol. 6, no. 2, April 2021, pp. 389-396.
  5. Y. K. Nakka and S.-J. Chung, “Trajectory Optimization of Chance-Constrained Nonlinear Stochastic Systems for Motion Planning Under Uncertainty,” IEEE Transactions on Robotics, vol. 39, no. 1, Feb. 2023, pp. 203-222.
  6. B. Rivière, W. Hoenig, Y. Yue, and S.-J. Chung, “GLAS: Global-to-Local Safe Autonomy Synthesis for Multi-Robot Motion Planning with End-to-End Learning,” IEEE Robotics and Automation Letters, vol. 5, no. 3, July 2020, pp. 4249-4256. Honorable Mention, IEEE RA-L Best Paper Award.
  7. More papers can be found at Publications — Autonomous Robotics and Control Lab at Caltech.
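The contraction idea underlying references 2, 3, and 5 can be illustrated numerically (this is a toy sketch, not code from the talk): for dynamics dx/dt = f(x), if the Jacobian df/dx is uniformly bounded above by -lam, any two trajectories converge toward each other at exponential rate lam, regardless of initial conditions. The example dynamics below are invented to satisfy that condition:

```python
import math

# Toy illustration of contraction analysis: dynamics whose Jacobian
# -2 + cos(x) is uniformly <= -1, so the system contracts at rate lam = 1.

def f(x):
    return -2.0 * x + math.sin(x)

def simulate(x0, dt=1e-3, steps=2000):
    """Integrate dx/dt = f(x) with forward Euler from initial condition x0."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

# Two trajectories from very different initial conditions.
x_a, x_b = simulate(5.0), simulate(-3.0)

t = 1e-3 * 2000                                  # total simulated time = 2.0
bound = abs(5.0 - (-3.0)) * math.exp(-1.0 * t)   # e^{-lam t} contraction bound

# The distance between the trajectories obeys the exponential bound.
print(abs(x_a - x_b), "<=", bound)
```

This incremental-stability property (trajectories forgetting their initial conditions) is what the papers above generalize, via contraction metrics, into formal guarantees for learning-based controllers.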

This week we will be discussing the phenomenon of Choice Blindness with Dr. Robert Patterson, Senior Psychologist at the Air Force Research Laboratory.
Robert Patterson received the Ph.D. degree in Experimental Psychology from Vanderbilt University in 1984. He was a Post-Doctoral Research Fellow in Neuroscience at Northwestern University from 1985 to 1987. From 1991 to 2010, he was an Assistant, Associate (tenured), and Full Professor of Experimental Psychology and Neuroscience at Washington State University. In 2010, he resigned his faculty position and took a full-time position with the Air Force Research Laboratory. He is currently a Senior Psychologist with the 711 Human Performance Wing, Air Force Research Laboratory. His expertise is in human visual perception and decision making. Dr. Patterson is a Member of the IEEE, the IEEE Computational Intelligence Society, the Human Factors and Ergonomics Society, and the System Dynamics Society. He was a recipient of the 2012 Harry G. Armstrong Scientific Excellence Award of the Air Force Research Laboratory.
Suggested readings:

TITLE:  From Human to Neuromorphic HDR Recognition 
Speaker: Dr. Chou P. Hung
ABSTRACT: The Army Research Office supports many topics in fundamental research to advance science and technology for the future Army. The Neurophysiology of Cognition program supports non-medically-oriented, high-risk, high-reward basic research that will enable discovery of the appropriate molecular-, cellular-, systems-, and behavioral-level codes underlying cognition and performance across multiple time scales. An overarching goal of the program is to foster advances in a broad range of experimental, computational, and theoretical approaches applied to animal models and humans, as well as to data. Basic research opportunities are sought in two primary research thrusts within this program: (i) Evolutionary and Revolutionary Interactions (with Real and Mixed Worlds) and (ii) Neural Computation, Information Coding, and Translation.

This talk will describe an ongoing basic research project at the intersection of these two thrusts, leveraging previous neuroscience efforts to understand visual processing to develop neuromorphic approaches for real-world resilient autonomous sensing and navigation. The brain has specialized circuits and computations for visual processing, and one of the challenges is the high dynamic range (HDR) luminance of real-world scenes. Previous animal and human research uncovered a putative circuitry for how the brain integrates contextual luminance and shape cues to enable rapid visual recognition. In a project jointly funded by ARO, ITC-IPAC, and ONRG, Prof. Lo’s team at National Tsing-Hua University (Taiwan) has been developing a neuromorphic algorithm to reproduce human behavior in HDR perception and testing this circuit as a pre-processor for a DNN visual algorithm (Detectron2). Initial results show improvements in the system’s localization performance under natural occlusion in a dense foliage environment. 
BIOGRAPHY:   Dr. Chou Hung is the Program Manager for Neurophysiology of Cognition at the US Army Research Office and has been a researcher at the Army Research Laboratory since 2015 in the areas of human cognition and bio-inspired novel AI development. Previously, he was a professor of neuroscience at Georgetown University and at National Yang-Ming University in Taiwan, where he led research to discover neural circuits and representations underlying visual perception. Dr. Hung’s research interests span from living neurons, circuits, mechanisms, and behaviors underlying real and augmented perception, to biological and AI-aided learning and decision-making, to brain-inspired computational principles for novel AIs for complex reasoning. Dr. Hung was trained as a systems neurophysiologist and received his PhD from Yale University in 2002. 

White et al. (2023). Neuromorphic luminance-edge contextual preprocessing of naturally obscured targets.
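The abstract above describes a luminance-edge circuit used as a pre-processor for a DNN detector. The actual neuromorphic algorithm is not specified here, but the general idea can be sketched: compress raw HDR luminance with a log nonlinearity, then compute a center-surround (edge) response that is insensitive to absolute luminance level. The 1-D scene and functions below are invented for illustration only:

```python
import math

# Hypothetical sketch (not the circuit from the talk) of luminance-edge
# preprocessing for high-dynamic-range (HDR) input, of the kind that could
# feed a downstream detector such as Detectron2.

def log_compress(luminance):
    """Map raw HDR luminance values (spanning many decades) to a compact range."""
    return [math.log1p(v) for v in luminance]

def center_surround(signal):
    """Simple 1-D center-minus-surround response; peaks at luminance edges."""
    out = []
    for i in range(1, len(signal) - 1):
        surround = 0.5 * (signal[i - 1] + signal[i + 1])
        out.append(signal[i] - surround)
    return out

# A dark-to-bright step spanning four orders of magnitude of luminance:
scene = [1.0, 1.0, 1.0, 10000.0, 10000.0, 10000.0]
edges = center_surround(log_compress(scene))

# The response is near zero in uniform regions and large only at the step,
# so the representation encodes structure rather than absolute brightness.
print(edges)
```

The design choice mirrors the biological motivation in the abstract: early compression and contextual edge extraction let later recognition stages operate on a stable signal even when scene luminance varies over many orders of magnitude.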

In this talk I will discuss promising synergies between consciousness and meditation research emerging from our recent work combining high-density EEG with neurophenomenological approaches. I will first briefly mention some exciting recent progress in the field of consciousness science as a whole, then discuss bidirectional interactions between consciousness and contemplative sciences: for example, how meditative states can allow neuroscientists to challenge theoretical assumptions about the physical substrate of consciousness in the human brain, and how neurophenomenological studies can help interpret changes in brain activity consistently observed across a range of traditions as a result of meditation practice.

Dr. Melanie Boly is a neurologist and neuroscientist with a joint appointment in Neurology and Psychiatry at UW-Madison. For more than twenty years she has studied altered states of consciousness such as the vegetative state, sleep, anesthesia, seizures, and, more recently, meditation, working under the mentorship of Profs. Steven Laureys, Pierre Maquet, Adrian Owen, Marcello Massimini, Karl Friston, Giulio Tononi, Hal Blumenfeld, and Catherine Schevon. Her research, which combines neuroimaging techniques (e.g., PET, functional MRI, TMS-EEG, high-density EEG, and intracranial recordings) with the theoretical framework of Integrated Information Theory, aims to uncover the neural mechanisms of the level and contents of consciousness in healthy subjects and neurological patients.

Dr. Boly’s work has led to numerous publications in international peer-reviewed journals (>180 Pubmed-indexed articles, current Google Scholar H-index 88), as well as invited talks at international conferences. She is board certified in neurology in both Europe and the US and is currently performing 75% FTE research and 25% clinical work as an epileptologist.

Recommended readings:

This week we will discuss recently circulated articles on nonhuman animal behavioral paradigms of learning, memory, and consciousness.
Our colleague Bert P. posted an article on associative learning in jellyfish, while our colleague Bogdan U. sent some articles on flexible statistical inference in crows.

We ask attendees to consider specifically what in these experiments could be related to consciousness research, in particular the S3Q representation in the birds, and how these behavioral experiments might be modified to tease out S3Q.
We will also prepare for Dr. Melanie Boly's visit next week, perhaps to discuss electroencephalographic experiments on expert meditation practitioners, and Integrated Information Theory.

Biography: Dr. Grace Hwang is a Program Director at the National Institute of Neurological Disorders and Stroke where she manages projects in the Technologies for Neural Recording and Modulation portfolio as part of the BRAIN Initiative. Prior to joining the NIH, Dr. Hwang was a Program Director at the National Science Foundation while based at her home institution, Johns Hopkins University, with appointments in both the Applied Physics Laboratory and the Kavli Neuroscience Discovery Institute. At NSF, she managed the Disability and Rehabilitation Engineering program while also spearheading cross-agency initiatives including the Emerging Frontiers in Research and Innovation's Brain-Inspired Dynamics for Engineering Energy-Efficient Circuits and Artificial Intelligence (BRAID) topic. Her research career at Johns Hopkins spanned neuroscience, artificial intelligence, dynamical systems analysis, neuromodulation, brain-machine interface, and robotics. She served as a Principal Investigator on an NIH BRAIN award to investigate neural stimulation using sonogenetics and on an NSF award to develop a brain-inspired algorithm for multi-agent robotic control.
Abstract: Computing demands are vastly outpacing the improvements made through Moore's law scaling; transistors are reaching their physical limits. Modern computing is on a trajectory to consume far too much energy and to require ever more data. Yet the current paradigm of artificial intelligence and machine learning casts intelligence as a problem of representation learning and weak compute. These drawbacks have ignited interest in non-von Neumann architectures for compute and in new types of brain-informed/inspired learning. This talk will highlight recent innovations in neuromorphic hardware and algorithms, and explore emerging synergies between neuromorphic engineering and engineered organoid intelligence, a nascent field that we refer to as convergence intelligence. This talk will also build on Joseph Monaco's April 2023 QuEST talk, entitled "Neurodynamical Articulation: Decoupling Intelligence from the Experiencing Self," to describe the importance of dynamics to achieving convergence intelligence. Relevant federal funding opportunities and strategies will be presented, along with the presenter's personal outlook for applying convergence intelligence to several application domains, including brain/body interface technologies for improving health.
Suggested Reading Materials:

Optional Reading:

Abstract: The concept of the self is intimately related to notions of phenomenal consciousness. The self is discussed through viewpoints from western philosophy, eastern philosophy, cognitive psychology, and neuroscience. A tentative hypothesis is advanced that framing the problem of conscious AI as a problem of the self (and of the self as a memory system) might suggest implementation approaches that lead us closer to AI with self-referential capabilities and to improvements in human-machine teaming.
Light reading: pages 28-37 of Chapter 3 in Mace, John, ed. The Organization and Structure of Autobiographical Memory. Oxford University Press, 2019.
Slightly less-light reading: Conway, M. A. (2005). Memory and the self. Journal of Memory and Language, 53(4), 594-628.
Katrina Schleisman: Biography
This week will feature Raj Sharma and Aaron Craig of the Safe Autonomy Team in ACT3.
This week QuEST will informally discuss the Air Force's strategic vision for conscious machines:
  • Autonomous Horizons vol. 2: The Way Forward (Zacharias, 2019)
  • Bring Your Own Question: topics could include desired peer, cognitive, and task flexibilities, human-machine alignment, etc. 
  • The goal is to communicate the Air Force's documented needs for exploiting qualia for sensor technology.