The Abstraction and Generalization of Social Decision Information

Wang Han, Dong Yulin, Liu Ningfeng, Zhu Lusha

Journal of Psychological Science ›› 2025, Vol. 48 ›› Issue (4): 962-971. DOI: 10.16719/j.cnki.1671-6981.20250416
Computational modeling and artificial intelligence


Abstract

Recent advances in artificial intelligence (AI), cognitive psychology, and neuroscience have significantly enhanced our understanding of abstraction and generalization — how agents extract key features from complex decision environments to support efficient and generalizable decision-making. While extensive research has elucidated the mechanisms of abstraction and generalization in non-social contexts, such as rule-based learning, far less is known about how these processes operate in social domains. During social interactions, agents must not only filter out irrelevant details and abstract core decision-relevant information (e.g., concepts, perspectives, strategies), but also infer whether their internal representations align with those of others. Investigating how multi-agent systems understand, utilize, and generalize relevant information in service of effective interactions is increasingly critical for understanding the mechanisms underlying general social intelligence. Building on recent findings in non-social decision-making, this paper outlines future research directions for studying abstraction and generalization in social contexts.
Specifically, in the non-social domain, studies of generalization have highlighted two core mechanisms: rule-based strategies, which involve hierarchical categorization of features, and similarity-based approaches, such as analogical reasoning via prototype matching. At the computational level, a prominent example is the successor representation (SR) model, which provides a unified computational framework spanning AI, behavioral, and neuroscience research. Developed in close relation to reinforcement learning theories, the SR compresses decision states by encoding predictive relationships among them, thereby enabling rapid adaptation to reward changes. Inspired by SR predictions, emerging neurobiological evidence has implicated hippocampal and prefrontal representations of abstract knowledge, potentially facilitating knowledge transfer and strategy generalization. Parallel research in AI demonstrates how deep successor reinforcement learning (DSR) can leverage the SR to achieve cross-task generalization in navigation and robotics. These findings underscore abstraction as a conserved yet effective mechanism for flexible decision-making across both biological and artificial systems.
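The SR logic described above can be made concrete in a few lines. The following is a minimal sketch in the spirit of Dayan (1993), assuming a toy four-state chain task under a fixed policy; the transition matrix, discount factor, and reward vectors are illustrative and not taken from the paper.

```python
import numpy as np

gamma = 0.9  # discount factor (illustrative)

# Transition probabilities under the current policy (rows sum to 1);
# the last state loops on itself as an absorbing state.
T = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])

# The SR matrix M encodes expected discounted future state occupancies:
# M = (I - gamma * T)^(-1). This is the compressed predictive map.
M = np.linalg.inv(np.eye(4) - gamma * T)

# Values are recovered by combining the SR with a reward vector.
r_old = np.array([0.0, 0.0, 0.0, 1.0])
V_old = M @ r_old

# When rewards change, values update immediately without relearning
# the predictive structure — the source of the SR's fast adaptation.
r_new = np.array([0.0, 1.0, 0.0, 0.0])
V_new = M @ r_new

print(V_old)
print(V_new)
```

The key design point is the separation of predictive structure (M) from reward (r): the expensive part is learned once, and new reward contingencies only require a cheap matrix-vector product.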
However, social interactions introduce unique computational demands. Agents must not only extract and organize relevant information but also negotiate shared intentions and conventions during cooperative interactions, or strategize to outmaneuver opponents during competitive ones. Despite progress in characterizing the neural and computational mechanisms of various social cognitive functions, fundamental questions remain. For example, how do interacting agents form aligned or misaligned abstractions? How do goals and environmental constraints shape these abstractions, and how do these abstractions, in turn, influence decision strategies? What computational and neural mechanisms give rise to the dynamic alignment or divergence of abstraction and generalization across individuals?
Three interconnected open questions critical to future research should be addressed. First, effective social cooperation typically requires abstraction hierarchies and knowledge organization that are shared across interacting agents. While neural coupling across cooperating individuals is well documented, the computational role of such alignment in facilitating social decision-making and strategy generalization remains unclear. Research in AI suggests that representational alignment enhances cross-task performance, but its relevance to human social cognition requires further investigation.
Second, abstractions of decision-relevant information may vary across individuals with divergent social experiences, network positions, and cultural backgrounds. Therefore, it is critical to elucidate, at a computational level, when and how individual differences give rise to misaligned internal representations, which may contribute to phenomena such as prejudice and polarization. Investigating these misaligned representations will shed light on how social identities shape perception and decision-making, ultimately informing strategies to mitigate bias and foster social cohesion.
Finally, successful social cooperation often depends on aligning initially misaligned representations across individuals. It is important to identify the mechanisms that support such interpersonal adaptation. Although prior research in AI and evolutionary biology has highlighted the benefits of alignment dynamics, the cognitive and neurocomputational processes that govern these internal changes remain to be explored.
Together, this paper proposes to bridge ideas and computational methods in social decision-making, AI, and cognitive neuroscience for developing a mechanistic understanding of abstraction and representation. Such an integrative framework holds the potential to reveal the computational principles of general social intelligence and inspire the design of socially intelligent systems.

Key words

decision-making / multi-agent / abstraction / generalization / cognitive science / neuroscience / artificial intelligence

Cite this article

Wang Han, Dong Yulin, Liu Ningfeng, Zhu Lusha. The Abstraction and Generalization of Social Decision Information[J]. Journal of Psychological Science. 2025, 48(4): 962-971 https://doi.org/10.16719/j.cnki.1671-6981.20250416
