Psychological Science, 2018, Vol. 41, Issue (4): 796-802.


Which Memory Buffer does the Implicit Learning Mechanism of Nonlocal Dependencies Use: Evidence from Neural Network Simulations

Fei-Fei Li1, Bao-Gen Liu2

  1. Zhejiang Normal University
    2. Hangzhou College of Early Childhood Teacher Education, Zhejiang Normal University
  • Received:2017-09-18 Revised:2018-04-08 Online:2018-07-20 Published:2018-07-20
  • Contact: Liu Baogen


Abstract: In the implicit learning literature, a basic question concerning how knowledge of structures and regularities is learned is whether the learning mechanism uses a temporary storage buffer and, if so, what the nature of that buffer is. Recently, Li et al. (2013) found that people acquired unconscious structural knowledge of both Chinese tonal retrogrades and inversions. Moreover, inversions were implicitly learnt more easily than retrogrades, a pattern predicted by implicit learning using a first in-first out buffer rather than a last in-first out buffer. However, because Chinese Tang poetry uses an inversion structure to which participants were likely exposed as children, it is not clear whether prior expectations of structures instantiating inversions could override the effect of whatever type of buffer the system uses. A neural network, by contrast, has no such prior knowledge. Accordingly, the present study investigated whether the Simple Recurrent Network (SRN), which uses a buffer to allow learning of nonlocal dependencies, could learn tonal inversions and retrogrades and replicate the advantage of inversions over retrogrades. The SRN was tested with the same materials and procedures as Li et al. (2013). The networks were assigned to the four cells of a 2 (training: trained vs. untrained) × 2 (rule: inversion vs. retrograde) design. The simulations were carried out using all possible permutations of the parameter values, resulting in 150 different models for each group. The materials were strings of tonal syllables. Each string consisted of 10 different tonal syllables, in which the tone types (pings and zes) of the first five syllables predicted the tone types of the following five by forming an inversion or a retrograde. In the training phase, 144 grammatical strings were presented to the two trained groups.
In the test phase, the four groups of networks were presented with 48 test sequences (half grammatical and half ungrammatical), and their ability to predict the next tone across the predictable second five elements was used as the index of performance. t-tests (with Bonferroni correction) showed that trained networks performed significantly better than untrained networks for both the inversion and retrograde groups, suggesting that the networks learnt the two rules. Moreover, for both the trained and untrained groups, the inversion group performed significantly better than the retrograde group, and the performance difference between inversion and retrograde was greater for trained networks than for untrained networks, indicating that inversions were implicitly learnt more easily than retrogrades. Further, the effects of learning were calculated by subtracting the z-scores of the untrained networks/participants from those of the trained networks/participants. A substantial number of the SRNs fell within the area covered by the human data (M ± 1 SE) (15/150 for inversion, 38/150 for retrograde), suggesting that the SRN could match the characteristic performance of human participants. To conclude, consistent with the results of the human experiments, the present simulations showed that the SRN could learn the two nonlocal dependencies and that tonal inversions were implicitly learnt more easily than retrogrades, tentatively suggesting that, functionally, a first in-first out memory buffer is more likely to be involved in implicit learning of nonlocal dependencies. Thus the present study provides new evidence and a new perspective for exploring the implicit learning mechanism of nonlocal dependencies.
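The two nonlocal dependencies can be sketched as follows. This is a minimal illustration, assuming (in line with the musical senses of the terms) that an inversion maps each of the first five tone types to its opposite type in the same serial order, while a retrograde repeats the first five tone types in reverse order; the function names and the 'P'/'Z' encoding of ping and ze tones are hypothetical conveniences, not taken from the original study.

```python
# Illustrative membership checks for the two nonlocal dependencies over
# 10-syllable tone-type strings ('P' = ping, 'Z' = ze). The encoding and
# function names are assumptions for illustration only.

def is_inversion(tones):
    """Syllable i of the second half has the opposite tone type of
    syllable i of the first half (same serial order, so the dependency
    can be resolved with a first in-first out buffer)."""
    first, second = tones[:5], tones[5:]
    return all(a != b for a, b in zip(first, second))

def is_retrograde(tones):
    """The second half repeats the first half's tone types in reverse
    order (first element pairs with last, so the dependency calls for
    a last in-first out buffer)."""
    first, second = tones[:5], tones[5:]
    return second == first[::-1]

inv = list("PZPPZ" "ZPZZP")  # second half flips each tone type in order
ret = list("PZPPZ" "ZPPZP")  # second half reverses the first half
print(is_inversion(inv), is_retrograde(inv))  # True False
print(is_inversion(ret), is_retrograde(ret))  # False True
```

Under this reading, the FIFO advantage is intuitive: for an inversion, the element that entered the buffer first is also the first one needed when predicting the second half, whereas a retrograde needs the most recently stored element first.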

Key words: nonlocal dependencies, implicit learning, memory buffer, neural network simulations

Abstract (Chinese): How knowledge of nonlocal dependencies is implicitly learned remains an open question. Using the same materials and procedures as the human experiments, this study examined implicit learning by the Simple Recurrent Network (SRN) of two Chinese tonal nonlocal dependencies, the inversion and retrograde rules. The results showed that: (1) across a wide range of parameter values, the SRN learnt both the inversion and retrograde rules, indicating that the model's memory buffer can simulate human implicit learning of nonlocal dependencies; (2) the SRN learnt the inversion rule better than the retrograde rule, indicating that, functionally, implicit learning of nonlocal dependencies may preferentially use a first in-first out memory buffer and its mode of information processing. This study provides new evidence and a new perspective for exploring the mechanism of implicit learning of nonlocal dependencies.
