PDF(921 KB)
Influence of Block Context on the Accuracy of Ability Parameter Estimation by Unfolding Models in Forced-Choice Tests
This study used Monte Carlo simulation to examine whether changes in block context affect the accuracy of ability parameter estimation when forced-choice test data are analyzed with an unfolding model. The findings were: (1) when a block contains more than three items, estimation accuracy is higher for partial-ranking forced-choice tests, while full-ranking tests are less affected; (2) including about 33% of blocks that pair positively and negatively keyed statements yields higher estimation accuracy than including none or 50% of such blocks; (3) the unfolding model estimates more accurately when dimensions are mutually independent, and blocks with fewer items are more susceptible to correlations between dimensions. These findings can support the construction of forced-choice tests and the development of adaptive forced-choice personality assessments.
Personality assessment is a much-discussed topic in education and psychology research, and the Likert-type format is widely used as a measurement method, although it can be affected by various response biases and faking. Many researchers have shown that forced-choice (FC) tests control biases and faking better, but they pose problems for score interpretation and psychometric properties because the data such tests yield are ipsative. To address this, researchers have built models within the IRT framework, especially models based on the dominance approach, such as the Thurstonian IRT model, RIM, and MUPP-2PL. However, some studies have shown that scales analyzed with unfolding models have psychometric properties equal to or exceeding those of dominance approaches. Unfolding models can bring significant advantages for psychological constructs such as personality and should be considered an alternative to dominance models. Stark et al. proposed the multi-unidimensional pairwise-preference (MUPP) model for the two-item preference format based on the generalized graded unfolding model (GGUM). The locations of behaviors relate to response probabilities, so estimated scores can be used to make inter-individual inferences, and the MUPP model has recently been extended to other response formats. The block is the basic unit of a forced-choice test, and it changes when items of different quality or quantity are combined into it; therefore block context (e.g., the number of items in a block, or the percentage of blocks composed of items keyed in opposite directions) may affect ability parameter estimation and threaten the parameter-invariance assumption.
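The MUPP mechanism described above can be sketched in code: each statement gets an "agree" probability from the (here dichotomous) GGUM, and the probability of preferring one statement over the other in a pair follows from the two agree probabilities. This is a minimal illustration, not the study's implementation; the parameter values used below are arbitrary.

```python
import math

def ggum_agree(theta, alpha, delta, tau):
    """P(agree) under the dichotomous GGUM (C = 1, M = 3).

    theta: latent trait; alpha: discrimination; delta: item location;
    tau: subjective response category threshold (tau_0 = 0 implied).
    """
    d = theta - delta
    num = math.exp(alpha * (1 * d - tau)) + math.exp(alpha * (2 * d - tau))
    den = 1.0 + math.exp(alpha * (3 * d)) + num
    return num / den

def mupp_prefer(theta_s, theta_t, item_s, item_t):
    """MUPP probability of preferring statement s over statement t.

    item_* are (alpha, delta, tau) tuples; theta_s / theta_t are the
    traits measured by the two statements (possibly different dimensions).
    Preference for s corresponds to agreeing with s and disagreeing with t.
    """
    p_s = ggum_agree(theta_s, *item_s)
    p_t = ggum_agree(theta_t, *item_t)
    return (p_s * (1 - p_t)) / (p_s * (1 - p_t) + (1 - p_s) * p_t)
```

Note the single-peaked (unfolding) shape of `ggum_agree`: agreement is highest when theta is near delta and falls off in both directions, which is what distinguishes this family from dominance models.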
To conclude, the present study provides evidence for test users who apply unfolding models to FC formats. Specifically, it explored the effect of context on item functioning within FC blocks and across the whole test by examining the accuracy of ability parameter estimates under different conditions. The robustness of the parameter-invariance assumption required by IRT was assessed via the correlation between true and estimated theta and via RMSE.
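The two recovery criteria named above (true–estimated correlation and RMSE) are standard and easy to state precisely; a minimal sketch, using population formulas over a vector of simulees:

```python
import math

def recovery_metrics(theta_true, theta_hat):
    """Pearson correlation and RMSE between true and estimated trait scores."""
    n = len(theta_true)
    mt = sum(theta_true) / n
    mh = sum(theta_hat) / n
    cov = sum((t - mt) * (h - mh) for t, h in zip(theta_true, theta_hat)) / n
    st = math.sqrt(sum((t - mt) ** 2 for t in theta_true) / n)
    sh = math.sqrt(sum((h - mh) ** 2 for h in theta_hat) / n)
    corr = cov / (st * sh)
    rmse = math.sqrt(sum((h - t) ** 2
                         for t, h in zip(theta_true, theta_hat)) / n)
    return corr, rmse
```

The two metrics are complementary: a constant bias in the estimates leaves the correlation at 1.0 but shows up fully in the RMSE.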
A Monte Carlo simulation was used to compare the accuracy of ability parameters estimated by the unfolding model in forced-choice tests while varying the context of individual blocks or of the whole questionnaire. Two forced-choice formats were considered: respondents either chose the two items that described them the most and the least (Most–Least), or ranked all items in a block by how well each described them (Rank). In total, 72 conditions were examined, crossing the number of items in a block, the percentage of blocks composed of items keyed in opposite directions, item discrimination, and the correlation between traits.
The results indicated that: (1) in the Most–Least format, placing more items in a block increased the accuracy of ability parameter estimation, whereas in the Rank format the effect was small, especially once the number of items exceeded four; (2) including 33% of blocks composed of items keyed in opposite directions improved estimation accuracy; (3) independence among trait dimensions improved the accuracy of trait-score estimation, especially when blocks contained more items. The study revealed that the Rank format of forced-choice test offers more stable person parameter estimation when block context changes. These results can support the design of forced-choice tests and their combination with computerized adaptive testing.