Downloads: 140
Files in this item:
File | Description | Size | Format
---|---|---|---
rptel.2024.19016.pdf | | 1.12 MB | Adobe PDF
Title: Unsupervised techniques for generating a standard sample self-explanation answer with knowledge components in a math quiz
Author(s): Nakamoto, Ryosuke; Flanagan, Brendan; Dai, Yiling; Takami, Kyosuke; Ogata, Hiroaki
Alternative author names: 中本, 陵介; 戴, 憶菱; 緒方, 広明
Keywords: Self-explanation; Rubric; Knowledge components; Summarization; Natural language processing
Issue date: 1-Jan-2024
Publisher: Asia-Pacific Society for Computers in Education
Journal title: Research and Practice in Technology Enhanced Learning
Volume: 19
Article number: 016
Abstract: Self-explanation is a widely recognized and effective pedagogical method. Previous research has indicated that self-explanation can be used to evaluate students' comprehension and identify their areas of difficulty on mathematical quizzes. However, most analytical techniques necessitate pre-labeled materials, which limits the potential for large-scale study. Conversely, utilizing collected self-explanations without supervision is challenging because there is little research on this topic. Therefore, this study aims to investigate the feasibility of automatically generating a standardized self-explanation sample answer from unsupervised collected self-explanations. The proposed model involves preprocessing and three machine learning steps: vectorization, clustering, and extraction. Experiments involving 1,434 self-explanation answers from 25 quizzes indicate that 72% of the quizzes generate sample answers containing all the necessary knowledge components. The similarity between human-generated and machine-generated sentences was significant, with a moderate positive correlation, r(23) = .48, p < .05. The best-performing generative model also achieved a high BERTScore of 0.715. Regarding the readability of the generated sample answers, the average score of the human-generated sentences was superior to that of the machine-generated ones. These results suggest that the proposed model can generate sample answers that contain critical knowledge components and can be further improved with BERTScore. This study is expected to have numerous applications, including identifying students' areas of difficulty, scoring self-explanations, presenting students with reference materials for learning, and automatically generating scaffolding templates to train self-explanation skills.
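The three-step pipeline the abstract outlines (vectorization, clustering, extraction) can be sketched roughly as follows. This is an illustrative stand-in, not the paper's actual implementation: it uses bag-of-words vectors and a tiny cosine-based k-means in place of whatever models the authors employed, and the function names are hypothetical.

```python
# Hypothetical sketch of the vectorize -> cluster -> extract pipeline:
# each cluster is assumed to capture one knowledge component, and the
# explanation nearest each centroid is kept for the sample answer.
import math
import random
from collections import Counter

def vectorize(texts):
    """Bag-of-words count vectors over a shared vocabulary (stand-in)."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = []
    for t in texts:
        v = [0.0] * len(vocab)
        for w, c in Counter(t.lower().split()).items():
            v[index[w]] = float(c)
        vecs.append(v)
    return vecs

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def kmeans(vecs, k, iters=20, seed=0):
    """Minimal k-means using cosine similarity as the affinity."""
    rng = random.Random(seed)
    centroids = [list(v) for v in rng.sample(vecs, k)]
    labels = [0] * len(vecs)
    for _ in range(iters):
        labels = [max(range(k), key=lambda j: cosine(v, centroids[j]))
                  for v in vecs]
        for j in range(k):
            members = [v for v, l in zip(vecs, labels) if l == j]
            if members:  # skip empty clusters
                centroids[j] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels, centroids

def extract_sample_answer(texts, k=2):
    """Pick the explanation nearest each centroid and join them."""
    vecs = vectorize(texts)
    labels, centroids = kmeans(vecs, k)
    picks = []
    for j in range(k):
        members = [(i, cosine(vecs[i], centroids[j]))
                   for i, l in enumerate(labels) if l == j]
        if members:
            picks.append(max(members, key=lambda p: p[1])[0])
    return " ".join(texts[i] for i in sorted(picks))
```

A usage call such as `extract_sample_answer(explanations, k=3)` would return one representative sentence per cluster, concatenated into a candidate sample answer; the paper's actual vector model, cluster count, and extraction criterion are not specified here.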
Rights: © The Author(s). 2023. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
URI: http://hdl.handle.net/2433/286391
DOI (publisher version): 10.58459/rptel.2024.19016
Appears in collections: Journal Articles
This item is licensed under a Creative Commons License.