Downloads: 231
Files in this item:
File | Description | Size | Format |  |
---|---|---|---|---|
rptel.2024.19016.pdf |  | 1.12 MB | Adobe PDF | View/Open |
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Nakamoto, Ryosuke | en |
dc.contributor.author | Flanagan, Brendan | en |
dc.contributor.author | Dai, Yiling | en |
dc.contributor.author | Takami, Kyosuke | en |
dc.contributor.author | Ogata, Hiroaki | en |
dc.contributor.alternative | 中本, 陵介 | ja |
dc.contributor.alternative | 戴, 憶菱 | ja |
dc.contributor.alternative | 緒方, 広明 | ja |
dc.date.accessioned | 2023-12-12T23:59:58Z | - |
dc.date.available | 2023-12-12T23:59:58Z | - |
dc.date.issued | 2024-01-01 | - |
dc.identifier.uri | http://hdl.handle.net/2433/286391 | - |
dc.description.abstract | Self-explanation is a widely recognized and effective pedagogical method. Previous research has indicated that self-explanation can be used to evaluate students’ comprehension and identify their areas of difficulty on mathematical quizzes. However, most analytical techniques necessitate pre-labeled materials, which limits the potential for large-scale study. Conversely, utilizing collected self-explanations without supervision is challenging because there is little research on this topic. Therefore, this study aims to investigate the feasibility of automatically generating a standardized self-explanation sample answer from unsupervised collected self-explanations. The proposed model involves preprocessing and three machine learning steps: vectorization, clustering, and extraction. Experiments involving 1,434 self-explanation answers from 25 quizzes indicate that 72% of the quizzes generate sample answers containing all the necessary knowledge components. The similarity between human-generated and machine-generated sentences was significant, with a moderate positive correlation, r(23) = .48, p < .05. The best-performing generative model also achieved a high BERTScore of 0.715. Regarding the readability of the generated sample answers, the average score of the human-generated sentences was superior to that of the machine-generated ones. These results suggest that the proposed model can generate sample answers that contain critical knowledge components and can be further improved with BERTScore. This study is expected to have numerous applications, including identifying students’ areas of difficulty, scoring self-explanations, presenting students with reference materials for learning, and automatically generating scaffolding templates to train self-explanation skills. | en |
dc.language.iso | eng | - |
dc.publisher | Asia-Pacific Society for Computers in Education | en |
dc.rights | © The Author(s). 2023 | en |
dc.rights | This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. | en |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | - |
dc.subject | Self-explanation | en |
dc.subject | Rubric | en |
dc.subject | Knowledge components | en |
dc.subject | Summarization | en |
dc.subject | Natural language processing | en |
dc.title | Unsupervised techniques for generating a standard sample self-explanation answer with knowledge components in a math quiz | en |
dc.type | journal article | - |
dc.type.niitype | Journal Article | - |
dc.identifier.jtitle | Research and Practice in Technology Enhanced Learning | en |
dc.identifier.volume | 19 | - |
dc.relation.doi | 10.58459/rptel.2024.19016 | - |
dc.textversion | publisher | - |
dc.identifier.artnum | 016 | - |
dcterms.accessRights | open access | - |
datacite.awardNumber | 20H01722 | - |
datacite.awardNumber | 23H01001 | - |
datacite.awardNumber | 21K19824 | - |
datacite.awardNumber | 23K17012 | - |
datacite.awardNumber | 23H00505 | - |
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-20H01722/ | - |
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-23H01001/ | - |
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-21K19824/ | - |
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-23K17012/ | - |
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-23H00505/ | - |
dc.identifier.eissn | 1793-7078 | - |
jpcoar.funderName | 日本学術振興会 | ja |
jpcoar.funderName | 日本学術振興会 | ja |
jpcoar.funderName | 日本学術振興会 | ja |
jpcoar.funderName | 日本学術振興会 | ja |
jpcoar.funderName | 日本学術振興会 | ja |
jpcoar.awardTitle | Knowledge-Aware Learning Analytics Infrastructure to Support Smart Education and Learning | en |
jpcoar.awardTitle | Extraction and Use of Highly Explainable and Transferable Indicators for AI in Education | en |
jpcoar.awardTitle | Learning Support by Novel Modality Process Analysis of Educational Big Data | en |
jpcoar.awardTitle | 教育データAI利活用による学習者・教師の問題作成・共有支援システムの研究開発 | ja |
jpcoar.awardTitle | リアルワールド教育データからのエビデンス抽出・共有・利用のための情報基盤開発 | ja |
Appears in Collections: | Journal Articles |
This item is licensed under a Creative Commons License.
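The abstract above describes an unsupervised pipeline of preprocessing, vectorization, clustering, and extraction, with generated sample answers compared against human-written ones using BERTScore. The sketch below illustrates that general kind of pipeline only; it is not the authors' implementation. The embedding model (`all-MiniLM-L6-v2` via sentence-transformers), k-means clustering, the nearest-to-centroid extraction rule, the cluster count, and the toy answers are all assumptions introduced here for illustration.

```python
# Illustrative sketch only: the embedding model, k-means clustering, and
# nearest-to-centroid extraction are assumptions, not the paper's exact method.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min
from bert_score import score as bert_score


def generate_sample_answer(self_explanations, n_clusters=3):
    """Cluster collected self-explanations and join the explanations
    closest to each cluster centroid into a sample answer."""
    # 1) Vectorization: embed each self-explanation as a dense vector.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
    embeddings = encoder.encode(self_explanations)

    # 2) Clustering: group explanations that express similar knowledge components.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)

    # 3) Extraction: pick the explanation nearest to each centroid as representative.
    idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, embeddings)
    return " ".join(self_explanations[i] for i in sorted(set(idx)))


def evaluate_against_reference(generated, reference):
    """Score a machine-generated sample answer against a human-written
    reference with BERTScore (mean F1)."""
    _, _, f1 = bert_score([generated], [reference], lang="en")
    return float(f1.mean())


if __name__ == "__main__":
    # Toy self-explanations for a linear-equation quiz (hypothetical data).
    answers = [
        "First expand the brackets, then collect the x terms.",
        "You distribute the multiplication over the sum before combining like terms.",
        "Move the constant to the right-hand side of the equation.",
        "Divide both sides by the coefficient of x to isolate it.",
        "Check the solution by substituting it back into the original equation.",
    ]
    sample = generate_sample_answer(answers, n_clusters=3)
    print(sample)
    reference = "Expand, collect like terms, isolate x, then verify by substitution."
    print(evaluate_against_reference(sample, reference))
```

In this sketch the cluster count stands in for the number of knowledge components expected in an answer; how that number is chosen per quiz is not specified in this record and would need to come from the article itself.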