Files in this item:
File: rptel.2024.19016.pdf (1.12 MB, Adobe PDF)
Full metadata record
(DC field: value [language])
dc.contributor.author: Nakamoto, Ryosuke [en]
dc.contributor.author: Flanagan, Brendan [en]
dc.contributor.author: Dai, Yiling [en]
dc.contributor.author: Takami, Kyosuke [en]
dc.contributor.author: Ogata, Hiroaki [en]
dc.contributor.alternative: 中本, 陵介 [ja]
dc.contributor.alternative: 戴, 憶菱 [ja]
dc.contributor.alternative: 緒方, 広明 [ja]
dc.date.accessioned: 2023-12-12T23:59:58Z
dc.date.available: 2023-12-12T23:59:58Z
dc.date.issued: 2024-01-01
dc.identifier.uri: http://hdl.handle.net/2433/286391
dc.description.abstract: Self-explanation is a widely recognized and effective pedagogical method. Previous research has indicated that self-explanation can be used to evaluate students' comprehension and identify their areas of difficulty on mathematical quizzes. However, most analytical techniques necessitate pre-labeled materials, which limits the potential for large-scale study. Conversely, utilizing collected self-explanations without supervision is challenging because there is little research on this topic. Therefore, this study aims to investigate the feasibility of automatically generating a standardized self-explanation sample answer from unsupervised collected self-explanations. The proposed model involves preprocessing and three machine learning steps: vectorization, clustering, and extraction. Experiments involving 1,434 self-explanation answers from 25 quizzes indicate that 72% of the quizzes generate sample answers containing all the necessary knowledge components. The similarity between human-generated and machine-generated sentences was significant, with a moderate positive correlation, r(23) = .48, p < .05. The best-performing generative model also achieved a high BERTScore of 0.715. Regarding the readability of the generated sample answers, the average score of the human-generated sentences was superior to that of the machine-generated ones. These results suggest that the proposed model can generate sample answers that contain critical knowledge components and can be further improved with BERTScore. This study is expected to have numerous applications, including identifying students' areas of difficulty, scoring self-explanations, presenting students with reference materials for learning, and automatically generating scaffolding templates to train self-explanation skills. [en]
dc.language.iso: eng
dc.publisher: Asia-Pacific Society for Computers in Education [en]
dc.rights: © The Author(s). 2023 [en]
dc.rights: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. [en]
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Self-explanation [en]
dc.subject: Rubric [en]
dc.subject: Knowledge components [en]
dc.subject: Summarization [en]
dc.subject: Natural language processing [en]
dc.title: Unsupervised techniques for generating a standard sample self-explanation answer with knowledge components in a math quiz [en]
dc.type: journal article
dc.type.niitype: Journal Article
dc.identifier.jtitle: Research and Practice in Technology Enhanced Learning [en]
dc.identifier.volume: 19
dc.relation.doi: 10.58459/rptel.2024.19016
dc.textversion: publisher
dc.identifier.artnum: 016
dcterms.accessRights: open access
datacite.awardNumber: 20H01722
datacite.awardNumber: 23H01001
datacite.awardNumber: 21K19824
datacite.awardNumber: 23K17012
datacite.awardNumber: 23H00505
datacite.awardNumber.uri: https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-20H01722/
datacite.awardNumber.uri: https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-23H01001/
datacite.awardNumber.uri: https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-21K19824/
datacite.awardNumber.uri: https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-23K17012/
datacite.awardNumber.uri: https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-23H00505/
dc.identifier.eissn: 1793-7078
jpcoar.funderName: 日本学術振興会 [ja]
jpcoar.funderName: 日本学術振興会 [ja]
jpcoar.funderName: 日本学術振興会 [ja]
jpcoar.funderName: 日本学術振興会 [ja]
jpcoar.funderName: 日本学術振興会 [ja]
jpcoar.awardTitle: Knowledge-Aware Learning Analytics Infrastructure to Support Smart Education and Learning [en]
jpcoar.awardTitle: Extraction and Use of Highly Explainable and Transferable Indicators for AI in Education [en]
jpcoar.awardTitle: Learning Support by Novel Modality Process Analysis of Educational Big Data [en]
jpcoar.awardTitle: 教育データAI利活用による学習者・教師の問題作成・共有支援システムの研究開発 [ja]
jpcoar.awardTitle: リアルワールド教育データからのエビデンス抽出・共有・利用のための情報基盤開発 [ja]
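
The abstract above describes a pipeline of preprocessing followed by three machine learning steps (vectorization, clustering, extraction) for building a standardized sample answer from collected self-explanations. The following is a minimal sketch of such a pipeline under stated assumptions: the embedding model ("all-MiniLM-L6-v2" via sentence-transformers), the KMeans clusterer, the number of clusters, and the nearest-to-centroid extraction rule are illustrative choices, not necessarily the components used in the paper.

# Minimal sketch of a vectorization -> clustering -> extraction pipeline.
# Library and parameter choices below are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def generate_sample_answer(self_explanations, n_clusters=3):
    # 1. Vectorization: embed each collected self-explanation sentence.
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    embeddings = model.encode(self_explanations)

    # 2. Clustering: group explanations that cover similar knowledge components.
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = kmeans.fit_predict(embeddings)

    # 3. Extraction: from each cluster, pick the explanation closest to the
    #    centroid and concatenate the picks into one sample answer.
    picked = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        sims = cosine_similarity(embeddings[idx],
                                 kmeans.cluster_centers_[c].reshape(1, -1))
        picked.append(self_explanations[idx[int(np.argmax(sims))]])
    return " ".join(picked)

# Example use: one call per quiz on that quiz's collected answers.
# sample = generate_sample_answer(answers_for_one_quiz)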
Appears in collections: Journal Articles (学術雑誌掲載論文等)

This item is licensed under a Creative Commons License.