Access count for this item: 345
Files in this item:
File | Description | Size | Format |
---|---|---|---|---
978-3-030-21935-2_26.pdf | | 3.02 MB | Adobe PDF | View/Open
Title: | Vocabulary Learning Support System based on Automatic Image Captioning Technology
Authors: | Hasnine, Mohammad Nehal; Flanagan, Brendan; Akcapinar, Gokhan; Ogata, Hiroaki; Mouri, Kousuke; Uosaki, Noriko
Alternative author name: | 緒方, 広明
Keywords: | Artificial intelligence in education; Automatic image captioning; Learning context representation; Ubiquitous learning; Visual contents analysis; Vocabulary learning
Issue date: | 2019
Publisher: | Springer, Cham
Journal title: | Distributed, Ambient and Pervasive Interactions
Start page: | 346
End page: | 358
Abstract: | Learning context has proven to be an essential part of vocabulary development; however, describing the learning context for each vocabulary item is considered difficult. For the human brain it is relatively easy to describe learning contexts using pictures, because a picture conveys at a glance an immense amount of detail that text annotations cannot. Therefore, in an informal language learning system, pictures can be used to overcome the problems that language learners face in describing learning contexts. The present study aimed to develop a support system that generates and represents learning contexts automatically by analyzing the visual contents of pictures captured by language learners. Automatic image captioning, an artificial intelligence technology that connects computer vision and natural language processing, is used to analyze the visual contents of the learners' captured images. A neural image caption generator model called Show and Tell is trained for image-to-word generation and for describing the context of an image. The three-fold objectives of this research are: first, an intelligent technology that can understand the contents of a picture and generate learning contexts automatically; second, a learner can learn multiple vocabulary items from one picture, without relying on a representative picture for each item; and third, a learner's prior vocabulary knowledge can be mapped to new vocabulary so that previously acquired vocabulary is reviewed and recalled while learning new vocabulary.
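The abstract describes an image-to-word pipeline: caption the learner's photo, then treat the caption's content words as learnable vocabulary tied to that context. Below is a minimal, hedged sketch of such a pipeline, using a publicly available captioning model as a stand-in for the authors' Show and Tell model; the model name, input file name, and stopword list are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the image-to-word pipeline described in the abstract. The paper
# trains a Show and Tell (CNN-LSTM) caption generator; here a public
# Hugging Face captioning model stands in, purely for illustration.
from transformers import pipeline

# Assumed model; any pretrained image-captioning model would do here.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Crude content-word filter; a real system would use proper NLP tooling.
STOPWORDS = {"a", "an", "the", "of", "in", "on", "at", "is", "are",
             "with", "and", "to", "for", "by"}

def learning_context(image_path: str) -> dict:
    """Caption a learner-captured image and extract candidate vocabulary."""
    caption = captioner(image_path)[0]["generated_text"]
    words = [w.strip(".,").lower() for w in caption.split()]
    vocabulary = [w for w in words if w and w not in STOPWORDS]
    return {"context": caption, "vocabulary": vocabulary}

if __name__ == "__main__":
    result = learning_context("learner_photo.jpg")  # hypothetical input file
    print("Learning context:", result["context"])
    print("Candidate words:", result["vocabulary"])
```

In a fuller system, part-of-speech tagging would likely replace the stopword filter so that only nouns and verbs survive as vocabulary candidates, which is closer to the one-picture-many-words objective stated in the abstract.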
Description: | 7th International Conference, DAPI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings. Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11587)
Rights: | This is a post-peer-review, pre-copyedit version of an article published in 'Distributed, Ambient and Pervasive Interactions'. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-21935-2_26 The full-text file will be made open to the public on 07 June 2020 in accordance with the publisher's 'Terms and Conditions for Self-Archiving'. This is not the published version. Please cite only the published version.
URI: | http://hdl.handle.net/2433/243253 |
DOI (published version): | 10.1007/978-3-030-21935-2_26
Appears in collections: | Papers published in academic journals, etc.

All items in this repository are protected by copyright.