Downloads: 318
Files in This Item:
File | Description | Size | Format
---|---|---|---
978-3-030-21935-2_26.pdf | | 3.02 MB | Adobe PDF
Title: | Vocabulary Learning Support System based on Automatic Image Captioning Technology |
Authors: | Hasnine, Mohammad Nehal; Flanagan, Brendan; Akcapinar, Gokhan; Ogata, Hiroaki; Mouri, Kousuke; Uosaki, Noriko |
Author's alias: | 緒方, 広明 |
Keywords: | Artificial intelligence in education; Automatic image captioning; Learning context representation; Ubiquitous learning; Visual contents analysis; Vocabulary learning |
Issue Date: | 2019 |
Publisher: | Springer, Cham |
Journal title: | Distributed, Ambient and Pervasive Interactions |
Start page: | 346 |
End page: | 358 |
Abstract: | Learning context has proven to be an essential part of vocabulary development; however, describing the learning context for each vocabulary item is difficult. For the human brain, it is relatively easy to describe learning contexts using pictures, because a picture conveys at a glance an immense amount of detail that text annotations cannot. Therefore, in an informal language learning system, pictures can be used to overcome the problems that language learners face in describing learning contexts. The present study aimed to develop a support system that generates and represents learning contexts automatically by analyzing the visual contents of pictures captured by language learners. Automatic image captioning, an artificial intelligence technology that connects computer vision and natural language processing, is used to analyze the visual contents of the learners' captured images. A neural image caption generator model called Show and Tell is trained for image-to-word generation and for describing the context of an image. The three-fold objectives of this research are: first, an intelligent technology that can understand the contents of a picture and generate learning contexts automatically; second, enabling a learner to learn multiple vocabulary items from a single picture, without relying on a representative picture for each item; and third, mapping a learner's prior vocabulary knowledge to new vocabulary so that previously acquired words can be reviewed and recalled while learning new ones. |
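The image-to-word pipeline the abstract describes (a captioning model produces a sentence for a learner's photo, from which candidate vocabulary is derived) can be sketched with a publicly available pretrained captioning model. This is a minimal illustration, not the authors' trained Show and Tell system: the Hugging Face model name `nlpconnect/vit-gpt2-image-captioning`, the sample file `learner_photo.jpg`, and the simple stopword filter for extracting candidate words are all assumptions for demonstration.

```python
# Sketch: generate a caption for a learner's photo, then extract
# candidate vocabulary words from it. Assumes the `transformers`,
# `torch`, and `Pillow` packages are installed.
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer
from PIL import Image

MODEL = "nlpconnect/vit-gpt2-image-captioning"  # assumed stand-in, not the paper's model
model = VisionEncoderDecoderModel.from_pretrained(MODEL)
processor = ViTImageProcessor.from_pretrained(MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

# Load the learner-captured image (hypothetical file name).
image = Image.open("learner_photo.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Decode a caption with beam search.
output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
caption = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Naive content-word extraction: one caption can yield several
# vocabulary candidates, matching the "multiple words per picture" idea.
STOPWORDS = {"a", "an", "the", "of", "on", "in", "with", "and", "is", "are"}
candidates = [w for w in caption.lower().split() if w not in STOPWORDS]

print("Caption:", caption)
print("Candidate vocabulary:", candidates)
```

A caption such as "a dog sitting on a wooden bench" would yield candidates like `dog`, `sitting`, `wooden`, `bench`, so one picture supports several vocabulary items at once.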
Description: | 7th International Conference, DAPI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings. Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11587) |
Rights: | This is a post-peer-review, pre-copyedit version of an article published in 'Distributed, Ambient and Pervasive Interactions'. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-21935-2_26 The full-text file will be made open to the public on 07 June 2020 in accordance with the publisher's 'Terms and Conditions for Self-Archiving'. This is not the published version. Please cite only the published version. |
URI: | http://hdl.handle.net/2433/243253 |
DOI (Published Version): | 10.1007/978-3-030-21935-2_26 |
Appears in Collections: | Journal Articles |