COGNITIVE DIALOG GAMES AS COGNITIVE ASSISTANTS: TRACKING AND ADAPTING KNOWLEDGE AND INTERACTIONS IN STUDENT’S DIALOGS

Abstract

This study introduces a system design in the form of a cognitive dialog game (DiaCog) that supports pedagogical factors and student learning-model ideas. The purpose of the study is to describe how such a design tracks and adapts to students' knowledge and mastery learning levels as a cognitive assistant. The study also shows alternative ways of supporting intelligent personal learning, tutoring systems, and MOOCs. The paper explains how the DiaCog method structures students' thinking in an online dialog while tracking each student's level of learning and knowledge status. The computational methodology is semantic matching between students' interactions in a dialog; the results of this matching feed DiaCog's learner model, which in turn informs the pedagogical model. DiaCog's semantic fingerprint matching method allows comparison with expert knowledge to detect students' mastery levels in learning. The paper concludes with the DiaCog tool and the methodologies used in its intelligent cognitive assistant design to implement the pedagogical and learner models that track and adapt to students' learning. Finally, the paper discusses future improvements and a planned experimental setup to advance the techniques introduced in the DiaCog design.
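The semantic fingerprint matching described above can be pictured as comparing sparse sets of active bit positions, in the spirit of sparse distributed representations. The following is a minimal illustrative sketch, not the authors' implementation: the fingerprint encoding, the overlap measure, and the mastery thresholds are all hypothetical choices made for the example.

```python
# Illustrative sketch: semantic fingerprints modeled as sparse sets of
# active bit positions; overlap with an expert fingerprint stands in for
# DiaCog's comparison against expert knowledge.

def fingerprint_overlap(fp_a, fp_b):
    """Fraction of active bits shared by two sparse fingerprints."""
    a, b = set(fp_a), set(fp_b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def mastery_level(student_fp, expert_fp, thresholds=(0.3, 0.7)):
    """Map overlap with the expert fingerprint onto coarse mastery bands."""
    score = fingerprint_overlap(student_fp, expert_fp)
    low, high = thresholds
    if score >= high:
        return "mastered"
    if score >= low:
        return "developing"
    return "novice"

# A student's dialog turn sharing 3 of 4 expert bits scores 0.75 overlap.
print(mastery_level([2, 5, 9, 14], [2, 5, 9, 30]))  # prints "mastered"
```

In a learner model, such a score could be updated after each dialog move, so the pedagogical model sees a running estimate of mastery rather than a one-off test result.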



Published
2018-04-20
How to Cite
YENGIN, İlker; KARAHOCA, Adem; KARAHOCA, Dilek. COGNITIVE DIALOG GAMES AS COGNITIVE ASSISTANTS: TRACKING AND ADAPTING KNOWLEDGE AND INTERACTIONS IN STUDENT’S DIALOGS. International Journal of Cognitive Research in Science, Engineering and Education (IJCRSEE), [S.l.], v. 6, n. 1, p. 45-52, Apr. 2018. ISSN 2334-8496. Available at: <http://ijcrsee.com/index.php/IJCRSEE/article/view/328>. Date accessed: 24 May 2018. doi: https://doi.org/10.5937/ijcrsee1801045K.