Student modeling, Cognitive modeling, Cognitive assistants, Cognitive dialog games, MOOCs


This study introduces a system design in the form of a cognitive dialog game (DiaCog) that supports pedagogical factors and student learning-model ideas. The purpose of the study is to describe how such a design, acting as a cognitive assistant, tracks and adapts to students' knowledge and mastery-learning levels. The study also shows alternative ways of supporting intelligent personal learning, tutoring systems, and MOOCs. The paper explains how the DiaCog method uses the structure of students' thinking in an online dialog to track each student's level of learning and knowledge. The computational approach is semantic matching between students' interactions in a dialog; the matching results feed DiaCog's learner model, which in turn informs the pedagogical model. DiaCog's semantic fingerprint matching method allows comparison with expert knowledge to detect students' mastery levels in learning. The paper concludes with the DiaCog tool and the methodologies used in its intelligent cognitive-assistant design to implement the pedagogical and learner models that track and adapt to students' learning. Finally, the paper discusses future improvements and the planned experimental setup for advancing the techniques introduced in the DiaCog design.
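The semantic fingerprint matching described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a fingerprint is modeled as a set of active bit positions in a sparse distributed representation, and that the overlap between a student's fingerprint and an expert's serves as a rough proxy for mastery level. All names and thresholds below are hypothetical.

```python
def fingerprint_overlap(student_bits, expert_bits):
    """Fraction of the expert fingerprint's active bits also active
    in the student's fingerprint (0.0 = no match, 1.0 = full match)."""
    if not expert_bits:
        return 0.0
    return len(set(student_bits) & set(expert_bits)) / len(expert_bits)


def mastery_level(overlap, thresholds=(0.3, 0.7)):
    """Map an overlap score to a coarse mastery label
    (thresholds are illustrative, not from the paper)."""
    low, high = thresholds
    if overlap >= high:
        return "mastered"
    if overlap >= low:
        return "developing"
    return "novice"


# Hypothetical fingerprints: active bit indices for an expert answer
# and a student's dialog contribution.
expert = {2, 5, 11, 17, 23, 42}
student = {2, 5, 17, 99}
score = fingerprint_overlap(student, expert)
print(score, mastery_level(score))  # 0.5 developing
```

In a learner-model setting, a score like this could be updated after each dialog move and passed to the pedagogical model to decide the next tutoring action.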








How to Cite

Karahoca, A., Yengin, I., & Karahoca, D. (2018). COGNITIVE DIALOG GAMES AS COGNITIVE ASSISTANTS: TRACKING AND ADAPTING KNOWLEDGE AND INTERACTIONS IN STUDENT’S DIALOGS. International Journal of Cognitive Research in Science, Engineering and Education (IJCRSEE), 6(1), 45–52.