TY - GEN
T1 - Can information retrieval techniques meet automatic assessment challenges?
AU - Hasan, Maruf
PY - 2009
Y1 - 2009
N2 - In Information Retrieval (IR), similarity scores between a query and a set of documents are calculated, and the relevant documents are ranked by their similarity scores. IR systems often treat queries as short documents containing only a few words when calculating document similarity scores. In Computer Aided Assessment (CAA) of narrative answers, when model answers are available, the similarity score between a student's answer and the respective model answer may be a good quality indicator. With such an analogy in mind, we applied basic IR techniques in the context of automatic assessment and discussed our findings. In this paper, we describe the development of a web-based automatic assessment system that incorporates five different text analysis techniques for automatic assessment of narrative answers using a vector space framework. We apply Uni-gram, Bi-gram, TF-IDF, Keyphrase Extraction, and Keyphrase with Synonym Resolution before representing model answers and students' answers as document vectors, and then we compute document similarity scores. The experimental results, based on 30 narrative questions with 30 model answers and 300 students' answers (from 10 students), show that the correlation of automatic assessment with human assessment is higher when advanced text processing techniques such as Keyphrase Extraction and Synonym Resolution are applied.
KW - Natural language processing
KW - Computer aided instruction
KW - Information retrieval
KW - Intelligent text analysis
UR - http://www.scopus.com/inward/record.url?scp=77749261173&partnerID=8YFLogxK
U2 - 10.1109/ICCIT.2009.5407259
DO - 10.1109/ICCIT.2009.5407259
M3 - Conference Proceeding
AN - SCOPUS:77749261173
SN - 9781424462841
T3 - ICCIT 2009 - Proceedings of 2009 12th International Conference on Computer and Information Technology
SP - 333
EP - 338
BT - ICCIT 2009 - Proceedings of 2009 12th International Conference on Computer and Information Technology
T2 - 2009 12th International Conference on Computer and Information Technology, ICCIT 2009
Y2 - 21 December 2009 through 23 December 2009
ER -