Abstract:
Assessment is an essential activity for achieving the objectives of a course and for improving the teaching and learning process. Several educational taxonomies can be used to assess the efficacy of assessment in engineering education by aligning assessment tasks with the intended learning outcomes and the teaching and learning activities. This research focuses on using a learning taxonomy well suited to computer science and engineering to categorize exam questions and assign them weights according to the taxonomy levels. Existing Natural Language Processing (NLP) techniques and WordNet similarity algorithms from the NLTK and WordNet packages were used, and a new set of rules was developed to identify the category and the weight of each exam question according to Bloom's taxonomy. Using these results, evaluators can analyze and design question papers that measure student knowledge across various aspects and levels. A preliminary evaluation was conducted to identify the NLP preprocessing techniques most suitable for this context. A sample set of end-of-semester examination questions from the Department of Computer Science and Engineering (CSE), University of Moratuwa, was used to evaluate the accuracy of the question classification; the weight assignment and the main category assignment were validated against manual classification by a domain expert. The outcome of the classification is a set of weights, one per taxonomy category, indicating the likelihood that a question falls into that category. The category with the highest weight was taken as the main category of the exam question. With the generated rule set, the accuracy of detecting the correct main category of a question is 82%.
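As a point of orientation, the following is a minimal illustrative sketch of the kind of WordNet-based scoring the abstract describes; it is not the authors' rule set, and the keyword lists, function names, and the choice of Wu-Palmer similarity are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's exact method): score a question's action verb
# against hypothetical Bloom's taxonomy keyword lists using NLTK's WordNet similarity.
# Requires: pip install nltk; then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

# Hypothetical keyword lists for two Bloom's taxonomy levels (assumed for illustration).
BLOOM_KEYWORDS = {
    "Remember": ["define", "list", "state"],
    "Analyze": ["compare", "differentiate", "examine"],
}

def verb_similarity(verb_a, verb_b):
    """Maximum Wu-Palmer similarity over all verb senses of the two words."""
    scores = [
        s1.wup_similarity(s2) or 0.0
        for s1 in wn.synsets(verb_a, pos=wn.VERB)
        for s2 in wn.synsets(verb_b, pos=wn.VERB)
    ]
    return max(scores, default=0.0)

def bloom_weights(question_verb):
    """Assign a weight to each taxonomy level based on the best-matching keyword."""
    return {
        level: max(verb_similarity(question_verb, kw) for kw in keywords)
        for level, keywords in BLOOM_KEYWORDS.items()
    }

weights = bloom_weights("distinguish")
print(weights)                        # weight per taxonomy level
print(max(weights, key=weights.get))  # highest-weight level = main category
```

In this sketch the main category of a question is simply the taxonomy level with the largest weight, mirroring the abstract's description of weights as per-category likelihoods.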