I. Are E-Assessment Tools Helpful in Programming Courses?

Four research questions were formulated to measure the degree to which e-assessment tools have helped students and instructors [25]:


1) Have e-assessment tools proven to be helpful in improving student learning?

2) Do students think that e-assessment tools have improved their performance?

3) After having used the tools, do instructors think that the tools have improved their teaching experiences?

4) Is the assessment performed by e-assessment tools accurate enough to be helpful?

 

1: Have e-assessment tools proven to be helpful in improving student learning?

In 2003, Edwards [26] presented striking results after the e-assessment tool in a junior-level course on comparative languages was changed, with Curator replaced by Web-CAT: students submitted their assignments, along with test cases, in a more timely manner. In 2003, Woit [27] showed that online assessment of students' practical skills provides a more accurate measure of student ability, a conclusion supported by data collected over five academic years comparing student performance on online tests with and without e-assessment tools. In 2005, Higgins [28] described an experiment in which Ceilidh was replaced by CourseMarker at the University of Nottingham; the pass rate was very high and improved as CourseMarker evolved. Also in 2005, Malmi [30] reported results from students using TRAKLA and TRAKLA2, in which final exam grades increased when instructors modified how students were allowed to use the automated tool and permitted them to resubmit their work. In 2011, Wang [31] showed that final grades of students assessed with AutoLEP were substantially higher than grades produced without the use of any tool.

 

Considering all these findings, a positive impact on student learning can be inferred when e-assessment tools are introduced into a course. End-of-course grades and final exam scores were the principal measures used to assess this.

 

2: Do students think that e-assessment tools have improved their performance?

In 2003, Edwards [26] created a 20-question survey for students using Web-CAT and found that perceptions of the tool were generally positive. In 2005, Higgins [27] distributed a survey to programming students who tested CourseMarker; over 75% of students reported that they loved the flexibility to resubmit a programming assignment that the e-assessment tool made possible. Specifically, most students felt that having several submissions available encouraged them to work toward a higher grade. In 2009, when Garcia-Mateos [32] introduced Mooshak, he presented students with a survey designed as a series of questions prompting for agreement or disagreement: 77% of the students indicated that “they learn better with the new methodology than with the old one,” while 91% said that “if they could choose, they would follow the continuous evaluation methodology again.” In 2012, Brown [33] surveyed students using the JUG automated assessment tool about their perception of the tool’s impact. Given the question “Did the auto-graded tests match your expectations of the requirements?”, the majority of students chose the middle answer, “Sometimes.” But the question “Did the reports from the auto-grader clarify how your code should behave?” elicited a much more positive response, with the majority of students answering “Often.”

Results concerning student perceptions of e-assessment tools were inconclusive. Students had a mixed reaction to this question: some were very positive, but a significant number expressed dissatisfaction with the tools.

     

3: After having used the tools, do instructors think that the tools have improved their teaching experiences?

In 1995, Schorsch [35] reported that 6 of the 12 teachers who used CAP to grade assignments stated that the tool saved them around ten hours of grading per section of roughly twenty students. In 2003, Venables [35] noted that the feedback provided by Submit, the e-assessment tool she discussed, answered many of the questions students would otherwise need to ask while working on an assignment; this capability reduced the class time that would otherwise have been spent responding to students’ questions. In 2012, Queirós [36] briefly stated that automated grading surpasses manual grading in efficiency, accuracy, and objectivity: e-assessment tools remove biases and other subjective factors from the grading process, and submissions are marked more quickly.
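To make the mechanism behind these efficiency claims more concrete, the sketch below illustrates how test-case-based automated grading typically works: the tool runs a submission against a fixed set of instructor-chosen test cases and derives a score and feedback from the results. This is a minimal Python illustration; the function names, test data, and scoring scheme are assumptions made for the example and do not describe CAP, Submit, or any other tool cited above.

```python
# Minimal sketch of test-case-based automated grading (illustrative only;
# not the design of any tool discussed in this survey).

def student_sort(values):
    """Stand-in for a student's submission: sort integers in ascending order."""
    return sorted(values)

# Hypothetical instructor-chosen test suite: (input, expected output) pairs.
TEST_CASES = [
    ([3, 1, 2], [1, 2, 3]),
    ([], []),
    ([5, 5, 1], [1, 5, 5]),
    ([-2, 7, 0], [-2, 0, 7]),
]

def grade(submission, test_cases):
    """Run the submission on every test case; return a score and feedback lines."""
    passed = 0
    feedback = []
    for i, (args, expected) in enumerate(test_cases, start=1):
        try:
            actual = submission(list(args))   # copy input so tests stay independent
        except Exception as exc:              # a crash counts as a failed test
            feedback.append(f"Test {i}: raised {type(exc).__name__}")
            continue
        if actual == expected:
            passed += 1
        else:
            feedback.append(f"Test {i}: expected {expected}, got {actual}")
    score = 100.0 * passed / len(test_cases)
    return score, feedback

if __name__ == "__main__":
    score, feedback = grade(student_sort, TEST_CASES)
    print(f"Score: {score:.0f}%")
    for line in feedback:
        print(line)
```

Because every submission is scored against the same test cases by the same procedure, the marking is fast and free of grader-to-grader variation, which is the sense in which the surveyed instructors describe these tools as efficient and objective.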

 

Overall, instructors appreciate e-assessment tools for the benefits they provide, such as time savings. Most instructors report that they must invest more time before a class first uses an e-assessment tool than in subsequent semesters, but the general agreement is that these tools are effective time-savers and are capable of performing the tasks they are designed for.

 

4: Is the assessment performed by e-assessment tools accurate enough to be helpful?

In 2005, Higgins [37] stated that the grading performed by the CourseMarker tool in one section of a course was on par with the assessment done by a teaching assistant in another section of the same course. In 2012, Taherkhani [38] demonstrated that, for about 75% of submissions, AARI could successfully recognize the algorithms students used in a program that required them to sort integers in ascending order. In 2014, Gaudencio [39] reported that instructors who manually graded assignments tended to agree more with the results of an e-assessment tool than with results provided by other instructors.

 

E-assessment tools have thus proven accurate enough to be a beneficial aid to the assessment process.

 
