Archive | Digital Writing Assessment and Evaluation

Every Annotation Deserves a Response, But Here Lie Only Two

This post covers my responses to two of my peers’ Digital Writing Assessment & Evaluation Annotated Bibliography entries. For this particular assignment, I chose to respond to Jenny’s entry on Reilly and Atkins, and Leslie’s entry on Brunk-Chavez and Fourzan-Rice.

I chose Jenny’s annotations on Reilly and Atkins’ article, “Rewarding Risk: Designing Aspirational Assessment Processes for Digital Writing Projects,” as it seems to align very well with the article I read by Eidman-Aadahl et al., “Developing Domains for Multimodal Writing Assessment: The Language of Evaluation, the Language of Instruction.” Reilly and Atkins’ emphasis on designing assessment that did not alienate students (rather, it encouraged them to be part of the process) was encouraging, especially in terms of “trial-and-error” and students being self-reflective about how they chose to approach the work. Both trial-and-error and self-reflection are tools for people to become aware of their own processes, which can be critical when a person is struggling to learn and master new material and technological tools. Jenny did an excellent job linking the material from her article not only to what we are reading in class but also to the kind of work and the digital tools in which we are all slowly developing skills. So often, work in the classroom becomes more about a percentage than about the skills being learned and refined while working through material. I have seen college professors and grade school instructors so concerned with their students proving themselves on a number scale that showing progress in learning course content was placed second. In their assignments, the students seemed to come up against assessment that was all or nothing. The process of learning and producing should be even more important than a grading system that looks only toward the final product without any regard for the academic journey of the student.
With the rise of digital writing projects and the fluidity such technologies offer students, in terms of not only how they produce their texts but also how they distribute them, it is going to be very informative to watch students and instructors learn to navigate their changing relationships to texts.

Leslie’s article annotation on “The evolution of digital writing assessment in action: Integrated programmatic assessment” by Brunk-Chavez and Fourzan-Rice was fascinating in that, as a case study, it was concerned with a particular instance of digital writing assessment in a way that Jenny’s article and my article were not. It was illuminating to read about the University of Texas at El Paso (UTEP) acknowledging that new technologies require curriculum and instructing practices to adapt so as to “improve student feedback” processes, support “professional development,” provide “improved quality of programmatic assessment and feedback,” and enable “students to write for a discourse community beyond their instructor.” While the goals of the program are laudable, the concern of students about feeling alienated from their professor (ideally, the one person whose aid they come to depend on for learning how to improve and for guidance) as their work is sent to “WriterMiner….[and] then randomly distributed to the Scoring Team, which is made up of first-year graduate teaching assistants,” raises questions not just about the language of assessment as seen in other articles in the Digital Writing Assessment & Evaluation collection, but about what kind of structure should be in place so that students learn to work within a digital learning atmosphere, and what kind of network of evaluation would emerge to allow for an efficient “programmatic consistency.” My former university had something similar to UTEP’s process. The course was a required Humanities class taken by all students who entered the university with 59 credits or fewer (junior-level transfer students lucked out). It was strictly online, with an enrollment of up to 400 students a semester, divided among instructing professors with a certain number of graders assigned to each instructor.
The material for the class was divided into two modules, Art and Music, and each module had designated units based on different styles and mediums. While the class sounded highly efficient, students and graders tended to get lost in the mire. Each grader was generally in charge of anywhere between 40 and 75 students (some graders took on more, but those were exceptions rather than the rule), so responding to student work was done through general stock comments according to a rubric that was both very specific (in terms of what needed to be covered) and very vague (in terms of how those content items should be covered). The students themselves never met their instructors (as the content of the course did not vary from semester to semester) and dealt virtually with their graders (who were better known as proctors). The course became a kind of assembly line of learning, and the critical skills required of students in the coursework were meant to be learned elsewhere and channeled into their writing (even though most students taking the class were freshmen with no prior college writing classes). The course was redesigned my final year working with the university, but the administrators’ stance was that there should be stricter and far more numerous rules, so that passing the course was a matter of moving through a checklist rather than strengthening critical and creative academic skills. That was a long-winded way of making the point that both Leslie’s entry and my experience with that Humanities course suggest: instructors and administrators are right that curriculum and teaching styles should adapt to integrate new technologies, but the new technologies should not cause students to feel alienated.
As digital assessment and evaluation are newer to the academic environment, some trial-and-error is to be expected, but the ultimate goal should be the encouragement and facilitation of student learning, with an emphasis on students coming to understand their own progress with the materials and tools.