
Annotated Bibliography: Digital Writing Assessment & Fairness

Poe, Mya. “Making Digital Writing Assessment Fair for Diverse Writers.” Digital Writing Assessment & Evaluation. Eds. Heidi A. McKee and Dànielle Nicole DeVoss. Logan, UT: Computers and Composition Digital Press/Utah State University Press, 2013. Web. 3 February 2014.

In “Making Digital Writing Assessment Fair for Diverse Writers,” Mya Poe argues that digital writing assessment must consider fairness in order to provide an equal opportunity to all students. Digital writing assessment is becoming more important due to the rise in digital writing and multimodal composition. Poe presents two framings of assessment and technology: “writing assessment as technology” and “writing assessment with technology.” First, writing assessment is itself a technology. Poe cites the work of George Madaus, who argued that assessments fall under “very simple definitions of technology—the simplest being something put together for a purpose, to satisfy a pressing and immediate need, or to solve a problem” (qtd. in Poe). Second, digital writing is being assessed through automated essay scoring (AES). The research on AES is mixed: on one hand, it has been shown to be reliable; on the other, the programs are said to carry the “raced ideologies of their designers” (4). In response, Poe presents three key terms in digital writing assessment: validity, reliability, and fairness. She defines validity and reliability, but her focus is fairness.

Fairness is fundamental to digital assessment because through fairness instructors are able to “make valid, ethical conclusions from assessment results so that we may provide all students the opportunity to learn” (7). Poe uses the Standards for Educational and Psychological Testing as a starting point for developing more equitable large-scale digital writing assessments. Drawing on the Standards, Poe argues that fairness requires the following:

  • Thoughtful design
  • Extension of fairness through the entire assessment
  • Data collection (locally sensitive data through surveys, ethnographic research)
  • Interpretation of evidence in context
  • Framing of assessment results for the public

Under these fairness guidelines, instructors consider the goal of the assessment and ensure that students understand its purpose. In addition, when interpreting assessments, instructors consider the social context and connect writing program data to institutional data. The guidelines also encourage instructors to gather evidence about digital identities, understanding students’ past digital writing experiences and the nature of those experiences. The Standards “provide us ways to think about the interpretation and use of assessment results” (14). Digital writing assessment must move beyond traditional rubrics toward seeing assessment as a way for instructors to make informed choices for the benefit of their teaching and student learning.

The questions posed in Poe’s work (and in the entire collection) remind me of our initial class discussion where we talked about defining a network and understanding the affordances and roles of networks. Poe’s work takes aim at the question: “How might the multimodal, networked affordances of digital writing affect issues of equity and access?” (Preface). This goes beyond questions of access to the network to include the benefits that said network offers to a particular group. A minority group may have access to the network but lack the knowledge or ability to capitalize on that access. This could be a network of physical friends and co-workers or a digital network.

Poe’s work also made me think of assessment as a kind of boundary genre. Boundary genres are defined as genres that “may actively participate in interprofessional struggles about hierarchies, dominance, and values, helping to create, mediate, and store tensions” (Popham 283). Assessments work in this way: as scholar-teachers, we struggle over the role of assessment and over when and how to assess. Assessment also crosses professional disciplines/boundaries because teachers, administrators, and the public use assessments to make changes to pedagogy, develop policies, and judge quality/effectiveness, respectively.

I am still pondering the connection between boundary genres and networks. Would boundary genres serve as nodes, providing connections between disciplines and groups in order to redistribute information and communicate between different points?

Works Cited:

McKee, Heidi A., and Dànielle Nicole DeVoss, Eds. “Preface.” Digital Writing Assessment & Evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press, 2013. Web.

Poe, Mya. “Making Digital Writing Assessment Fair for Diverse Writers.” Digital Writing Assessment & Evaluation. Eds. Heidi A. McKee and Dànielle Nicole DeVoss. Logan, UT: Computers and Composition Digital Press/Utah State University Press, 2013. Web. 3 February 2014.

Popham, Susan L. “Forms as Boundary Genres in Medicine, Science, and Business.” Journal of Business and Technical Communication 19.3 (2005): 279-302. Print.

Images:

Van Deuren, Joe. “Fairness Heading.” Balanced Life Skills. Web. 3 February 2014.

Tarbell, Jared. “Node Garden.” Gallery of Computation, 2004. Web. 3 February 2014.