Annotated Bibliography Entry: Crow in DWAE

Crow, A. (2013). Managing datacloud decisions and “big data”: Understanding privacy choices in terms of surveillant assemblages. In H. A. McKee & D. N. DeVoss (Eds.), Digital writing assessment & evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press. Retrieved from http://ccdigitalpress.org/dwae/02_crow.html

Crow addresses the ethics of assessment by defining online composition portfolios as surveillant assemblages: collections of electronic student data that can be used to build increasingly accurate aggregate student profiles. Composition studies seeks assessment techniques, strategies, and technologies that are both effective and fair. As big data proliferates, Crow argues, we need to understand and communicate the specific ways student data are used in surveillance, and our goal should be to move toward the caring end of a surveillance continuum that runs between caring and control.

[Google Drawing visualization of the surveillance continuum]

For-profit assessment platforms, from Google Apps to ePortfolio companies, have sharing and profiling policies that are troubling and that sit closer to the controlling end of the continuum than the caring end. Such policies may strip agency from students, faculty, and composition or English departments and transfer it to university IT departments, university governance, or even corporate entities. Crow concludes that the best option would be a discipline-specific, discipline-informed DIY assessment technology designed with these real concerns about surveillant assemblages in mind.

The surveillant assemblage is a network concept: a dynamic collection of student information that grows ever larger as student files are added. Crow demonstrates that electronic portfolios used for assessment are networked collections of files, gathered over time for assessment purposes, that in aggregate build a (potentially) dangerously accurate profile of the student, a profile that data mining can then put to uses well beyond assessment.
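Crow's point is conceptual, but the aggregation mechanics are easy to see in miniature. The following Python sketch is purely illustrative (every record, field, and function name here is hypothetical, not drawn from Crow or from any real portfolio system): individually innocuous files, merged per student, yield a profile that says far more than any single file does.

```python
# Toy illustration (all names and fields hypothetical): how scattered
# student records, each harmless alone, merge into a detailed profile.
from collections import defaultdict

# Hypothetical records as they might accumulate in a portfolio system.
records = [
    {"student": "s001", "source": "essay_draft", "topic": "immigration"},
    {"student": "s001", "source": "peer_review", "sentiment": "anxious"},
    {"student": "s001", "source": "login_log", "hour": 2},
    {"student": "s001", "source": "login_log", "hour": 3},
]

def build_profile(records):
    """Merge per-student records into one aggregate profile.

    Each record adds only a little, but the merged profile reveals
    far more than any single file: a surveillant assemblage in miniature.
    """
    profiles = defaultdict(lambda: defaultdict(list))
    for rec in records:
        student = rec["student"]
        for key, value in rec.items():
            if key != "student":
                profiles[student][key].append(value)
    return profiles

print(dict(build_profile(records)["s001"]))
# {'source': ['essay_draft', 'peer_review', 'login_log', 'login_log'],
#  'topic': ['immigration'], 'sentiment': ['anxious'], 'hour': [2, 3]}
```

The asymmetry is the point of the toy: each record is trivial to justify collecting, but the merged dictionary is exactly the kind of aggregate profile Crow warns can be mined for extra-assessment purposes.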

Contemporary networks make privacy a complicated, moving target, one that requires participants to decide what level of privacy they expect.

“[I]n the midst of venues that facilitate social networks, and in the midst of increasing technology capabilities by corporations and nation states, conceptions of privacy are changing shape rapidly, and individuals draw on a range of sometimes unconscious rubrics to determine whether they will opt in to systems that require a degree of personal datasharing.” (Crow, 2013)

Crow contends that English studies as a (supra)discipline has a responsibility to investigate the effects of surveillant assemblage collections and to maintain student, faculty, and departmental or disciplinary agency in the selection and implementation of technologies and networks.

Miller’s genre, Bazerman’s genre set, and Popham’s boundary genre all demonstrate the socially active nature of genres and genre collections. Crow makes similar observations about student files as surveillant data collections: they take on a social activity of their own that can’t necessarily be predicted or controlled. As networked action, genre can expand within its framework and, in the case of boundary genre, expand into interdisciplinary spaces. Tension and contradiction (à la Foucault) are continually present in such networks, including surveillant assemblages. If disciplinary agency is not maintained, unexpected results can, and likely will, occur: the superimposition of business on medical practice seen in Popham’s analysis, or the potential marketing of aggregated student data from assessment processes and results mentioned in Lundberg’s foreword.

I’ve been working on my Twitter identity this past week, and a tweet from @google about its transparency efforts caught my eye in relation to Crow’s article.

The tweet links to an entry on Google’s Official Blog, “Shedding some light on Foreign Intelligence Surveillance Act (FISA) requests” (February 3, 2014), which reports that Google is now legally able to share how many FISA requests it receives. The blog entry, in turn, links to Google’s Transparency Report, which “disclose[s] the number of requests we [Google] receive[s] from each government in six-month periods with certain limitations.”

What struck me about the Transparency Report, the blog post, and the tweet in relation to Crow’s article is the important role reporting plays in my willingness to contribute to my own surveillant assemblage. I feel a little better knowing that Google reports on such requests in an open and relatively transparent way, even though I also know that Google uses my data to build a profile of me that feeds me advertising and other profile-specific messages. This is my own “sometimes unconscious rubric,” the one I turn to when deciding how much to share and whether to opt in. The question it raises is whether we give our students, faculty, staff, and prospects the agency to make these opt-in decisions, consciously or unconsciously. As a consumer of Google Analytics and other web metrics, I deal with these especially sensitive issues daily.

[CC licensed image from flickr user Richard Smith]