
Kindle in the Writing Classroom

Acheson, P., Barratt, C. C., & Balthazor, R. (2013). Kindle in the writing classroom. Computers and Composition, 30(4), 283-296. doi:10.1016/j.compcom.2013.10.005

Summary

[Image: Amazon Kindle 3 by NotFromUtrecht (own work). Licensed under CC BY-SA 3.0 via Wikimedia Commons.]

This article reports on a study of the pedagogical effects of using Kindle™ readers to access texts in an English classroom in 2011. The results show minimal changes in learning from using Kindle devices for reading and writing, but suggest that present and future students will seek to access texts in multiple modes on multiple platforms (such as laptops or desktops, smartphones, and e-readers).

Findings

Two librarians and an English professor at the University of Georgia received a grant to provide Kindle 3.0 readers “to be used as an integral part of the writing classroom experience for students” (p. 283) in a literature and composition class. The three developed a mixed-methods study to assess students’ “comfort with and use of technology, their preferred method for reading different types of texts, and their experience with the Kindle at the beginning, mid-point, and end of the semester” (p. 284). The researchers concluded that pedagogical aims were accomplished no more or less effectively with Kindles than without them: “None [of the students] noted either benefit or liability in the use of the Kindles for learning” (p. 291).

Although learning outcomes were not affected, researchers noted that some students struggled with disorientation as they transitioned from print text to e-text. The researchers recognized the value of disorientation: “We as professors and instructional librarians would be wise to expect and even encourage new tools in the classroom; the disorientation that accompanies these evolutions is often paired with new and valuable possibilities” (p. 293).

The Kindle afforded searching, highlighting, annotating, and bookmarking, but not every student found those features useful. In fact, the librarians and the professor alike observed students taking notes on paper with Kindles in hand during class sessions. The researchers realized that students accessed texts in multiple formats as conditions dictated. Some found print copies easier to read and annotate. All used their Kindles for reading, but most also used other digital devices to access texts.

Review

I found most interesting the conclusion that students “prefer access to materials in multiple formats” (p. 293). This suggests that we teachers must be prepared to support and provide information on multiple platforms for our students. For example, we might:

  • Provide Kindle section numbers and print page numbers for readings.
  • Expect students to highlight and annotate electronically and write marginalia in print copies.
  • Evaluate the fairness of asking questions about repeated uses of words as part of textual analysis, given the e-reader’s ability to conduct full-text searches.
  • Determine whether an e-reader’s ability to “read” the text back to the student is adequate to grasp its meaning and significance.

Given the study’s sharp focus on the Kindle 3.0, I would recommend this article primarily to colleagues seeking information about the use of e-reading devices and/or e-reader software in classes. That said, colleagues seeking insight into the future of digital text access will also likely find the study informative.

Re/Proposed Object of Study: Google Analytics

I’m sticking with Google Analytics as my object of study. I’m too invested in the object, and it remains an important part of my professional responsibilities and therefore an object that I need to study, whether for this class or for professional development. In fact, this month I earned another certificate of completion for a Google Analytics Academy program, “Google Analytics Platform Principles.” The outcomes benefit me academically and professionally: the course contributed to my understanding of the underlying data structure and collection principles for the assignment’s ongoing case study, and it provided me with some intriguing ideas for importing data into GA beyond those collected by the tracking code, ideas that could help my team measure the success of our marketing efforts.

The GA platform consists of four activities based on dimensions (user characteristics) and metrics (quantitative interaction information): collection, configuration, processing, and reporting. The Google Developers guide provides the following helpful visualization of the platform’s activity.

[Image: Google Analytics Platform Components. Original image from the Google Developers guide.]

Collection: User-interaction data are collected either through the embedded tracking code snippet or through the Measurement Protocol, an alternative system for manually submitting user-interaction data from mobile apps and other internet-connected appliances.
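To make the collection step concrete, here's a minimal Python sketch that builds the payload for a single Measurement Protocol pageview hit; the tracking ID and client ID below are placeholders, not real account values.

```python
from urllib.parse import urlencode

def build_pageview_hit(tracking_id, client_id, page_path):
    """Build a Measurement Protocol payload for a single pageview hit.

    The parameter names (v, tid, cid, t, dp) follow the Universal
    Analytics Measurement Protocol; the IDs are placeholders.
    """
    params = {
        "v": "1",            # protocol version
        "tid": tracking_id,  # GA property ID, e.g. "UA-XXXXX-Y"
        "cid": client_id,    # anonymous client (user) identifier
        "t": "pageview",     # hit type: the interaction level of the data model
        "dp": page_path,     # document path being viewed
    }
    return urlencode(params)

# A hit like this would be sent to https://www.google-analytics.com/collect
payload = build_pageview_hit("UA-XXXXX-Y", "555", "/admissions")
```

The embedded tracking code assembles essentially the same kind of payload automatically on every page load.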

Configuration: Data are configured by the GA account manager(s) through the GA web interface or the Management API. Configuration settings permanently delimit collected data; as a result, at least one configuration (or View, as GA calls these configurations) should remain unfiltered to ensure that all possible data stay accessible somewhere in the account.
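A hypothetical sketch (not GA's internal representation) of why that unfiltered View matters: filters are applied as data are processed, so hits excluded from a View cannot be recovered later within that View. The hit records and IP addresses here are invented examples.

```python
def apply_view(hits, ip_exclude=None):
    """Return the hits visible in a View, optionally excluding an IP prefix."""
    if ip_exclude is None:
        return list(hits)  # the unfiltered View keeps every hit
    return [h for h in hits if not h["ip"].startswith(ip_exclude)]

hits = [
    {"page": "/admissions", "ip": "10.1.2.3"},     # internal traffic (example)
    {"page": "/admissions", "ip": "203.0.113.5"},  # external visitor (example)
]

raw_view = apply_view(hits)                          # unfiltered View: all hits
external_view = apply_view(hits, ip_exclude="10.")   # internal traffic removed
```

Only the unfiltered View retains both hits; the filtered View has permanently discarded the internal one.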

Processing: Based on configuration settings (filters, groupings, etc.), raw data are processed and stored in aggregated data tables and in configured raw forms. Data tables organize data in pre-determined collections for quick access, but queries can be constructed to pull data from configured raw forms. Often such queries will sample data rather than pull all values, once again to speed the presentation of results.
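The roll-up into aggregated data tables can be sketched as follows; the raw hit records are invented, and the single counter stands in for GA's (much larger) set of pre-computed tables.

```python
from collections import Counter

# Invented raw hits, as they might arrive from the collection step.
raw_hits = [
    {"type": "pageview", "page": "/"},
    {"type": "pageview", "page": "/admissions"},
    {"type": "event",    "page": "/admissions"},
    {"type": "pageview", "page": "/admissions"},
]

# Processing rolls raw hits up into an aggregate table (pageviews per page)
# so common reports can be answered without rescanning every hit.
pageviews_by_page = Counter(
    h["page"] for h in raw_hits if h["type"] == "pageview"
)
```

A report drawn from this table is instant; a query the table doesn't anticipate would have to go back to the raw (and possibly sampled) data.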

Reporting: Data are reported via the GA web interface or via the Core Reporting API or Multi-Channel Funnel Reporting API. Reports can be constructed that will not provide meaningful results; not all dimensions are compatible or reportable with all metrics. As a result, GA account managers must construct views carefully and develop reporting goals and practices that yield meaningful and accurate results.
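As a sketch of what a Core Reporting API (v3) request looked like at the time of this post, the following builds a query URL; the view ID is a placeholder, and a real request would also need OAuth credentials.

```python
from urllib.parse import urlencode

def reporting_query(view_id, metrics, dimensions, start, end):
    """Assemble a Core Reporting API v3 query URL (view ID is a placeholder)."""
    base = "https://www.googleapis.com/analytics/v3/data/ga"
    params = {
        "ids": "ga:" + view_id,
        "start-date": start,
        "end-date": end,
        "metrics": ",".join(metrics),
        "dimensions": ",".join(dimensions),
    }
    return base + "?" + urlencode(params)

# Sessions by traffic source for January 2014.
url = reporting_query("12345678", ["ga:sessions"], ["ga:source"],
                      "2014-01-01", "2014-01-31")
```

Note that the API will reject incompatible dimension/metric combinations, which is exactly why reporting goals have to be developed with the valid pairings in mind.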

The GA data model consists of three levels that help collect and organize dimensions and metrics: user (visitor), session (visit), and interaction (hit). Lesson 1.3 in the GA Academy Platform Principles course offers the following visualization of this model.

[Image: Overview of the Google Analytics data model. Original image from the GA Academy Platform Principles Lesson 1.3.]

User (visitor): The user is identified by the browser or mobile device the visitor used to access the site.

Session (visit): The session is defined as the time the user (browser or device) was active on the website.

Interaction (hit): Interactions are individual actions taken by a user that send hit data to GA servers. These may be pageviews (loading a page), events (clicking a movie button), transactions (checking out of an online store), or social interactions (sharing content on a social network).

As this chart reveals, the GA data model breaks engagement into a hierarchy. Interactions occur within sessions, and sessions are associated with a user. A user may have multiple sessions, and each session may have multiple interactions and interaction types. The GA account manager must determine measurement scope using this hierarchy. Is the goal to measure and report on interaction-level activity (number of pageviews regardless of user or session); session-level activity (common entrance or exit pages for sessions regardless of user); or user-level activity (number of unique users who completed a specific task, regardless of session)? The measurement goal determines the reporting scope.
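The three scopes can be sketched with invented data shaped like the GA hierarchy: users contain sessions, and sessions contain interactions (hits). The same data answer three different questions depending on the level at which we count.

```python
# Invented records: two users, three sessions, four hits in total.
users = {
    "user-a": [  # two sessions
        [{"type": "pageview", "page": "/"}, {"type": "event"}],
        [{"type": "pageview", "page": "/apply"}],
    ],
    "user-b": [  # one session
        [{"type": "pageview", "page": "/apply"}],
    ],
}

# Interaction-level: total pageviews, regardless of user or session.
total_pageviews = sum(
    1 for sessions in users.values()
    for session in sessions
    for hit in session
    if hit["type"] == "pageview"
)

# Session-level: sessions whose entrance page was "/apply", regardless of user.
apply_entrances = sum(
    1 for sessions in users.values()
    for session in sessions
    if session[0].get("page") == "/apply"
)

# User-level: unique users who viewed "/apply" in any session.
apply_users = sum(
    1 for sessions in users.values()
    if any(hit.get("page") == "/apply"
           for session in sessions for hit in session)
)
```

Here the counts differ at each scope, which is the point: the measurement goal, not the raw data, determines which number is the right one to report.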

So far, I’ve struggled to define the scope of GA as I’ve applied theories to it.

I’ve described GA as the reporting “arm” of a web development and visitor ecology in which nodes include web marketers and web developers, web services technicians and coders, database managers, marketing writers, content managers, website visitors, browsers and platforms, Internet hardware and software, and GA servers. In this model, GA collects traces of the active relationships that occur among these nodes.

I’ve also described GA as a mediating technology that directly and indirectly limits and controls the data collected from website interactions. Specific, delimited data points are the target of data collection and reporting. Those data points, and only those data points, are available to GA end users who seek information about user behavior on a website.

Defining Google Analytics

While neither of these descriptions is inaccurate, neither quite achieves the focus I’d like to apply to my case studies. I propose a description that focuses more directly on the GA platform’s four activities and the GA data model. Specifically, GA is a digital tool that collects user interaction data at three levels — user, session, and interaction — in the form of dimensions and metrics. Data collected are configured based on specific, targeted, goal-oriented decisions by GA administrative users, processed in accordance with those decisions, and output through aggregated data tables to GA users, both administrative and standard or limited-access users. This description focuses specifically on the agency of GA administrators; in the case of my GA account for the University of Richmond School of Professional and Continuing Studies, that agency resides primarily in me and indirectly in our marketing team.

Application to English Studies

GA focuses on assessing outcomes. GA administrators configure data collected in GA to assess the results of specific marketing efforts. For example, in order to examine general and specific browsing patterns of external (non-UR) visitors, I need to configure our GA account with a view that filters out internal (UR-based) web traffic by IP address. Examining these browsing patterns enables our marketing team to determine whether the information we’re providing is attracting prospective students in ways that our strategic marketing plan requires or expects. In short, we are using configured data reports to assess the success of our web-based marketing efforts. Such assessments offer English studies models for assessment that can and should be incorporated into writing assessment, writing program assessment, and perhaps even departmental assessments. Data-driven assessments can and should include both user characteristics and metrics; that is, they should be based on user profiles intentionally constructed to include or reflect contemporary, lived experience. For English studies, this means our data collection efforts must be based in localized environments and configured to process and report on specific objectives and outcomes.

GA collects metrics, but because it also collects dimensions (user characteristics), its reporting is both verbal and numerical. As such, its reports are rhetorical. They can and should be problematized as rhetoric. Specific decisions to collect or not collect demographic data, for example, could be problematized using cultural studies or gender studies. Specific ways of reporting demographic data, including terms used to describe or define those demographic qualities, are also areas to be analyzed and problematized. From its use of colors to its data processing strategies (which remain obscure), GA is fair game for rhetorical analysis and critique, and scholars in English studies should focus more critical attention on analyzing GA rather than using GA to measure the success of web-based instructional or informational efforts.

GA as Network

GA is free and remarkably powerful. Google appears to be working to make it even more broadly applicable as a digital analytics platform, not simply a web analytics platform. The distinction is important to its role as a network. Web analytics are useful and meaningful, but they are limited in scope to websites and web interactions. Digital analytics, on the other hand, encompass a much broader category of data, like digital advertising (including web-based and localized advertising efforts, like digital billboards and online display ads), appliance function (including communications between digital devices like wifi-connected refrigerators or cell-connected washer/dryer sets), and mobile phone uses beyond calling.

As GA broadens its applicability as a digital analytics platform, its reach and scope become global, both in location and function. GA can begin to measure global network functions; its ability to measure those functions is dependent on its own flexible network structure. Its collection, configuration, processing, and reporting functions are network-based and network-focused. Its internal structure, to the extent Google allows us to view it, is based on related aggregated data tables. And its objects of measurement are related digital nodes on networks. The result is that GA is both network reporter and networked reporter.

[Top image: Screen capture of Google Analytics homepage: google.com/analytics]