
REALLY Preliminary Work: Word Clouds of Google TOS

Word Map #1: Google Apps for Education Terms of Service

[Screen shot: word cloud of the Google Apps for Education Terms of Service]

Word Map #2: Google Terms of Service

[Screen shot: word cloud of the Google Terms of Service]


Annotated Bibliography Entry: Crow in DWAE

Crow, A. (2013). Managing datacloud decisions and “big data”: Understanding privacy choices in terms of surveillant assemblages. In H. A. McKee & D. N. DeVoss (Eds.), Digital writing assessment & evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press. Retrieved from

Crow addresses the ethics of assessment by defining online composition portfolios as surveillant assemblages: collections of electronic student data that may be used to create increasingly accurate aggregate student profiles. Composition studies seeks assessment techniques, strategies, and technologies that are effective and fair. As big data continues to proliferate, Crow argues that we need to understand and communicate the specific ways that student data are used in surveillance. Our goal should be to move toward the caring end of a continuum that runs between caring and control.

Google Drawing Visualization of Surveillance Continuum

For-profit assessment platforms, from Google Apps to ePortfolio companies, have sharing and profiling policies that are troubling and may represent more controlling than caring policies. These controlling policies may remove agency from students, faculty, and composition or English departments and transfer agency to university IT departments, university governance, or even corporate entities. Crow concludes that the best option would be a discipline-specific and discipline-informed DIY assessment technology that would take into consideration these real concerns about surveillant assemblages.

The concept of a surveillant assemblage is a network concept. It’s a dynamic collection of student information grown ever larger by the addition of student files. Crow demonstrates that electronic portfolios used for assessment are networked collections of files, collected over time for assessments, that build a (potentially) dangerously accurate profile of the student in aggregate—a profile that can be used for extra-assessment purposes through data mining.

Contemporary networks make privacy a complicated issue, a moving target, one that requires decisions on the part of participants regarding levels of privacy expected.

“[I]n the midst of venues that facilitate social networks, and in the midst of increasing technology capabilities by corporations and nation states, conceptions of privacy are changing shape rapidly, and individuals draw on a range of sometimes unconscious rubrics to determine whether they will opt in to systems that require a degree of personal datasharing.” (Crow 2013)

Crow responds that English studies as a (supra)discipline has a responsibility to investigate the effects of surveillant assemblage collections and to maintain student, faculty, and departmental or disciplinary agency in technology and network selection and implementation.

Miller’s genre, Bazerman’s genre set, and Popham’s boundary genre all demonstrate the socially active nature of genre and genre collections. Crow makes similar observations about student files as surveillant data collections: they take on a social activity of their own that can’t necessarily be predicted or controlled. As networked action, genre can expand within its framework and, in the case of boundary genre, expand into interdisciplinary spaces. Tension and contradiction (à la Foucault) are continually present in such networks, including surveillant assemblages. Unexpected results—like the superimposition of business on medical practice seen in Popham’s analysis, or the potential marketing of aggregated student data from assessment processes mentioned in Lundberg’s foreword—can, and perhaps likely will, occur if disciplinary agency is not maintained.

I’ve been working on my Twitter identity this past week, and a Tweet from @google about its transparency efforts caught my eye in relation to Crow’s article.

The tweet links to an entry in Google’s Official Blog, “Shedding some light on Foreign Intelligence Surveillance Act (FISA) requests,” dated Monday, February 3, 2014, which reports that Google is now legally able to share how many FISA requests it receives. The blog entry, in turn, links to Google’s Transparency Report, which “disclose[s] the number of requests we [Google] receive from each government in six-month periods with certain limitations.”

What struck me about the Transparency Report, the blog post, and the Twitter post in relation to Crow’s article is the important role reporting plays in my willingness to contribute to my own surveillant assemblage. I feel a little better knowing that Google reports on such requests in an open and relatively transparent way, even if I also know that Google uses my data to create a profile of me that feeds me advertising and other profile-specific messages. This is my own “sometimes unconscious rubric” to which I turn when making decisions about how much and whether to opt in. The question it raises is whether we give our students, faculty, staff, and prospects the agency to make these opt-in decisions, consciously or unconsciously. As a Google Analytics and web metrics consumer, I deal with these especially sensitive issues on a daily basis.

[CC licensed image from flickr user Richard Smith]

Object of Study: Google Analytics

I have chosen Google Analytics as my object of study for ENGL 894 Theories of Networks. More specifically, I have chosen the Google Analytics account I manage on behalf of the University of Richmond School of Professional and Continuing Studies. Although this account is a sub-account on the larger University of Richmond Google Analytics roll-up account, I will limit my study to my School’s subdomain’s account.

Google Analytics is a free web activity data collector, aggregator, and reporter. It’s among the most popular web metrics products because it is free, but it is certainly not the only one—other products exist that do a more thorough job of collecting all traffic across multiple web platforms, including advertising and direct email from multiple providers. Google Analytics provides data on all web traffic, but it segments that traffic in reports largely to the benefit of its related products, like Google AdWords and the Google Display Network.

Google Analytics offers web administrators and marketers a window into the activity of users on their website(s). It collects and aggregates data about all web activity on a given domain—in my case, the SPCS subdomain of the university’s domain. Any time a person visits any page in the subdomain, Google Analytics collects aggregate data on the visit, including but not limited to user operating system, browser type and version, platform (desktop, mobile, or tablet), referral source, internal previous and next pages, time spent on the page, time spent within the domain, exit pages, and much more.

Google collects two types of data: metrics (“quantitative measurements of users, sessions and actions”) and dimensions (“characteristics of users, their sessions and actions”) (Google Analytics Academy, 2013). By combining specific metrics and dimensions in a report, web administrators and marketers can answer specific questions about visitor behavior, like which web pages generate more traffic than others and which pages result in visitors remaining on, rather than exiting, the website. As a result, Google Analytics provides key quantitative data to support specific goals, including increased time on site and increased traffic to a specific page. Adding e-commerce and online advertising data collection in Google Analytics offers a complete picture of the effectiveness of online communication efforts across multiple digital channels and platforms.
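The pairing of one dimension with one or more metrics can be sketched in code. Below is a minimal illustration shaped like a request body for the Google Analytics Reporting API (v4); the view ID, the date range, and the `build_report_request` helper are placeholders of my own, not part of my School’s actual account or Google’s documentation.

```python
# Hypothetical sketch: pairing a dimension (a characteristic of users or
# actions) with metrics (quantitative measurements), in the shape of a
# Google Analytics Reporting API v4 request body. VIEW_ID is a placeholder.

def build_report_request(view_id, dimension, metrics,
                         start="7daysAgo", end="today"):
    """Combine one dimension (e.g. ga:pagePath) with a list of metric
    expressions (e.g. ga:sessions) into a single report request."""
    return {
        "reportRequests": [{
            "viewId": view_id,
            "dateRanges": [{"startDate": start, "endDate": end}],
            "dimensions": [{"name": dimension}],
            "metrics": [{"expression": m} for m in metrics],
        }]
    }

# Which pages generate sessions, and how long do visitors stay on them?
request = build_report_request("000000", "ga:pagePath",
                               ["ga:sessions", "ga:avgTimeOnPage"])
```

Each row of the resulting report would answer one such question: a page path (the dimension) alongside its session count and average time on page (the metrics).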

Google Analytics offers several interesting uses in English studies. Since websites, especially in higher education, are written to communicate, English studies should be able to use Google Analytics to measure whether written communications are effective. Professional communication pedagogy should address specific, measurable ways to determine whether communication is successful; by tracking aggregate visitor behavior on a specific communication goal, like a call to action, writers can hone messages to communicate more effectively.

Google Analytics is a tool for analysis. It collects metadata about web visits as a means to understanding the way visitors navigate a set of web pages. Its analytical tools and methodology are ripe for analysis and critique. Google obfuscates its search algorithm; Google Analytics offers a window into the results of searches, which helps administrators and analysts reverse engineer Google’s search algorithm. As search becomes the default way people engage with the web, Google’s social and economic clout offer intriguing opportunities to open and close markets, to serve the underserved or to underserve a specific population. Obfuscating the source of this power invites social critique, a favored method for English studies.

Google Analytics is a window into the remarkably detailed visitor data Google collects on each visit to a given web site. Such aggregate data provides Google a powerful tool to offer online advertisers that seek to target specific demographics. Scholars in English studies have opportunities to consider the potential social impact of collecting and sharing such data—to analyze and critique collection methodologies, social implications, marketing efforts, and communication channels. The data are quantitative; English studies provides an opportunity to examine, qualify, and put a human face on the data results.

If we consider a single web domain (or, in this case, subdomain) a node in a global virtual network, Google Analytics is the shadow “metanetwork” informing the node. Whether we consider the node the subdomain itself, a folder in a subdomain, or an individual page within a folder, Google Analytics is the shadow network providing metadata at the intersection of the human and the electronic, of the virtual node and the visitor node. Its ability to function as a network while informing about other networks is exciting and complex, ideal fodder for theorizing networks.


Google Analytics Academy. (2013, October). Key metrics and dimensions defined [Video transcript]. Digital Analytics Fundamentals. Retrieved from

[Behavior flow visualization courtesy University of Richmond SPCS Google Analytics account]

Cloudy with a Chance of Connection

Cloud Computing, A World Connected

How Stuff Works? Assignment

Image comes from Wajeeha blog.

Things to Know Before Diving In (or, swimming up?)

If you are like me and have a love/hate relationship (with a lot of cursing involved) with your computer, techie jargon is less than fun to decipher. So here is a brief, and not at all exhaustive, list of terms (with definitions) to refer back to whenever necessary (which I will probably be doing often).

Intranet vs. Extranet – An intranet is a private network, usually used by companies, that is founded on internet technologies but is inaccessible to the global internet community. An extranet is an intranet that is shared between more than one organization, making it accessible to particular individuals outside the company while still remaining inaccessible to the main internet community (an example on the BBC website was inventory management) (Schofield). As a way to assist companies with intranets and extranets in relation to cloud computing, Google released the Google Search Appliance, which allows users in a company to search through their documents and other data in much the same way an internet user would search for information through a search engine (though it comes with a hefty price tag).

Client Computer (or computer network) is the physical computer owned and operated by the client rather than the cloud operator. This could be a personal computer, a work computer, or a set of computers in the home or workplace that will be linked to the cloud system (Strickland).

Back End vs. Front End – With cloud computing, there is the Front End, which is the user interface (this can be in the form of mobile music apps like Amazon Cloud Player or iTunes), and the Back End, which is the server and cloud-computing services (Strickland; Crawford). One of the main concerns with cloud services (along with issues of security and privacy) is that as more and more users come to depend on them, users will no longer need to rely as heavily on IT specialists, so those workers will find more jobs on the Back End than the Front End (Strickland, “Cloud Computing”).

Middleware is software that allows computers on a network to talk to one another and is part of the central server (Strickland, “Cloud Computing”). An example would be Oracle Fusion Middleware.

Redundancy here is defined as the process of making copies of data for backup. Since the cloud is information on a hard drive not owned by the client (think of cloud computing as renting digital space), the owner of whichever cloud system is being used (Google, Amazon, and Apple are top contenders) then makes copies of data to different physical computers in case of a computer crashing, power outage, and the like. Redundancy is necessary to keep the cloud operator from losing a client’s data (so pitchforks aren’t necessary…most of the time). (Strickland, “Cloud Computing”)
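The redundancy idea can be sketched in a few lines of code. This is a toy illustration of my own (with plain dicts standing in for separate physical machines), not how any real cloud operator implements replication.

```python
# Toy sketch of redundancy: the operator writes each object to several
# independent stores so a single failure doesn't lose the client's data.
# The "stores" here are dicts standing in for separate physical machines.

def replicate(stores, key, data, copies=3):
    """Write `data` under `key` to the first `copies` stores."""
    for store in stores[:copies]:
        store[key] = data

def read_with_failover(stores, key):
    """Return the first surviving copy, skipping stores that lost it."""
    for store in stores:
        if key in store:
            return store[key]
    raise KeyError(key)

machines = [{}, {}, {}]
replicate(machines, "thesis.docx", b"chapter one...")
machines[0].clear()                     # simulate one machine crashing
data = read_with_failover(machines, "thesis.docx")  # the data survives
```

The point of the sketch is simply that losing one "machine" leaves the other copies intact, which is why pitchforks aren’t (usually) necessary.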

Grid Computing System “is a computer network in which each computer’s resources are shared with every other computer in the system.” This offers great possibilities for researchers who require more processing power than an individual computer can provide, especially if the grid system were the basis for the cloud system (Strickland, “Grid Computing”).
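A single-machine sketch of the grid idea: a large job is split into chunks and farmed out to workers, the way a grid farms work out to member computers. Threads here stand in for grid nodes, and the helper names are my own invention.

```python
# Toy grid: split one big job into chunks and let "nodes" (threads here,
# separate computers in a real grid) each contribute a partial result.
from concurrent.futures import ThreadPoolExecutor

def node_work(chunk):
    """One 'node' computes a partial result (a partial sum of squares)."""
    return sum(n * n for n in chunk)

def grid_sum_of_squares(numbers, nodes=4):
    """Divide the job across `nodes` workers and combine their results."""
    size = max(1, len(numbers) // nodes)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        return sum(pool.map(node_work, chunks))

total = grid_sum_of_squares(list(range(1000)))
```

No single worker computes the whole answer; the system's combined resources do, which is the appeal for researchers whose problems outgrow one machine.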

Server Virtualization is a nifty procedure that tricks a server into thinking that it is actually multiple servers, “each with its own operating system” (OS), which in turn eliminates a lot of “unused processing power” and reduces how many computers are actually necessary (Strickland, “Cloud Computing”).
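A toy model of that idea: one physical server’s capacity gets carved into several virtual servers, each presenting itself as an independent machine with its own OS. The class, the core counts, and the OS names are all invented for illustration; real hypervisors are far more involved.

```python
# Toy model of server virtualization: carve one physical server's cores
# into several virtual machines so less capacity sits unused.

class PhysicalServer:
    def __init__(self, cores):
        self.cores = cores   # total physical capacity
        self.vms = []        # virtual servers carved out so far

    def provision_vm(self, name, cores, os):
        """Create a virtual server if enough unused capacity remains."""
        used = sum(vm["cores"] for vm in self.vms)
        if used + cores > self.cores:
            raise RuntimeError("not enough spare capacity")
        vm = {"name": name, "cores": cores, "os": os}
        self.vms.append(vm)
        return vm

host = PhysicalServer(cores=16)
host.provision_vm("web", 4, "Linux")      # three "servers" with their
host.provision_vm("mail", 4, "Linux")     # own operating systems, all
host.provision_vm("legacy", 4, "Windows") # running on one physical box
```

Three machines’ worth of work now runs on one box, which is exactly the “unused processing power” savings the definition describes.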

Autonomic Computing is mostly theoretical at the moment, with labs like NEC Laboratories America researching how to create such systems. This type of system would hypothetically manage its own repairs and problem prevention within a networked system, which also has the potential to decrease IT jobs considerably, as the system would be taking care of itself (Strickland, “Cloud Computing”).

Authorization Format is a procedure that would give users limited access to “data and applications relevant to [their] jobs.” This is a way for clients and cloud operators to strengthen privacy, along with authentication (which is what we do when we type in passwords to Google Drive, iTunes, and a whole host of other applications that we use on a daily basis) (Strickland, “Cloud Computing”). Privacy and security are both big issues for those who are trying to decide whether cloud services are right for them and/or their companies as the client is allowing the cloud operator to take data and store it in digital space (and physical hard drive space) rented and not owned by the client.
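A minimal sketch of such role-limited access, with roles and resources invented for illustration (nothing here reflects any real system’s permission scheme):

```python
# Toy authorization check: each role is granted access only to the data
# and applications relevant to that job. Roles/resources are invented.

PERMISSIONS = {
    "student":  {"own_portfolio", "course_pages"},
    "faculty":  {"own_portfolio", "course_pages", "gradebook"},
    "it_admin": {"own_portfolio", "course_pages", "gradebook", "server_logs"},
}

def authorized(role, resource):
    """Return True only if the role has been granted that resource."""
    return resource in PERMISSIONS.get(role, set())

assert authorized("faculty", "gradebook")       # relevant to the job
assert not authorized("student", "server_logs") # outside the job's scope
```

Authentication (the password) establishes who you are; authorization, as above, limits what that identity can reach, and the two together are the privacy safeguards the definition describes.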

Onwards and Upwards to SkynetCloud

Cloud computing, which is gaining dominance in how we deal with data across different fields (such as business, academics, shopping, listening to music, and personal communications), is the essence of a network. The cloud system can link devices (desktops, laptops, mobile phones, gaming consoles, and tablets) together as a way to store data so that the client can be anywhere in the world and still have access to his or her information without needing to carry a particular device. This does raise a lot of questions and concerns about security and privacy (those in love with the Terminator franchise like I am will be reminded of Skynet, without the rampaging, murderous robots…just yet), as the client is essentially handing over data to an outside party who then stores the information on physical computers elsewhere (several, if you remember the term redundancy, as a way to keep data from being lost to an accident or hardware malfunction).

Archived conference poster for the 2012 South by Southwest Conference.

For those who are a little wary of placing their personal data in the hands of strangers on a computer they will probably never see, cloud computing may be closer to home than they realize. These systems have become especially prominent in music listening: a person can buy a song from Amazon or iTunes and listen to it on his or her phone while on the go, or use the digital radio station-esque Pandora and listen to the random song list the program generates and then adapts to the user’s liking or disliking of individual songs (Strickland, “Music Clouds”; Crawford, “Amazon Cloud Player” and “iTunes Cloud”). With iTunes Cloud, users can sync their devices (generally without too much issue) in order to create backups and share files across devices. Images, music, contact lists, videos, and other media are no longer restricted to just a computer or phone, making it easier for content to be retrieved should something malfunction or need to be replaced (Crawford, “iTunes Cloud”). This has further connection ramifications, as such software turns a home into a network: computers can link together not just over internet connections but also through Home Sharing. In effect, we and our devices both become nodes of connection, linked together through cloud computing systems.

Cloud computing itself has the feel of science fiction, as it allows users all over the world to connect their devices to each other in a closed system where only they have access, or to extend their reach outwards and participate in larger virtual communities founded on cloud technologies. In a major way, cloud systems are reshaping our relationship to data and data storage: we no longer need to worry about our computer crashing or not having access to documents while on a trip; cloud operators promise us that our data will be there when we need it, wherever we are (unless we are lost on some remote island or stuck in the most remote region of some mountain where cell phone service is non-existent, which seems less and less likely). Cloud computing is all about interconnections, whether for personal data storage, applications, collaboration, or business efficiency. It allows any device with an internet connection to link to whatever cloud computing system the user has access to, putting information just a few clicks away. This technology, for better or worse, is enhancing the image of the world as a digital network, with us as the nodes and the cloud as the connectors.

ENGL894 Asynchronous Activity: How connected are you?


Crawford, Stephanie. “Does ‘to the Cloud’ Mean the Same Thing as ‘Let’s Google That’?” HowStuffWorks. HowStuffWorks, 08 Aug. 2011. Web. 17 Jan. 2014.

Crawford, Stephanie. “How the Amazon Cloud Player Works.” HowStuffWorks. HowStuffWorks, 20 Apr. 2011. Web. 17 Jan. 2014.

Crawford, Stephanie. “How the Apple iCloud Works.” HowStuffWorks. HowStuffWorks, 08 Aug. 2011. Web. 17 Jan. 2014.

Schofield, Jack. “What are Intranets and Extranets?” BBC WebWise. BBC, 09 Sept. 2010. Web. 17 Jan. 2014.

Strickland, Jonathan. “How Cloud Computing Works.” HowStuffWorks. HowStuffWorks, 08 Apr. 2008. Web. 17 Jan. 2014.

Strickland, Jonathan. “How Grid Computing Works.” HowStuffWorks. HowStuffWorks, 25 Apr. 2008. Web. 17 Jan. 2014.

Strickland, Jonathan. “How Music Clouds Work.” HowStuffWorks. HowStuffWorks, 08 Aug. 2011. Web. 17 Jan. 2014.