Biometric Battles

The Alabama IVF court ruling and the move for a national abortion ban highlight the rising threat to reproductive freedom. Another battle over bodily autonomy is taking place in the corporate world. It revolves around whether companies have the right to gather biometric information about employees or customers without their full consent.

Collection of fingerprints and voiceprints is not as oppressive as restricting the right to terminate a pregnancy, but it raises a legitimate privacy concern nonetheless. This is especially true as more companies embrace facial recognition, iris scanning and the like.

Disputes over biometric data collection frequently end up in court, where plaintiffs' lawyers bring class action claims and often win substantial settlements. For example, the Presence Health Network in Illinois just agreed to pay $2.6 million to settle litigation alleging that it violated employees' privacy rights by requiring them to scan their fingerprints for timekeeping without first obtaining their consent.

Violation Tracker documents 30 similar lawsuits that have been fully resolved, with total settlements of $1 billion. These cases are typically brought under the Illinois Biometric Information Privacy Act (BIPA), a 2008 law that is the strictest in the nation. BIPA cases can be brought in state court in Illinois, but in certain circumstances they can be filed in federal court.

Some of the biggest settlements have come in federal cases. The largest of all is the $650 million payment by Facebook in 2021 to resolve claims that its collection of facial data from users violated BIPA. The following year, TikTok paid out $92 million in a similar case.

The largest state court settlement was the $100 million paid by Google in connection with facial data collected by its photo service. In another state case, Six Flags agreed to pay $36 million to resolve claims it improperly collected fingerprint data from pass-holders.

Large employers that have entered into biometric settlements include Walmart, which paid $10 million to resolve claims it improperly collected worker handprints, and the Little Caesars pizza chain, which agreed to pay nearly $7 million to settle litigation alleging it violated BIPA by using a fingerprint-based timekeeping system without getting informed consent from employees.

BIPA lawsuits rarely go to trial. The risks companies face in refusing to settle are illustrated by a case brought against BNSF by a class of 44,000 truck drivers who claimed the railway company improperly collected their fingerprints. In 2022 a federal jury found in favor of the plaintiffs, resulting in a damages award of $228 million. That award was later thrown out on procedural grounds, but the company recently agreed to settle the matter for $75 million.

Cases arising out of BIPA have prompted other states to consider adopting their own biometric privacy legislation, yet none has come close to matching the Illinois law. Efforts in Congress to pass a national law have also made little progress.

For now, BIPA class actions are the main thing standing in the way of the corporate effort to turn us all into human bar codes.

A Challenge to Intrusive Workplace Monitoring

One of the drawbacks of the growing presence of electronic technology in the labor process is the ability of managers to conduct continuous surveillance of workers. Those who toil at computers have their keystrokes measured and evaluated, while others are monitored via handheld scanners or other devices.

U.S. corporations think they have every right to use these techniques in the pursuit of maximum output and higher profits. As Amazon.com has just learned, that may not be so easy when it comes to its European operations. The e-commerce giant was just fined the equivalent of $35 million for employing an “excessively intrusive” system of electronic monitoring of employee performance at its warehouses in France.

The French Data Protection Authority (CNIL) said it was illegal for Amazon to measure workers’ movements so closely that they would have to justify every moment of inactivity. CNIL condemned Amazon not only for using what it called “continuous pressure” but also for retaining the monitoring data for too long.

CNIL’s case was based on the European law known as the General Data Protection Regulation (GDPR), which includes a principle largely unknown in the United States: data minimization. Americans are used to giving up vast amounts of personal information to corporations. In Europe, companies are supposed to restrain their data appetites.

That message has not gotten through to American firms operating in the EU, especially the tech giants. Meta Platforms, the parent of Facebook, has been fined more than $5 billion under the GDPR, far more than any other company. Alphabet Inc., parent of Google, has racked up over $900 million in fines. Even Amazon has previously run afoul of the law. In 2021 it was fined over $800 million for misusing the personal data of customers. An appeal is pending.

What is relatively unusual about the latest fine against Amazon is that it involves GDPR violations in the relationship between employers and workers, as opposed to companies and their customers. Employment-based cases are not unheard of. In fact, Amazon itself was fined over $2 million for improperly conducting criminal background checks on freelance drivers.

What makes the new case even more remarkable is that it concerns not only personal information but also the labor process. The CNIL’s challenge to Amazon’s monitoring is a challenge to its ability to control what workers do every moment they are on the job.

By restricting intrusive employee monitoring, the GDPR is being used to shield workers from the worst forms of exploitation. And because excessive monitoring pressures workers to do their jobs in unsafe ways, the law also protects against occupational injuries. In other words, it is challenging management domination of the workplace.

It remains to be seen whether the CNIL and the other agencies enforcing the GDPR in Europe will go after other employers engaged in intensive monitoring or will treat Amazon as an outlier requiring a unique form of enforcement. For now, at least, the CNIL has shown the possibility of using privacy regulation to enhance the liberty and well-being of workers.

Blowing the Whistle on Twitter

There has never been much doubt that the tech giants do not take government regulation seriously, but it is helpful to get confirmation of that from inside the corporations. This is the import of a whistleblower complaint from the former security head of Twitter that has just become public.

Peiter Zatko submitted a document to the SEC, the Justice Department and the Federal Trade Commission accusing top company executives of violating the terms of a 2011 settlement with the FTC concerning the failure to safeguard the personal information of users. The agency had alleged that “serious lapses in the company’s data security allowed hackers to obtain unauthorized administrative control of Twitter, including both access to non-public user information and tweets that consumers had designated as private, and the ability to send out phony tweets from any account.”

Zatko’s complaint, which will play into the company’s ongoing legal battle with Elon Musk over his aborted takeover bid, alleges that Twitter did not try very hard to comply with the FTC settlement and that it prioritized user growth over reducing the number of bogus accounts.

These accusations are far from surprising. In fact, three months ago Twitter agreed to pay $150 million to resolve a case brought by the FTC and the Justice Department alleging that it had breached the 2011 settlement. According to the agencies, Twitter told users it was collecting their telephone numbers and email addresses for account-security purposes while failing to disclose that it also intended to use that information to help companies send targeted advertisements to consumers.

Because Zatko was fired by Twitter in January, he is in no position to describe the company's behavior since the most recent settlement. Even so, it is difficult to believe that the $150 million fine will be sufficient to get Twitter to become serious about data protection.

Twitter is not the only tech company with a checkered history in this area. In 2012 Facebook and the FTC settled allegations that the company deceived consumers by telling them they could keep their information private and then repeatedly allowing it to be shared and made public. Facebook agreed to change its practices.

As with Twitter, it eventually became clear that Facebook was not completely living up to its obligations. The FTC brought a new action, and in 2019 the company had to pay a penalty of $5 billion for continuing to deceive users about their ability to control the privacy of their data. The settlement also put more responsibility on the company’s board to make sure that privacy protections are enforced, and it enhanced external oversight by an independent third-party monitor.

Zatko’s allegations may prompt the FTC to seek new penalties against Twitter that go beyond the relatively mild sanctions in the settlement from earlier this year.

The bigger question is whether regulators and lawmakers are willing to find new ways to rein in a group of mega-corporations. The effort in Congress to enact new tech industry antitrust measures seems to have fizzled out for now. Such initiatives need to be revived. We cannot let an industry that plays such a substantial role in modern life think it is above the law.

Getting Tough with Corporate Privacy Violators

Privacy violations, which used to be a relatively minor compliance issue for large corporations, have now become a much more serious concern. And a recent Federal Trade Commission case could be a sign of more aggressive enforcement practices to come.

Back in the early 2000s, privacy cases consisted mainly of actions brought by state regulators against fly-by-night operations that ran afoul of Do Not Call rules by placing large numbers of unwanted marketing robocalls. The data in Violation Tracker indicate that aggregate federal and state privacy penalties across the country were only a couple of million dollars per year.

Over the past decade, total agency privacy penalties have grown substantially, exceeding $50 million each year since 2016. The blockbuster cases fall into two major categories. The first involves corporations that were fined for allowing major breaches of their customers’ data to occur. For example, in 2018 Uber Technologies had to pay $148 million to settle a case brought by state attorneys general for a breach of data on 57 million customers and drivers—and for attempting to cover up the problem rather than reporting it to authorities.

The other category consists of cases in which corporations were directly responsible for the privacy violation. In 2019, for instance, Google and its subsidiary YouTube agreed to pay $136 million to the FTC and $34 million to New York State to settle allegations that the companies violated rules regarding the online collection of personal data on children.

This category also includes the largest privacy penalty of all—the $5 billion paid by Facebook to the FTC in 2019 for violating an earlier order by continuing to deceive users about their ability to control the privacy of their personal information.

Also in this category is a recent case handled by the FTC and the Department of Justice against WW International (formerly Weight Watchers International Inc.). The agencies are collecting $1.5 million in civil penalties from the company for violating the Children's Online Privacy Protection Act in connection with its weight management service for children, Kurbo by WW. The government had alleged that WW collected personal data such as names and phone numbers, as well as sensitive information such as weight, from users as young as eight years old without parental consent.

In addition to the monetary penalty, the FTC took the unusual (but not unprecedented) step of requiring WW to delete its ill-gotten data and destroy any algorithms derived from it. As a blog post from the law firm Debevoise & Plimpton points out, this kind of punishment can have a major impact, given that a single tainted dataset may require the destruction of multiple algorithms.

Requiring corporate miscreants to destroy intellectual property is in line with ideas recently proposed by Consumer Financial Protection Bureau Director Rohit Chopra for using measures beyond monetary penalties in regulatory enforcement. Chopra called for forcing misbehaving companies to close or divest portions of their operations and, in the most egregious cases, to lose their charters.

The moves by the FTC and the CFPB are signs that regulators are recognizing that aggressive new enforcement tools are needed to shake up large corporations that have grown too comfortable paying their way out of legal jeopardy.