AN INTERNET privacy bill introduced by Sen. Catherine Cortez Masto (D-Nev.) would prohibit discriminatory data practices. That is a good goal. But which practices qualify as discriminatory is a complicated question.

Privacy activists have long stressed that data over-collection and misuse cause disproportionate harm to minority groups. Often, that harm is prohibited by existing civil rights rules. But those rules were put into place before anyone could have imagined an age of digital discrimination, and companies are circumventing them.

Take targeted advertising. Leaders in the industry make money by allowing advertisers to select the very specific segments of the population they think are most likely to want their products, or by selecting the segments themselves. Sometimes, those categories are the same classes of people that civil rights law exists to protect, such as minorities and women.

That can lead to forms of marketing that are not insidious at all: say, promoting women’s shoes exclusively to potential customers who have displayed an online interest in women’s fashion. It can also lead to obvious abuses, such as companies displaying housing ads only to white individuals, whether by explicitly excluding minorities or engaging in digital redlining via Zip code restrictions. An expanse of gray lies in between. Regulators will have to decide whether to limit anti-discrimination rules to areas where there are traditionally heightened protections or whether — and how — to push beyond those frameworks. And they will have to address how platforms’ traditional immunity from liability for users’ actions runs up against any new rules.

Lawmakers will also have to look at data-based discrimination that is not designed to have an adverse impact on protected groups but does anyway. An algorithm that adjusts an ad’s audience to maximize engagement could end up showing a job posting only to men if men click on it most frequently — which could occur for a profession historically unfriendly to women. The same unintentional discrimination can occur in hiring, loan approval and elsewhere: Tools trained with information from years of disparate treatment often perpetuate those unfair outcomes.

There’s an added wrinkle. Sometimes, targeting sensitive advertisements based on protected characteristics can actually promote equity. Directing education opportunities to an underserved community is a kind of advertising affirmative action that regulators should take care not to prohibit. Similarly, an algorithm that tends to privilege white people for hiring because of historical bias in the profession might need to take race explicitly into account in order to correct for it.

Whatever Congress decides — Ms. Cortez Masto’s bill would leave the particulars to the Federal Trade Commission — any law should require that companies of a certain size study how their algorithms do, or don’t, hurt the vulnerable. In the data privacy debate, generalized philosophical gripes can sometimes overshadow concrete harms. Putting the discriminatory use of data front and center focuses discussion of a federal framework on what it actually ought to do: protect Americans, especially those who need it most.