Last month, Google took action against seven ads purchased by President Trump’s 2020 campaign, claiming that they violated the company’s rules, but not before they had been viewed at least 24 million times.

But Google said little else: It didn’t share a copy of the ads in question or disclose what standards they had violated. To experts, those unknowns are just two of many mysteries that demonstrate the company’s continued struggles to spot and shield users from potentially problematic political content with the 2020 presidential election a year away.

Much like its Silicon Valley peers, Google finds itself under heightened scrutiny for the powerful targeting and messaging tools it provides to candidates and their allies, who pay to expand their audience online. The issue has commanded unusual public attention in recent weeks because of Facebook, which maintains a policy that essentially permits politicians, including Trump, to lie in their political ads.

But critics contend that Google — a company that came into existence roughly 20 years ago with the motto “don’t be evil” and a mission to “organize the world’s information” — suffers from its own blind spots around paid political speech, which has generated nearly $124 million for the tech giant since it began releasing its data in May 2018. Over that period, conservative advertisers have massively outspent their opponents, by a margin of 3 to 1 among the top 10 spenders. The Trump campaign and an associated fundraising committee have spent nearly $12 million.

Google isn’t required to maintain these records of political ads under federal law. But some experts say that voters shouldn’t have to depend on voluntary efforts by the tech industry in the first place — and they stress that regulation is the only real, lasting solution to help voters understand who is targeting them and why.

“Right now, we’re operating in a system where there is simply a hodgepodge of policies at each tech company,” said Michael Beckel, a top researcher at Issue One, a nonprofit organization that studies money in politics. “And this voluntary approach, this voluntary patchwork approach, leaves Americans in the dark about vital information about who’s trying to influence them online.”

Google declined to discuss the Trump ads, the reason they violated company rules or any of its policies around political ads. “We strongly support greater transparency in political advertising and have voluntarily invested significant resources to create a transparency report that enables researchers, reporters and users to study political ads that run on our platform,” spokeswoman Charlotte Smith said in a statement. “We are constantly working to improve the report and appreciate feedback on how we can make it better.”

The Trump campaign declined to comment.

Google unveiled its political ad transparency efforts last year, responding to regulatory threats from Congress in the wake of the 2016 election. During the race, Russian agents took to YouTube and the Web’s other social-networking sites in a bid to stoke political unrest, relying on a mix of ads and organic posts, photos and videos to undermine Democratic contender Hillary Clinton and boost then-candidate Trump, U.S. investigators have found.

But Google’s efforts ultimately stopped far short of what lawmakers had hoped. The company vets organizations that seek to run ads about federal candidates, and it caches many of them in a publicly available archive. But the search-and-advertising giant still discloses far less than its competitors, offering little transparency about the sprawling network of dark-money groups and super PACs that run ads about polarizing issues such as abortion or immigration. Those are the kinds of ads that Russian malefactors deftly exploited in 2016 and that continue to vex tech companies today.

“Google’s criteria for inclusion in their transparency archives is only content that is by or primarily about federal candidates or officeholders,” said Laura Edelson, a researcher at New York University’s Tandon School of Engineering. “Their policy was written in such a way to exclude the kind of content that has disinformation in it.”

State and local races aren’t included either, though advertisers certainly can purchase space in search results and alongside YouTube videos to support or oppose down-ballot candidates. And Google provides only limited information about ads it takes action against for violating its rules, such as the seven ads run by the Trump-supporting campaign committees over the past few weeks.

Experts said that approach poses major problems.

“People have potentially been exposed, but we can no longer reconstruct the effect,” said David Lazer, a professor of political science and computer and information science at Northeastern University. “It’s bad for two reasons — not just the accountability of Google but also the accountability of the Trump campaign, which can do more with impunity.”

Entering the 2020 election, the first tests of Silicon Valley’s new digital defenses have been domestic in nature — from campaigns and their allies, particularly those with ties to the president.

Last month, Trump’s campaign stoked widespread ire by running ads that contained falsehoods about former vice president Joe Biden. Facebook refused to take down the ads despite Democrats’ repeated requests. Quietly, though, Google-owned YouTube adopted a similar approach, noting its policy prohibiting ads that contain misrepresentations didn’t apply in this case. In the end, three versions of the Trump campaign ad received at least 12 million views over just a two-day period last month on the video-streaming site, the company’s ad archive shows, though Google has said little about the matter.

Internally, Google executives have debated changes to the company’s political ad policies, according to two people familiar with the deliberations who spoke on the condition of anonymity because they were not authorized to discuss them on the record. The sources said that there are many technical and policy options on the table, including those that could restrict campaigns from running ads that are too narrowly targeted to specific users. They stressed that banning political ads outright is not one of them. Similar changes to ad targeting are under consideration at Facebook, a third source said this week. Twitter said it would ban political ads, including issue-oriented ones, beginning later this month.

On Monday, a group of technologists led by the browser-maker Mozilla asked Facebook and Google in an open letter to “take a stand and issue an immediate moratorium on all political and issue-based advertising,” arguing that their inconsistent or poorly applied policies could have a dangerous effect on a different political contest — the fast-approaching Dec. 12 British election.

Mozilla and its peers noted a litany of problems with Google’s policies and political ad transparency tools, citing the need for regulation. “These issues will take time to resolve,” the group wrote, “but in the UK we do not have time.”

The tech industry’s slow progress also has again infuriated top lawmakers on Capitol Hill, who have long advocated for new regulations targeting social media giants without much success.

Since the 2016 election, Democrats including Sen. Amy Klobuchar (Minn.), who is seeking the party’s 2020 presidential nomination, have sought to subject companies including Facebook, Google and Twitter to heightened ad disclosure laws similar to those that govern broadcast TV and radio. But the ideas have gained limited traction in Congress, where a partisan dispute and GOP reluctance to tackle campaign finance have kept the Honest Ads Act from even receiving a hearing. Google hasn’t endorsed the bill, either, Klobuchar said Thursday.

“Google sells the most political ads and makes the most money,” she said in a statement, “but they are often conveniently absent from the debate about how to prevent foreign interference in our elections and curtail the spread of misinformation online.”