Facebook on Tuesday unveiled its blueprint for an independent oversight board to review the company’s decisions about the posts, photos and videos it takes down or leaves online, responding to a wave of criticism that inconsistent policies have undermined the platform.

The roughly 40-person panel is supposed to function as the social media giant’s version of a “Supreme Court,” serving as the final word for Facebook users who want to appeal the company’s moderation decisions. It will also offer recommendations for how the tech giant should tackle problematic content in the future.

“We are responsible for enforcing our policies every day and we make millions of content decisions every week,” chief executive Mark Zuckerberg said in a post. “But ultimately I don’t believe private companies like ours should be making so many important decisions about speech on our own.”

In shifting responsibility away from top executives and engineers, however, Facebook may also be able to distance itself from criticism of its content decisions, criticism that has fueled calls for government regulation.

While the Oversight Board detailed Tuesday is still in its early stages, its ultimate test will be its ability to navigate and interpret Facebook’s thicket of rules to reach decisions in real time — while revealing more about how Facebook comes to its conclusions about content in the first place. That means figuring out the ever-elusive line between free expression and harmful speech, and serving as a court of sorts for a global network that has different needs and varying visions for what the Web should look like.

The release of the charter comes a day before Facebook will join its peers from Google and Twitter at a hearing on Capitol Hill to probe how social media contributes to real-world violence. For years, lawmakers have pressured Silicon Valley to take a more proactive role to stop the spread of white supremacy, detect violent threats in real time and combat falsehoods, including manipulated online video — and Facebook announced Tuesday it had tightened its rules and tools to spot and remove hate speech.

Facebook has long maintained detailed policies to combat and remove harmful speech, including attacks on the basis of race or religion, terrorist propaganda and disinformation. But the company often has struggled to implement and enforce such rules uniformly, resulting at times in the viral spread of harmful content — or accusations that its executives and engineers are biased.

To help better navigate these and other political pressures globally, Zuckerberg in November first sketched out his vision for an “independent body” that would serve as a check on the human reviewers and artificial-intelligence tools that vet the posts uploaded by its community of 2.2 billion users.

The charter released Tuesday outlines new oversight at Facebook meant to address allegations of unfairness on a global scale. The company aims to seat a board of roughly 40 members, representing different regions of the world, each serving a three-year term. Facebook said Tuesday it intends to select a few members to start. Those members will then choose the remaining members, all of whom will be overseen by an independent trust Facebook plans to establish to handle logistical matters such as the budget.

The roster of members has not yet been announced. But it is likely to be one of the most closely watched elements of the endeavor, given the difficulty of establishing a truly globally representative body, said Kate Klonick, a fellow at the Information Society Project at Yale Law School who is studying Facebook’s work.

“How do you pick one person from the U.S. who represents all of the U.S., and should there be one person from the U.S.?” Klonick asked. Still, she praised Facebook for a “massive commitment of resources” toward trying to figure it out.

Users who disagree with Facebook’s content decisions can appeal to the company, and if they still don’t like the resolution, they will then be able to appeal to the board. Facebook has said that board decisions will be binding and implemented quickly. The company said it will also send cases to the board for automatic, expedited review if there is a potential for “urgent real world consequences,” according to the charter released Tuesday. It expects the board to take its first cases from users in 2020.

Facebook’s new review board will have the ability to recommend broader changes beyond the content it is asked to study, such as additional content takedowns or new business practices. But it will be up to the social-networking company to decide whether and how it will implement them — though Facebook pledged to publish detailed reasoning about the decisions it makes.

“This charter brings us another step closer to establishing the board, but there is a lot of work still ahead,” Zuckerberg said. “We expect the board will only hear a small number of cases at first, but over time we hope it will expand its scope and potentially include more companies across the industry as well.”

The stakes grew after the deadly March massacre in Christchurch, New Zealand, when some users uploaded videos of the attacks in ways that evaded tech companies’ censors. Others in the United States criticized Facebook earlier this year for refusing to take down a manipulated video of House Speaker Nancy Pelosi (D-Calif.) that made it appear she was drunk.

Globally, regulators have issued an ultimatum to Facebook, Google and Twitter, threatening to hold the companies directly liable for the decisions they make and the content they allow online unless they improve their platforms. At the same time, though, the companies have also faced heavy criticism for flagging and removing content that officials and users say should have been allowed under their policies. In the United States, Silicon Valley has faced additional attacks from Republicans who claim their policies result in the suppression of conservatives’ speech online.