MR. SCOTT: Hi, and welcome to Washington Post Live. I'm Eugene Scott, political reporter for The Fix at The Washington Post. And I'd like to welcome our first guest this afternoon, Laszlo Bock, co-founder and CEO of the data-driven human resources platform Humu. Laszlo, thank you so much for joining me.

MR. BOCK: It's such a pleasure to be here. Thank you.

MR. SCOTT: So, you were considered the godfather of people analytics at Google. You headed up human resources, which was called People Operations. Tell us a little bit about your professional philosophy when it comes to managing work environments and workplace culture, and how do you put that data at the center of what you do?

MR. BOCK: Well, I think when you think about the workplace, first you have to start with privacy. You don't want to do anything that creeps out the people around you. There's lots of things you can do that are meaningful for the business that people don't expect you to do, and therefore you shouldn't, like reading people's emails or what have you.

So, with that aside now, there's a huge opportunity in how management works in that science hasn't really been applied to what it's like to work in office settings. Academics often do kind of controlled experiments. Consultants do their studies. But one of the great things we were able to do at Google was actually do academic-quality research on 10- or 20- or 50,000 people over a period of years, which helped unlock findings that translate into what I do now: What actually does drive manager performance? What actually does drive productivity? How do you make people happier, and how do you make work more fulfilling while also making people perform better?

MR. SCOTT: So, alongside two other former Google executives, Jessica Wisdom and Wayne Crosby, you went on to found Humu. What did each of you bring to the project? And what were your goals, and what were you solving for when you created this?

MR. BOCK: Well, we had an epiphany, which was that there's 4 billion people who work on this planet, and for most of those people, work is just a means to an end. You know, you go, you grind it out, you get paid for your job. And even in professions like nursing, for example, or the clergy, where you'd expect it to be deeply meaningful, Professor Amy Wrzesniewski from Yale University has found only about a third of those people actually find those jobs meaningful. So, our thought was, what if you can actually make work everywhere meaningful? And it turns out when you do that, you also solve a bunch of business problems, like how to drive change, how to drive productivity.

And so, we talk about Humu being a combination of machine learning, people science, and love. Wayne Crosby sold his first company to Google and was an engineer and a leader there for many years and was one of the best people managers at Google. He's one of my co-founders. And Dr. Jessica Wisdom, who got one of the first Ph.D.s in behavioral science from Carnegie Mellon, and has done a lot of work in nutrition and nudging to help people be healthier and live better lives, was also a partner of mine at Google. And so, together Wayne brings the machine learning and the technology, Dr. Wisdom brings the people science, and what's left for me to bring is the love.

MR. SCOTT: So, I was hoping you could give us a primer on Humu. How exactly does it work? You know, I understand that it is an action management platform that makes suggestions in the form of nudges. What are nudges?

MR. BOCK: So, nudges are something that was invented, or the phrase was coined, by Dick Thaler at the University of Chicago and Cass Sunstein at Harvard, where they were professors. And Dick actually won the Nobel Prize in Economics for his work on this a couple of years ago. A nudge is a small intervention that makes it easier for you to make a good choice that you, if you were your best self that day, would prefer to make. Take nutrition, for example: do you choose the apple or the candy bar? Most of us would love to have that candy bar, but on our best day we'll have the piece of fruit. So, the nudge that would drive that behavior would be, for example, putting a fruit bowl in the center of your kitchen table so that it's in front of you and you can see it all the time.

In a business context, if you care about, for example, innovation or inclusion, you need people to feel free to speak up. So, some of the nudges Humu sends to drive those outcomes of innovation and inclusion would be, for example, not just reminding somebody before a meeting, hey, try speaking up earlier in the meeting because it's easier, but also reminding other people in that meeting to ask people to speak up if they've been quiet. And it's that combination of nudging different people at the same time and in combination that actually drives massive behavioral change, and that's the action that we support.

MR. SCOTT: So, the product works on different levels--you know, individuals, teams and the organization. Can you explain how AI and machine learning operate at each of these levels, and what kind of data does the platform use to make these suggestions?

MR. BOCK: Yeah, so we start from understanding what's going on with the individual and the team. And there's a few different ways to do that. One is, if the company has an employee survey, we can use that. We also have our own more sophisticated diagnostics. Or a company might just say, we really want to focus on an agile transformation, or we have an inclusion agenda, or we want more customer focus. We take that data, and it gets mapped algorithmically to the psychological states that drive those outcomes. So, if you want inclusion, one of the things you need, as I mentioned, is psychological safety. If you want agility, one of the things you need is conscientiousness, the sense that people are going to follow through.

So, we've taken data from a variety of sources. And it can be very slim, just who reports to whom, what's your job title, what do you do, or it can be very deep, what kind of activities are you up to, how productive are you, what's your job history. We take all that with permission from each employee, and then our algorithm runs and starts making suggestions that show up in the form of nudges, little reminders to people, to individuals, to managers, to executives, to try behaving a little bit differently.

So, for example, when the pandemic hit, one of the nudges we were sending that was most popular was around connection. What people were missing in their day-to-day lives was a sense of connection, a sense of meaning and enjoyment in their work. People were freaking out. So, we started sending nudges around having virtual water coolers--for managers to organize them, for employees to show up to them--and people started doing that at the companies we work with and felt more relaxed, more connected, and felt better, with the AI behind the scenes constantly watching to say, okay, what's actually working, what's not, and how can we actually drive people to feel better about the work they do.

MR. SCOTT: So, how do you decide what success or improvement actually looks like? Is it efficiency, or productivity? Profit?

MR. BOCK: So, we discovered a beautiful thing, which is when people feel more happiness--and there's two kinds of happiness--hedonic and eudemonic. Hedonic is what we think about when we're thinking about happiness. Am I smiling, am I having a great time, am I high-fiving? That's hedonic happiness. Eudemonic happiness is, do you feel purpose and contentment and flow?

And here's what we discovered: When people feel more eudemonic happiness, more meaning and contentment and flow, they actually are more productive. They stay in their jobs longer. They don't turn over as quickly. They're more creative and innovative. So, the outcomes that Humu drives vary by company. Some companies are focused on just wanting employees to learn more and grow. Some companies actually want their employees to be more efficient. But what's beautiful is, the way you get there is you actually have to make people feel better about their teams and the way they're treated and trust one another more.

And so, the outcomes, sure, we typically see an 8 to 12 percent lift in productivity and efficiency. We typically see a 5- to 40-point improvement in retention in retail environments where you tend to see a lot of turnover. But the best part is, we do it by helping people be their best selves and feel better.

MR. SCOTT: So, this next question is kind of related to that. One of the stated main objectives of Humu is to use behavioral change technology to help employees become happier in their jobs. But how do you define something as subjective as happiness?

MR. BOCK: So, you know, philosophers have written about this for thousands of years. And we started with that, and we started with some of the experience that we had at Google. And you know, the company's less than 5 percent former Google people now, so we have people from every kind of background. And we looked at, then, what the academic literature says about what actually causes people to feel better. And no surprise, it's going to sound very familiar: Some of the biggest factors are meaning. Is this job important to me? Am I having an impact on the world? Trust: Do I feel like my boss trusts me, and can I trust my management and my company to behave in my best interests? Are they ethical? Are things just? And empowerment: Do I feel free to kind of do things and explore and try new things and take risks? So those are some of the constituent components that it turns out make people feel better about their work.

So, I'll give you a quick example. One of our customers is a financial services company. And when we first started working with them, we were just working with their call centers. So, these are people whose jobs are very structured. They just answer the phone all day, deal with emails all day, and it's all about dealing with the problems their customers have and resolving them.

It turns out the thing that was going to most impact happiness, and then productivity--but happiness is where we started for this group--was more autonomy, people feeling like they had more freedom. And traditional management would say you can't do that, because you have to manage these people down to the second: there's a first-call resolution rate you have to measure, there are hold times you have to measure, their day is highly structured and regimented. But by nudging the employees to think about their jobs differently--what do they enjoy, what do they not, can they swap things around with other people, can they just have a different mindset, can they focus on the outcomes, how they're helping people secure their financial futures, instead of just grinding through this deluge of inbound email--we sent nudges on that.

We sent nudges to the managers encouraging them to talk to the employees about how their work is actually making a difference in people's lives. The combination of those things caused people to feel they had more autonomy and freedom in their jobs, and we saw an 8 percent productivity improvement in this group, which translated into a $200 million net income improvement for the company. So, those are the kinds of things you can do when you start from the premise of what's actually going to drive human performance, and actually it's treating people better, not worse.

MR. SCOTT: Is there a way to keep people from feeling manipulated?

MR. BOCK: It's a terrific question. And actually, you know, in Europe, where we have a lot of large customers, there's a very strong sense of how important privacy is and of not manipulating people. And I myself was born in a communist country. I was born in Romania and came to the U.S. as a refugee. And I'm from a place where the government tracked every single thing around you, and it's creepy, and it's wrong.

So, everything we do starts from a premise of, this is your information, and you don't have to play. No employee has to participate. And if at any time an employee says, actually, I'm not comfortable with this, not only do we delete their data, we actually rerun our algorithms to make sure nobody can ever go back through and find any individual hint of that employee's data.

But the thing we do affirmatively that actually really helps is, we tell people everything we're gathering and what we're doing with the information, number one. Number two, it's in their best interest, and we present it that way. We say like, look, this is to help you get better and to help the people around you get better. If you don't want to do it, awesome, that's fine. But what we find is about 70-80 percent of people open the nudges and take action on them. And we have not seen people saying this feels weird or creepy. In fact, they tend to say, wow, this really made a difference in my life. I spoke up in this meeting, and it was my first time, and my manager actually said thank you, that was a good idea. And two weeks later, they called on me first in a meeting to have an opinion. So, it's generally been an incredibly positive thing for individuals rather than scaring them.

MR. SCOTT: Is there anything that you all can do when people do ignore the nudges?

MR. BOCK: So, there's two things. One is within our AI. As people are receiving nudges, we learn that people are motivated by different things. So, for example, one of the things that often happens is people don't trust management. And when they don't trust management, they don't put themselves into their work, and they're not as happy.

One of the ways to increase that trust--because most managers are not terrible, evil people; they're just trying to do their jobs--is to nudge an individual to say, let me just ask my manager how raises get determined, how promotions get determined, how schedules get determined. And in the nudge, sometimes we'll say, go ahead and ask this, because at your organization people who ask feel better about it.

But that reason isn't compelling to everybody. Some people prefer different reasons or respond to different ones. So, if somebody doesn't respond to that nudge, we'd send a different one later that says, why don't you try asking about this issue, because the research says most managers are well-intentioned, not evil and out to get you. Give it a shot. So, by varying the rationales and the delivery mechanisms for the nudges, we can boost the rate at which people respond. And that's one case.
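[Illustrative sketch: the mechanism Bock describes here--trying different rationales and favoring whichever ones people actually respond to--resembles a simple multi-armed bandit. The Python below is a generic epsilon-greedy illustration, not Humu's actual system; the nudge texts, counts, and response model are entirely hypothetical.]

    import random

    # Hypothetical nudge variants: the same ask with different rationales (illustrative only).
    NUDGE_VARIANTS = [
        "Ask your manager how raises are determined -- people here who ask feel better about it.",
        "Ask your manager how raises are determined -- research says most managers are well-intentioned.",
    ]

    # Observed outcomes per variant: [times sent, times acted on].
    stats = {variant: [0, 0] for variant in NUDGE_VARIANTS}

    def pick_variant(epsilon=0.2):
        """Epsilon-greedy choice: usually send the variant with the best response rate,
        occasionally explore another one to keep learning."""
        if random.random() < epsilon or all(sent == 0 for sent, _ in stats.values()):
            return random.choice(NUDGE_VARIANTS)
        return max(NUDGE_VARIANTS, key=lambda v: stats[v][1] / max(stats[v][0], 1))

    def record_response(variant, acted):
        """Update counts once we learn whether the recipient acted on the nudge."""
        stats[variant][0] += 1
        if acted:
            stats[variant][1] += 1

    # Simulated sends: the variant choice adapts as response data accumulates.
    for _ in range(100):
        v = pick_variant()
        record_response(v, acted=random.random() < 0.5)  # placeholder response model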

The second case, though, is some people don't respond. We find not everyone does, and that's okay, too. This isn't--you know, if you're trying to make work better, you can't do that by forcing something down people's throats. It has to be something that they want.

MR. SCOTT: How does Humu support diversity initiatives and psychological safety in the workplace?

MR. BOCK: So, this is a topic that's really, really important to me. You know, despite having three people who present as white cisgender people running our company, we've built a company that's, I think, perhaps one of the most diverse companies in tech when you look at our representation, and that shows up in our products. We deliberately diversified our investor base. So, when we did our Series B, we invited in 10 investors who were Black, or Latinx, or female, or some combination of the above to take part, because people from those communities often don't get the chance to participate in, you know, these hot funding rounds. So, it's core to what we do.

We have a framework for inclusion that includes seven components, and I won't work through every single one of them. A bunch of them are on our website at humu.com. But one of the ones that's really interesting to me is well-being. There's lots of ways to measure inclusion. There's factors like is there fairness, is there strong allyship. But well-being is one that's often overlooked, because if you're underrepresented in a group, you often pay what people from some of these communities call the tax. And the tax is, you're the person asked to facilitate panels, to do extra interviewing, to go on campus to represent everyone in the world who has your skin color or gender or identity. And that incremental emotional and cognitive tax you are paying means you have to work harder, quite frankly, than somebody who looks and sounds like me to do the same work, and your environment is different.

So, one of the things we focus on measuring is well-being, because you can be doing everything right. But if you're not being fair in how you're tapping into different communities and you're asking too much from underrepresented ones, those people are going to burn out, and they're going to leave because you didn't give them the experience you promised.

And we find nudging is a powerful way to attack these issues because, when you talk to people about inclusion, very few will say they are racist; very few will admit they're biased. But by nudging combinations of people around the underlying issues, like creating well-being or psychological safety, you actually can unpack a lot of this and make a big difference.

MR. SCOTT: Can you talk a little bit about how Humu uses science and the science of learning to change behavior, and what is different about Humu compared with a traditional company training or teambuilding or team bonding exercise?

MR. BOCK: So, teambuilding and traditional training is a lot of fun, and it's nice to get out of the office and hang out with people. But the problem is, when you take somebody out of their environment and teach them something new, they haven't actually applied it in the environment they're coming from. And so, they often don't follow through.

And more than that, in this artificial external training environment or team event or what have you, everyone's encouraging them. Everyone's supporting them. When they go back to the real world, nobody's supporting them.

So, one of our customers developed this manager development program. They spend a million bucks a year on this program. They thought it was amazing. And they asked us to take a look at how well it works. Well, it turns out that a month after taking it, the team members of the managers who have learned to be more inclusive and what have you feel like things are going better. Three months afterwards, the team members feel like things are back to the way they were. And six months later, team members report that things are actually worse than they were before.

Now, the training didn't actually make things worse. What it did was, it didn't change the manager's behavior in a persistent way, and it increased expectations among the team, so now it feels worse even though the manager is right back where they were.

In contrast, we found that when we nudged these particular sets of managers on the behaviors their teams were missing around procedural justice and fairness, we saw a persistent improvement in how the teams felt, and also how the managers felt, because they were changing and things were getting better.

So, we view our nudges this way: in some cases they can replace learning and development programs, but they're a powerful complement to existing initiatives, because they're a very different way of making that change and growth and learning happen in the moment, in the real world.

MR. SCOTT: Do you have any concern that you're engineering an unhealthy attachment to work? We hear a lot about the importance of work-life balance and I'm interested in how Humu thinks about those issues.

MR. BOCK: It's a really great question. Our goal is not to cause people to work tons of hours. In reality, what we actually see is that, as you'd imagine, people get more productive up to a point and then they crash and it drops off.

So, two quick examples that will illustrate how you need to be mindful of this. Pre-pandemic, one of our customers was considering work from home, and they asked us to look at whether or not it worked. And we actually discovered in their data that the ideal amount of work from home was one-and-a-half days a week, because that gives you the distance to get heads-down time and focus and disconnect and be productive, while still giving you the social cohesion, that sense of connection, that comes from being at an office and around people. And so, the goal is not to make people work all the time. It's to find the right balance where every part of your life is in balance and, when you're working, you're as productive as you can be.

On the Humu front, for example, when the pandemic hit, we decided to give everyone in the company every other Friday off. And that's because, while people with kids or elderly parents or even pets have to support those folks, people who don't still need time to catch up and kind of deal with all the craziness we've been dealing with this year. So, finding that right balance is critically important. The goal is absolutely not to, you know, turn everyone into a machine that works 80 hours a week. That'd be terrible.

MR. SCOTT: So, this is a nice segue into a discussion about how the pandemic has and will continue to shape the nature of work. I'd like to have you talk about, you know, two or three of the biggest trends you've seen over the past six-plus months with respect to formerly office-based professions.

MR. BOCK: Well, one is that when employees are asked how they're doing, they are being less than honest today, and less honest than they were a year ago. The pandemic started, and soon after, tons of companies did layoffs and restructuring. And part of what's been keeping that from happening in some industries is that the stock market's been doing well. If the stock market slips, the next thing that's going to happen is cost cutting at every large company. And so, employees feel the stress from the business, from hearing what's happened to their friends, and from actually living with the pandemic. A lot of people have died--more than a quarter million people.

And so, the rational thing for employees to do when their employer asks how they feel is to say, I'm good. I'm fine. You don't need to do anything. You don't need to worry about me. I'm just going to keep my job. What that means is it's creating this reservoir of people who actually are not doing well and are just muscling it out. And when eventually the vaccines come out and things get better, these people are going to bolt. They're not going to want to be in the same place of employment. So, that's one.

The second thing is, executives are saying, I feel like I've lost one of my senses. They can't walk around anymore. They don't know what's going on. And so there's a need, despite employees being more resistant to being forthright, to understand better what's going on. And employees are saying, I feel like I'm doing the work but without the fun, right? Like today, taking a new job at a place like Google or General Motors or, you know, any business doesn't feel that different. You get a computer in the mail; you start doing video conferences. Most companies haven't really differentiated what they're doing, and they haven't created that social cohesion that brought us together. So, those are two things that are going on that I think are really hard for people right now, and it's why I think on the Humu side we've seen more traction. People really need that extra push, that nudge, that support to help get through.

MR. SCOTT: Do you think the situation expedited a shift to remote work, or, you know, is the office dead? Is remote work here to stay?

MR. BOCK: I think what we'll end up with is a hybrid. I think the--you know, there's a lot of companies who said they're going to be fully remote forever. Twitter sort of announced that recently. I think that's very hard to do because for your existing employees, you have existing relationships. And when you are suddenly remote, you can live off of those for a while. But even then, remember, people are doing the work but without the fun. So, those connections are weakening.

Psychologists talk about something called "affective distance," which is how much emotional connection you feel for the people who work around you. We can see each other. We can talk to each other. But that emotional connection comes from the small moments that we no longer have. So, I think the biggest impact that's going to have is on all the new hires for all these companies, right?

So, you can run for a year having gone fully remote and muscle through it. But when you're hiring people, they don't really know what company they're joining because it feels like everyone else's company. So, I think where we're going to land is two states. One is, some companies are going to go back to the traditional model. I know some large tech companies behind the scenes are actually acquiring a lot of real estate these days because, you know, it's cheaper to acquire now than it would have been a year ago. But I think most other companies will end up in a hybrid where it's one or two days a week from home, whether that's a formal policy or just letting people do what they want. But you have to bring people physically together periodically.

One of the things we've talked about at Humu is, well, let's be fully remote, let's have a couple small offices, people will drop in two to three days a week. And with the money we can save on the smaller offices, we actually will have enough budget to take the whole company to Disneyland, or Hawaii, or skiing a couple times a year to really strengthen those connections. And I think you'll see more of those kind of creative things.

MR. SCOTT: Lastly, I just wanted to ask you how do you think the election of Joe Biden will impact the tech industry?

MR. BOCK: That's an interesting question. I think--I think the current investigations from the DOJ will continue, and I think there's some merit to the arguments being made about the tech industry.

I think if you look at whether people truly have a choice--yeah, you can click over to whatever search engine you want, but do people actually ever do it? No. I think when you look at the gig economy, you know, California voters just passed a measure supported by Lyft and Uber that overruled a law passed by the legislature that made gig workers employees. So, now they're no longer employees.

The gig economy is a very brittle one from an employee perspective, so I think there's going to be--you know, I would expect a new administration to look closely at that, because those are not tenable, long-term situations.

So, yeah, I think--I think the tech industry's going to be just fine. It's an incredibly profitable industry, but it's an industry that could benefit from a little more competition, a little more, I think, oversight.

MR. SCOTT: Well, I'm afraid that's all the time we have for this segment. And so, Laszlo, thanks so much for joining us.

MR. BOCK: A pleasure, thanks for having me.

MR. SCOTT: We'll be back in just a moment with mathematician and data scientist Cathy O'Neil to discuss big data and how algorithmic bias impacts the future of work. Stick with us.

[Video plays]

MR. SCOTT: Welcome back. If you're just joining us, I'm Eugene Scott, a political reporter for The Fix at The Washington Post. And joining me now is mathematician, big data scientist, and New York Times bestselling author Cathy O'Neil. It's great to have you. Welcome to Washington Post Live.

MS. O'NEIL: Thank you so much, Eugene. It's great to be here.

MR. SCOTT: So, Cathy, you were an early big data skeptic and have warned that its misuse can lead to the reinforcement of biases and civil rights abuses. Your work focuses on how the data we collect and the algorithms we use to analyze that are shaped by human biases and can sometimes hurt the less powerful. For those that might not understand what we mean by algorithmic bias, can you give us some big data 101?

MS. O'NEIL: Sure. Let me start with what I mean by an algorithm. An algorithm is sort of using historical data, training on patterns, to figure out what might happen in the future. And the most important thing to remember about any predictive algorithm--which is most algorithms--is that it predicts that the future will unfold exactly as the past unfolded. And what you're basically doing is training it on sort of initial conditions. In this situation, this is what happened; in this situation, this is what happened. And now you're presenting the algorithm with a current situation. You're saying, hey, algorithm, what's going to happen? And what the algorithm does is not unlike what we do on a daily basis when we decide what to wear for the day. If I want to be comfortable, here's the outfit that I remember being comfortable. So, the algorithm says, oh, given these initial conditions, here's what will happen, in my estimation, just looking at the old data. So that's pretty understandable. It's just a pattern matching situation.
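[Illustrative sketch: a minimal Python rendering of the pattern-matching idea O'Neil describes--a predictor that can only echo outcomes recorded in its historical training data. The situations and outcomes are made up for illustration.]

    from collections import Counter

    # Historical "initial conditions" mapped to observed outcomes (purely illustrative records).
    history = [
        ({"weather": "cold", "plan": "walk"},  "wore_coat"),
        ({"weather": "cold", "plan": "drive"}, "wore_coat"),
        ({"weather": "warm", "plan": "walk"},  "wore_tshirt"),
    ]

    def predict(current_situation):
        """Predict by matching the current situation against past situations and
        returning the most common outcome among the closest matches."""
        def overlap(past):
            return sum(past.get(k) == v for k, v in current_situation.items())
        best = max(overlap(past) for past, _ in history)
        nearest_outcomes = [outcome for past, outcome in history if overlap(past) == best]
        return Counter(nearest_outcomes).most_common(1)[0][0]

    # The prediction simply assumes the future will look like the recorded past.
    print(predict({"weather": "cold", "plan": "walk"}))  # -> "wore_coat"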

What happens with bias is that--well, the truth is, our past isn't perfect. So, for example, with predictive policing algorithms, we don't actually have crime data, so what we're doing is predicting arrests. And so, we're saying, well, here's where arrests were in the past, and then we're using that history of arrests to send police to look for crimes. And what it ends up doing is sending police back to the same neighborhoods, the same locations where they've made a lot of arrests in the past. And thereby it's, again, propagating the past, and in particular, if it's an unfair over-policing situation, which it often is because of our history of over-policing poor Black neighborhoods, then it will propagate that history as well by saying, hey, that's where the arrests are, that must be where the crimes are. That's just one of a lot of examples of how historical injustices are propagated, essentially because algorithms just assume that the future will be like the past.
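[Illustrative sketch: a toy Python simulation of the feedback loop O'Neil describes, in which recorded arrests depend on patrol levels and patrols are then reallocated toward past arrests, so an initial imbalance is propagated even when true crime rates are identical. All numbers are invented.]

    # Two hypothetical neighborhoods with identical true underlying crime rates.
    true_crime_rate = {"A": 0.10, "B": 0.10}
    patrols = {"A": 60, "B": 40}  # neighborhood A starts out more heavily policed

    for year in range(5):
        # Recorded arrests scale with patrol presence, not just with actual crime.
        arrests = {n: true_crime_rate[n] * patrols[n] for n in patrols}
        total = sum(arrests.values())
        # "Predictive" reallocation: send patrols where past arrests were recorded.
        patrols = {n: round(100 * arrests[n] / total) for n in arrests}
        print(year, arrests, patrols)
        # The initial imbalance never corrects itself: A keeps producing more recorded
        # arrests and therefore keeps getting more patrols, even though the true
        # crime rates in A and B are identical.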

MR. SCOTT: You have said algorithms are opinions embedded in code. What does that mean? Is data objective?

MS. O'NEIL: That's a great question. So, the other thing about an algorithm, besides pattern matching, is what you're trying to predict. The way I say it is, you're predicting success. And so, an example--I gave the example of wearing clothes, but I often give the example of feeding my children dinner. And the reason that's a great example is because there are lots of different stakeholders in that scenario, in particular my kids. So, when I build an algorithm in my head to predict a successful dinner, I'm thinking again about historical data. What was successful in the past? I'll make this a dinner that's like that successful dinner in the past, because then I deem it will be successful. But what do I define success as? And that's where I'm embedding my opinion, my agenda. My definition of success for dinner is that my kids eat lots of vegetables. As you can imagine, this isn't the definition of success for my children.

So, that's what I mean when I say algorithms aren't objective. They are embedding opinions in code. Because even though this is marketed and presented as somehow scientifically objective, the truth is I am making what I want to have happen, happen. I am sort of embedding my agenda into the algorithm, and I'm optimizing to my agenda.

Similarly, when social media algorithms decide what to show us, the definition of success for them is to keep us on the social media platform. So, they measure success through how long we stay on their platform. That's success for them because it's a proxy for profit. The longer we stay on, the more ads we click on. That's not, of course, how we would define success. Just as my child wouldn't define vegetables as a successful dinner, we wouldn't define getting into arguments, hating our neighbors and hating our family members as success, but it is the definition of success for Facebook.

So, the two things I want you to take away from that are, number one, these aren't mathematical concepts. These are opinions. These are agendas. They're political concepts. And number two, the people who get to decide what success looks like are the people in power. And of course, it's reasonable for me to be in power over my family's dinner; it's less reasonable, perhaps, for Facebook to decide what information we should have.
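[Illustrative sketch: a small Python example of the point that "success" is an embedded opinion--the same historical data recommends different things depending on which metric the builder chooses to optimize. The posts and scores are hypothetical.]

    # Hypothetical past posts with observed engagement minutes and a self-reported "felt informed" score.
    posts = [
        {"title": "Outrage thread",   "minutes_on_site": 34, "felt_informed": 0.2},
        {"title": "Local news recap", "minutes_on_site": 9,  "felt_informed": 0.8},
        {"title": "Cute animals",     "minutes_on_site": 15, "felt_informed": 0.4},
    ]

    def recommend(success_metric):
        """Pick the post that maximizes whatever the builder has decided counts as success."""
        return max(posts, key=success_metric)["title"]

    # The platform's definition of success (time on site, a proxy for ad revenue)
    # versus a user-centered definition lead to different recommendations.
    print(recommend(lambda p: p["minutes_on_site"]))  # -> "Outrage thread"
    print(recommend(lambda p: p["felt_informed"]))    # -> "Local news recap"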

MR. SCOTT: So, you came to this work after the 2008 crash, and you were working on Wall Street and saw how flawed risk calculations for mortgage-backed securities led us to an economic catastrophe. Do you draw a line from those models to other predictions we rely on today?

MS. O'NEIL: Absolutely, yes. I mean, look, those AAA ratings, those are risk algorithms, right? They were supposed to be telling us how risky these mortgage-backed securities were, and they have a lot of properties that I find problematic. So, for example, they were very widespread, high-impact. Because of those AAA ratings, they engendered trust, mathematical authority and trust that was, you know, widespread. We had international investors investing in our mortgage-backed products because they trusted those AAA ratings. So, that's high impact. They were secret, of course, except for the investment banks that sort of gamed them. So, they were secret, they were high impact, and they were destructive. And those are the three characteristics of the algorithms that I critique that are now not being used for markets but rather being used to score humans: in the realms of hiring and mortgages, who gets a loan, under what terms do you get a loan, insurance, and, as I mentioned, predictive policing, but also other kinds of things like sentencing algorithms, crime risk scores. So, we are being scored in multiple ways at every juncture where we interact with a bureaucracy.

And those algorithms, those predictive algorithms that score us often have those three characteristics: They're secret. We don't know about them. They're high impact. They matter to us a lot, even though we can't appeal them. And they are often unfair.

MR. SCOTT: So, in the context of the future of work, can you give us some examples of how algorithmic bias impacts the workforce, like, how are they deployed in hiring or teacher evaluations? You mentioned some big data in policing. What about sentencing?

MS. O'NEIL: Yeah. So, one of the prime examples in my book "Weapons of Math Destruction" is the teacher evaluations, which were part of a sort of bipartisan push to have teachers held accountable, part of No Child Left Behind and Race to the Top. And they were purported to assess teachers for how good a teacher they were. And these were high-stakes numbers, often given months after a year-long class was held, presumably telling teachers whether they were good at their job or not. These numbers were used to fire people. They were used to give bonuses if you got a good number or a few good numbers. They were used to deny tenure.

But ultimately, the underlying statistical algorithm was really not much more than a random number generator, as we learned. Can you imagine? So, the irony was the teacher accountability system was not itself accountable. There was no appeals system.

Looking forward instead of looking back--because, I mean, it's still used a little bit, but not nearly at the scale it was at the time--I am really concerned about the future of work, the near future of work, because there are so many people out of work now, and we know they're stuck at home on their computers. We know they're going to be applying for jobs on these platforms, these matchmaking platforms between people looking for jobs and employers looking for people. There is no reason to think, in my estimation, that those match-making algorithms, whether they're on LinkedIn or Monster.com or the other platforms, have been audited for fairness, which is to say I suspect they often just follow the historical practices, which is typically how they work.

Like, you know, white guys get offered STEM jobs or even told about them. Just to be clear, these platforms don't offer jobs, but they do tell people what jobs are open to apply for. And so, that's what's called sourcing. The sourcing mechanism that we all rely on as a country is, from my perspective, completely unaudited and suspicious.

MR. SCOTT: You recently wrote a piece about the need to keep sensitive data out of the hands of would-be employers or insurers that might use it to deny coverage or employment. We're talking about personal information, like where people live, who their friends are, that you argue could lead to a failed apartment rental or loan application. Can you talk us through those concerns?

MS. O'NEIL: Yeah, absolutely. And this touches on the future of work as well. I mean, the truth is, we don't have a lot of data protection in this country. Most of our data is up for grabs if it's not specifically protected by some kind of law--like there is a medical protection law called HIPAA. But the truth is, big data algorithms, big data techniques, AI--they're very sophisticated, and many times they can infer people's health status without actually having that specifically protected data that your doctor's files have.

In particular, it's not that hard at this point to guess whether someone has diabetes or will have diabetes based on data that can be bought and profiles that already exist about us. Consumer information, sort of how you tend to move around, whether you're a smoker--this kind of information, which is technically a medical record, is not really protected anymore. And I just focus on that particular thing. It's really problematic.

I mean, listen, it would be great if your doctor estimated your chances of getting diabetes as high and gave you A1C tests and medication to keep track of your risk and your actual status as a type 2 diabetic. But it's not great if a future potential employer also has that information, also has the risk score for you having diabetes, and uses it potentially against you in order to not hire you because they think of you as too expensive as an employee. That right now is possible, and as far as I know, not illegal. It's not a preexisting condition if you don't have it yet. So, that's the kind of thing I worry about in the near future with respect to privacy. I don't think about privacy so much--I think the cat is kind of out of the bag in terms of that kind of data.

But what I do think about, Eugene, is how do we make rules about when you're allowed to infer things about me. And I think there should be rules about that kind of thing in specific situations like hiring. Like, when you're thinking of hiring me, you're not allowed to infer my medical status even if you can.

MR. SCOTT: Can you tell us about your current work as an algorithm auditor and who some of your clients are?

MS. O'NEIL: Sure, I can tell you some of my clients. I'm under NDA for some of them. But for example, I'm allowed to discuss my work with the Washington State Department of Licensing, which was really fascinating work. So, they're the DMV of Washington State. My company is named ORCAA, and ORCAA was hired to audit their usage of facial recognition technology, which is a very hot topic because it's known--from the Gender Shades study at the Algorithmic Justice League, with Joy Buolamwini and Deb Raji, and Timnit Gebru I should mention--that facial recognition technology, generally speaking, works better for White people. It works better for men. It works better for young people. But the DMV is required by the SAFE Act, which was passed after 9/11--and this is true for all DMVs in every state, as far as I know--to make sure that anybody getting a license doesn't already have a license. So, they're basically looking for fraud in the DMV, and they're looking to make sure that nobody can have, you know, multiple licenses. That means that they are using facial recognition, which, again, is known to be flawed.

So, the question is, what do we do as an auditing team? We go into the Department of Licensing and we think through what could go wrong here, and for whom would it fail. And that's really, in a nutshell, what the audit consists of. We have to name the stakeholders--for whom it might fail--and we have to name what their concerns would be, what it would mean to them to fail.

And for example, in that context, we had immigrant rights groups join the conversation so that they could say what it would mean for them if the DMV fraud detection system failed. And that's generally true of ORCAA's clients: we go in and facilitate a conversation among the stakeholders of an algorithmic context about what it would mean for them to fail, and then we build tests to see whether the most worrisome scenarios are, in fact, happening or not, and whether we can stop worrying about them.

MR. SCOTT: And in your opinion, what would effective government regulation look like? Do we need a federal agency that oversees the use of tech exclusively?

MS. O'NEIL: That's such a great question. Thank you. And it's such a timely question. Well, listen: The good news is, we already have a lot of existing laws that are simply not being enforced in the era of big data.

As I mentioned, there are algorithms in insurance, in credit, in hiring, and in the justice system. They all have laws. And the question is, like, to what extent are they compliant with those laws? And most of the answer is, we don't know because the regulators are, to be honest, not completely sure of how to check, how to ask an algorithm, are you fair? What does that mean? What does it mean for a hiring algorithm to be fair, or an insurance algorithm not to be racist? And it's an ongoing question. I'm not saying that they should have already figured this out, but it is something we have to figure out urgently.

And to that point, I would argue that instead of having every regulator sort of bone up on the technology required to do these audits, it might make more sense to sort of endow one regulatory body--like the FDA, for example; I'd like to call it the FDAA, potentially, the Food, Drug and Algorithmic Administration--to do the same thing for algorithms that they do now for drugs, which is to say, is it safe, is it effective? And "effective" would mean does it work at least as well as the current system, because usually algorithms replace other kinds of bureaucracies, and sometimes they replace other algorithmic bureaucracies. So, is it as effective? And is it safe? Does it cause undue harm to specific protected classes? Those are questions that are very basic about algorithms but are not being asked.

And yet, algorithms are being deployed and sometimes causing a lot of harm, sometimes invisible harm. So that's another thing that we need to keep track of, is who gets--who gets screwed by these algorithms.

MR. SCOTT: I want to take a question from one of our audience members.

MS. O'NEIL: Great.

MR. SCOTT: And we have one from Elena Acevedo from California, who asks, "What are best practices to ensure ethical AI? Can you discuss uses of AI involving employees and ways to ensure an inclusive and equitable workplace?"

MS. O'NEIL: I think the most important thing with respect to ethics and AI is that right now we're doing it really backwards, because we're sort of building the algorithm and then worrying about possible negative side effects afterwards. And we're not even sure what that means or how to test it. From my perspective, we should first have a broad, nontechnical question of what our values are and, just as importantly, what the ethical conundrums are that are embedded in this process.

So, for example, I know that sounds really abstract, but if we were talking about the teacher evaluation system I mentioned earlier, we would want to make sure that the teacher evaluation system did not give off a lot of false negatives or false positives. We would want to make sure that people who were terrible teachers weren't scored well, and people who were really wonderful teachers weren't scored very badly. And before deploying the algorithm, we would check to make sure that's true. It seems kind of like a no-brainer, but the truth is, we don't do that. We deployed that algorithm nationwide without any checks on whether it actually was successful.

So, that's the kind of thing we should have a discussion about, kind of like an ethics review board, beforehand. Then, once our values were decided, we would have the data scientists and the technical people translate the values into code. That's the technical work that they do but other people can't do. And then we would have ongoing monitors to make sure that the algorithm was faithful to our values--faithful to the values that the ethical review board stated. Right now, literally, we have the data scientists sort of implicitly embed values into an algorithm, deploy it, and hope for the best. We often never know the extent to which the algorithms that are already deployed are faithful to our values, and we often never get around to stating our values.
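[Illustrative sketch: the kind of pre-deployment check O'Neil describes--measuring false positive and false negative rates against a ground-truth assessment before an evaluation score is used for high-stakes decisions. The validation data and threshold below are hypothetical.]

    # (ground_truth_good_teacher, algorithm_score) pairs -- made-up validation data.
    validation = [(True, 0.9), (True, 0.3), (False, 0.2), (False, 0.8), (True, 0.7), (False, 0.1)]
    THRESHOLD = 0.5  # scores above this are treated as "good teacher" by the algorithm

    decisions = [(truth, score >= THRESHOLD) for truth, score in validation]

    false_negatives = sum(1 for truth, pred in decisions if truth and not pred)  # good teachers scored badly
    false_positives = sum(1 for truth, pred in decisions if not truth and pred)  # weak teachers scored well

    fnr = false_negatives / sum(1 for truth, _ in decisions if truth)
    fpr = false_positives / sum(1 for truth, _ in decisions if not truth)
    print(f"false negative rate: {fnr:.0%}, false positive rate: {fpr:.0%}")
    # A deployment gate might require both rates to stay below an agreed tolerance
    # (and to be comparable across protected groups) before the scores are used
    # for firing, bonuses, or tenure decisions.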

MR. SCOTT: Well, unfortunately that's all the time we have today, Cathy. And I really want to thank you for speaking with me.

MS. O'NEIL: It was really a pleasure, Eugene. Thank you for having me.

MR. SCOTT: Great. And thank you for joining us. Tomorrow, join us for Post Live Election Daily hosted by my colleague Bob Costa featuring both senators from Pennsylvania, Democrat Robert P. Casey Jr. and Republican Patrick J. Toomey. As always, you can head to WashingtonPostLive.com to register for upcoming events. I’m Eugene Scott, and thanks for watching Washington Post Live.

[End of recorded session.]