On Tuesday, March 20, The Washington Post brought together pioneering researchers, business leaders and elected officials for Transformers: Artificial Intelligence, a live news event focused on technological advances that are poised to reshape the way we live and work. Speakers discussed the future of artificial intelligence and the implications of AI for public policy, business and society.

Coratti:            Hi.  Good morning everyone.  My name is Kris Coratti.  I’m vice president of communications and events at The Washington Post.  Thank you for joining us on this very rainy morning.  I’m glad you all made it out.  We are going to have a fascinating series of discussions this morning on artificial intelligence.  This is the latest in our ongoing event series that we call “Transformers.”  And our speakers this morning are going to explore the regulatory questions around this technology.  They’re going to look at how AI is reshaping the way we live and work.  And they’re going to discuss how to make sure this technology is used responsibly in the future.

Before we begin, I just want to quickly thank our presenting sponsor for this event, Software.org, the BSA Foundation, and our supporting sponsor, the University of Virginia.  And so now I’d like to go ahead and welcome to the stage The Washington Post’s Tony Romm and Senators Maria Cantwell and Todd Young.  Thank you.  [APPLAUSE]

A view from Capitol Hill: Senators Cantwell, Young on regulating artificial intelligence

Romm:            Good morning everybody.  I’m Tony Romm, technology reporter at The Washington Post.  Senator Cantwell, Senator Young.

Cantwell:         Morning.

Romm:            Thanks so much for being here on a rainy, rainy morning.  And for those who don’t know, Senator Cantwell is a Democrat from Washington State.  Senator Young, a Republican from Indiana.  Both are members of the Senate commerce committee, which touches on artificial intelligence and many of the tech issues that we’ll talk about today.

Before we get going, just a reminder that we will be taking questions from both the audience and on social media.  Just tweet us with the hashtag #Transformers.  I’ll see it on my nifty little tablet up here, and then we’ll ask your questions.  But again, thanks for joining us.

Cantwell:         Thank you.

Young:            Thank you.

Romm:            We have a lot to talk about in artificial intelligence.  It is a very large field, but I would be remiss if I didn’t start by asking you about the big news of the week, and that’s Facebook and Cambridge Analytica.  For those who missed the story, Facebook is in a bit of hot water right now because Cambridge Analytica, which is a data analytics firm tied to President Trump, was able to abscond with information from about 50 million Facebook users, perhaps without their permission or their knowledge.  We’re now hearing calls for investigations and so forth.

So given the fact we’re having this conversation about the power of algorithms, and machine learning, and deep learning, and so forth, I’d love to get your take on the news and what you think government should do from here.  Senator Cantwell?

Cantwell:         Well, I definitely think we need transparency.  My colleagues have certainly—Senator Klobuchar and others have proposed legislation to make sure that we have fair and honest elections online.  That is, that people comply with the same kinds of laws for advertising and information that we have in the broadcast world.  So that’s one aspect of this.

And then the other aspect is just transparency.  We need to know and understand how information is being used, and who is behind that information.  Obviously, there are concerns about falsified information in the bot realm, on anything from political use to information on net neutrality at the FCC.  Those are areas where transparency—how information is being created—is very, very important.

Romm:            Senator Young?

Young:            So I agree with everything Maria said.  We need transparency with respect to what data is being collected.  That’s not always clear to users.  And how that data is being used.  But I would also add we need to ensure there’s accountability from all parties involved in these different decisions.  And so Congress has an important role to play: if we don’t have clear rules with respect to accountability—who should be responsible for what, what transparency should look like—then we need to optimize our existing systems.

Romm:            So on the note of accountability, should Mark Zuckerberg come testify on Capitol Hill?  You both sit on the commerce committee.  Should he come?

Young:            Well, John Thune, who is indeed the chairman of the commerce committee, I believe has invited him recently to come appear.  It would be my hope that we hear from top leadership.  If Mr. Zuckerberg wants to appear, I’d certainly welcome his appearance.

Romm:            Senator Cantwell?

Cantwell:         Well, there’s a lot of people I’d like to hear from on this thing at large.  So I think that Mr. Zuckerberg should make himself available to discuss where technology is going in the future, and discuss the challenges that we face in this realm, and add to the debate, not be silent on it.

Romm:            So to zoom out a bit, do you find that some of these companies, the leading, cutting-edge companies when it comes to artificial intelligence are black boxes; that you don’t really know how the algorithms work; you don’t know what the inputs are?  And does that make it hard to do oversight from your perspective?

Cantwell:         I think we’re entering an age where artificial intelligence is going to provide great benefits.  If you look back to the early days of the internet, there was lots of anxiety about what the applications would be, and yet here we are, years later, and we see the full power of it, and how unbelievable it is.  You know, we probably, a few years ago, had the same discussion about drones and whether drone technology should be allowed.  And yet, we’re coming to the precipice now where we see the advantages, whether it’s fighting forest fires with accurate information or a lot of different areas.  It isn’t about the technology itself.  It is about the application.

So I would hope that we would have the same approach to AI; that it is going to empower us.  I think particularly in the area of cybersecurity, for a lot of great solutions.  Do we have to have some discussions about how it plays out?  Yes, and that is why Senator Young and I introduced our legislation because we want government to be part of that discussion, and to make sure that we are not only taking advantage of the opportunities, but also looking at those questions of bias, which we know will be there and on the table for discussion.

Young:            So your question, it sort of cuts to the heart of a very important policy issue, which is, under what circumstances should we have full transparency?  That is, an algorithm made public versus just accountability.  That is, accountability for whoever happens to have that algorithm available to users.  This is one frankly that I’m not equipped to be able to offer my perspective on yet, which is exactly why Maria and I have put together the Future of AI Act.

We see incredible potential in this technology.  It’s already moving forward at a rapid pace in the private sector.  Government is a bit behind here.  And before we over-regulate it, we want to make sure that we get a better understanding of what sort of policy structures need to be in place so that people can meaningfully participate in an economy driven, in large measure, by AI, so that it’s not biased, as Maria mentioned.  And so that hopefully America can lead with respect to this technology, which has the potential to increase our rate of economic growth.

I’ve been briefed that it could as much as double that rate within just over 15 years.  So your policy question is a good one.  I don’t think we have a clear answer on it yet.

Romm:            Sure.  On the Future of AI Act, which you just mentioned, talk a little bit about the legislation.  It’s essentially a taskforce.  Is that right?

Young:            That’s right.  We house the taskforce at the Department of Commerce.  We will convene data scientists, members of the manufacturing industry, technologists, and various other stakeholders to advise members of Congress and our federal government about what the future of AI legislative and regulatory policy should look like.  So that, again, everyone can meaningfully participate; be skilled up so that they can fit right into an economy driven by AI.  And also, we want to make sure that people’s privacy is protected and that these algorithms are unbiased.

Romm:            Go ahead, Senator.

Cantwell:         Well, just adding on that, we wanted to look at four areas that we thought were important.  One, what are the areas of competition that the U.S. should be mindful of, given other countries’ investments in AI, whether that’s China or others, and where do we fall in keeping our R&D prowess here?  And what do we need to do to keep that going?

We’re going from here to an energy hearing where I’m going to be very concerned about the level of cuts in the R&D budget for energy.  The fact that people want to zero out ARPA-E in this administration is just crazy.  So what do we need to do to keep that level of competitiveness?  And then, as Senator Young mentioned, both privacy and bias will also be part of the discussion.  And then lastly, workforce.  What are the workforce implications and what do we need to do about that?  Both in capturing—I can tell you right now, if you have any kind of AI education, please head to Seattle, Washington.  [LAUGHTER] We need you.

The employers there are telling me, you know, it’s a very, very, very competitive field right now for anybody who has any expertise here.  And it’s just going to grow.  So what do we need to do to both grow the workforce in this area, and what do we need to do to prepare and diversify our workforce too?  So those are the four pillars of the legislation.  And I think that we are not saying that’s the only thing to be discussed, but at least it gives us a framework for the important policies that are out there today.

Romm:            Sure.  And if I’m reading between the lines correctly, and correct me if I’m wrong, what I gather from that is that you think that some of your peers, perhaps, in Congress and throughout Washington maybe aren’t equipped to understand the issues.  They’re not familiar with them.  They’re not talking to some of the companies in the way that you guys are.  Is Congress equipped to tackle AI right now?

Young:            I don’t think we are.  That’s why we’ve created this panel.  I like a measure of humility from my legislators, and this is certainly an emerging field.  What we need to do before we prescriptively regulate or legislate in this area is understand what sort of challenges and opportunities are created by this technology; recognizing it’s inevitable that we will continue to have advancements in machine learning, and data science, and all the other sort of subsets of artificial intelligence.

Romm:            What would you like to see from the Trump administration right now?  I know there was that whole controversy about a year ago with Steve Mnuchin saying that it’s going to be a long time before AI starts to have an impact on the economy.  What would you like to see from the Trump administration on AI?

Cantwell:         Well, the last administration, the Obama administration, did a report and came out with some basic findings.  One of them was the huge economic potential in AI for us as a nation.  So I think that those ramifications need to be followed up on.  That report, I think, outlined some areas in which we could all agree that we need further investment.

To me, I think maybe we’re even talking about an AI engineering institute, similar to what exists now at Carnegie Mellon for software.  Something where we’re going to talk about standards; we’re going to talk about certification processes; we’re going to talk about a lot of the issues we just discussed.  So what is that next phase of development, and then, following up on that, what are some of those applications that are best suited to us at the federal level; those applications that are going to help us, whether it is cybersecurity or disaster relief issues.  You know, big data information.

It kind of bugs me that the Europeans, just because they use supercomputing for their weather-forecasting algorithms, are constantly producing better data on storms and storm impacts than we are.  So what are we going to do to stay competitive on some of these important issues?  I think the Trump administration should take the Obama administration recommendations here and go further on that investment.

Now, I don’t know where the president is on science.  If I could, you know, do anything, I would give him a little tattoo right there, science, because [LAUGHTER] I think he needs to put more of a down payment on these areas.  But that’s, again, a very Northwest perspective, kind of view of the world.

Romm:            Senator?

Young:            Well, I would start with the recognition that contrary to conventional economic belief, countries do compete, not just firms.  And there’s a competition in the realm of AI right now.  And so we need to make some strategic investments as a country in particular technologies.  And AI strikes me as a natural one based on our existing expertise in both data science and supercomputing.  You mesh the two and you get artificial intelligence capabilities.

So we need to make some strategic bets.  Once we decide what those bets should be, we invest in those particular areas.  I don’t think it’s a real obstacle that the Trump administration hasn’t been prescriptive in this area.  I actually welcome it.  As a member of Congress, I think it’s great that we have an opportunity to legislate in this space as opposed to having very little interaction with the executive branch, which is what I experienced during my six years in the House of Representatives.

I see this as an opportunity.  And I think it’s a good opportunity for bipartisan work.  And that’s what Maria and I are doing with this AI legislation.

Romm:            Sure.  Let’s talk about the industry and some of its policy challenges.  I’m struck every time Elon Musk gets on stage and talks about how AI is akin to nuclear weapons.  I think he said it could cause more damage than a nuclear bomb.  Previously he’s talked about going to Mars to avoid whatever catastrophe AI may cause here on the planet Earth.  When you hear statements like that from somebody who works in this field, in this industry, what does it mean for you from a policy and political perspective?  Whichever one of you wants to take the question.

Cantwell:         Well, I think that we can all go a long way and have in lots of—there’s probably been many a movie about this already, right?  So just as I said earlier about other technology applications, the issue isn’t, are you going to move forward on technology.  The issue is what do you want to do about those applications.  I’m kind of struck by the same discussion we had over the last, I would say, 15, 20 years, about stem cell research.  And yet, right now at the University of Washington, we’re making regenerative heart tissue.  So I’m pretty sure I’m glad that we had the conversation.  And I’m pretty sure I’m glad that we moved forward.

Now, was there a lot of question about what stem cells were going to be used for, and trying to have a broader debate, yes.  But I think this is the issue with AI, is that while Elon can bring up some very important issues, that’s why we want this legislation to have that discussion, and to have that consideration.  So I think that we have plenty of time for that.

Romm:            Senator, do you agree?

Young:            So I think that was a good example.  I think those previous conversations about other difficult issues allowed us to achieve at least a measure of consensus on what was, in the beginning, an incredibly difficult issue, and now we’re starting to identify a lot of consensus in that area, and therefore medical breakthroughs.  In the area of artificial intelligence—which, frankly, I think is an anthropomorphic term.  Maybe we should just call it sort of gap filling; gap filling our existing capabilities.  It is another tool.  It’s a tool that sometimes makes people uncomfortable because the notion of extending one’s intelligence and augmenting it, for whatever reason, makes us less comfortable than extending our physical capabilities, like one would with a hammer or something else in the physical space.

So we need to normalize that idea and also recognize that as with any tool, it can be used for good or for ill.  And I would hope that this panel that we’ve convened through the Future of AI Act would consider some of these contingencies as well, to put the public at ease, and also to prepare for the potential of using AI, not just for those wonderful things we’ve talked about: doubling the rate of economic growth, increasing the productivity of a worker by up to 40% in just over 15 years.  But also, addressing concerns people have.

Romm:            Sure.  We saw some of the potential perils on the automation side of all of this just yesterday with reports that a self-driving car operated by Uber had killed a woman in Arizona.  This is coming at a time when Congress is considering legislation that would put more self-driving cars on U.S. roads.  Is this an example of people reacting too strongly to something that happened, or is this a case in point that maybe Congress needs to slow down and think a little bit more before it does something like put more self-driving cars on U.S. roads?  Senator Cantwell?

Cantwell:         Well, we’re definitely going to get—first of all, I’m so sorry for the loss of a life in Arizona, and to that woman’s family.  So our sympathies go to them.  We’ll hear from the NTSB about what that accident was about, and the details.  And I’ve read some reports from the Arizona newspapers about what they think has been the experience with these cars; that people feel like in some of these areas they actually have worked successfully to stop accidents from happening.  So we need to look at all the data and information.  That’s why we have this oversight.

I can tell you from the aerospace industry that the technology-driven cockpit has provided us more safety and security.  It has improved the ability and performance of our airline industry.  So, again, we want to move ahead with things that are going to help us provide more safety and security.  But, yes, we have to get to the bottom of what was the detail in this individual instance and area, and what do we need to do to resolve any of these issues.

It’s not as if we’re not going to have problems.  There are things that are called “software glitches,” and they can have serious consequences.  But again, it’s the question of how do we move ahead.  And our job in oversight, and particularly whether it’s NIST or the Department of Transportation is to make sure that we are not putting people in undue risk by not having that oversight and structure when it comes to the implementation of new technology.  And that’s the job we do in talking to those agencies about that oversight responsibility.

So, as you can imagine, we had a lot of discussion with Toyota about their cars, and what was a software glitch.  And yes, it was complex, very complex, to the point we have to get—I think you’re hearing from NASA later today—but I think we actually had to get NASA to tell us what exactly happened in that instance.  So the complexity of this, because it’s going to be related to software and software algorithms, is going to be harder.  Okay.  Well, then let’s set up an agency and the proper people here to understand it.  We don’t want to be laggards on AI at a government level, where an industry is moving forward, and we have no real ability to do our oversight responsibilities.

That’s why I think something like an AI institute, engineering institute, to help the government, just as we do right now on FAA issues, is probably a proper role and responsibility.

Romm:            But culturally speaking, are Americans ready for software glitches where the consequences or the repercussions might be the loss of human life?

Cantwell:         Of course not.  And so, of course not.  But at the same time, if that technology drove better safety standards and better measures into the cockpit of an airplane, which is, I think, what people would tell you today has happened, then, yes, we want to keep improving our safety standards.  We want that help and information.

And so it’s not an either/or situation.  It is what is the responsible oversight role for us to play, and making sure we have people at the federal level who have the ability, with a basic understanding of the technology, to actually oversee it.  And I think that is probably right now where we’re missing a little bit of, if you will, technology oomph here to make sure that we are building that, because it’s such a new area.  Such a new area.

Young:            So as we adopt new technologies, there is unquestionably a higher standard for those technologies than the existing technology.  I just think we have to recognize that.  In this particular instance, this tragic incident—of course, we feel for the family, our prayers are with the young lady’s family—but we also, as policy makers, I think, need to provide some context when we talk about these things.

In 2015, 9 out of every 10 auto fatalities in this country happened on account of user error.  We can improve on that.  And that’s what autonomous vehicle technology aims to do.  This is safety technology.  And as it’s developed, there will likely be some unfortunate mishaps along the way, and we need to do everything we can to put in place a regulatory structure, a legal structure, to minimize that.  That’s why the AV START Act, which we’ve passed out of the commerce committee, I hope is soon put on the president’s desk and signed into law so that we can create a proper regulatory structure for the development of these technologies.

Romm:            Sure.  One of the other consequences, when we’re talking about AI, it always comes back to job loss, the potential for job loss.  And I was struck, as I was doing research for this, when I stumbled upon a Gartner report that said by 2020, AI could wipe out 1.8 million jobs, but then generate 2.3 million jobs.  What role does Congress have in retraining Americans so that they’re able to pursue those 2.3 million jobs that AI helps create?

Cantwell:         Education, education, education.

Romm:            Is Congress going to pay for it?  Are they going to put the money out?

Cantwell:         Well, first of all, we need to do all we can to drive down the cost of education.  Senator Collins and I have a bill, a first-ever federal incentive for apprenticeships, because we think we need to skill and train so many Americans.  As I talked about with cybersecurity, it’s already clear.  We need 1.5 million new energy workers.  A big chunk of them in cybersecurity.  So we already have this problem today.  In fact, I personally believe that the challenge of our era is the transitioning nature of our economy, which is going to continue to change.

I always say to people in my office, there’s a reason Ma Bell doesn’t exist anymore.  But then the young people are like, “Who’s Ma Bell?”  [LAUGHTER] They don’t even know.  And so the fact that we’ve gone from a behemoth in telecom to now this handheld device is a major transition.  But that is going to happen in every sector.  The newspaper industry.  What have you been through?  Okay, so it’s going to happen.  I believe you prepare for that.

And one of the ways you prepare for that is to upgrade our education investment.  Sure, you can make it more efficient.  You can help drive down the cost.  But we have to prepare a system that is going to allow us to skill and train people for those new jobs, and to be able to help us capture the economic opportunity that’s there.

Romm:            Senator?

Young:            We also need to change how we train for careers and for jobs.  McKinsey Global has done some interesting work in this area.  They looked at 800 different job categories.  They found that—their estimate is that 1 out of 20 jobs will go away entirely as AI technology develops, but 60% of those 800 job categories will see a portion of their job be automated through AI technology.  So that suggests, yes, some workers will have to be entirely retrained, and we need support systems, and public investment to ensure those support systems are there for retraining in the new job categories.

But we also need, what I would imagine would be, more compressed training regimens for those workers whose existing jobs will change in nature.  So we’re already seeing colleges and universities, and many private programs, offering six-week programs, 10-week programs, as technology continues to evolve, to prepare people to be on the cutting edge of their given profession.  There are some creative things we need to do as well, because the cost of education has gone up, we all know, over the years, and college debt, and so forth.

For example, I offered legislation that would allow private investors to invest in students.  Whether someone wants to major in electrical engineering or data science through a standard four-year curriculum, or a 12-week coding program, private investors could pay for that student.  The student would then pay back the private investor after completing the program of study, thus shifting all the risk of noncompletion onto the private investors.  And only putting a debt burden on that student should they land a job on the back end.

So these are the sorts of creative things that I think you’ll see emerging increasingly as we adapt to this new normal of education and training.

Romm:            One of the ideas that’s been proposed is this idea of a robot tax, right?  Whether it’s Bill Gates or folks in California.  There’s been conversation about whether you tax robots or other forms of technology that take the place of a job currently held by a human.  Is that one of the ways that the federal government can help pay for retraining?

Cantwell:         Well, this is probably an area we might disagree, but I am so grateful to have my colleague here, and to work on this together.  But, yeah.  I would have taken the tax bill and put a big down payment on retraining.  I would have.  You know, even on the repatriation discussion, in the past we had talked about saying that some of that should have gone for retraining.  Now, do I think the public and the private sector are in this together, and that companies are pretty motivated here too?  Yes.  But I do think that this, as I said, is one of our biggest challenges is how to accelerate this right now.

So, to me, I would use whatever incentives we could because it is what is going to help us with the productivity wage growth.  I think Seattle has had something like 2.3.  So we’re one of the cities in America that has actually seen that wage growth, but you’re going to have to make this investment.  So let’s figure out the most cost-effective ways for the public and private sector to partner together, to drive that down.  But would I put that on the table instead of some of the other elements of the tax bill?  Yes.  That’s what I would have done.

Romm:            Senator, robot tax?

Young:            I would not start by taxing capital investments, which is what this is, a new form of capital investment.  We could have also taken—and Maria and I might disagree on this—the stimulus package, and invested heavily in worker training.  So I think we both agree, in a bipartisan way, the worker training is essential in this new hyperdynamic economy we’re in.  We’re going to find ways to adapt to the new sort of training that’s required.

Romm:            Sure.  We have just about a minute or so left, so I don’t want to leave before we touch on one last issue which is bias when it comes to artificial intelligence.  It’s one of the things that the Obama administration had warned in its final report about artificial intelligence; this potential for prejudice to be embedded in the code itself.  What role does Washington play in this space?  Does it force companies to change their hiring practice?  What can Congress do here?

Cantwell:         Well, we need a robust discussion.  We need a robust discussion just as we had one, as I said, about stem cell research.  One of my biggest complaints right now is on capital formation for the SBA and the amount of capital that—or so little of it that goes to women and minorities.  And why is that?  I mean, they had like 4% of SBA capital, some ridiculous number.  And you’re thinking, “Why is that?  So what’s the bias there?”

Well, when you peel it back, you find lots of different things.  Women like smaller loan amounts, and the programs aren’t geared towards them, or the counseling that exists isn’t geared towards the same applications as women-led startups.  So there’s a whole bunch of issues there, but is that a bias?  Yes.  It’s a bias.  It’s a problem.  And do we want somebody to bake a bias into an algorithm?  No.  And we are going to do our best to try to figure out what is the proper role to make sure that doesn’t happen.

But I guarantee you, there are so many biases that exist today in other policy, and we have to keep in mind that the broadest—that’s, again, why I think an institute—these international standards bodies, like IEEE, or other organizations that help us discuss standards for, you know, electronics and other areas, are important tools for a broader discussion.  And we should empower somebody, like with our taskforce, to have that discussion, so that we can come up with some ideas and parameters about this.

Romm:            Sure.  And that, unfortunately, is going to have to be our last word on this because Senator Cantwell’s got to run, and I’m going to get the hook.  But thank you both for—[OVERLAPPING]

Young:            Thanks so much for having us.

Romm:            —Senator Young, Senator Cantwell—

Young:            Yeah, thank you.

Cantwell:         Thank you.

Romm:            —for the discussion.  Thanks, everybody.  I’d like to kick this over to Drew Harwell, my colleague.

[APPLAUSE]

Humans and Machines: One-on-one with Microsoft’s Peggy Johnson

Harwell:          Good morning, everyone.  Thank you for joining us.  I’m Drew Harwell.  I’m a national tech reporter for The Washington Post covering AI and big data.  And I am very honored to welcome Peggy Johnson.  She is an executive vice president for business development at Microsoft, which is a company that’s been thinking about this for a long time.  Bill Gates has been talking about AI research for probably 25 years now.

Johnson:          Twenty-five years.  Yeah.

Harwell:          So we just want to talk a little bit with Peggy.  She oversees the investments through Microsoft’s venture capital arm, which has invested millions in AI and other startups.  She is also an engineer, and before this she spent 24 years at Qualcomm, which has also been in the news for different reasons.  There’s also a book out in the lobby that Microsoft did called The Future Computed: Artificial Intelligence and Its Role in Society.  It’s very good and very smart.  And before we get started, I’ve been told to remind our audience that you can tweet your questions using the hashtag #Transformers.  So, make them good.

So, let’s begin.  Microsoft has been thinking about this for a long time.  Where are we at now in AI?  Where is the research?  Where is the development?  And how is it sort of affecting people’s lives?

Johnson:          Right.  So I think a lot of people think AI is a new trend.  It’s actually been around for quite some time.  Bill Gates, about 25 years ago, when he was starting Microsoft Research, sort of had this prediction back then.  He said, “Someday computers are going to be able to see us, hear us, talk to us, understand us.”  And it was quite prescient, because now we’re coming to that point where they are.  And I think the reason it’s now top of mind is a lot of trends have all converged.  So the idea of cloud computing, big data, AI algorithms, you mix all those together and now we have some momentum behind it.  And we’re starting to see real impact.

Harwell:          Are they really understanding us, though?  Help us understand—they’re hearing us, there’s voice recognition.  There’s facial recognition.  But what are the limits now?  I mean, are they really sort of comprehending like a human would?

Johnson:          Well, it’s interesting.  For a long time, it seemed that was a very hard problem to solve—voice recognition, image recognition.  But over the last few years that’s really accelerated.  And now both of those—the cognitive services we have for voice and image recognition—are testing out better than human voice recognition and human image recognition.  Slightly better.  So we’re now at the point where this is a tool that we can put into place for a number of good uses.  So it is real, and it is starting to have some early impact in this space.

Harwell:          Great.  I want to riff off a little bit of what Tony was talking about.

Johnson:          Sure.

Harwell:          This issue of bias and this issue of things getting baked into the system.  We know that AI depends on data and compute power.  And if the data is biased, there are going to be issues in the computation and in the product.  So from the company side and from the research side, how do you all think about paving over those issues, trying to make as fair and equitable an AI product as possible?

Johnson:          Right.  So it’s important to think about it.  And we do.  We take it very, very seriously.  But if you just imagine a pool of data and that data is now going to be used to train an AI algorithm, there’s no pool of data that’s perfect, right?  And particularly when you start to talk about social impact and social uses of it, you want to make sure that that data is the best data possible.  And you want to make sure that you go into it knowing there’s going to be some bias in the data, which is why you still want to have humans involved in the equation.

So, for instance, you mentioned I’m an engineer.  So let’s say we had an AI algorithm that we wanted to train to find us the most successful engineering candidates.  I’m getting ready to start a company, I need the best engineers.  I might say, “Let me search back in the pool of data that I have on engineers, and let me try and find what will be the attributes that would make the most successful engineers.”  So I might look at skills and training, those sorts of things.

But one thing that that data would show is it’s mostly men.  I know because I grew up in that industry.  At my university, and most of the time when I was practicing engineering, I was the only woman in the room.  So if we just relied on the data, we would input that data into the algorithm.  The algorithm would say, “Successful engineers are men.”  And we know that’s not the case, for a whole number of reasons.  We know—we’re starting to understand more why there is this gender gap in engineering.  But if you solely relied on the data, you would introduce a bias and then you would amplify the bias.  Because on the other end you say, “Well, this is what the computer says.  The computer says successful engineers are men.”  Those are the sorts of things where we have to take the utmost care and ensure that we have a human involved in that equation who understands, “Well, this data might be slightly tainted.”
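
[Editor’s note: to make the dynamic Johnson describes concrete, here is a minimal, purely illustrative sketch in Python.  The toy hiring history, the hire_rate helper, and all of the numbers are hypothetical; the point is only that a model which learns from skewed data reproduces, and then enforces, the skew unless a human who knows the data stays in the loop.]

    # Purely illustrative sketch (hypothetical data): how a skewed hiring
    # history teaches a naive model that "successful engineers are men."

    # Toy historical records: (gender, was_hired).  90 of the 100 past hires
    # were men, so the data itself carries the industry's skew.
    history = ([("M", True)] * 90 + [("F", True)] * 10 +
               [("M", False)] * 60 + [("F", False)] * 40)

    def hire_rate(records, gender):
        """Fraction of past candidates of a given gender who were hired."""
        outcomes = [hired for g, hired in records if g == gender]
        return sum(outcomes) / len(outcomes)

    # A "model" that simply learns these historical rates scores men three
    # times higher than women -- not because men are better engineers, but
    # because the training data says so.
    print("learned score, men:  ", hire_rate(history, "M"))   # 0.60
    print("learned score, women:", hire_rate(history, "F"))   # 0.20

    # Rank new candidates by this learned score and the historical bias stops
    # being a pattern in the data and becomes the hiring rule.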

Harwell:          I find that part of it really fascinating, the idea that you could solve these problems of sort of recruitment bias and that sort of thing with a computer.  But having the human oversight there, what does that provide us and how can we keep that part of it in check?  What sort of guidelines would you put on oversight of the kind of human moderator in the process?

Johnson:          Right.  So we definitely need to deploy this technology responsibly.  And a couple years ago, about mid-2016, our CEO, Satya Nadella, who had been thinking about this, introduced a set of principles, and he said, “As we develop this technology, we need to ensure that we have protections in place for things like fairness, safety, security, transparency, accountability”—you heard the senators bring up accountability.  Privacy, inclusiveness.  All of these things have to be things you think about as you’re training these algorithms.  And you have to do so with a core of empathy.  And you have to ensure that as you’re training the algorithms you’re doing so with dignity.  Because again, as you said, you know, we could be using these algorithms to help us find the best candidates for jobs, or maybe to accurately diagnose a medical ailment.  Or maybe even to get a loan.  And we want to ensure that we have those sorts of protections in place so that the algorithm doesn’t spit out an answer that you just take as the final answer.  You have to always balance it with those protections.

Harwell:          Yeah.  Empathy in the product, empathy in the science, I find that really fascinating.  I think AI especially, maybe it’s all the science fiction and Skynet movies and that sort of thing, but like, it feels like that’s an important part of the development.  And even in the book they talk about maybe there should be a Hippocratic Oath for AI developers and engineers.  What would that look like?  Are engineers thinking about that right now, thinking about the human side of the equation?

Johnson:          We are.  And we think very carefully when we begin to build our AI teams.  We want to ensure that the teams look like the population.  Because again, that’s a way that you can introduce bias.  If it was a team full of Peggy Johnsons, it would be the best product for Peggy Johnson but maybe not for you or anybody else in the room.  So you have to have this element of empathy in that equation.

But really, at its core, AI is a tool.  I think we’re talking so much about all the good it can do and, you know, could it be a doomsday device as well?  There’s sort of arguments on both ends.  I think of it as a tool.  And you know, in a very simplistic form, the way a wrench helps you unstick a bolt, it kind of augments your human strength.  You can think of AI as augmenting your human intelligence.  And it’s just a tool.  And we should keep it centered there.

In fact, we were attending the World Economic Forum in Davos, and we happened to have lots of our customers there.  I think we went through 60 or 70 meetings in the space of about three days.  Every meeting started with the CEO or whoever was on the other side of the table saying, “What do I do about AI?  Tell me about AI.  What do I need to know?”  And essentially we came back to, you know, it is a tool.  It’s a tool that can help you.  It can help you in your ability to gain insights from your data.  But it’s not magic fairy dust, right?  You can’t just sprinkle it across your business and think things will grow.  And that, I think we have to get a little grounded there.  It can do a lot of good, and we just have to deploy responsibly.

Harwell:          It is a tool.  It does have human implications.  We saw that in Arizona yesterday with the Uber car; we were talking about that earlier with Tony.  What lessons can you take from that, either from the Uber side of it or from just in general sort of self-driving or AI deployment on how you can protect against that kind of sort of fatal loss of life or bad results?  And how you can respond as a company when something like that happens.

Johnson:          Right.  So first, that was just a tragic accident, and my heart goes out to that family involved.  But I think, first and foremost, Uber did the right thing by stopping it.  They said, “Okay, all of this is halted.  We need to understand what happened here.”  And that’s the right thing to do.  Anytime we have a technology, we have to deploy very, very carefully.  I’m not close enough to the situation to know what exactly happened.  But immediately stopping it is the right thing to do.

And just as an analogy, we had something similar happen to us.  It didn’t involve harm to humans, but it did involve—there was an empathetic problem that we had.  Essentially, we had introduced a chatbot called Tay.  And we were using it to try and understand how natural language input could look inside of an application.  So it was really kind of a tool that we were learning with, but we had done something kind of fun with it; we had created this persona of a young—I think she was 19, 20-year-old young woman.  And she was kind of hip.  But she was all driven by an AI algorithm.

And what we did is we just wanted to understand how humans would interact, so you could ask a question using just sort of your normal language, like we’re talking here right now, and Tay would respond.  And so we put it up on Twitter, and very quickly, within hours, a small group of people had targeted Tay, realized Tay could learn, and trained her to be racist.  And immediately, it of course caught our attention and we brought it down immediately.  We didn’t know what had happened, exactly.  But we said, “This is hurting people.  This is offensive.”  And we came back to that core of empathy.  This is hurting our users, it’s hurting our employees, we’re taking it down.

And so we took it down, and we went back through and kind of analyzed everything, but I want to just share a story that happened the next day.  It’s very interesting.  The team that had worked on it is just a brilliant team of scientists.  They’re very deep in natural language processing.  Just an incredible team that we had internally.  Now you can imagine how they felt.  They were demoralized.  They thought, “What happened here?”  You know, to their product that they worked so hard on.  And again, Satya, our CEO sent them an email and he said, “Look, first of all, we did the right thing.  We were offending people; we took it down.  But I want you to know—let’s use this as a learning moment.  Let’s not shelve this technology.”  Because the technology is very, very good.  You could imagine it could be answering questions for maybe elderly people who are homebound, and giving them real-world answers to their questions.  That’s the sort of thing we want to continue to promote.  And he said, “Let’s just do a reset.  Let’s work to understand what went wrong.  But know that I have your back.  Know that you’ve done a good thing with this technology.  And we’re going to keep going.”

So, I think it’s important to respond quickly, to respond with empathy if something happens, to hopefully ensure that you don’t get to that point, but if you do, to take things off the air as quickly as possible.  But not to shelve things.  I think it’s very important.  These are tools that can help us solve things like eradicating diseases and finding solutions for poverty, climate change.  We don’t want to stop that kind of progress.

Harwell:          Yeah.  Tay is an interesting example, and it was sort of early in the process.  And we’re still seeing it now, even with YouTube elevating a video suggesting some of the kids in Parkland were crisis actors to the top of their trending list.  That was sort of a limited use of an algorithm, but it was a case where a seemingly small number of bad actors were able to use a technology for ill.  Is there a way that the engineers can be protecting against that kind of misuse of the platform in the first place?

Johnson:          Yeah.  And I think the way is to assume that this will happen, and then work back from there.  Which is why we were pleased to—it started with Satya’s set of principles around AI, but now with Future Computed, we’ve gone deeper on that.  You have to have these conversations.  You can’t just build a technology in your lab and unleash it on the world.  That’s not being responsible.  You really want to take the proper steps to ensure that the technology will be used for good, and to assume ways that it might not be, and work back from there.

So, we try to instill that in our engineers on these development teams, that they again come from a sense of empathy: “What might go wrong?  Let me work back from there so I can prohibit it.”  And by the way, Tay was relaunched as Zo, also a young woman, on the Kik platform, and it has the proper safeguards in place and we haven’t had any incidents since then.  But it taught us—it trained us [LAUGHS] to understand sort of the limits of the technology, and where humans have to get involved to keep things on the rails.

Harwell:          Right.  You mentioned some of the good use cases of AI, probably that people don’t know that much about.  And I was reminded this morning, seeing the picture of Jeff Bezos with the Boston Dynamics robot dog—

Johnson:          Oh yeah.

Harwell:          —that I feel like is probably the most overexposed, over-photographed robot of the modern era.  What uses of the technology are being overlooked?  And which ones are being sort of overexposed in that way, where we’re thinking about it too much?

Johnson:          Right.  So there are already several applications of AI.  Even, for instance, the mapping on your phone.  That’s an underlying application of AI that I think probably most of you really appreciate.  I know I do whenever I’m walking around a new city.  So there are already many use cases out there, a few of them that I’d love to highlight.

We just released an app called Seeing AI, and it uses AI for image recognition for people who are blind or have low vision, who can take their phone with them, and now it sort of gives them some freedom.  They can read currency.  We’ve had reports of folks going into grocery stores and finding the right spices for the first time.  We had a great story from a young woman who said that she was able to cook dinner that night, having shopped on her own.  And so it’s opening up doors for people who previously had to maybe rely on another human; now they can be augmented by this AI.  I think that one is just a small use case, but to a lot of people it’s a very, very important, very freeing opportunity to be able to use that.

Another one that we were involved in recently is with Adaptive Biotechnologies.  They are looking—their endgame is actually to be able to find—to be able to develop a universal blood test to map the auto—or to map your immune system, which you can imagine then could be used for things like autoimmune diseases or early detection of cancer.  And the reason that this partnership came together is they’re the experts in this area, in studying the immune mapping of humans.  The problem is it’s a lot of data.  It’s just massive, massive amounts of data.  And if you have to rely sort of just on your standard algorithms, it’ll take too long to come to the conclusion that they want, which is this universal blood test.

So, we teamed up with them, and we are helping them sort through that data, using AI.  And that work is underway.  And I think it’ll be very important work eventually, because the idea with this early blood test is early—or with the blood test is early detection of cancers and trying to understand autoimmune diseases, which are very, very complex.  And I think it’ll give us some insights in there that we haven’t been able to see without having this AI in place.

Harwell:          Yeah.  Okay, I think this is all we’ve got time for.  Last sort of question.  Where do you think we’re going to be paying attention to these AI stories in the next year, five years?  What should we be looking for?

Johnson:          Well, I do think, as that last story just showed, healthcare is an area.  I think it’s areas that have a lot of data.  Those are the areas where, now that we can access them and reason across them, we’re going to see some impact.  So healthcare, and, I think, the financial services area.  I think in areas like climate change, we have massive amounts of data and we just need the ability to sort through it, and AI is going to be a tool that will allow us to do that.

Harwell:          That’s good.  All right, well, thank you, everybody, for joining us.  Next I’d like to welcome my colleague Anna Rothschild.  This has been Peggy Johnson.  Thanks so much.

Johnson:          Thank you.

[APPLAUSE]

The Future of Work: AI and Automation

Rothschild:      Good morning.  I’m Anna Rothschild.  I’m an on-air science reporter for The Washington Post, and I’m also the host of an upcoming science show from The Post called Anna’s Science Magic Show Hooray, which is a science variety show geared for kids, parents, and curious people everywhere.

I am very honored to have these wonderful guests with me today.  I’d like to introduce Peter Schwartz.  He’s a futurist and senior vice president of strategic planning at Salesforce.  Mona Vernon is the chief technology officer at Thomson Reuters Labs.  And Douglas Terrier is the acting chief technologist at NASA.  So, thank you all for being here.  And I’d like to remind you all that you can tweet your questions to us using the hashtag #Transformers.  So, go for it and make them great.

So, we’re about to talk for half an hour about artificial intelligence and the future of work.  But I think if you asked different scientists, they would actually each have different definitions of what they’re talking about when they talk about artificial intelligence.  So I’d actually like to start by asking each of you: when we’re talking about AI and how it will transform work, what are you talking about?  And maybe even give some examples from your own work of how we’re defining this.

Schwartz:        Well, look.  I think—in fact, it was mentioned in the panel with the senators, who I thought were really good, by the way, this morning.  I thought they were unusually well-informed and thoughtful.  That having been said, I was actually there at the beginning of artificial intelligence research.  I started my career at Stanford Research Institute, and back then the idea was a top-down model of AI—understand the brain, put it on a chip, and it will behave like a human brain.  But the problem is understanding the brain.  And it’s taken us a very long time and we’ve got a long way to go.

The new model of artificial intelligence is bottom-up; that is, let the machine learn rather than teach it.  And that now works.  And so there are many, many, many instances where we can create the opportunities to use algorithms to make sense of data and actually perform useful, cognitive functions.  That’s what we mean by AI today, not the old model of something that looks like a brain put on a microchip.  That’s why Elon is wrong, by the way.

Rothschild:      I was going to ask.  Do you guys have the same definition, at least for the purposes of the conversation we’re having today?

Vernon:           I think so.  I think the piece that is really important to bring up, which Peggy mentioned, is that the algorithms have gotten better thanks to a confluence of trends like big data, and that’s exactly it.  We can train this bottom-up view, like Peter is saying, but it’s not just magic; it’s about bringing in intelligent data, understanding that data, and bringing human expertise to validate those insights.  So, this whole conversation about why bias is happening—my argument is that folks forgot the scientific method.  Like, there is a way to check your work and put some context around it.  So, I agree, but I think it’s not just the algorithm; it’s the combination of the human expertise and the quality of the content that you feed the algorithms.
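
[Editor’s note: as an aside on Vernon’s point about checking your work, the standard check is to hold out data the model never saw during training and validate the learned insight against it.  Below is a minimal, hypothetical sketch of that step in Python, assuming scikit-learn is available and using synthetic data in place of any real corpus.]

    # Minimal sketch of the "check your work" step: evaluate on held-out data.
    # Assumes scikit-learn; the data is synthetic, standing in for a labeled corpus.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # Keep 25% of the data aside; the model is only ever fit on the training split.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # If test accuracy falls far below training accuracy, the "insight" has not
    # generalized, which is the human expert's cue to question the model.
    print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
    print("test accuracy: ", accuracy_score(y_test, model.predict(X_test)))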

Terrier:             And I would agree.  I think, really, we should think of artificial intelligence as augmenting human efforts, not really something that replaces or substitutes for them.  So, if you think about the—certainly in our world in space, and in every field of technology, we have a volume, veracity and velocity of data that’s just overwhelming for humans to deal with, as we see more and more connected smart devices and so on.  We have that in our field as well.

So, we need machines to help us process those volumes of data into real information, into knowledge, in decisions that can help aid human decision-making.  And I think that’s where artificial intelligence can really be an augmenting tool.

Rothschild:      Would you say then that the biggest sort of misconception about artificial intelligence today and how it applies to the future of work is thinking about it as a replacement for humans?

Schwartz:        Yeah, I think that’s right.  Look, I think you’re absolutely right.  What we’re really talking about is augmented intelligence, making people more capable in most instances.  Your paper yesterday had a very good article on warehouses—right?  On FedEx, et cetera, who are putting robots alongside workers to make them more productive, rather than replace them.  And I think we see that.  I think one of the great myths right now, for example, is about truck drivers.  Right?  That we’re going to eliminate all the truck drivers and taxi cab drivers and replace them with robots.  Not true.

First of all, we need more driving, more truck driving.  We are moving to this delivery economy where in front of my house every day at least a dozen trucks go by and half of them stop at my house.  My wife’s the professional retail researcher, we call it.  [LAUGHTER] And at least half of them stop at my house.  Having said that, we need more truck driving.  And so the truck driver of tomorrow is actually going to behave like the guys who fly remotely piloted vehicles today.

Outside of Las Vegas there’s a building where a number of Air Force pilots just go to work every morning and fly drones over Afghanistan and Syria and elsewhere.  The truck driver of tomorrow will be like that.  They may go out to their garage, get in their truck-driving pod, pick up their first truck—pick up their load, put it on the freeway headed toward Phoenix.  They then may pick up the second load, do the same thing headed toward Las Vegas, and so on.  They’ll drive it around the city streets, supplementing the AI on board, and the AI will drive down the freeway.  So, they’ll be driving five trucks, not one.

And in fact, a company in Las Vegas called Starsky Robotics is actually building it right now; they think they can do 10 trucks.  I think that’s ambitious, but let’s say only five trucks.  So, more productivity, and the truck driver goes home and gets to sleep with their spouse at night, gets to know their kids, doesn’t suck fumes all day, and doesn’t die in an accident.  That’s the truck driver of tomorrow, and the skill set is Grand Theft Auto.  [LAUGHTER]

Rothschild:      So, Mona, I know that you have maybe some different ideas about which sectors are going to be most impacted.  Do you think truck driving and that sort of sector is going to be most impacted, or what’s your thought?

Vernon:           So, I’m going to focus on the sector I know.  Thomson Reuters serves knowledge workers, and I agree with Douglas that it’s about augmenting them.  In fact, I talk about giving knowledge workers superpowers.  So, let me give you a concrete example.

Interestingly, data privacy has been in the news in the last couple of days, and one of the challenges for data privacy experts in large companies is that they’re dealing with an increasingly complex data privacy regulation environment, and a proliferation of information.  So, we did a survey of a thousand data privacy experts, and 44% of them felt that they might fail to comply because it’s getting increasingly complex.  So, we developed a tool combining the human expertise we have with our legal experts, the quality of data, and feeding into algorithms, to come up with a tool to give data privacy experts this augmentation, this superpower to make sure that, for instance, they don’t miss information that they would otherwise be looking for.

So, there’s a discovery feature that’s powered by Watson and combines our data, and it helps them make sure they’re done doing their research and gives them a feeling of confidence.  So, I think that translates into knowledge work.  Knowledge workers are going to get superpowers with this combination of AI, trained by the right data and validated by experts, and they’re just going to be doing more and more exciting things, and perhaps do less of the really boring tasks.  So, it is similar to what Peter is talking about, but I think it definitely is going to happen.  It’s already happening for knowledge work.

Rothschild:      Is there a new skill set that knowledge workers will need to gain in order to do their jobs, or is this just really going to take away the more tedious stuff?

Vernon:           That’s how I feel.

Rothschild:      Do they need to learn to play Grand Theft Auto?

Vernon:           No, I think it’s—if you really think about designing user experiences that truly understand how a lawyer or a data privacy expert works, then what they’re getting is a tool that gives them superpowers; they don’t need new skills.  Rather, they’re not going to do the tedious part of their work, and they can really focus on what they’re good at.

Schwartz:        Look—oh, sorry—

Rothschild:      I just wanted to pause because the iPad seems to be missing and I—

Schwartz:        It’s right here.

Rothschild:      Wonderful.  Thank you so much.  Sorry.  Go on.

Schwartz:        Look, I think the point was just made that is really quite central.  There are going to be jobs; the jobs are going to change; new skills are going to be needed.  So, the central task we really face is an educational and retraining task.  And I think that’s the single most important thing.  I’m really not at all worried about whether there are going to be good jobs out there, and whether there are going to be opportunities for people.  History supports that.

Look, in 1950, we had 60 million people working; today we’ve got 160 million people working in the United States.  We created a hundred million jobs.  We’re very good at that.  The real question is skills retraining, and here I think the private sector is very important, because most of those people work for us.  So, our responsibility is retraining, reskilling, providing the tools for people to be able to actually do that in their jobs, where they work.  The opportunity for that 50-year-old truck driver who didn’t play Grand Theft Auto to actually be able to drive that new truck.

Terrier:             So, I think that’s a really, really important point.  This is not something, by the way, that is just about to happen; it’s important to realize it is happening, and it’s gradually going to continue to happen.  And if you think about the jobs that we have today, probably half the jobs that young people are going into didn’t exist when I went to college, and that’s going to be true when you look into the future.

So, yes, there will be jobs that will change and the skill set will change, but there will also be myriad jobs created that don’t exist today.  So, I think there’s a lot of opportunity.  And I think Peter’s point about retraining is really important, and we need to recognize that probably the biggest difference in the way we think about careers today is that the old model, where you go to college, get one skill set, and you’re good for a career for life, is changing.  We’re going to have to keep updating our skills as the technology continues to change, because as I said, it’s going to be a continuous process.

Rothschild:      Do you think, then, we need to actually change our education system so that we can adapt throughout our lives?

Schwartz:        Yes.  I mean, this is about learning, relearning, and relearning.  Fortunately, one of the interesting technology trends that’s going on is a massive investment in what we call ed tech, educational technologies, to enable people to learn in context, in their work, in their schools, back home, et cetera.  The private sector is investing about $15 billion in these new technologies.  We at Salesforce are investing in something we call Trailhead—allowing our customers and anyone else to learn about the technology on their own, at their own pace.  And I think that’s the kind of investment that every company is going to need to make.

Rothschild:      Right.

Terrier:             And I think one of the things that we should point out is that a lot of people think you have to be directly involved in one of these sectors to see the benefit, right?  So if you take the work that NASA’s invested over the last decades in artificial intelligence or computers, for example, and if you think about how I got here today with the GPS on my phone, how you operate when you’re driving to go visit your relatives—we really all benefit in many ways, and other sectors of the economy benefit from the introduction of these technologies in other fields.

Rothschild:      I actually wanted to ask you about this in particular, because we all know that there’s so much NASA technology that ends up being repurposed for things other than space travel or space exploration, and now we’re using it in our iPads or in our kitchens.  So, are there particular projects that NASA is working on now that you can talk about that you think will have a big impact on all of our lives in the next five, ten years?

Terrier:             So, absolutely.  And again, I think it’s important to point out that this has been the case for some time, right?  So, the first miniaturized computers were developed during the Apollo era, which Peter worked on, to create the capability to do all the processing and augment the human capability to do all those orbital mechanics computations which humans couldn’t do.  And we continue to do that as we push humans further out into space.

One example that I think is really interesting: in the search for Earth-like planets, we have to use a lot of machine learning and artificial intelligence to process the volumes of data.  No human can look at that volume of data and sort out the patterns in it, so we use artificial intelligence to do that.  Those same algorithms are used in any big data set when we try to look for patterns.  You’d be surprised to learn—so, for example, we use that same kind of processing on Landsat data, where we look at crops around the country, or where we look at how to reprogram airplane flight paths—those same algorithms are applied there, and in a myriad of other cases.
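
As a rough illustration of the kind of pattern search Terrier is describing, here is a toy sketch in Python on synthetic data (not NASA’s actual pipeline): scanning a long light curve for a periodic dip that no person would spot by eye.

# Toy example: recover the period of a small, repeating dip (a crude transit
# signature) buried in a noisy light curve. All data is synthetic.
import numpy as np

rng = np.random.default_rng(1)

# 90 days of flat, noisy star brightness with a 0.5% dip every 3.7 days
# lasting 0.1 days.
t = np.arange(0, 90, 0.01)
flux = 1.0 + rng.normal(0, 0.001, t.size)
true_period, duration, depth = 3.7, 0.1, 0.005
flux[(t % true_period) < duration] -= depth

def dip_strength(period):
    # Fold the light curve at a trial period and measure how much fainter the
    # first `duration` of each cycle is than the rest; aligned dips stand out.
    transit = (t % period) < duration
    return flux[~transit].mean() - flux[transit].mean()

trial_periods = np.arange(0.5, 5.0, 0.001)
scores = np.array([dip_strength(p) for p in trial_periods])
best = trial_periods[np.argmax(scores)]
print(f"recovered period: {best:.3f} days (true period: {true_period} days)")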

Vernon:           And to add to Douglas’s point earlier—I mean, there are still a lot of jobs that we haven’t created.  So, let’s give an example.  Companies in the private sector have hired a lot of data scientists who usually do really well at that front-end thinking about the first use of an algorithm.  I’d argue that we need industrial process control through the full life cycle.  How do you qualify the quality of the data, your assumption base?  How do you then check how the model is improving and evolving over time?  That’s a great set of jobs that probably can be created and that are fundamentally going to be important to industrialize this technology and deploy it more widely.

Rothschild:      How can a company then sort of prove to its customers, to its consumers that its algorithm is working equitably and fairly?  I mean—and we don’t need to talk about how to actually prevent bias in the data—I mean, how can a company express this to the people they’re serving?

Schwartz:        I think you’ve gone to perhaps one of the most difficult issues.  I’m not worried about AIs taking over and runaway robots, but this is a real issue.  And the reason this is an especially difficult issue is because as the algorithm gets more and more sophisticated, it learns, becomes more complex, and frankly, it becomes very difficult to unpack why it made a particular recommendation.  And if it’s made a recommendation about whether you get credit for your house and you don’t, you want to know why.  And that is the challenge of unpacking that algorithm.

And let me say, I don’t think it’s easy.  I think simply opening the box and looking inside isn’t going to tell you very much, because, in fact, the algorithm is not the one we started with; it’s evolved itself.  So I do think this is a bit of a challenge for the industry, to try and figure out the rules for algorithmic transparency.
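
One concrete way to start "unpacking" a trained model, offered here only as a minimal sketch on synthetic data (the feature names are invented labels, not anyone’s real credit model), is to measure how much each input drives the model’s decisions:

# Permutation importance: shuffle one feature at a time and see how much the
# model's accuracy drops; the features it leans on most cause the biggest drop.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic "credit decision" data: five features, some informative, some noise.
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           n_redundant=0, random_state=0)
feature_names = ["income", "debt_ratio", "payment_history", "age", "noise"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:16s} {importance:.3f}")

This only explains which inputs matter, not why a specific decision was made, which is part of why the transparency problem Schwartz raises remains hard.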

Vernon:           So, the way we manage that is that our innovation labs stay really close to cutting-edge technology research.  For example, the exact topic Peter talked about is being studied at MIT’s AI lab, CSAIL.  And I think there is a role for continuously asking those questions, but also for working really closely with the researchers who are trying to unpack these difficult questions.

Terrier:             And I think it’s important to say that consumers have a little more power than they might think in this, as well, right?  So, when you have humans making these decisions in the current systems, we have choices, and in the free market we’re able to choose who we trust, who we don’t trust—and as Peter said, it’s going to be more difficult because of the complexity of these algorithms—but at the end of the day, I’m confident that the market forces will—that providers that provide the capability that consumers are looking for will rise to the top.

Schwartz:        And remember, it is likely that in the end, as we get very good at this, the algorithm will actually be less biased than human beings.  The algorithm won’t be racist; it won’t have attitudes about men and women; it’ll look fairly objective.  And look, the issue was raised in the very first panel, about the accident that just happened in Tempe.  It’s important to remember that on that same day that that happened, there were probably 99 humans killed by other humans in accidents, so it was 99 to one in favor of the machine.

Terrier:             Yeah, that’s a super important point, and I think we are always going to be concerned if there’s any software glitch.  You know, we’re always concerned about any—certainly, a tragedy of human life—but I think that you have to assume that the benefits far outweigh the potential dangers.

Rothschild:      Well, what do you say—I know this was brought up earlier—what do you say when people like Elon Musk say that AI is more dangerous than nukes?

Schwartz:        Yeah, well, someday maybe.  Right?  We’re a long way from that understanding of the brain sufficient to give an algorithm, a robot, the kind of autonomy and capacity to make judgments that a human being has, and to do really evil things.  We’ve got a lot of bad people to deal with long before we get to bad robots.

Terrier:             And speaking of bad people, I think that statement’s an interesting statement, because if you unpack it—yes, there are things that are very dangerous today, but we have systems in place and responsible legislators and governments that control those things, and we need to have the same kind of process going forward with artificial intelligence.

Rothschild:      Mona, do you want to comment?

Vernon:           I’m not adding to that.  [LAUGHTER]

Rothschild:      Switching gears a little bit: so much of our culture generally is forged at the workplace, and I’m wondering, in the future, as AI changes what jobs exist and where you’re actually doing your job, how you think that might change our society generally.

Schwartz:        Well, we have a great example that actually just recently played out in Michigan with Steelcase Furniture.  Steelcase is one of the big manufacturers of office furniture—I wouldn’t be surprised if everybody here is sitting on a Steelcase chair; it might be possible.  And in the 1990s, as labor costs rose, Steelcase moved a lot of those jobs for making furniture to Mexico.  But what’s happened in recent years is they’re now bringing a lot of them back to Michigan, and they’re pairing workers with robots.  The machines are doing the heavy movement and holding in place; the human beings are doing the fine work, and so on.

And so, what you’ve done is upgrade those jobs: made them more skilled; paid them better.  They’ve brought back about 80% of the jobs—they’ve lost a few, but they’re better, higher-quality jobs.  And what that’s telling us is that in many, many instances, some of the more painful, difficult, physical tasks—not only boring, but things that actually wear on workers, that bring them down over the decades—are actually going to be dealt with by the machines.  What remains are the areas where human judgment, skill, refinement, capability, control are required—and, actually, teamwork.  The most important skill for a human being is human empathy, the ability to work with other people.  It isn’t software programming; it is that capacity for collaboration.  That’s the skill we most need to develop, because that’s what will be uniquely human.

Vernon:           I agree, and I think for me there’s a couple of things.  One is that if you’re a knowledge worker, right?  So, if you’re a lawyer, you work in finance, or basically your way of making money is using your brain, it’s going to be a basis for being competitive to be augmented by these tools.  You will see that happen across industry.  So, not having it is being left behind.  The other piece is, to build really useful AI tools, it fundamentally requires rethinking how we think about design, and I’ve been really excited to think about this topic, which is how do you design useful tools to help knowledge workers be more effective?  And that’s not something that today a robot can do.

So, one is that the adoption of those tools is going to become table stakes and a basis for competition, so it will change the workplace for knowledge workers.  And the second is that, for those of us who are building those tools, being able to bring a design-thinking approach to them is going to be a critical aspect of getting those tools adopted.

Rothschild:      Are there particular things that you think we should be doing, then, to foster these—I know this is kind of a silly thing to say, but to foster thinking about empathy in the workplace, in order to create people who can work better with AI in the future?

Schwartz:        Well, look, the book Emotional Intelligence, a number of years ago, by Daniel Goleman, I think was really quite a profound shift—that is, to recognize that intelligence also has its emotional components, and that it can be developed, it can be trained, it can be tested for.  And you can actually work with your own workforce to make them more emotionally intelligent, and therefore much more productive.  So, I think we actually do have the tools to be able to make that human workforce much more capable of collaborating with other humans using machines.

Terrier:             So, it’s really interesting when you think about how far we’ve come in such a short period of time.  In the ’50s, Alan Turing proposed this test for AI: you can’t distinguish the machine from a human being, right?  So, that’s a really interesting concept, not necessarily the way we think about it today, but it’s important when you think about this emotional relationship.  So, this is the first generation coming up now that will live and work among intelligent machines as an integral part of their team, and I think that has profound implications in society and in the way our workplace operates.

We have a situation at NASA where, for many years now, we’ve been extending human capabilities through artificial intelligence and robots on Mars or through our observatories in space.  We very much consider those machines as part of the team, and the way the team interacts with them has been really interesting to watch, how that’s evolved over time.  And I think as we get more sophisticated in machines, we’ll build in that empathy and that humanlike quality in the machine to help that relationship work better.

Rothschild:      I would love to know a little bit more about how the people who are actually working with these machines—whether they’re ascribing human emotion to them at this point.

Terrier:             So, I’ll tell you a really interesting story.  I’ve worked primarily, for most of my career, in the human space flight arena, and when we have a launch or we have a dynamic event, there’s a lot of emotion in the control room because of the concern about human life.  I had the opportunity to be out at the Jet Propulsion Laboratory when we were landing the Curiosity rover, one of the rovers on Mars, and the tension and the hope and the prayers and the emotion in that room were just as palpable because of the connection that people had with that machine they’d invested decades of their life in designing.  So, yeah, it’s very much a real relationship.

Schwartz:        In fact, that Mars lander, the vehicle that came in, was one of the great AI achievements of engineering, one of the all-time greats, and they managed to pull it off the first time, the only time it was ever done.  It was one of the great achievements by NASA.

Rothschild:      The thing that’s sort of complicated, though, is that the minute we start ascribing emotion to a machine, this is where we get into the territory that I think scares people.  And I just wonder—

Schwartz:        Well, but we used to name our cars—you know, I was a car guy when I was a kid, right?  And people named their cars Betsy and Bill and stuff like that.  We’ve had these kinds of relationships with machines that we share.  We probably didn’t name our washing machines, but our cars had a kind of character to them, and they identified with who we were.

Terrier:             How many people have hit their computer when it was giving trouble, right?  The computer actually doesn’t care; that’s about you.

Rothschild:      But that’s just to fix it.

Terrier:             Well, it actually doesn’t fix it, but it makes us feel better.  And the point is that the emotional component is actually not a function of the machines.  This is not something that we need to worry about with the machine; this is actually our issue that we need to deal with.

Vernon:           And just to add to Douglas’s point earlier—I don’t think we’re sitting at a sudden change; it’s a continuous evolution, right?  Many of us have had components of AI technology in our products over the last 25 years; it just so happens that for some reason there is an excitement around that buzzword today because of how mature it’s getting.  But it has been a slow evolution.  Your iPad is powered by AI; it was powered by AI five years ago.  We just talked about big data and the cloud and not necessarily this.  So, what that means is that we have a continuous opportunity to get used to the more powerful tools that are augmenting us.

Terrier:             Yeah, I think that’s a really great point.  If you look at very, very simple things—when you push that button that says potato on your microwave, that machine’s doing a lot of sensing, making a lot of decisions about how long to cook.  When you hit the brake on your car, hundreds of times per second the car is measuring the rotation speed of the wheels and making a decision—you’re not actually controlling that brake; the machine’s doing that for you.  And we get used to these things, as Peter said, because the outcome is better for us: we stop quicker and more safely.
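
The braking example can be made concrete with a deliberately simplified sketch: illustrative logic in Python, not any manufacturer’s firmware, and the thresholds are invented.

def abs_step(wheel_speed, vehicle_speed, brake_pressure):
    """One sense-and-decide cycle, run many times per second.

    Speeds are in m/s; brake_pressure is a 0..1 fraction.
    """
    # Slip ratio: how much slower the wheel turns than the car is moving.
    slip = 0.0 if vehicle_speed == 0 else (vehicle_speed - wheel_speed) / vehicle_speed
    if slip > 0.2:        # wheel nearly locked: release some pressure
        brake_pressure -= 0.05
    elif slip < 0.1:      # wheel rolling freely: safe to brake harder
        brake_pressure += 0.05
    return min(max(brake_pressure, 0.0), 1.0)

# Example: a wheel at 20 m/s while the car moves at 30 m/s is slipping badly,
# so the controller eases off the brake rather than letting the wheel lock.
print(abs_step(wheel_speed=20.0, vehicle_speed=30.0, brake_pressure=0.8))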

Schwartz:        I’m a pilot.  Yesterday I flew here; I didn’t fly my own plane here, but an AI flew me here, except for about 20 minutes, the take off and the landing.  And the AI could have done that.  But, having said that, the autopilot flew the plane across country.  Every one of us does that all the time, and we never think about it.  We’re so—I completely agree—this is a continuum that we have been moving toward over a long time.

Rothschild:      Great.  We only have a few minutes left, and I would love to talk about the impact of AI sort of internationally.  So, first of all, I’m curious to know how you think AI will sort of transform industry in the developing worlds in the next few years.

Schwartz:        Well, look, first of all, I think this is a technology that is increasingly accessible to many people.  It used to be a thing that existed in the laboratory.  But many companies—Amazon, ourselves, Microsoft, Google—are all providing the tools for the public to be able to use.  And that means in places like Africa, Southeast Asia, Latin America, there are young academics, young start-ups, actually picking up these tools and inventing new ways of, for example, distributing medicines in Africa using drones and AI to get to incredibly remote locations that they wouldn’t have been able to reach otherwise.  And so you’re seeing this wave of young entrepreneurs who are actually picking up the new tools and creating new kinds of products and services that would not have been possible otherwise.

Terrier:             Yeah, that’s a really great point.  And even at NASA, for example, one of the things we have is outposts where we’re putting humans.  We’re trying to put humans all the way out to Mars, so a lot of the intelligence that normally would reside in a control room here on Earth, whether it’s doctors, medicine, technical information, and so on, we need to make available in artificial intelligence systems because of the light-time delay.  That same technology is available for remote medicine, for austere areas where people are separated by great distances, to enable them to have access to all the technology.

And I think in many ways, just this technology in general is a great equalizer internationally, for particularly the underdeveloped countries that don’t have the traditional industrial infrastructure.  They can leapfrog that and jump right into this arena.

Vernon:           I agree.  I think the way to think about it is, under the umbrella of AI is the maturity of a set of digital technologies that now can really help emerging markets leapfrog.  So, we talked about mobile, social, which is a lot of data and connecting people through a proliferation of data.  AI is that component that brings it all together and really creates digital first solutions that are truly innovative across many, many sectors.

Schwartz:        And look, learning in the developing world is—just like for us, but even more important in that part of the world—how people actually see the opportunities to develop.  And the new learning technologies, universally distributed, create a really remarkable opportunity.  Wikipedia is the greatest antipoverty tool ever created.  Any kid on the planet with a device like this now has all the world’s knowledge available to them.

So, I think we’re in an era that enables AI to enable education, learning, and productivity all over the planet.

Rothschild:      And do you think it will help connect the kid with the iPad maybe someplace in Africa to—

Terrier:             Absolutely.

Schwartz:        Absolutely.  I think it is.

Terrier:             I think it already is.  The social networks are borderless, right?  And I think Peter’s point is really important, that it’s, again, a great equalizer.  Kids anywhere can have access to all the world’s information, and that is a remarkable, remarkable tool, and step forward.

Rothschild:      Great.  Well, we are about at the end of our time.  I just want to thank you all so much for being here.  This was a great discussion.

We are going to take a quick break, but the next part of our program will begin momentarily.  Thank you so much.

[APPLAUSE]

Content from Software.org: the BSA Foundation: Sponsor Segment: Maximizing the Benefits of AI

Haddad:          Good morning.  Hi, everyone.  I’m Tammy Haddad, here with Victoria Espinel.  Put your phones down, because she’s the guru.  Let’s hear it for Victoria.  [APPLAUSE] Okay, somebody left this.  Is this a hint?  Are these questions I’m supposed to ask?  All right, I’ll just leave it over here.

Thank you so much for being with us.  We’ve heard so many amazing things.  But you can put your phones back on, because you’re going to want to write down everything she says.  Victoria was the nation’s first chief intellectual property enforcement—were you an officer or a coordinator?

Espinel:           Coordinator.

Haddad:          Coordinator, not officer, in the Obama administration.  And she runs BSA, The Software Alliance.  She is talking to world leaders, she is talking to the greatest scientists, the greatest companies, about what’s going on.  It’s a privilege to be here with you.  And I have to ask you first—we’re talking about artificial intelligence, obviously, but can we just back it up and talk about what is happening, who is doing it, and how they’re doing it?

Espinel:           That’s great.  So, thanks, Tammy, and thanks to everyone for being here today.  So, artificial intelligence, at its essence, is built on data, on lots and lots of data, on much more data than human beings can process themselves—although you can process a lot of data, Tammy—but more data than most human beings can process.  And so, when people talk about artificial intelligence, and when they talk about how to build it and train it correctly, a lot of what they’re talking about—what that means—is how do you make sure that the information, the data, that is being used to build or create or train the artificial intelligence is as good as it possibly can be?  How can you make sure it’s as accurate and complete as it can be?

Because if the data that is used to train the artificial intelligence is incomplete in some way, then the output is going to be skewed.  So, when people talk about bias in artificial intelligence, really what they’re talking about is, how do you try to make sure that the information and the data that is being used to train the artificial intelligence is as good as it possibly can be?  And there’s a few different things in my opinion that need to happen in order to make sure that that happens.  But really, bias—eliminating bias in artificial intelligence is about trying to make sure that the data that goes into it is as good as it possibly can be.
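
To make that concrete, here is a toy sketch (synthetic data, scikit-learn, and not any company’s system) of how under-representing part of the population in the training data skews the output against that group:

# Toy example: a model trained mostly on one group performs noticeably worse
# on the group that was barely present in the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature synthetic group; the true decision rule differs slightly by group.
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + 0.5 * X[:, 1] > shift.sum()).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa_train, ya_train = make_group(1000, np.array([0.0, 0.0]))
Xb_train, yb_train = make_group(20, np.array([2.0, -1.0]))
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa_train, Xb_train]),
    np.concatenate([ya_train, yb_train]))

# Evaluate on balanced test sets: the errors concentrate on the
# under-represented group.
Xa_test, ya_test = make_group(500, np.array([0.0, 0.0]))
Xb_test, yb_test = make_group(500, np.array([2.0, -1.0]))
print("accuracy on well-represented group A:", model.score(Xa_test, ya_test))
print("accuracy on under-represented group B:", model.score(Xb_test, yb_test))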

Haddad:          So, is that people?  Is that code?  I mean, is there a code of ethics?  How do you do it?

Espinel:           So, there’s not yet a code of ethics.  That is something that a lot of companies, including Software.org companies, are talking about.  But I think it’s a few things.  So, part of it is people; part of it is making sure that the data scientists inside the companies are trained as well as they possibly can be.

Haddad:          Who trains them?

Espinel:           But a big part of that—I think probably a bigger issue—is trying to make sure that the information is as good as it can be.  And in my mind, there are at least two different components of that.  One is trying to have as much information as possible.  So, part of that is, for example, trying to get the government to share all the information that it has with companies that can use it, in order to make sure that what they’re building is as good as possible.

Another part of it is trying to make sure that those who are training the AI themselves have a diversity of backgrounds and experiences.  So, something that I feel very strongly about is, you have software that’s disrupting the world; artificial intelligence is disrupting the world.  And when it’s being developed, created, built, trained—whatever words you want to use—the people at the table have to have a wide range of experiences and perspectives in order for it to be as good as it possibly can be, and that is something where we need to do more work.

Haddad:          So, that’s my question.  How do you do that work?  We already know there’s STEM programs, there’s a variety—how do you get more diversity?  Because that’s really what you’re talking about.  More diversity of opinion, experience—how do you get that into the program?  We all know, right?  We all agree there needs to be more diversity.  Do we agree?  Okay.  So, how can you do that?  I mean, because you’re talking about taking this entire industry, all of the companies, governments, everyone around the world, together, are trying to design—they’re making these decisions, right?  So, how do you add that in, in advance?

Espinel:           So, I don’t think—like anything that’s complicated, there’s not going to be a single answer.  But here are a few ideas.  One is, I think it’s really important that governments—and I would include the United States government—be rethinking the educational system to make sure that at a young age children are being exposed to coding and computer science as early as possible.  And that won’t just help get more girls into coding—which is part of the issue that we have—but it will also, I think, help ensure that the promise of economic opportunity coming from this is spread out across the United States.

So, my dream would be that for any young person, regardless of where they live in the United States, it is a feasible future for them to go into this area if they want to.  And that is not the reality today.  So, part of it is focusing on gender, but part of it, I think, is very much focusing on making sure that anywhere that you sit in the United States, this is an opportunity that you have, and there’s a lot of work to be done there.

And so, I think government has a big role to play in terms of educational curriculums.  You know, the tech industry and the software industry have already been doing a lot in terms of programs that we’re supporting.  And that’s great, and that will continue, and we will be doing even more.  But I also don’t think it should be—and it’s not good for society for it to be—kind of on the tech industry alone, so I think it needs to be the tech industry and the government trying to work together.

Haddad:          Well, that’s funny, because one of the issues in the large picture of the culture today is that people are looking to the company that they work for—whether it’s GE or Under Armour, or you name it—to help them, to advance them.  You should be giving me more job opportunities.  My success goes through what I call the Daddy CEO situation, but it’s actually double in AI, right?

Espinel:           Right.  And just to divert for a moment.  I think one thing that’s really important is that when we in the software industry are talking about training people for new jobs, we are not necessarily talking about training them to get jobs at software companies.  I mean, that happens, too, but more what we’re talking about is training people to get jobs that use tech skills or digital skills, either in the company that they are in, or the company that they want to join, regardless of what sector that is.  Every sector out there is using software today.  And so the training that we are giving people will help them get jobs across industry sectors, not just in ours.

Haddad:          Well, that’s why I want to go back to the economic opportunities, and the application of all this.  So, you’ve done it.  You’ve got more of a diverse workforce; you’ve got more buy-in.  How will all of this be implemented at companies all across the world?

Espinel:           So, companies right now are hungry for people with these skills.  I think an issue that we need to figure out, though, is we have employer demand and we have people that are eager to get these skills.  What we don’t have yet—and hopefully this is something where software and tech can help—is a great matching of those.  So, where employers need people and where there are people that have those skills and want those jobs—how do we make sure that they are coming together?

And a part of that, also, is thinking about whether or not—and again, I think software can be really helpful here—we can spread those job opportunities out across the United States.  One of the things that software lets you do is work from anywhere, and we haven’t really exploited yet the opportunities that could bring workers trying to get access to jobs, either in tech companies or in other types of companies, wherever they are.  In other words, bring the jobs to the people rather than making people go to the jobs.

Haddad:          Is that a gift of AI?

Espinel:           I think that is a gift of software generally, but I think artificial intelligence is one of the areas where—that is one of the benefits that artificial intelligence could bring.

Haddad:          And is there a program that you see right now, in any of the companies or governments—maybe from when you were in the Obama administration—that’s looking at bias this closely and trying to come up with solutions?

Espinel:           So, yes, there definitely are people that are looking at it.  The companies that we work with are looking at it intensely.  But here’s—because I know we don’t have that much time—here’s another element of this that I want to talk to.

Haddad:          Two minutes.

Espinel:           So, we’ve talked a lot, and we will continue to talk, and we should, about how you train artificial intelligence to make sure that it is unbiased—and that is something that absolutely has to happen.

Haddad:          How do you do that?

Espinel:           And part of that is making sure that the information that goes into it is as complete and as accurate as possible and reflects a diversity of experiences.  But that’s about creating artificial intelligence.  I think what we also need to be thinking and talking about, and I hear lots of discussion of this, is, okay, now it’s been created.  How do you use it in a way that eliminates bias, and how do you use artificial intelligence in a way that broadens inclusion?

So, I’m going to give one example that I think is really tremendous, but this is just an example.  And there are so many it’s hard to pick.  So, now I’m upset at myself that I limited myself to one—but just to give one.  There are companies that are using artificial intelligence to help people with autism, who are not as good at recognizing facial cues as others, so that when they are interacting they are getting a correct interpretation of the emotions of the person that they are speaking to.  And if that can happen in a way that is seamless, it will transform the ability of people with autism or Asperger’s to interact in the world.  For one thing, it will open up career opportunities for them that don’t exist today.  But even in just their daily life and their interactions with their friends and their families and their relationships, it will change their lives in a way that will be so fundamental.  And that is one example of a way that artificial intelligence could be used to broaden inclusion.

Haddad:          That’s pretty exciting.  Well, the other thing is, that totally changes the workforce, right?

Espinel:           Completely changes the workforce.

Haddad:          Because you are now bringing intelligent people from another part of society right into the middle of things.

Espinel:           So, if we—one more example?

Haddad:          We have one more minute.  They’re not going to come out for another minute.

Espinel:           All right.  One more example, really quickly, that is even more directly related to workforce: companies right now are using artificial intelligence to rethink their hiring strategies, including how they advertise for their jobs, to make sure that the way they are seeking employees doesn’t have some sort of hidden bias, a bias that they themselves may be entirely unaware of and that is skewing their job ads to be more attractive or appealing to certain types of people.

So they’re using artificial intelligence right now to look at their hiring practices and try to ferret out of them any hidden bias that might be in there.  And that is, I think, another area that is really exciting.  That is, all of these things are nascent—you know, they’re not—they are either only just starting to be deployed or not even being deployed yet—and I think this is just the tip of the iceberg, in terms of the applications of artificial intelligence to reduce bias.

So, fundamental is we need to build it in a way that is unbiased, but then we need to use it in a way that will reduce bias.

Haddad:          Thank you so much, Victoria Espinel.  Terrific job.  [APPLAUSE] Now I’m going to hand it back to The Washington Post.  Thank you so much.

Using AI Responsibly: One-on-one with IBM’s Dario Gil

Harwell:          And I am happy to introduce, this time, Dario Gil, vice president of AI at IBM, a company everybody has heard of.  We’re going to be discussing artificial intelligence and responsibility.  And once again, I ask you, if you use Twitter, to tweet questions to us using the hashtag #Transformers, not the movie.

So let’s get into it.  I feel like now would be a good time, now that we’ve figured out that AI can solve all these problems, to probably get a reality check on where we are in the status of the technology and what we’re going to be looking at in a couple years.

Gil:                  Yeah.  I think that’s right because we tend to oscillate with an enormous level of enthusiasm and ambition.  And sometimes, we think ahead too much, right?  So let’s put it a little bit in context of where we are in AI.

Thanks to the fruit of many decades of progress, I would say that we’re at a point where a narrow form of artificial intelligence has begun to work.  And what that means is that for very specific tasks and objectives, like, let’s say, classifying what may be in a picture, or being able to do good but narrow forms of speech recognition, we have gotten the technology to be good enough.  So that’s this narrow form of AI.

Bookend that with sort of the more utopian or dystopian discussions that people have; that’s what they refer to as “artificial general intelligence.”  That’s, by all accounts, many, many decades away.  But that speaks about a form of intelligence that is more akin to human intelligence, where you can solve problems across arbitrary tasks and domains, and you can keep learning.  You have a high degree of autonomy in your actions.  So that’s the other bookend.

And in between, we’re entering a new phase that we refer to as “broad AI,” which is: how do we go from the very narrow and segmented capabilities that we have today—which are powerful, but very narrow—to the ability to do and solve more tasks?  Meaning, to give an example: if I learn to perform a task, is it easier for me to perform an adjacent task, something that is close by, or, every time I have to learn a new task, do I have to start from scratch?  Do I have to build a new AI system?

I think being able to do this so that we can do broader and broader tasks, that we can partner better with people to complement their expertise, to make it more transparent, is this realm of broad AI that we’re entering.
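
To illustrate the "adjacent task" idea in code, here is a generic transfer-learning sketch in Python using PyTorch and torchvision; this is an assumption on my part about one common way to reuse learned capabilities, not IBM’s approach, and the five-class target task is made up.

# Reuse features learned on one broad task instead of starting from scratch.
# Requires a recent torchvision (the weights= syntax); downloads pretrained
# weights on first use.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network already trained on a broad image task (ImageNet).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the learned feature extractor so only the new head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer for an adjacent task with, say, five classes.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# Only the new head's parameters are updated during fine-tuning, which is far
# cheaper than building and training a new AI system for every task.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)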

Harwell:          And even with the narrow AI that we’re into right now, there’s a question of responsibility for who develops it, and who should take responsibility for the ethics of it; the guidelines that establish how it should be created.  Who should have the responsibility for that?  Should it be the engineers?  Should it be the regulators down the street?  Should it be the consumer that sort of votes with the pocketbook?  Where should that debate be happening?

Gil:                  Well, I think it needs to happen.  It’s a multi-stakeholder process.  Because at the root of AI is its purpose.  What are we trying to solve?  It cannot be a discussion just about the raw technology.  So if you’re doing AI in the context of healthcare, you need to be able to operate within the constraints of the healthcare profession, which are there for good reasons.  And so there are many stakeholders that will be there.  The physicians are stakeholders.  Regulators are a part of it.  And AI needs to fit into that context, and the responsibility has to be shared across all of those stakeholders.

The creator has to take responsibility for what the algorithm is doing.  What data, and who, trains these systems has a lot of implications for the outcome of the system.  So I think we are entering a phase where this sort of narrow discussion (here is a neural network, and here is an example of it doing a good job classifying this particular task in healthcare, or some other area, and it gets a lot of headlines) needs to move toward a responsible use of AI in professions and within specific industry contexts.

So I think we’ve got to get to the point where we’re saying, “Don’t talk to me only about the technology.  Who are you as a company?  What trust do you have?  What responsibility are you taking on in this process?  And do you understand the context in which AI is going to be applied?”  And there has to be a set of principles.  One of them is purpose.  What are you trying to do with it?  At IBM, we take the point of view that AI has to be augmenting human intelligence, not replacing human intelligence.  So purpose is one.

The second one has to do with trust.  And our position is basically, “Okay.  Whose data is coming in to train it?”  We take the position that the institutions providing the data are the owners of that data.  Or, in the case of an individual, that it is their data, and therefore we don’t want to use it for other adjacent purposes, but only for what it has been provided for.  And how do we provide accountability for the algorithms that have been developed and trained?

And the third dimension is skills.  How do we also invest in the technology so that it’s complementary to the skills of the professionals or the individuals who will be using it?

Harwell:          Right.  Let’s dig in on healthcare a little bit because I find that industry especially fascinating.  It touches everybody’s lives, and yet, you know, we’re still talking about moving from paper to digital records.  And there’s all sorts of questions about data privacy, really important HIPAA guidelines, and even just getting doctors to buy into that kind of system.  Where do we sit with AI in healthcare?  What are the opportunities and what are especially kind of the perils for moving too quickly into that?

Gil:                  It is extraordinarily early days.  I think you alluded to the fact that with prior technology advances, we’ve seen the implications in healthcare as they’ve been adopted.  But we’ve also seen that sometimes the pace of adoption takes a long, long time.  And that’s the reality today.  I think undoubtedly—you know, I’ll speak more from the research side, which is my responsibility—we have a lot of evidence that AI will have profound implications for the practice of medicine.

It will have implications in the areas of discovery for the life sciences.  You just look at the sheer amount of genomic information that is available, and how we will actually go and make sense of it and connect it to disease progression—that will be important.  There will be important implications—we don’t know all of them yet—in areas like radiology, oncology, and value-based care.  But to make progress, we’ve got to deliver evidence.  We have to be able to put the systems out there and validate them scientifically and rigorously.  And it is through the demonstration of results, and published results, that we will see increased adoption.

And that is just like the practice of science in general, right?  There are no shortcuts to that.  You just have to do the heavy lifting and the work to prove that the technology’s effective, that it solves problems, and that it provides value added to the institution, to the individual, to the physician.  But adoption also has to do with this element of trust.  If the physicians and the practitioners don’t believe that this is in the best interest of, let’s say in this case, the patient, and they also have to weigh their financial and professional interests, it is very hard to get it adopted.

So those are all barriers that have to be overcome, but there is no doubt that there is a big consensus in the community that it will have a very profound and positive impact if we do it well.

Harwell:          I want to talk about that tension between the adoption and the marketing.  The marketing for AI is very high; it sometimes almost outpaces what the actual product can do.  You know, IBM with Watson, it was sort of pitched as this kind of revolutionary cancer care solution.  But last year there was a partnership canceled with MD Anderson Cancer Center, where there was a big investment on IBM’s part of $60 million.  And there was a question of whether the technology was overpromised and underdelivered to the doctors or the patients.

Can you help me understand how we should be thinking about tension points like that, when there is a question of how effective the product is?  And help us understand IBM’s thinking about how to move forward on that.

Gil:                  Yeah.  Any time one introduces a technology that is going to have a very, very broad impact in the world, there is indeed always a tension between communicating the broad impact that the technology will have on society and getting everybody, all the stakeholders, to think about what it means and where it is going to go.  And I think we would not question the fact that if you asked today all the leading technology players—and not just technology players, companies generally—where they would rate different technologies in terms of their level of impact on the world, I bet most of us would rate AI in the number one position today.

If you look at that today, and then you say, well, what is the impact that it’s going to have across different professions, including healthcare, I think both the healthcare professionals and the technology companies would say it’s going to have an extraordinary impact.  Then the question is, where are we in the progression, which is what you asked.  And the honest answer is we’re in the early days.  And in the early days, you require pioneers to experiment with things.  You try things out, and you demonstrate what works and what doesn’t.  And in the course of doing that is how you make progress.  And there’s no way to shortcut this.

We were chatting before about self-driving cars.  Well, what is the impact that that is going to have?  Are there going to be difficulties?  Yes, but do we believe that over time those difficulties can be overcome?  The answer is yes.  So today we have, you know, a very thriving and very successful business applying AI to health, where tens of thousands of patients are being impacted and benefiting from partnering artificial intelligence—in this case, our Watson solutions in health—with practitioners.

So I don’t think we can judge the level of progress by a specific project where there are a lot of convoluted facts involved.  Look instead at the actual progress that is being made in the field.  And in this narrow form of AI that I was alluding to, undoubtedly, even with the current, limited capabilities to understand documents, to process images, to do segmentation and classification, I would say in the life sciences there’s enormous value to be created.  And that is being recognized, and that is the reason why, in healthcare—and we’re not the only ones saying this.  I mean, I think it was brought up here in the panel.

When the community speaks about where we’re likely to see the biggest impact of AI across industries, kind of everybody’s saying healthcare is very, very important.  It has a lot of implications.  And the reality is we’ve been pioneers on this, and we have a very successful business, and we are growing it.

Harwell:          And just to riff—I appreciate your point on that—do you feel like the companies do overpromise sometimes?  I mean, do you feel like that is a problem for not just you all, but a lot of companies—[OVERLAPPING]

Gil:                  Sure.  I would say, just as a general statement, AI right now is at an extraordinary level of hype.  And again, over the long run, I do think that AI will have that transformative effect on society.  But you know what’s happened too, and I’m sort of seeing it in the discussion here today, but broadly, is, as I like to say now, that AI is the new IT.  And what I mean by that is that all the projects that we used to ascribe to information technology, and analytics, or automation, are now being lumped into AI.

So if you take a more narrow definition of AI, you can say, well, within AI we have a subfield called “neural networks,” and within that we have techniques like deep learning.  And then you look at all the categories where people are saying AI is solving this and that, and you ask, even taking that narrow form: are you actually employing, say, neural networks or deep learning in this solution?  I bet you would filter out 98% of the claims being made; what people are claiming is AI would turn out to be much narrower.

So what happens is that the word “AI” has become a substitute for a lot of other fields that have been lumped in, just because of its excitement.  And as a result of that, you have this tension: when you’re speaking as a scientist, you know, as a researcher, you say, “Well, what is the actual rate of progress of what AI is doing?”  But when we popularly talk about AI, we are lumping all sorts of other things into it.  So you have this difficult tension, right?

Well, on the one hand, the public is experiencing it; it seems like AI is everywhere, it seems AI is touching everything, and that leads to a lot of excitement.  And then when you go into the detail and ask what the actual state of the art in AI is, you recognize the fact that even though amazing progress has been made, it is still narrow in its capability.  And our mission is to advance that capability to keep up with the desire to apply it.

Harwell:          Late last year you made the joyful march to Capitol Hill to talk with the Senate Commerce Committee about AI regulation.  You said, “Regulatory issues should not stand in the way of AI.  It’s the most important technology industry in the world today.”  We’ve seen the industry police itself over the last couple of years.  We’ve seen the issues that have come from that.  Where do you feel like the barriers should be erected, or the lines should be drawn, for creating regulation that could prevent some of these problems?  Or do you feel like it should be a question for industry, a question for consumers?

Gil:                  I think that each company that is creating AI and putting AI services and capability out there needs to be accountable and needs to be responsible.  So the theme needs to be responsible AI, right?  And there are companies—and I don’t want to lump everybody in the same basket because I’m very proud of the company I work for, and our stance, and we’ve been doing information technology for over a century.  And we have built that reputation based on trust, and doing the right thing as far as data is concerned, and the products that we put out, and standing behind them.

So I think if there are actors who are not properly addressing bringing forth AI products or managing data with the same level of responsibility, that is something that the consumers, and the users, and the partners, and regulators will see, and those actors will have to stand behind what they do.  So I think that we have to continue to study the implications.  And when there are bad actors, there needs to be pressure to correct the bad actions.  But we cannot lump everybody in the same basket and say this is the Wild West and nobody is being responsible for what they’re doing, because it’s not the case.

Harwell:          Right.  We are running out of time, but I want to close with an example of a little bit of a tiff at an AI conference recently—and you can imagine how rowdy those get—where there was talk of this predictive policing algorithm that was designed to take in data and establish which members of the community were potential gang members, based on crime data, social networks, and that sort of thing.

People began asking about kind of the unintended side effects, including potentially falsely labeling people as gangsters, even when they weren’t.  And one of the engineers on that said, “Those aren’t my problems.  I’m just an engineer.”  Help me understand where the responsibility should lie in those questions.  Should the engineers be thinking about those, even back in the preparation of these algorithms, or should those be questions for someone else?

Gil:                  Of course they have to be thinking about them, right?  I mean, to me, I find comments like that kind of funny, because it’s as if, you know, some of these technical folks have landed on Earth with no understanding [LAUGHTER] that we live in a society and we have responsibilities to others.  And there are fields like moral philosophy that have been with us for a long, long time.

We have to continue to demand that practitioners in the field be human beings and are educated and sophisticated in understanding that we live in a society with rules and we depend on each other.  So, of course, that is like a totally irresponsible sort of like way of thinking, in my opinion.  And in the course of creating technology, then you have to do it in the context in which it’s going to be applied.  So if you’re going to apply AI in the criminal justice system, you have to involve all the other stakeholders who have thought about these issues.

So, in a way, you know, even though I’m a scientist and a technologist and I like it, we cannot give undue power or undue mythical capabilities to the scientists and the technologists.  We are part of a much broader set of principles and an ecosystem.  We have to have teams that have all those dimensions.  And technology is neither our savior nor our demise, right?  It has to operate in the context of what we, as humans in a democratic society, with the rules that we impose on ourselves through government and so on, decide about how we create technology and for what purpose.

So I take the view that we have to have a very humanist perspective on how we develop technology, and not glorify technology for its own sake.  It’s a means to an achievement, to what we want to do.  So, no yahoos saying, “I’m going to solve the crime problem because I have an algorithm.”  In a way, it’s a form of illiteracy, but of a sort of broader humanist understanding of what we’re here to do.

Harwell:          All right.  Well, on that note, thank you so much for having us.  I’m going to turn it over to three other very smart people and Jeremy Gilbert for The Post.  Thank you.

Gil:                  Thank you.

AI and Ethics: People, Robots and Society

Gilbert:            All right.  Good morning.  I’m Jeremy Gilbert.  I’m the director of strategic initiatives here at The Washington Post, and I’m thrilled to moderate our last panel of the morning on the ethical and social implications of artificial intelligence.  I’m joined by a really esteemed panel of experts.  Jack Clark is the director of communications and strategy at OpenAI, which is a non-profit AI research company.  Meredith Whittaker is the co-founder and executive director of the AI Now Institute at NYU.  And Milind Tambe is the founding co-director of the Center for Artificial Intelligence in Society, and the director of the Teamcore Research Group on Artificial Intelligence and Multiagent Systems at the University of Southern California.  Thank you all for being here.

If you have any questions for our speakers, please tweet them to us using the #Transformers and I’m going to get the discussion started right now.  So this is an interesting group with some very different perspectives.  But one of the things that seems to unite you is the importance of the impact of artificial intelligence.  The kind of question of how we weigh, for example, what large institutions might have as business interests using artificial intelligence, versus societal good.

How do we decide who gets to be the beneficiary of the gains of artificial intelligence when we’re weighing those two things?

Clark:              So you’re starting with the easiest question?

Gilbert:            [LAUGHS] Of course.

Clark:              Well, I guess I’ll start and we’ll go down.  But OpenAI was founded a little over two years ago.  And the goal of OpenAI is to ensure that the benefits of advanced AI accrue widely to all of humanity, rather than just a few.  I think that everyone on this stage shares that idea.  It’s interesting to me that such an organization was formed at this time, because it suggests that there is huge anxiety that it’s going to happen the other way and we’re not going to see those benefits widely distributed.  And I think that there’s a responsibility for the AI community to try and play a larger role in the kind of governance and the defining of norms around this technology so that it can go well.  There’s huge anxiety, but the default setting is currently one where private industry gets to define the rules of something that affects everyone, which I think we should be somewhat nervous about.

Whittaker:       Yeah, I would hard agree on that.  I co-founded the AI Now Institute at NYU with Kate Crawford in part because we want to look at what AI is doing now.  What are the impacts that are happening right now as early AI systems are being rolled into the infrastructures of our daily lives?  So in response to this question, I want to bring it back down to the practical.  I would agree with what Jack said, but at this point, we don’t even have an accounting of where these systems are integrated into the backends of core decisions.  And this is why the AI Now Institute called for an elimination of black box systems, that is, unaccountable, obscure systems not subject to oversight, as used in core government agencies.

Gilbert:            How many of these black box systems are there?  What’s the scale like?

Whittaker:       We don’t know.  Like, full stop, it’s a fundamental challenge to accountability.  We don’t have an accounting of where these are being used or what they are.  Is this an overburdened spreadsheet, or is this a neural net being applied to determine somebody’s Medicaid disbursements, right?  And we do see problems, and we see them frequently, from policing heat maps to the case I’m referring to in Idaho, where Medicaid disbursements dropped by 30% and people didn’t know why, because they didn’t know how the system worked.

A number of these cases crop up.  These are kind of the tips of the iceberg.  This is where investigative journalists or researchers are able to get access, are able to get information, where a whistleblower, say, comes forward and talks about these systems breaking down in a specific way.  But we’re in a situation where these are making determinations about people’s access and opportunity, oftentimes, or most times, I would say, without the individuals who are affected even knowing that the system played a role.  So auditing and accountability are core issues that we need to build the social framework to accommodate, I would say, before we continue rolling these systems into core decision-making.

AI Now just published an algorithmic impact assessment framework for the New York City algorithmic accountability bill task force, beginning to put some structure around these ideas and suggest ways of starting to audit, assess, and account for these systems.

Tambe:            So the University of Southern California Center for Artificial Intelligence in Society is something I co-founded with Professor Eric Rice.  This is a collaboration between AI researchers and social work researchers.  We’re very proud of this sort of interdisciplinary collaboration, really with social workers who are out there in the field.  And so the grand vision is AI to address the grand challenges of the American Academy of Social Work and Social Welfare, namely homelessness, achieving equal opportunity and justice, and so forth.  There are 12 such grand challenges, analogous to the grand challenges of the National Academy of Engineering.

So within those, I guess our focus has been on concrete problems where we can really assist and make a difference in working with low-resource populations, such as with homeless youth, or in conservation, trying to protect endangered wildlife, or in public safety and security.  And so we are focused on augmenting human decision-making, on things that are already being done by humans in these organizations, but assisted by a software decision aid.  So that’s the focus of the work that we’ve been doing.

Gilbert:            It sounds like this is a very impact-focused kind of artificial intelligence.  Do you feel like the field at large has focused on impact, whether it’s business goals or social good, or do we run the risk of AI really working towards novelty, trying to solve interesting but not useful problems?

Tambe:            So this is a question that’s very near and dear to my heart.  Speaking to the research community, I feel that in AI we need to focus more on impact.  When we publish and so forth, novelty is given a higher weight and impact is not.  But it’s up to us as senior researchers in AI to redefine what the important ways to measure progress are for younger researchers.  Because if the reward system is such that novelty counts and impact does not, then essentially that’s where people will go.  If we want to see societally beneficial impact, then that’s something we as researchers need to define as an evaluation criterion, so that people speak to it and do research in that area.

Clark:              I’d just like to jump in on this quickly with a tangible example.  Along with OpenAI, I also work on a project called the AI Index, which is about tracking progress in AI.  And what you discover there is that this premium placed on novelty means we think AI is progressing faster than it is.  I’ll give you a specific example.  Last November, Alibaba and Microsoft claimed they had reached human performance on question answering, on Stanford’s data set called SQuAD.  You don’t need to know too much about it other than the fact that they issued press releases saying they’d reached human level and machines could now understand paragraphs of text as well as a human.  Everyone got very excited and thought, “Well, I guess that’s done now.  We can move on to other stuff.”  And the Allen Institute for AI, which was founded by Microsoft co-founder Paul Allen, just released a new data set called “ARC,” which also tests common sense reasoning over natural language.

And every single existing technique, including the ones that do well on SQuAD, totally failed to do anything interesting on ARC.  Their performance is maybe getting it right 25% of the time.  So that’s subhuman; human level is meant to be 90%.  And so I think that by prioritizing the new, new thing, you risk, A, making us think that AI is progressing faster than it is, and B, subverting the effort of doing useful stuff that actually works in the real world.  And I think it’s important, especially for you in your position here at The Washington Post, to always be pushing companies to say, “But what does it actually do that’s useful?”
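
To make the arithmetic behind those benchmark numbers concrete, here is a minimal sketch of how an exact-match accuracy figure is typically computed; the predictions and answers below are hypothetical placeholders, not actual SQuAD or ARC data.

```python
# Minimal sketch: how an accuracy figure like "25%" or "90%" is computed.
# The model outputs and gold answers here are hypothetical placeholders.

def accuracy(predictions, gold_answers):
    """Fraction of questions where the prediction exactly matches the answer."""
    correct = sum(1 for p, g in zip(predictions, gold_answers) if p == g)
    return correct / len(gold_answers)

# ARC questions are multiple-choice (typically four options), so a model
# guessing at random lands near 25%, roughly where early systems scored,
# versus the ~90% expected of humans.
model_preds = ["B", "C", "A", "D"]   # hypothetical model choices
gold        = ["B", "A", "D", "D"]   # hypothetical correct choices
print(f"accuracy: {accuracy(model_preds, gold):.0%}")  # prints "accuracy: 50%"
```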

Tambe:            And I guess we are not saying that novelty has no value.  Obviously it has value, but it should be complemented by additional weight on social impact, so there’s more balance and not just pure novelty.  And there is research going on, but I feel not enough, and we could do more to encourage that.

Whittaker:       Just a quick addition there.  I would agree with both Milind and Jack about novelty driving a kind of warped understanding of AI progress.  I do want to take it back to this term “impact” and pick that apart a little bit, because I actually see the recent AI boom as a recognition that you could impact bottom lines by commercializing this technology, right?  And there are a lot of people who have been working since around 2012, on a sort of increasing exponential arc, to figure out how we market AI products.

So when I talk about black boxes deployed in core government agencies, the chain of title there often runs to vendors who are selling extreme promises based on the capabilities of systems that probably haven’t been tested on the populations they’re going to impact.  So there’s an issue here, I think, not with AI having no impact, but with impact being measured by bottom lines in sleek tech conference rooms.  And actually, there’s kind of an air gap between that and the environments that are actually being significantly shaped by the determinations of these systems at scale.

Gilbert:            So how do we close that gap?  How do we get from a situation where specific researchers or engineers design these algorithms, which are then either placed in a black box or, in some cases, exposed, to one where the people being impacted by those algorithms have a voice?  What’s the way that we close that?

Tambe:            So at least for us, one major way of trying to do this is through immersion.  Our students, or myself, or others actually want to go to the location where the impact is to be measured and work with the people there.  It’s problem-centered.  It’s coming from the domain.  And so when it comes to conservation, we’ve actually patrolled in forests in Malaysia, and so forth, to really understand what the problem is, or gone to the forests in Uganda, or worked with the homeless shelters in Los Angeles.  By being there, we are working with social workers.  It’s fundamentally interdisciplinary.  It just can’t be AI researchers figuring this all out by ourselves.  And I always try to think about showing humility when working with public safety or security, working with the Coast Guard, for example.  We can’t be sitting in LA at USC and say, “We know better how to drive your boats, so we are going to tell you.”

We really need to go on board the boats in New York and understand how they work, so we can develop algorithms that are more appropriate to the location, are done in an interdisciplinary fashion, are responsive to the needs of those who are using them, and so on.

Gilbert:            Is there also a question of, and you touched on this, Meredith, transparency, but also regulation?  So who gets to say whether the algorithms being used, the automated approach being selected, is fair, and fair to whom?

Whittaker:       Yes, this is the multitrillion-dollar question.  I think we do need shared standards across the industry.  I think regulation could absolutely be helpful in some of these cases.  I think we need to have a clear-eyed understanding that we need to validate before we deploy in high-stakes domains.  Now, we’re in a situation where the acceleration of commercialized tech has had resources behind it for a couple of decades.  The research field that is looking at the nuances of measuring lived impact across dynamic, contextual domains has not had the same amount of acceleration.  So there’s a real need to center fundamental research on these questions.

I think transparency, I would say, is part of the equation.  We need to know where these are.  We need to know the justifications for their use.  We need to know by which standards they were audited, and we need to have clear democratic processes to either accept or reject that use over time.  And that’s part of what we proposed in the Algorithmic Impact Assessment framework, which is being led by our law and policy research team.

Clark:              And this is an area where I think the government has a clear role to play.  Earlier today we heard from the senators, and we regularly hear from policymakers about how they want to do something about AI.  I think measuring and evaluating and assessing how AI is deployed is what government really should be in the business of doing.  I would like the government to tell me that it has tested all of the autonomous cars on the road and to give me reports on their performance.  And that’s not based on the tragic accident that happened yesterday; this is a long-term problem.  If I want to understand self-driving car progress today, I need to go and look at the reports filed with the California DMV, which tell me the number of disengagements each autonomous vehicle maker reports, per vendor.  And then I need to build my own spreadsheets, and then I get a view of performance in that state.

All of the companies have since moved their testing out of California because they don’t like me being able to find this out.  So actually now, it’s spread around the country and no one knows.  And this is terrible.  This is a technology that is going to dramatically affect our economy, affect people’s lives, and influence safety.  And we’re barred from assessing the rate of progress in it because the private sector is saying, “Well, that’s too commercially sensitive.  You can’t really do that.”  Which is absurd.  So I think that having the government take more of a role here would be beneficial, and it would also force information into the public domain, which it currently isn’t.  As Meredith said earlier, if I want to find out about where government is using AI today, I need to do Freedom of Information Act requests, or I need to work with the ACLU, or I need to call up people I know in government and go and have a beer and ask them about what machine learning system broke today.

None of that feels particularly healthy, so yeah, I think it is a pretty serious issue.
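
The spreadsheet exercise Clark describes, turning per-vendor disengagement reports into a rough view of performance, would look something like the sketch below; the CSV file and column names are assumptions for illustration, not the DMV’s actual format.

```python
# Rough sketch of the spreadsheet work described above: aggregating per-vendor
# disengagement reports into a disengagements-per-autonomous-mile figure.
# The file name and column names are hypothetical; the real California DMV
# reports are published in varying formats.
import csv
from collections import defaultdict

miles = defaultdict(float)
disengagements = defaultdict(int)

with open("dmv_disengagement_reports.csv", newline="") as f:
    for row in csv.DictReader(f):
        vendor = row["vendor"]
        miles[vendor] += float(row["autonomous_miles"])
        disengagements[vendor] += int(row["disengagements"])

# Lower is better: fewer human takeovers per mile driven autonomously.
for vendor in sorted(miles, key=lambda v: disengagements[v] / miles[v]):
    rate = disengagements[vendor] / miles[vendor]
    print(f"{vendor}: {rate:.4f} disengagements per autonomous mile")
```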

Gilbert:            Well, and the question of, for example, different states having different standards is particularly interesting, but AI is being applied internationally.

Clark:              Mm-hmm.

Gilbert:            On what level are we talking about?  Are we talking about a municipal level, a state level, a federal level or an international level?  Do we need international standards?

Clark:              All of them.

Whittaker:       I’ll offer a quick comment and then turn it over.  I think we’re oftentimes talking about massive global systems at a huge scale.  So as we’ve seen with GDPR, as we’ve seen with other regulations, the high watermark for regulatory intervention matters, because you’re not going to build a bespoke version of that system a number of times.  So I think all of them make sense, because context varies, norms vary, et cetera.  But we should attend to the power of smart regulatory intervention.

Tambe:            So I agree with all that was said earlier with respect to government help in regulation and so forth.  But in addition to that, as we discussed earlier, it’s also within the research community.  This is now evolving into more of an interdisciplinary science, and therefore there’s a need for us as researchers in AI to reach out to people in other disciplines and really push ourselves, encourage ourselves, to do these kinds of measurements.  I’m in full support of what Meredith was saying earlier with respect to doing a better job of measuring things in the field, really measuring the impact and doing the assessment.  But these sorts of things are difficult today, because if I’m a researcher publishing this kind of work, getting credit for the impact and so forth is a little bit harder.  So there is a role for government, but there’s also a role for AI researchers to do something about this.

Clark:              So specifically for the AI Index, which does this measurement initiative, we were trying to find a new person recently who could join the core team, which includes myself, that works on it.  And we were talking to a really smart young pre-tenure professor, or about-to-be professor.  And we were saying, “Do you want to join the Index?  It’s a chance to do something where you have some impact, and it’s a chance to set norms.”  And they said, “No, because I am pre-tenure.  This would not be evaluated positively.  I need to be doing technical contributions.”  And that is kind of frightening to me.  The incentives are set up in such a way that academics who want to work on impact find it challenging to do so.

Whittaker:       We have a little trope we throw around AI Now to explain why we are so constitutively interdisciplinary, across six faculties at NYU.  And it goes a little something like, “You wouldn’t expect a doctor to tune a deep neural net; you shouldn’t expect a computer scientist to make complex decisions in fields like medicine, et cetera.”  So we’re really looking at this sort of expert drift, where the computer science field, which dominates the big tech industry that is driving a lot of this innovation, has taken it upon itself to make really significant decisions that affect domains outside its realm of expertise.  And I think many people, within and outside of these companies, are deeply uncomfortable with this.

So I think we need to look at a fundamental restructuring of what we call “product development,” of what we call “research,” of how we define AI research to include sociologists, ethnographers, legal scholars, et cetera, et cetera.

Gilbert:            How are those decisions getting made?  Are the computer scientists identifying problems they want to solve outside of their own disciplines?  Are people coming to them?  How do you make that a more inclusive process?

Tambe:            So I think what should happen, and to some extent we strive to make it happen in our center, is, as I was saying earlier, user-centered problem-solving.  So you start from the problem.  And for some people, it’s kind of confusing: why do you start from a problem?  Why not start from a solution?  But starting from a solution is not necessarily the best approach.  So I can give you a concrete example, where we were trying to come up with better patrolling methods for protecting endangered wildlife in Malaysia.  We could sit in LA and come up with, “Oh, this is the best patrolling route.”  And the people that we were talking to on Skype were saying, “No, no, this doesn’t work.  This absolutely doesn’t work.”  The shortest distance between two points is not a straight line.

And to us, it was like, “How can that be?”  So we flew down to Malaysia.  We actually patrolled in the forest and suddenly realized, “Yeah, you have to walk along ridgelines, you have to walk along river beds.  You can’t just walk in a straight line in a forest to patrol.  It doesn’t make sense.”  These are the types of things that can only be gained by being on the spot in real life, rather than just sitting in the lab and saying, “This is it.”  Now, I want to be clear: there’s a lot of wonderful theoretical work researchers are doing.  This must continue.  Pure, basic AI research has to continue.  But there’s space for this additional kind of work.  And this requires us to get out of the lab and get out into the field, and rather than focusing on what is readily available as a data set, like ad auctions and so forth, to focus on new kinds of problems where data sets are not easily available, where it is harder to do that kind of work.
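
The ridgeline-and-riverbed point maps naturally onto a weighted-graph view of the terrain: once each cell carries a traversal cost, the cheapest patrol route is rarely the straight line.  The toy grid, cost values, and shortest-path sketch below are purely illustrative assumptions, not the center’s actual patrol-planning models.

```python
# Sketch of why "shortest" routes follow ridgelines and river beds: if each
# terrain type has a traversal cost, the cheapest path on the weighted grid
# is rarely the straight line. Grid, costs, and values here are hypothetical.
import heapq

# 0 = ridgeline/river bed (easy going), 9 = dense forest (hard going)
TERRAIN_COST = {0: 1.0, 9: 8.0}
grid = [
    [0, 9, 9, 9],
    [0, 0, 0, 9],
    [9, 9, 0, 0],
]

def cheapest_path_cost(grid, start, goal):
    """Dijkstra over grid cells; moving into a cell costs its terrain weight."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0.0, start)]
    best = {start: 0.0}
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + TERRAIN_COST[grid[nr][nc]]
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc)))
    return float("inf")

# The cheapest route hugs the easy cells (cost 5.0) instead of cutting
# straight across the dense forest.
print(cheapest_path_cost(grid, start=(0, 0), goal=(2, 3)))
```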

Clark:              And some of it is about changing norms so that researchers think that they should intentionally be multidisciplinary, at least for some projects.  We did a project recently on malicious actors in AI, so how you’d expect really unpleasant people to take open-source AI technology and do unpleasant things with it.  And to do that, we ended up hosting a workshop about a year ago in the UK, and we had people from the police come along, people from intelligence agencies, people from the AI research community, and sort of black hat hackers who had, in a past life, been one of these nasty people using open-source technology to do unpleasant things.  It was very helpful, because then you’re there with people with a huge spread of skills outside of this narrow technical domain, who can tell you, “No, this is the real problem,” or, “This should be your real threat model.”

And I think the more we can do with that, the better, and AI Now has been sort of leading some of these initiatives already.

Whittaker:       So I would agree.  [LAUGHTER] I’ve been in the tech industry for over 11 years now, so I’m what they call a veteran.  And I think we really need to examine the culture of tech, because I will testify from my own experience, it feels like it’s gotten less diverse and more homogeneous as its power has ascended.  And I think, as Jack was saying, you can have the best intentions, but if you’re in the room with people who fit a very homogeneous demographic, who have shared the same experiences, whose problems involve the laundry delivery maybe not coming on time that day, you shouldn’t be expected to have this sort of infinite imaginative capacity to put yourself in the position of everyone your technology is going to influence.  So we urgently need to diversify the voices that are informing the development of these technologies, and to figure out how we do that with clear incentives and with respect for the people whose voices and time we’re asking for, because the kind of open-source model of “let the community do it” is asking for labor that is often not compensated.  So I think this is something we need to conscientiously and very clearly set up structurally, so that it works and isn’t an unfunded mandate on the most marginalized populations.

Gilbert:            I mean, is this essentially part of the challenge: if the people you put in the room are codifying their own biases into the algorithms that they create, you’re going to, A, get back the results that they probably expect, but, B, probably not recognize all of the problems?  Inclusion is certainly one technique.  Is there an issue of awareness as well?

Whittaker:       Yes, yes.  And there are many other issues, right?  We’re talking about one issue that is sort of problematic.  There’s an issue with not examining the data sets you use, right?  Are you using data collected by the Baltimore Police Department’s Gun Trace Task Force, which is now under investigation for criminal conspiracy and planting drugs on suspects?  This is one of the largest police misconduct cases in the country.  Are we then deploying predictive policing that would use that data as ground truth to determine who looks like a criminal or not, right?  There is a lot more investigation we need to do into what the fundamental claims being made in the data are.  What is its history?  Who builds the classifiers and the algorithm, and what viewpoint is imparted there?  Then, what is the context in which this would be used, and what are the power asymmetries and other issues that might attend, say, a cop getting a score on a tablet and then applying that to policing work, for one example.

Tambe:            So I guess in our work, the focus is more on augmenting humans in making decisions in very complicated situations.  For example, if you’re trying to spread information about HIV prevention, which is one of the problems we are working on, who are the right youth to select in a homeless shelter to spread that information?  This is a task where you’re looking at the social network, trying to figure out the right people.  These kinds of tasks are already being done by human beings; we’re augmenting their capabilities.  But we want to achieve the right kind of balance, not have everything prescribed, so the right level of autonomy.

If we are trying to tell rangers in Uganda where they’re going to find snares, we can give them a certain area, say a 500-meter by 500-meter area, and say this is where you’re going to find them.  We don’t have the ability to tell them, “Well, you’re going to find it under this tree.”  They are experts in that domain, and we should harness their expertise where possible and leave AI to do what it does best.  And understanding the right level of balance in the teamwork between humans and AI is also an important aspect of research as we go forward.
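
As a rough illustration of the “which youth should spread the information” problem Tambe describes, here is a toy greedy selection over a hypothetical friendship network; the group’s real systems handle uncertain connections and probabilistic spread, so treat this only as a sketch of the underlying idea.

```python
# Toy sketch of the peer-leader selection idea: greedily pick youths who cover
# the most not-yet-reached people in a friendship network. Real systems for
# HIV-prevention campaigns handle uncertain edges and probabilistic spread;
# this is only an illustration with a made-up network.
def pick_peer_leaders(friends, k):
    """friends: dict mapping each youth to a set of friends; k: number of leaders."""
    covered, leaders = set(), []
    for _ in range(k):
        candidates = [p for p in friends if p not in leaders]
        # Pick whoever reaches the most not-yet-covered peers (including themselves).
        best = max(candidates, key=lambda p: len((friends[p] | {p}) - covered))
        leaders.append(best)
        covered |= friends[best] | {best}
    return leaders

# Hypothetical friendship network among six youths.
network = {
    "A": {"B", "C"}, "B": {"A", "D"}, "C": {"A"},
    "D": {"B", "E"}, "E": {"D", "F"}, "F": {"E"},
}
print(pick_peer_leaders(network, k=2))  # -> ['A', 'E']
```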

Clark:              And I think this comes from what we’ve been talking about, the narrowness of AI communities sometimes.  I frequently have conversations with perfectly nice people who are talking about an AI research project, and they’re like, “We must remove the human from this entire process.”  Well, that’s dubious, and I think it’s because we aren’t developing enough respect for the fact that people are incredibly smart and incredibly good at doing lots of things.  The temptation within AI is to automate the entire process, whereas you’re probably going to have something much smarter in aggregate if you find a way to have the person use their skills and just remove some of the dull or tedious work.  And 90% of the time, that seems to be the better course of action.

Gilbert:            So the last question I would pose to you, and I think you were very much hinting at it here, is about the fear in the community, and you wrote about this recently.  And by community, I mean outside of the AI community, honestly.  It’s about this sort of replacement of humans for specific tasks.  How do you balance the fact that work and a sense of purpose is really important for lots of people against the idea that there are lots of rote tasks and things that can be done to either augment or avoid certain actions?  Where’s the balance?

Whittaker:       I would start historically and just mention that before the mid-nineteenth century and the Industrial Revolution, we really didn’t have the same structure of work that we have now in the West.  So the nuclear family, with somebody who went out to do wage work at a factory and came back, and the household as this sort of separate realm, was not the norm, right?  You had family economies.  People participated in productive labor.  There was a blending between life and work, and these distinctions didn’t apply.  So that’s one example; we have, across cultures and histories, many, many different examples of what meaningful productivity and interaction look like for human beings.  So I might push back a little bit against the truism that without a timesheet to clock in with, we will be adrift with no meaning.  And I would also offer that we’re a ways from total replacement.  I think, in keeping with the spirit of AI Now, we should look at what’s already happening.  We have AI-driven hiring.  We have AI-driven management.  We have precarity economies like Uber and other things that effectively run large machine-learning systems to instrument their labor.  So what are the impacts of these small encroachments?  What are the power asymmetries that are emphasized there, and how do we draw on what we know today to understand possible trajectories, rather than looking for this bright line between humans with work and humans adrift without meaning?

Clark:              I just want to pile onto what Meredith said quickly, with two specific points.  One is that, yeah, we need to broaden our definition of work.  Emotional labor is a significant amount of work but is broadly uncompensated across society.  We have an aging population who need good social relationships.  Maybe I’m strange, but I would like to get the chance to go and talk to people and have that be seen as my work, along with my work at OpenAI.  There are lots of jobs that people would like to be paid for, but we just choose not to compensate as a society currently.

And the second point, and I think Meredith touched on this, is that where AI gets deployed into middle-class or lower-class jobs, there’s not much evidence that it makes those jobs more pleasant.  It actually seems like it makes those jobs less pleasant.  And I think that we should remember that, because it’s very easy to go to these conferences and think, “AI is making tremendous strides in productivity.”  But it’s usually doing that at the cost of a sense of human agency.  And I think that we in the tech industry can be really complacent about this, but eventually that’s going to come home.  And the previous ways this came home were the Luddites, which were a reaction to the Industrial Revolution being mostly terrible during the time it occurred, and also the French Revolution, which was fairly unpleasant for the people who had been complacent about things.  So let’s not rest on our laurels and say everything is fine.

Tambe:            So I want to agree with a lot of what’s been said, but I also want to say that there are a lot of immediate benefits that we can accrue from AI by deploying it in low-resource communities, for domains like suicide prevention or substance abuse prevention, the kinds of domains that are important for low-resource communities, for conservation, for public safety.  These are things we can do today, and we should not lose sight of the benefits that AI is providing there already, and not get so fearful that we stop that kind of work.  So it is important for us to continue forward.

Gilbert:            Thank you so much.  Unfortunately, that’s all the time we have for today.  I’d really like to thank Jack Clark, Meredith Whittaker, and Milind Tambe for joining us.  If you’d like to watch video clips from any of today’s discussions or past Washington Post Live programs, please head over to Washingtonpostlive.com.  I’m Jeremy Gilbert and thank you so much for watching today.

[APPLAUSE]