“We won’t be like a pet labrador if we’re lucky,” Elon Musk said. (Rebecca Cook/Reuters)

Elon Musk has already ignited a debate over the dangers of artificial intelligence. The chief executive of Tesla and SpaceX has called it humanity’s greatest threat and something even more dangerous than nuclear weapons.

Musk hasn’t publicly offered much detail about why he’s concerned or what could go wrong. That changed in an interview with astrophysicist Neil deGrasse Tyson, posted Sunday.

Musk’s fears center on a subset of artificial intelligence called superintelligence. Nick Bostrom, author of the widely cited book “Superintelligence,” defines it as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”

Musk isn’t worried about simpler forms of artificial intelligence, such as a driverless car or a smart air-conditioning unit. The danger comes when a machine can rapidly educate itself, as Musk explained:

“If there was a very deep digital superintelligence that was created that could go into rapid recursive self-improvement in a non-algorithmic way … it could reprogram itself to be smarter and iterate very quickly and do that 24 hours a day on millions of computers, well–”

“Then that’s all she wrote,” interjected Tyson with a chuckle.

“That’s all she wrote,” Musk answered. “I mean, we won’t be like a pet Labrador if we’re lucky.”

“A pet Lab,” laughed Tyson.

“I have a pet Labrador by the way,” Musk said.

“We’ll be their pets,” Tyson said.

“It’s like the friendliest creature,” Musk said, letting out his lone chuckle of the segment.

“No, they’ll domesticate us,” Tyson said.

“Yes! Exactly,” said Musk, sounding serious again.

“So we’ll be lab pets to them,” Tyson said.

“Yes,” Musk said. “Or something strange is going to happen.”

“They’ll keep the docile humans and get rid of the violent ones,” Tyson theorized.

“Yeah,” Musk said.

“And then breed the docile humans,” Tyson said.

Musk then stressed the importance of what the superintelligence is programmed to optimize. It might seem appealing to have a computer figure out how to make us happier, but that could backfire:

“It may conclude that all unhappy humans should be terminated,” Musk said. “Or that we should all be captured and with dopamine and serotonin directly injected into our brains to maximize happiness because it’s concluded that dopamine and serotonin are what cause happiness, therefore maximize it,” which brought another chuckle from Tyson.

“I’m just saying we should exercise caution,” Musk concluded. You can listen to the entire interview here.
