Machines and humans learn differently. This has been a central fact of Artificial Intelligence research for decades. If you cram enough data into a machine, and let the algorithms grind away tirelessly, the computer can detect a pattern, produce a desired outcome, and perhaps beat a grandmaster at chess.

Human intelligence is faster, quirkier and more nimble. We take mental shortcuts. We have a knack for discerning the rules of a game, the dynamic of a situation, who's mad at whom, where to find the keg, and so on. The human mind -- the most complex piece of matter in the known universe -- is adept at getting the gist of things quickly.

Now researchers report a breakthrough in Artificial Intelligence: a machine-learning program that mimics how humans learn.

The report, published online Thursday in the journal Science, is being described as a small but significant step in closing the vast gap between machines and humans when it comes to generalized, all-purpose intelligence.

"For the first time we think we have a machine system that can learn a large class of visual concepts in ways that are hard to distinguish from human learners," said Joshua Tenenbaum, the senior author of the new paper and a professor at M.I.T., in a teleconference with reporters.

The computer program, developed primarily by lead author Brenden Lake, a cognitive scientist at New York University, used statistical probabilities to infer the basic rules behind the formation of letters in alphabets.

Among humans, visual recognition of a concept can often be achieved with a single example. "You show even a young child a horse or a school bus or a skateboard, and they get it from one example," Tenenbaum said.

The new computer program, which goes by the rather clunky name of Bayesian Program Learning (BPL), performed well in inferring rules behind the representation of letters in different alphabets. The researchers judged this performance by conducting a "Turing test," a kind of contest between humans and the computer program. Both the computer program and the humans were given a single example of a letter, then asked to find a match to that letter among 20 handwritten representations. The humans made errors only 4.5 percent of the time, but the computer program actually did slightly better, with a 3.3 percent error rate.
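The matching task above can be sketched in miniature. This is a toy stand-in, not the researchers' method: where BPL scores each candidate by the posterior probability that the same stroke-generating program produced both it and the single example, the sketch below simply picks the candidate with the fewest differing pixels. The tiny "letters" are hypothetical data invented for illustration.

```python
def one_shot_classify(example, candidates):
    """Given one example image, return the index of the closest candidate.

    A crude proxy for BPL's Bayesian score: similarity here is just
    pixel-by-pixel agreement between small binary grids, whereas BPL
    compares the inferred generative programs behind the characters.
    """
    def distance(a, b):
        # Count pixels where the two grids disagree (Hamming distance).
        return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

    return min(range(len(candidates)), key=lambda i: distance(example, candidates[i]))


# Three made-up 3x3 "letters": an X, an O, and an I.
X = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
O = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
I = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]

# The probe is a slightly noisy copy of the O (one pixel flipped).
probe = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(one_shot_classify(probe, [X, O, I]))  # → 1, the index of O
```

In the actual experiments, both humans and BPL faced 20 handwritten candidates rather than three clean grids, which is what makes the 3.3 percent error rate notable.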

Turing tests are named after the British mathematician and computer pioneer Alan Turing. In 1936, Turing devised some of the fundamental concepts for a general-purpose computer. In 1950 he proposed that machines could someday match human intelligence. He conceived of something he called the Imitation Game that would be played at some point in the future when computers had become more advanced. In Turing's scenario, an interrogator would ask questions and, unseen in an adjacent room, a human and computer would provide answers. If the interrogator couldn't reliably distinguish the human answers from the computer answers, the computer would pass the test and have the status of a thinking machine, Turing argued.

[What "The Imitation Game" didn't tell you about Alan Turing's greatest breakthrough]

Still, in their new paper the researchers noted their system's limitations:

Although successful on these tasks, BPL still sees less structure in visual concepts than people do. It lacks explicit knowledge of parallel lines, symmetry, optional elements such as cross bars in “7”s, and connections between the ends of strokes and other strokes.

In the teleconference with reporters, Tenenbaum was asked if this kind of computer technology could be used in satellite surveillance. He said the military helped fund the research and is interested in potential applications.

"In some ways there's a huge leap that has to be made because, you know, it's one thing to talk about writing characters. It's another thing to talk about moving around on the ground if you're an individual or a military unit or whatever," he said.

[Here’s the argument for banning killer robots before we’re swarmed by them]

The breakthrough comes during a period of great excitement in the A.I. community, but also some anxiety about whether there are sufficient safeguards to ensure that machine intelligence doesn't somehow run away from its human creators. Entrepreneur Elon Musk has given $10 million for A.I.-safety research. Stephen Hawking, Bill Gates and many other boldface-name folks in science and technology have expressed concern that A.I. could pose an existential threat to humanity.

But Tenenbaum said this new work doesn't come anywhere near being something to worry about. Machines, he said, are not close to achieving general intelligence.

“Intelligence, at least to me, has a general, very flexible capacity. I don’t think any machine has any level of general intelligence," Tenenbaum told The Post. "Our programs have a sense of the program that generates characters, but they don’t have any real deep sense of what they’re doing, or any drive to do it.”

This kind of machine intelligence isn't the same thing as "thinking," he said.

“I wouldn’t say our system thinks, but it's made a significant advance in capturing the way that people are thinking about these concepts.”

It took two years to write this new learning program, he noted.

"Our work shows how hard it is to build something like intelligence in a machine," he said.

Read More:

Achenblog: This is the way the world ends.

New MIT algorithm rubs shoulders with human intuition in big data analysis

Self-proclaimed ‘experts’ more likely to fall for made-up facts, study finds