Marvin Minsky at his home in Brookline, Mass., in April 2015. (Joel Achenbach/TWP)

Marvin Minsky, a founding father of the field of artificial intelligence and an innovative explorer of the mysteries of the human mind during his long tenure at the Massachusetts Institute of Technology, died Jan. 24 at a hospital in Boston. He was 88.

The cause was a cerebral hemorrhage, according to a statement from MIT. He was a professor emeritus at MIT’s Media Lab, which has a broad, interdisciplinary mandate to explore technology, multimedia and design.

Dr. Minsky devoted his professional life to the astonishing hypothesis that engineers could someday create an intelligent machine. He flourished as a professor and mentor even as the field of A.I. endured discouraging results and eruptions of pessimism.

He lived long enough to see A.I. ambitions flourishing anew, with attendant concerns about killer robots and rogue computers.

Although Dr. Minsky was an inventor — as a young man, he developed a special microscope for studying brain tissue that eventually became a standard tool for scientists — his greatest contributions were theoretical. He developed a concept of intelligence as something that emerged from disparate mental agents acting in coordination. No single agent is intelligent when operating alone.

Marvin Minsky early in his career. (Courtesy of the MIT Museum)

If a single word could encapsulate Dr. Minsky’s career, it would be “multiplicities,” his MIT colleague and former student Patrick Winston said Tuesday. The word “intelligence,” Dr. Minsky believed, was a “suitcase word,” Winston said, because “you can stuff a lot of ideas into it.” Other such words include “creativity” and “emotion.”

Along with fellow A.I. pioneer John McCarthy, he founded the artificial-intelligence lab at MIT in 1959. Dr. Minsky’s 1960 paper, “Steps Toward Artificial Intelligence,” laid out many of the routes researchers would take in the decades to come.

He wrote that “we are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines.” Anyone trying to mimic intelligence in a machine, he wrote, had to solve five distinct categories of problems: search, pattern recognition, learning, planning and induction.

He also wrote seminal books — including “The Society of Mind” (1986) and “The Emotion Machine” (2006) — that colleagues consider essential to understanding the challenges in creating machine intelligence.

Upon Dr. Minsky’s death, his colleague Nicholas Negroponte wrote by email to the MIT community:

“The world has lost one of its greatest minds in science. As a founding faculty member of the Media Lab he brought equal measures of humour and deep thinking, always seeing the world differently. He taught us that the difficult is often easy, but the easy can be really hard.”

Marvin Lee Minsky was born in New York City on Aug. 9, 1927. His father, Henry, was a noted eye surgeon who served as director of Mount Sinai Hospital’s ophthalmology department in Manhattan. His mother, the former Fannie Reiser, was active in Zionist causes.

As a child, he told the New Yorker, he was “physically terrorized” by schoolyard bullies, and a lack of academic support in the classroom led his parents to enroll him in the progressive Fieldston School. His interest in electronics and chemistry blossomed, and he won a spot at the prestigious Bronx High School of Science in 1941.

He spent his senior year at the private Phillips Academy in Andover, Mass., to bolster his college options. After graduating in June 1945, he enlisted in the Navy in the final months of World War II and served in an electronics program.

He earned his bachelor’s degree in mathematics at Harvard University in 1950 and a PhD in mathematics at Princeton in 1954.

At Princeton, and with funding from the Office of Naval Research, Dr. Minsky co-built a primitive “electronic learning machine” with tubes and motors. He was also exposed to some of the greatest minds of the day, including John von Neumann, a pioneer of computers.

Back at Harvard as a junior fellow in the mid-1950s, Dr. Minsky invented the confocal scanning microscope that would eventually find many uses in science.

“Minsky’s invention disappeared from view for many years because the lasers and computer power needed to make it really useful had not yet become available,” Winston wrote in an account of Dr. Minsky’s career. “About ten years after the original patent expired, it started to become a standard tool in biology and materials science.”

In 1956, when the very idea of a computer was only a couple of decades old, Dr. Minsky attended a symposium at Dartmouth College that is considered the founding event in the field of artificial intelligence.

Dr. Minsky said in 2015 during an interview with The Washington Post that Alan Turing, the British mathematician who had worked on World War II code breaking, was the first person to bring respectability to the idea that machines could someday think.

“There were science-fiction people who made similar predictions, but no one took them seriously because their machines became intelligent by magic. Whereas Turing explained how the machines would work,” Dr. Minsky said.

In 1969, the Association for Computing Machinery gave him the highest honor in computer science, the A.M. Turing Award.

Dr. Minsky and his wife, the former Gloria Rudisch, a pediatrician, enjoyed a partnership that began with their marriage in 1952. Their home became the regular haunt of science-fiction writers, including their friend Isaac Asimov. Richard Feynman, the Nobel Prize-winning physicist, would play the bongos at their parties.

Besides his wife, survivors include three children, a sister and four grandchildren.

Gloria Minsky recalled her first conversation with the man she wound up marrying: “He said he wanted to know about how the brain worked. I thought he is either very wise or very dumb. Fortunately, it turned out to be the former.”

Dr. Minsky acknowledged in the 2015 interview with The Post that he was disappointed that A.I. research had yet to create human-level intelligence in a machine. He said early A.I. efforts at large companies, such as IBM, failed to appreciate the complexity of the problem and how incremental progress would have to be.

“It’s interesting how few people understood what steps you’d have to go through. They aimed right for the top, and they wasted everyone’s time,” he said.

Are machines going to become smarter than human beings and, if so, is that a good thing?

“Well, they’ll certainly become faster,” he said. “And there’s so many stories of how things could go bad, but I don’t see any way of taking them seriously because it’s pretty hard to see why anybody would install them on a large scale without a lot of testing.”