Jae Ho Sohn, a radiologist at the University of California, San Francisco, is adapting an AI algorithm to analyze thousands of positron emission tomography (PET) scans in search of early signs of Alzheimer's. The algorithm looks for irregular levels of glucose in the brain that may point to the disease years before its most severe symptoms appear.
Sohn's research is still in its early stages, but the algorithm's initial success in detecting these subtle changes suggests that AI may come to play an important role in healthcare.
While AI's practical applications continue to be evaluated, researchers are also studying natural intelligence and what makes for accurate pattern recognition. Jeff Hawkins, co-founder of Numenta, decided early on that if machines were to start behaving more intelligently, he would have to study the brain.
One of Hawkins's recent areas of focus is cortical columns, structural units in the neocortex where, it is theorized, models of the objects we encounter in the world are constructed and stored. Hawkins says that one key to understanding how the brain models objects and recognizes patterns is movement. The information we get from our senses, combined with the location information of where we experience those sensations over time, is essential to recognizing the patterns that define objects. Hawkins believes AI will remain limited in its emulation of the brain so long as movement is left out of the equation for intelligence.
In the video above, we take a closer look at the history of AI and where it stands today in its abilities and limitations as an "intelligent" tool.