Google showed at its I/O conference how a new app that uses your camera to recognize what it sees fits into the company's artificial intelligence push. Google Assistant is now available on the Pixel and the iPhone. (Google)

MOUNTAIN VIEW, CALIF. — Google kicked off its annual developers conference Wednesday by outlining a broad vision of how it thinks artificial intelligence will shape the way we communicate, travel, work and play.

Chief executive Sundar Pichai said that improving artificial intelligence is Google's top strategy in its continuing goal to organize the world's information.

Using AI, Gmail will now suggest phrases for your replies, based on its interpretation of your conversation. Google Photos will figure out which of your snapshots are best for sharing, and it will use facial recognition to figure out who should get those photos. A program called Google Lens will analyze your photos and be able to remove obstacles, such as a chain-link fence, that obscure your shot. Google Assistant will also be more proactive, now nudging you to leave earlier if the traffic to your next appointment is bad, rather than waiting for you to ask about it. Google Assistant will also be showing up on the iPhone.

The differences are subtle but significant, said Gartner research vice president Brian Blau. “We're not going to see that many new features — maybe some new buttons and dials. But what will improve is how well these apps relate to the individual.”

With those personalized improvements, however, will come an even greater demand for data from Google services. For example, one big addition to Google Photos is the ability to auto-share your photos with a person of your choosing. That means that users not only allow Google to process their pictures, but also tell the company who their closest confidantes are.

“Many of these new features in Google Assistant, Photos, and Home add value but also require the sharing of a lot of personal voice, photo, video and location information,” said Patrick Moorhead, principal analyst at Moor Insights & Strategy. “Google has the most personal information, [and] does the processing in the cloud, so I think right now they have the richest consumer AI capabilities.”

Consumers looking for big gadget announcements out of the conference, however, may have been disappointed. Almost nothing was said about Android Wear, Google's play for wearables. Details about the next version of Google's Pixel smartphone line — about which Google has shared little — aren't expected to come until later this year.

Google did release some brief details about its plans for stand-alone virtual reality headsets — which will not rely on a smartphone or a computer for power. Lenovo and HTC (which already makes a rival headset, the HTC Vive) are working on these products with Google; the company did not announce an official release date.

But Google, like Microsoft and even Apple, seems to be focusing developer attention more strongly on software and services rather than solely on gadgets, analysts said. Although devices are undoubtedly still important, the priority seems to be making sure that consumers on any device can use Google services. One of the few products actually announced at the conference was a new version of a chip known as a tensor processing unit, or TPU, which is custom-built for — what else? — artificial intelligence processing in the cloud.

“I think, in general, we'll start to see devaluation of individual devices over time,” Blau said. Instead, he said, companies are likely to focus on making “whatever screen you’re looking at more personal and a lot more meaningful to you as an individual.”