Researchers develop AI able to identify things by sight, touch

Source: Xinhua| 2019-06-18 15:02:27|Editor: Yamei

A staff member at the Festo booth demonstrates a bionic robotic arm during the 2019 Hanover Fair in Hanover, Germany, on April 1, 2019. With a total of 6,500 exhibitors from 75 countries and regions, the Hanover Fair showcases the latest technologies for industrial use, including 5G networks, artificial intelligence and lightweight manufacturing, among others. (Xinhua/Shan Yuqi)

WASHINGTON, June 17 (Xinhua) -- A laboratory at the Massachusetts Institute of Technology (MIT) has developed an AI that can process and learn from both visual and tactile information, according to an MIT News article published online on Monday.

MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) used a tactile sensor named GelSight to collect data for the AI to identify objects by touch.

The research team recorded 12,000 videos of nearly 200 objects being touched, breaking them down into a dataset of more than 3 million static images.
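
The article does not say how the videos were converted into images; as a purely illustrative sketch, frames could be extracted from each touch video roughly as follows (the directory names, file format and sampling rate are hypothetical and not taken from the MIT pipeline):

# Illustrative sketch only: split a corpus of touch-interaction videos into
# static frames to form an image dataset. Paths, naming and the sampling rate
# are hypothetical; the article does not describe MIT's actual pipeline.
import cv2
from pathlib import Path

VIDEO_DIR = Path("touch_videos")   # hypothetical location of the ~12,000 clips
FRAME_DIR = Path("frames")
FRAME_DIR.mkdir(exist_ok=True)

def extract_frames(video_path: Path, every_n: int = 5) -> int:
    """Save every n-th frame of one video as a PNG; return the number saved."""
    cap = cv2.VideoCapture(str(video_path))
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            out = FRAME_DIR / f"{video_path.stem}_{index:06d}.png"
            cv2.imwrite(str(out), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

total = sum(extract_frames(p) for p in sorted(VIDEO_DIR.glob("*.mp4")))
print(f"extracted {total} static images")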

Using this dataset, the machine learning system learned how vision and touch relate to each other in a controlled environment, allowing it to predict one modality from the other.
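
As a minimal, hypothetical sketch of what such cross-modal learning can look like, the snippet below trains a tiny encoder-decoder to predict a tactile (GelSight-style) image from a camera image. It is not the CSAIL model, whose architecture the article does not describe, and the random tensors stand in for real paired frames:

# Minimal illustrative sketch of cross-modal learning between vision and touch,
# assuming paired (camera image, GelSight image) tensors. This is NOT the CSAIL
# model; it only shows the idea of predicting a tactile image from a visual one.
import torch
import torch.nn as nn

class VisionToTouch(nn.Module):
    """Tiny convolutional encoder-decoder: RGB image -> predicted tactile image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

model = VisionToTouch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# One training step on a random placeholder batch (real data would be paired
# camera frames and GelSight frames from the dataset described above).
rgb = torch.rand(8, 3, 64, 64)
touch = torch.rand(8, 3, 64, 64)
optimizer.zero_grad()
pred = model(rgb)
loss = loss_fn(pred, touch)
loss.backward()
optimizer.step()
print(f"L1 loss: {loss.item():.4f}")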

A child views a robotic arm during the 63rd International Fair of Technics and Technical Achievements in Belgrade, Serbia, on May 21, 2019. The annual international exhibition of technical achievements, which opened in Belgrade on Tuesday, encourages Serbian companies to implement smart factory solutions, digital automation and technologies that will make them more competitive in the global market. (Xinhua/Shi Zhongyu)

"By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge ... By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings," said Yunzhu Li, CSAIL PhD student and lead author of the research paper.

Future work will expand the dataset with data collected in less structured environments, so that the AI can recognize more objects and better understand scenes.

KEY WORDS: U.S., MIT, AI