SAN FRANCISCO, July 8 (Xinhua) -- Instagram CEO Adam Mosseri said Monday the social media platform is launching two new tools to warn users against posting offensive comments online.
Facebook-owned Instagram will use artificial intelligence (AI) technology to detect potentially offensive or bullying comments and remind users with a pop-up warning asking whether they really want to post the comment.
"For years now, we have used artificial intelligence to detect bullying and other types of harmful content in comments, photos and videos," Mosseri said.
The pop-up warning "gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification," he added.
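The flow described above — an AI check that flags a draft comment, a pop-up that asks the user to confirm, and a withheld notification if the user backs out — can be sketched roughly as follows. This is a minimal illustration, not Instagram's actual system: the keyword check stands in for the AI classifier, and the names (`looks_offensive`, `submit_comment`, `confirm`) are invented for this example.

```python
# Toy stand-in for an AI toxicity classifier; Instagram's real detector
# is a machine-learning model, not a keyword list.
OFFENSIVE_WORDS = {"stupid", "ugly", "loser"}

def looks_offensive(text: str) -> bool:
    return any(word in text.lower() for word in OFFENSIVE_WORDS)

def submit_comment(text: str, confirm) -> bool:
    """Return True if the comment is posted.

    `confirm` models the pop-up: it is called with the flagged text and
    returns True if the user insists on posting, False if they undo it.
    """
    if looks_offensive(text) and not confirm(text):
        return False  # user undid the comment; the recipient is never notified
    return True

# A benign comment posts without any prompt; a flagged one is undone
# when the user declines the pop-up.
print(submit_comment("Great shot!", confirm=lambda t: False))
print(submit_comment("You are a loser", confirm=lambda t: False))
```

The key design point is that the warning is advisory: the user can still post after seeing it, which matches Mosseri's framing of the pop-up as "a chance to reflect and undo."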
Moreover, Instagram is allowing users to "restrict" problematic followers and block their offensive comments from appearing publicly.
Nasty comments will be visible only to the users who compose them, sparing recipients the worry that an overt reaction to those posts would aggravate the situation.
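The "restrict" visibility rule described above — a restricted account's comments are stored but shown only to their author, so the bully sees nothing unusual while everyone else never sees the comment — can be sketched like this. All names here (`Post`, `Comment`, `visible_comments`, `restricted`) are hypothetical and only illustrate the idea, not Instagram's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str

@dataclass
class Post:
    owner: str
    restricted: set = field(default_factory=set)   # accounts the owner restricted
    comments: list = field(default_factory=list)

    def add_comment(self, author: str, text: str) -> None:
        # The comment is always accepted, so the restricted user
        # sees no sign that anything changed.
        self.comments.append(Comment(author, text))

    def visible_comments(self, viewer: str) -> list:
        # A restricted account's comment is visible only to that account;
        # other viewers never see it.
        return [
            c for c in self.comments
            if c.author not in self.restricted or viewer == c.author
        ]

post = Post(owner="alice", restricted={"troll"})
post.add_comment("bob", "Nice photo!")
post.add_comment("troll", "Nasty comment")

print([c.text for c in post.visible_comments("bob")])    # troll's comment hidden
print([c.text for c in post.visible_comments("troll")])  # troll still sees their own
```

Because the restricted user's own view is unchanged, the mechanism avoids the escalation that blocking or unfollowing can trigger, which is the concern Mosseri cites below.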
"We've heard from young people in our community that they're reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life," Mosseri explained.
"These tools are grounded in a deep understanding of how people bully each other and how they respond to bullying on Instagram, but they're only two steps on a longer path," said the Instagram CEO.
He said that although online bullying is a complex issue, Instagram is committed to leading the industry in fighting it.
The new features are part of Instagram's ongoing efforts to fight bullying and make the platform safer and more inviting to young users.