Instagram brings enhanced self-harm content detection tools to the UK 


Instagram has rolled out machine learning technology to its app in the UK and Europe to better identify suicide and self-harm content.

The content detection tool from the Facebook-owned social media company automatically searches the platform for suicide-related images and words.

It then makes the content less visible in the app or, in some cases, removes it completely after 24 hours if the machine learning system decides it breaks the app’s rules.
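
To make the mechanism concrete, the sketch below shows the general shape of such a pipeline: a classifier scores a post, and the score decides whether it is left alone, demoted in feeds, or flagged for removal. This is an illustration only; the names, thresholds and scoring are assumptions, not Instagram’s actual system.

```python
# Illustrative sketch only: a simplified version of the kind of pipeline the
# article describes (a classifier flags likely self-harm content, which is
# then demoted or removed). All names and thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    image_score: float  # hypothetical image-classifier score, 0.0 to 1.0
    text_score: float   # hypothetical text-classifier score, 0.0 to 1.0


def moderation_action(post: Post,
                      demote_threshold: float = 0.5,
                      remove_threshold: float = 0.9) -> str:
    """Return a moderation action based on the strongest classifier signal."""
    score = max(post.image_score, post.text_score)
    if score >= remove_threshold:
        return "remove"   # clear policy violation: take the post down
    if score >= demote_threshold:
        return "demote"   # borderline: make it less visible in feeds
    return "allow"        # below threshold: leave untouched


if __name__ == "__main__":
    post = Post("p1", "example caption", image_score=0.72, text_score=0.31)
    print(moderation_action(post))  # -> "demote"
```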

The technology, which will work across both Instagram and its parent platform Facebook, already existed for users outside of Europe before today.

However, an issue with the General Data Protection Regulation (GDPR) means that European users are still not able to get the fully-fledged version of the tool.

Today’s roll-out follows the suicide of 14-year-old London schoolgirl Molly Russell, who viewed Instagram content linked to anxiety, depression, self-harm and suicide before ending her life in November 2017.

Social media platforms are under increasing pressure to automatically remove content that could glamorise self-harm as quickly as possible, rather than relying on tip-offs from users.

Instagram is using machine learning technology to help proactively find and remove more harmful suicide and self-harm content

An inherent issue with Instagram’s algorithms is that users are directed to a stream of posts related to a single post they’ve viewed. 

Just as a user who shows an interest in a particular footballer by clicking on their post sees more football-related posts, self-harm or suicide-related posts can end up bombarding a user’s feed.
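
The sketch below illustrates that feedback loop in miniature: a naive ranker that up-weights topics a user has already engaged with will quickly fill the top of the feed with more of the same. The topic labels, weights and data are invented for illustration and bear no relation to Instagram’s real ranking system.

```python
# Minimal sketch of the amplification effect described above: engagement with
# one topic makes that topic dominate the feed. Not Instagram's real ranker.
from collections import Counter


def rank_feed(candidate_posts, engagement_history, top_k=5):
    """Score candidates by how often the user engaged with each topic."""
    topic_weight = Counter(engagement_history)  # e.g. {"football": 3}
    scored = sorted(candidate_posts,
                    key=lambda p: topic_weight[p["topic"]],
                    reverse=True)
    return scored[:top_k]


candidates = (
    [{"id": i, "topic": "football"} for i in range(10)]
    + [{"id": 10 + i, "topic": "cooking"} for i in range(10)]
    + [{"id": 20 + i, "topic": "self-harm"} for i in range(10)]
)

# Three clicks on football posts and the whole feed becomes football;
# engagement with self-harm posts would dominate it in exactly the same way.
print([p["topic"] for p in rank_feed(candidates, ["football"] * 3)])
# -> ['football', 'football', 'football', 'football', 'football']
```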

Molly’s father, Ian, who now campaigns for online safety, has previously said the ‘pushy algorithms’ of social media ‘helped kill my daughter’. 

Instagram boss Adam Mosseri said today that fixing this dangerous loophole is an ‘important step’ but that the company wants to do ‘a lot more’. 

‘We want to do everything we can to keep people safe on Instagram,’ said Mosseri in a blog post announcing the roll-out.

Molly Russell, 14, from Harrow, north-west London, looked at social media posts about self-harm too extreme for lawyers or police to view for long periods before she took her own life, a coroner’s court heard

THE TRAGIC DEATH OF MOLLY RUSSELL 

Molly Rose Russell was found dead in her bedroom in 2017 after viewing disturbing self-harm images on Instagram.  

Her family later found she had been viewing material on social media linked to anxiety, depression, self-harm and suicide.

In a devastating note, she told her parents and two sisters: ‘I’m sorry. I did this because of me.’

Her father Ian accused Instagram of ‘helping to kill her.’ 

Ian has demanded that web giants take more responsibility for eradicating harmful material from the internet. 

Algorithms used by Instagram enabled Molly to view more harmful content, possibly contributing to her death.

The Molly Rose Foundation, dedicated to suicide prevention, was set up in Molly’s memory. 

‘We’ve worked with experts to better understand the deeply complex issues of mental health, suicide, and self-harm, and how best to support those who are vulnerable.

‘Our technology finds posts that may contain suicide or self-harm content and sends them to human reviewers to make the final decision and take the right action. 

‘Until now, we’ve only been able to use this technology to find suicide and self-harm content outside of Europe.

‘Today in the EU, we’re rolling out some of this technology, which will work across both Facebook and Instagram.’ 

Between April and June this year, around 90 per cent of the suicide and self-harm content Instagram took action on was found by its own technology before anyone reported it.

‘But our goal is to get that number as close as we possibly can to 100 per cent,’ Mosseri said.       
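
For clarity, the ‘proactive’ figure Mosseri refers to is simply the share of actioned posts that were surfaced by the platform’s own technology rather than by user reports, as the small sketch below shows. The numbers used here are made up for illustration.

```python
# Back-of-the-envelope sketch of the metric quoted above: the proactive rate
# is the share of actioned posts found by the platform's own technology
# before any user report. The figures below are invented for illustration.
def proactive_rate(found_by_technology: int, found_via_user_reports: int) -> float:
    total_actioned = found_by_technology + found_via_user_reports
    return found_by_technology / total_actioned


# e.g. 9,000 posts surfaced proactively vs 1,000 via reports -> 90 per cent
print(f"{proactive_rate(9_000, 1_000):.0%}")
```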

Instagram first rolled out the technology in 2017 to countries outside the US (except in Europe) to help identify when someone might be expressing thoughts of suicide.

Outside of Europe, a team of human reviewers can connect the Instagram user who posted the content to local organisations that can help or, in severe cases, call emergency services.

But Instagram told the UK’s Press Association news agency that this part of the tool is not available as part of today’s roll-out because of data privacy considerations linked to GDPR. 
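
The difference between the two regimes can be summarised as a routing decision, sketched below. The regions, labels and logic here are assumptions made purely for illustration, based only on what is described in this article, not on Instagram’s actual workflow.

```python
# Hedged sketch of the workflow difference described above: outside Europe,
# proactively flagged posts can go straight to human reviewers, who may point
# the user to local support organisations or, in severe cases, escalate to
# emergency services; inside the EU, under the GDPR constraints described
# here, that review path is only triggered by a direct user report.
def route_flagged_post(region: str, flagged_by: str, severity: str) -> str:
    in_eu = region == "EU"
    if in_eu and flagged_by != "user_report":
        return "no proactive human review (GDPR: wait for a user report)"
    if severity == "severe":
        return "human reviewer may contact emergency services"
    return "human reviewer shares local support organisations with the user"


print(route_flagged_post("EU", "classifier", "severe"))
print(route_flagged_post("US", "classifier", "severe"))
```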

Molly’s father Ian (pictured in 2019) has been a vocal campaigner for reform of social media platforms and set up the Molly Rose Foundation in her memory

‘In the EU at the moment, we can only use that mix of sophisticated technology and human review element if a post is reported to us directly by a member of the community,’ said Instagram’s public policy director in Europe, Tara Hopkins.

If a human reviewer were to assess whether to send additional resources to a user, regulators could consider this a ‘mental health assessment’ and therefore special category data, which receives greater protection under GDPR.

Hopkins said the company is in discussions with the Irish Data Protection Commission (IDPC) – Facebook’s lead regulator in the EU – and others over the tools and a potential introduction in the future.

‘There are ongoing conversations that have been very constructive and there’s a huge amount of sympathy for what we’re trying to achieve and that balancing act of privacy and the safety of our users,’ she said.

Instagram has received intense public backlash over the 2017 suicide of Molly Russell (pictured here aged 11)

Mosseri said not having the full capabilities in place in the EU meant it was ‘harder for us to remove more harmful content, and connect people to local organisations and emergency services’.

Such interventions could help avoid tragic deaths like Molly Russell’s.

Instagram is now in discussions with regulators and governments about ‘how best to bring this technology to the EU, while recognising their privacy considerations’.

Facebook and Instagram are among the social media platforms to come under scrutiny for their handling of suicide and self-harm material, particularly its impact on vulnerable users and young people.

Those fears intensified after the death of Molly Russell, who was found to have viewed harmful content online.

Instagram boss Adam Mosseri (pictured) said GDPR restrictions blocking a fully-fledged version of the technology in Europe makes it ‘harder to remove more harmful content’

Molly looked at social media posts about self-harm too extreme for lawyers or police to view for long periods before she took her own life, a coroner’s court heard. 

In September 2020, Facebook and its family of apps were among the companies to agree to guidelines published by Samaritans in an effort to set industry standards on how to handle the issue.

Hopkins said Instagram was trying to balance its policies on self-harm content by also ‘allowing space for admission’ by people who have considered self-harm.

‘It’s okay to admit that and we want there to be a space on Instagram and Facebook for that admission,’ she said.

‘We’re told by experts that can help to destigmatise issues around suicide. 

‘It’s a balancing act and we’re trying to get to the right spot where we’re able to provide that kind of platform in that space, while also keeping people safe from seeing this kind of content if they’re vulnerable.’

SAMARITANS’ GUIDELINES FOR TECHNOLOGY PLATFORMS 

The following is a list of guidelines set out by UK suicide prevention charity Samaritans in September 2020 for tech companies to safely manage self-harm and suicide content online:

1. Understand the impact of self-harm and suicide content online

2. Establish clear accountability

3. Have a robust policy for addressing self-harm and suicide content

4. Put user friendly processes in place to report self-harm and suicide content

5. Effectively moderate all user-generated content, considering both human and AI approaches

6. Reduce access to self-harm and suicide content that could be harmful to users

7. Take steps to support user wellbeing

8. Communicate sensitively with users in distress, taking a personalised approach where possible

9. Find ways to work collaboratively and demonstrate transparency in approaches to self-harm and suicide content

10. Establish processes to support the wellbeing of staff exposed to self-harm and suicide content 
