Must Know: Online harms

The purpose of this guide is to raise awareness about online harms and empower councillors by providing an introduction to online risks, an overview of the Online Safety Bill, key considerations, signposting to useful resources, as well as a checklist to support effective decision making.

Introduction

Safeguarding and the protection of children and vulnerable adults are the responsibility of everyone, whether in a personal (moral) or a professional (statutory) capacity. But the global nature of the internet, the ease of communication, the fast-paced nature of technology and the fact that connected devices are integral to people’s lives mean the online world has added complexity to safeguarding and protecting children and vulnerable adults.

Illegal and unacceptable content and activity is widespread online, the most serious of which threatens national security and the physical safety of children. Online platforms can be, amongst other things:

  • a tool for abuse and bullying
  • a means to undermine democratic values and debate, including mis and disinformation
  • used by terrorist groups to spread propaganda and radicalise
  • a way for sex offenders to view and share illegal material, or groom and live stream the abuse of children
  • used by criminal gangs to promote gang culture and incite violence.

Additionally, platforms can be used for other online behaviours or content which may not be illegal but may be detrimental to both children and adults, for example:

  • content and interactions with the potential to harm mental health and wellbeing
  • echo chambers and filter bubbles driven by algorithms, where people are presented with one side of an argument rather than a range of opinions.

It is widely recognised that the internet and the world wide web were never designed with children in mind; many protective measures are reactive and inconsistent rather than proactive. Historically, tech companies have largely self-regulated in relation to content, contact and conduct of users, seemingly only responding when there is public outcry. A large proportion of the blame is often attributed to legislation from the United States, specifically The Communications Decency Act (CDA) 1996 Section 230 and whilst this is US legislation, the effects are worldwide given that many tech companies are based in the US.

Often cited as “the 26 words that made the internet”, CDA s230 was well-intentioned, allowing users freedom of speech whilst protecting the platforms on which user-generated content is published. Whilst it provides no protection for illegal content, without CDA s230 there would be no Amazon reviews or Facebook comments, YouTube videos would be severely restricted, and much more.

What is CDA s230?

Section 230 of the Communications Decency Act 1996 is legislation in the United States which provides immunity to owners of any ‘interactive computer service’ for anything posted online by third parties:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

But this creates a challenge: whilst there are statutory measures to prevent or remove content that is illegal, what about content that is legal but potentially harmful? Most interactive online services have age restrictions, commonly 13, to comply with data protection and privacy laws (the US Children's Online Privacy Protection Act and, in the UK, the General Data Protection Regulation, GDPR), yet very few have effective age-verification processes or parental controls. Furthermore, potentially harmful content doesn’t just relate to children; it can significantly affect adults too, for example misinformation and disinformation related to COVID-19, the efficacy of vaccines, election campaigns and much more.

The UK has started to lead the way in this area with the introduction of the Age-Appropriate Design Code. Often called the Children’s Code, this is a statutory code of practice under the Data Protection Act 2018, brought into force in September 2020, which puts a baseline of protection in place automatically, by design and by default.

But the Children’s Code is UK legislation; the internet is global. Whilst everyone is at some level of risk, front of mind for any service delivery should be protections for those who are vulnerable, children and adults alike. Evidence is clear that those with a real-world vulnerability are not only more likely to experience online risks but suffer more than their non-vulnerable peers. 

But what is meant by ‘vulnerable’? In the context of online harms, vulnerability is widespread; the term is often used in relation to children and/or adults with additional needs, children in care, young people in pupil referral units and more. But anyone can be vulnerable: consider, for example, an election period and a prospective elected member who uses public social media as part of the campaign. That person is now vulnerable to abuse, harassment and much more. During the 2019 General Election campaign 4.2 million tweets were sampled in a study; candidate abuse was found in nearly 4.5 per cent of all replies, compared with just under 3.3 per cent in the 2017 General Election.

Purpose of the guide

The purpose of this guide is to raise awareness and empower councillors by providing:

  • an introduction to online risks and harms with examples
  • an overview of the Online Safety Bill
  • signposting to helpful resources and legislation
  • key implications and considerations for councillors
  • a checklist for councillors to support them in effective decision making.

Online harms

If you were asked to make a list of real-life risks, it would be a never-ending list. This is similarly the case online. One of the key messages to understand is: almost any behaviour can be enacted online.

For example, can self-harm be enacted online? The answer is yes: it can be carried out by an individual sending vile or derogatory messages to themselves that others can see. These messages are often sent using other accounts, sometimes anonymised, and there is emerging evidence that this is a growing problem, particularly amongst teenagers.

This gives rise to questions, for example how many professionals are aware of this? Is this taken into consideration during interventions? How widespread is it within your communities? Anecdotal evidence suggests it isn’t widely known about and a recent review of children and youth services shows there is poor awareness of the breadth of online risks by professionals and frontline staff, with a narrow focus on child sexual exploitation.

The example above gives you an understanding of the enormous scope of online risks that may lead to a harmful situation. To help our understanding of risk it is useful to simplify it into categories, commonly referred to as the 3Cs, which are:

  • content (person as recipient)
  • contact (person as participant)
  • conduct (person as actor).

Whilst these categories are often spoken of in the context of children, they apply equally to adults and can be further sub-divided into commercial, aggressive, sexual and values risks, for example:

 

  • Content: advertising (commercial); hateful content (aggressive); sexual content (sexual); misleading information (values)
  • Contact: tracking (commercial); being bullied or harassed (aggressive); being groomed or exploited (sexual); self-harm (values)
  • Conduct: hacking (commercial); bullying or harassing others (aggressive); stalking (sexual); image-based abuse, health and wellbeing (values)

 

These examples only scrape the surface of the risks and harms that we currently know about; as time passes we learn about new risks and harms, particularly as fast-moving technology diversifies in terms of reach and impact.

Risk and harm

Offline and online, risk is inevitable in all of our lives, none more so than when growing up. Without taking risks, children and adults would not know how to recognise, risk-assess and mitigate situations where there is a likelihood of harm, and therefore build resilience. Over the years there have been a number of campaigns to increase the resilience of young people in particular. One such example is the Rise Above campaign from Public Health England and the Rise Above for Schools programme, which gave schools resources to help build crucial life skills, boosting resilience and improving mental health and wellbeing across a number of different aspects such as bullying and cyberbullying, positive relationships and friendships, body image in a digital world and more.

These campaigns and programmes are vital for building awareness, because risk doesn’t necessarily mean harm: risk is a probability of harm, and a range of factors can make a person more resilient or more vulnerable. For example, a 2020 UK study of 6,000 young people aged 13-16 showed that some had shared nude images of themselves because they wanted to, either within a relationship, for fun, or because they thought they looked good. The majority stated that nothing bad happened and that they would therefore ignore online safety advice given at school or at home. In contrast, those with one or more vulnerabilities, in their eagerness to be accepted, are far more likely to be pressured or blackmailed into sharing nudes, often with terrible consequences such as further blackmail or being bullied. Within this study, among those who shared nude images, 18 per cent were pressured or blackmailed into it.

Child exploitation

What is it?

There are various forms of child exploitation which follow the same general pattern: an offender takes advantage of an imbalance of power to coerce, deceive or manipulate a child. In this part of the guide, we will briefly look at child criminal exploitation (CCE) and child sexual exploitation (CSE). Child exploitation often (but not always) involves grooming, which is when an emotional connection is built to gain trust.

Child criminal exploitation

This includes being forced into shoplifting or threatening other young people. Increasingly it includes forcing young people and vulnerable adults into moving drugs, known as county lines, where gangs and criminal networks export illegal drugs into other areas of the UK and use dedicated mobile phone lines, called deal lines. Children and young people can be contacted and groomed online before taking part in ‘real world’ activity.

Child sexual exploitation

As technology advances, new variations of CSE emerge. Over recent years there has been a huge rise in CSE via live streaming, in which victims are coerced into taking and sharing indecent images/videos (known as self-generated) using apps or online services with webcams. Although all forms of CSE are of significant concern, live streaming is fast becoming the most significant; according to the UK-based Internet Watch Foundation’s April 2021 report, sibling self-generated abuse is on the rise, where a child is groomed and exploited to abuse their brother or sister on camera.

What does the research say?

Knowing real figures is impossible: victims often won’t tell anyone, for reasons such as shame or guilt, or may not be aware they are being groomed or exploited. The National Crime Agency estimates there are over 1,000 county lines in the UK, and that between 550,000 and 850,000 people in the UK pose a physical and online sexual threat to children. But the internet is global and communication with a child is easy. In 2020 the UK’s Internet Watch Foundation dealt with 68,000 reports of illegal self-generated images/videos, a 77 per cent increase on 2019. In reference to sibling abuse, its April 2021 report showed that between September and December 2020, 511 self-generated images and videos were determined to involve siblings.

Bullying and intimidation

What is it?

Bullying and intimidation involves the repetitive, intentional hurting of one person or group by another person or group, where the relationship involves an imbalance of power. It can take many different forms, from the relatively easy to spot, for example abuse and threats on public social media or websites, to the more difficult to uncover such as private messaging or anonymous apps. It can also happen within online gaming, for example continually being targeted and killed early in the game (referred to as ‘griefing’ by young people).

Sometimes it will take more indirect forms, such as the passing around of gossip and rumours, or isolating people from their online social groups, for example leaving them out of a conversation amongst friends in a WhatsApp group. Motivations for bullying and intimidating behaviour are wide-ranging, but commonly include attitudes towards:

  • appearance
  • sexuality
  • race
  • culture
  • religion.

What does the research say?

The overall trend is one of a problem that is increasing. Some studies indicate online bullying has overtaken traditional real-world bullying, while other studies indicate most bullying is face-to-face, with ‘online’ used as an extension.

The annual Ditch the Label study of 13,387 young people indicates that 25 per cent of young people have been bullied and 3 per cent have bullied others. Of those who had been bullied:

  • 44 per cent said they felt anxious
  • 36 per cent said they felt depressed
  • 33 per cent had suicidal thoughts
  • 27 per cent had self-harmed
  • 11 per cent had attempted suicide.

Extremism and radicalisation

What is it?

Defined as ‘the vocal or active opposition to fundamental British values, including democracy, the rule of law, individual liberty and mutual respect and tolerance of different faiths and beliefs,’ extremism refers to an ideology considered to be outside the mainstream attitudes of society.

Radicalisation is the process where someone changes their perception and beliefs to become more extremist.

Extremists use the online space to target and exploit vulnerable people, and to spread divisive propaganda and disinformation.

There are no typical indicators that point to a risk of radicalisation, but vulnerabilities are often exploited which would include:

  • low self-esteem or social isolation
  • being a victim of bullying or discrimination
  • confusion about faith or identity.

Equally, radicalisation can be difficult to spot but indicators would include:

  • isolation from family and friends
  • unwillingness or inability to discuss their views
  • increased levels of anger
  • talking as if from a scripted speech
  • sudden disrespectful attitude towards others
  • increased secretiveness, particularly around internet use.

Beyond radicalisation, online extremist narratives can stoke division and sow mistrust between communities, impacting on local cohesion and helping to fuel hate crime and other forms of criminality.

What does the research say?

The annual research from Hope Not Hate (State of Hate 2021) concludes that the pandemic has quickened the demise of many traditional far-right groups whilst younger, more tech-savvy activists have thrived, often using unmoderated platforms or gaming sites. These include a new extreme-right group called the National Partisan Movement, an international Nazi group made up of 70 teenagers from 13 countries, eight of whom are in the UK. Whilst this may seem like a low number, it is worth noting that activists on some platforms have a considerable number of followers. Furthermore, the Commission for Countering Extremism study into how extremists exploited the pandemic shows that they used it to spread disinformation, incite hatred and divide communities, creating conditions conducive to extremism.

Misinformation and disinformation

What is it?

Misinformation and disinformation are widespread online, often circulated via social media or YouTube videos and can cover every conceivable or inconceivable topic. 

The terms misinformation and disinformation are generally quite similar and are often used interchangeably, but there is an important distinction:

  • misinformation refers to false or out of context information, which is presented as factual, regardless of an intent to deceive
  • disinformation is false information where there is intent to deceive.

It can sometimes be difficult to discern between what is true or false, misleading or an opinion, up-to-date or out-of-date information and even, particularly online where there can be a lack of emotional contact, a joke or malicious intent.

The consequences are varied but include mistrust, confusion, fear and bias, which can lead to political polarisation, the undermining of democracy and much more.

The pandemic is a good example of the spread of mis and disinformation but there are many other examples where sharing can be heightened, for example elections and not-quite-truths, or bad actor interference, such as foreign states spreading disinformation by targeting particular groups on social media.

What does the research say?

 

It is impossible to know the scale of mis and disinformation. However, there are triggers, such as elections and the pandemic, which give a greater understanding. In week one of lockdown, Ofcom reported that nearly 50 per cent of people were seeing information online they thought to be false or misleading about the pandemic, rising to almost 60 per cent for 18–36 year olds. An analysis of the most viewed YouTube videos related to coronavirus found that over 25 per cent of the top videos contained misinformation, with views totalling 62 million.

In relation to children, misinformation and disinformation are commonplace, often enacted through so-called online challenges or enticements such as gifts. A relatively common example is the enticement of free in-game currency, eg FIFA coins, Robux and V-Bucks. These offers often circulate on YouTube and other social media channels, where a link is shared for the child to enter their username and password in order to receive the free gift. However, this is phishing: a false link used to deceive a person into revealing their user credentials.

Addiction

What is it?

Addiction is most commonly associated with aspects such as drugs, nicotine, gambling and alcohol, but it has also become a commonly used term to describe a broad range of online behaviours, such as online gaming addiction (internet gaming disorder), online gambling addiction, social media addiction, mobile phone addiction, or even just internet addiction in general, which then spans into other areas such as screen-time.

It could be argued that the online world exacerbates an existing addiction or leads to addictive behaviour such as gambling, but currently the science is contradictory. More often than not, the term addiction is used in the colloquial sense, particularly by concerned parents.

What does the research say?

Online addiction is an area with many different arguments and little agreement amongst scientists: for example, does online activity (eg social media, smartphone use) cause addiction, or is addiction merely correlated with it?

The causation/correlation argument is an important one. With the exception of internet gaming disorder, which has been criticised by some scientists due to a lack of robust evidence, there are no recognised online disorders. In the words of leading UK psychologist, Dr Amy Orben, “There is very little evidence and even less high quality, robust and transparent evidence”. However, there is widespread concern in relation to the use of social engineering tactics by tech companies, such as nudge techniques and persuasive design to keep users within apps and games for the purpose of making money. A number of countries, such as Belgium, have banned the use of ‘loot boxes’ in games which are thought to promote gambling-like behaviours, particularly with children. In the UK the Culture Secretary has launched a review to bring the Gambling Act 2005 up-to-date to ensure the Act is fit for the digital age, which includes loot boxes and many other areas.

Fraud and identity theft

What is it?

Regardless of age, your identity is one of your most important assets. Put simply, ‘your name, address and date of birth provide enough information to create another you’. These details can be used to open bank accounts, take out loans and mobile phone contracts, order goods and much more.

Identity theft is when your personal/private details are stolen. Identity fraud is when those stolen details are used for fraudulent purposes.

Criminals are increasingly using technology in more complex ways, often using social engineering tricks such as fear or urgency to lure people into revealing personal and private information, for example phishing scams. But whilst many people are aware of the basic safeguards to protect their identity, such as storing documents safely and shredding or destroying old documents, this information can be relatively easy to find online. For example, you may not publish your birthday celebration on Facebook or Instagram, but a friend may wish you a happy 40th birthday on a public account, meaning your date of birth is now public. Or consider the photo taken in a restaurant over a lovely meal, with a credit card sitting on the table ready to pay the bill. These are innocent, everyday examples, yet the consequences can be significant. Using very simple search techniques, criminals can in many cases find personal and private information with relative ease. Equally, company data breaches can be a key enabler of fraud, something which individuals have little control over.

What does the research say?

Fraud is the most commonly experienced crime in the UK. The Crime Survey for England and Wales estimated that there were 3,863,000 fraud offences in the year ending June 2019, but the number reported to and collated by the National Fraud Intelligence Bureau was 740,845. According to Cifas, the UK’s fraud prevention service, identity theft accounts for the majority of fraud cases, with the most common method being emails or texts (known as phishing and smishing) pretending to be from a bank or service provider.

Online Safety Bill

In April 2019 the Online Harms White Paper was published, proposing that all technology companies, big or small, would have a duty of care to their users commensurate with the role those companies play in our daily lives. After a period of consultation, the Government released its full response to the white paper on 15 December 2020, and on 12 May 2021 it published the draft Online Safety Bill.

What does the bill propose?

The Online Safety Bill establishes a new regulatory framework encompassing plans for a system of accountability and oversight for technology companies which moves beyond self-regulation and with the aim of preventing harm to individuals in the United Kingdom.

This framework will make clear to companies their responsibilities to keep users in the UK safer online by imposing duties of care in relation to illegal content and content that is harmful to children, whilst also imposing duties on providers to protect rights to freedom of expression and privacy. Providers of user-to-user services (a broad range of businesses including social media platforms, dating apps and online marketplaces) which meet specified thresholds will have additional duties imposed specifically in relation to content that is harmful to adults, content of democratic importance and journalistic content.

How will it do it?

Whilst there are many different aspects within the Online Safety Bill, the main ones are:

Statutory duty of care

All companies in scope will have a statutory duty of care towards their users, requiring those companies to prevent illegal content and activity and ensure that children are not exposed to harmful content. Broadly, there are a number of duties:

  • illegal content risk assessment and content duties
  • the duty of rights to freedom of expression and privacy
  • the duty of reporting and redress
  • record-keeping and review.

For services likely to be accessed by children there are two additional duties:

  • children’s risk assessment
  • duties to protect the online safety of children.

Finally, category one services (which will be defined in secondary legislation but are likely to be the largest global platforms) will have further duties:

  • adults risk assessment
  • duties to protect the online safety of adults
  • duties to protect democratic content
  • duties to protect journalistic content.

Codes of practice

Produced by Ofcom, these statutory codes will outline the systems and processes that companies need to adopt in order to fulfil their duty of care. There will not be a code of practice for each category of harmful content.

Independent regulator

Accountable to Parliament, Ofcom will oversee and enforce compliance, funded from industry fees placed upon companies above a threshold based on global annual revenue. The primary duty of Ofcom will be to improve the safety of users of online services and within that duty there will be a number of functions, including:

  • setting out what companies need to do to comply
  • establishing a framework for transparency, trust and accountability
  • ensuring effective reporting and redress mechanisms
  • commissioning research to improve understanding of online harms
  • enforcement, the aim of which is to encourage compliance and positive cultural change, including civil fines of up to £18 million or 10 per cent of annual global turnover, whichever is higher, irrespective of where in the world the company is based.

What is in scope?

Once produced, the legislation will set out a general definition of harmful content and activity. There will be no exhaustive or fixed list as this would prevent the ability to respond quickly to new forms of online harms. The general definition will apply to content and activity where there is a ‘reasonable, foreseeable risk of significant adverse physical or psychological impact on individuals’. A limited number of priority categories will be set in secondary legislation which will cover:

  • criminal offences, including child sexual exploitation and abuse, terrorism, hate crime and sale of drugs and weapons
  • harmful content and activity affecting children, eg pornography, violent content
  • content and activity that is legal when accessed by adults, but which may be harmful to them, eg content about eating disorders, self-harm, suicide.

What isn’t in scope?

There are a number of aspects that are not in scope due to existing legislative, regulatory and other governmental initiatives in place:

  • harms to organisations
  • harms resulting from breaches of intellectual property rights
  • harms resulting from breaches of data protection legislation
  • harms resulting from breaches of consumer protection law
  • harms resulting from cyber security breaches or hacking.

Implications

All councils are committed to providing the best possible access to their services in order to ensure parity, economy of scale and democratic freedoms. This means that consideration needs to be given to how all users are able to access services, how they are protected and how risks are mitigated.

But as you have seen within this guide, the scope of online risks and harms is enormous and the potential impact on individuals, communities, public authorities and others is significant. Meaningful intervention requires a collaborative approach and whilst the Online Safety Bill should have a positive impact, we cannot rely on technical solutions to wholly prevent what are largely behavioural issues. Furthermore, whilst the Online Safety Bill initially targets those tech companies which have the most impact on our daily lives, any service provider, including public authorities, should give due consideration to a number of aspects to ensure they are delivering the most appropriate and cost-effective services, including:

  • Awareness – do councillors, officers and frontline staff in social care, children’s and adult services, and other professionals, have a good, up-to-date understanding of online risks and harms? This is the cornerstone, the one aspect that affects all others and is fundamental to any service delivery, without which priority areas cannot be established and meaningful interventions put in place.
  • Scrutiny and challenge – do councillors offer scrutiny and challenge in relation to programme and project development and online harms? Are the right questions being asked and are there effective risk mitigations in place?
  • Collaboration – is there effective collaboration between teams, including multi-agency teams? As mentioned previously in this guide, almost any behaviour can be enacted online; is this taken into account and recorded appropriately to inform future initiatives and interventions?
  • Funding – is funding allocated to those priority areas and services? Do initiatives and interventions have a positive impact and are they cost effective?
  • Service delivery – when providing access to information or engaging with constituents via third-party platforms, for example Facebook, Twitter or Instagram, the platforms will be bound by the Online Safety Bill once it becomes law. However, it would be best practice for councillors and officers to understand the Bill and to risk assess the use of these platforms. Equally, any in-house platforms should be risk assessed in relation to the content, the method of engagement and, importantly, moderation, to ensure that engagement does not run counter to the aims of the Bill.
  • Corporate parenting – looked-after children will not only access services online but will most likely be given one or more devices (which could be owned by the local authority or could become owned by the child). How are children taught about keeping themselves safe online? What settings have been used to prevent access to unsuitable content?

Personal responsibility and online harms

Though the Online Safety Bill does not specifically tackle issues around councillors’ personal responsibilities and behaviours there are significant reputational risks that all councillors will want to be aware of. Councillors should be accountable and adopt the behaviours and responsibilities associated with the role and this also applies in the online context.

Modelling behaviour in the online world is crucial to promote best practice and to protect elected members’ individual reputations and those of their councils. Councillors will want to avoid promoting or assisting in any of the risks mentioned above, ensuring that they are not, for example, promoting mis or disinformation or deploying any bullying tactics.

Councillors should seek support from social media or communications professionals if they need to and should promote and signpost credible sources of support and information to others.

The LGA have developed a range of guidance to support councillors in their online communications.

Checklist

These questions are designed to help you to support your organisation in developing best practice.

Awareness 

Raising awareness of online harms and the risks they can pose to individuals is crucial. Therefore, all stakeholders need to be aware of the risks and their impact, and this needs to be considered within the development of projects and campaigns.

  • Are relevant stakeholders, including councillors and members of staff, aware of the risks of online harms?
  • Have identified stakeholders been trained in online risks?
  • Are residents aware of the risks of the broad range of online harms and how they can report incidents?
  • Is the council promoting and exemplifying best practice?
  • Has the council considered the implications of the Online Safety Bill and the compliance issues?

Scrutiny and challenge

Scrutiny and challenge is a key role of all councillors. Online harms need to be factored in and challenged in the same way as other projects and programmes.

  • Do councillors have confidence that any project or programme of work has considered the risk of online harms and put in place effective mitigation? (For example, do communications plans consider risks around misinformation, or do financial inclusion plans consider the risks of identity fraud?)

Collaboration

Lessons learnt should be shared across multi-disciplinary and multi-agency teams so that the most effective responses to online harms or the risk of online harms can be identified.

  • Are there cross-council and multi-agency approaches in place to mitigate risks and tackle online harms where appropriate?
  • How is learning shared to ensure effective approaches?

Funding 

  • Is funding available to tackle online harms, for example through communications campaigns, youth services or offline support to deal with the impacts of harms?
  • Where programmes to tackle online harms are introduced, what evidence is available that these are effective?

Service delivery 

Where funding is assigned to digital projects, or projects that involve technology, it is essential that risks are mitigated for users.

  • Do digital projects factor in mitigating online risks, eg adopting the Age Appropriate Design Code as a method of best practice?
  • Is digital the most effective way of delivering the service?
  • Are users protected?
  • Are partners factoring in online risks, and has there been due diligence on their arrangements?

When deciding how to deliver services to constituents and users, it is important that digital service delivery is considered, especially where a third-party provider will be used.

  • Has the platform been risk assessed?
  • Are there plans in place to help to mitigate online risks?
  • How will any online incidents be handled?
  • Are there additional safeguards that need to be in place to support adults at risk and children?

Governance 

Online harms and mitigating risks should be factored across all of the established governance arrangements.

  • Are leaders aware of their responsibilities around online harms?
  • Are online harms factored into project, campaign and programme initiation?
  • Are online harms risk assessed?
  • Do online harms feature in the risk log?
  • Are online harms addressed in step with offline safeguarding arrangements?

Annex A: Resources

Annex B: The law

Extremism and radicalisation

The Prevent duty refers to Section 26 of the Counter-Terrorism and Security Act 2015, which states that specified authorities, including local authorities, colleges and universities, adult education providers and sub-contractors, must have due regard to the need to prevent people from being drawn into terrorism.

Child exploitation

Child criminal exploitation is covered by the Modern Slavery Act 2015. Child sexual exploitation is covered within the Sexual Offences Act 2003.

Bullying and intimidation

Intimidation may constitute an offence under the Protection from Harassment Act 1997, but unlike in some other countries there’s no specific crime of bullying. Perpetrators may be prosecuted under a number of pieces of legislation, for example:

  • Protection from Harassment Act 1997
  • Malicious Communications Act 1988
  • Computer Misuse Act 1990
  • Defamation Act 2013.

Misinformation and disinformation

There is currently no legislation in relation to mis and disinformation, but this is what the Online Safety Bill hopes to tackle, by imposing a legal duty of care on companies, ensuring disinformation is tackled effectively while respecting freedom of expression and promoting innovation.

Addiction

There is little in the law relating to addiction in the context of this guide. However, in relation to gambling-like behaviours within online games, the Digital, Culture, Media and Sport (DCMS) Select Committee inquiry into Immersive and Addictive Technologies launched a call for evidence in June 2020 to understand the impact of loot boxes, as part of a commitment to review the Gambling Act 2005. This consultation ended on 22 November 2020 and at the time of writing this guide the feedback is under review.

Fraud and identity theft

There are numerous laws which cover various aspects of fraud and identity theft, but the main one is the Fraud Act 2006, which contains two provisions relevant to identity crime.

About the authors

This guide was produced by Charlotte Aynsley and Alan Mackenzie.

Charlotte Aynsley – Rethinking Safeguarding

Charlotte has a broad range of experience in the field of digital safeguarding, having spent the last 10 years supporting Government, local authorities, charities and schools to keep children safe online. Her work has included high-profile initiatives such as the NSPCC’s Share Aware campaign, the It Starts With You online safety campaign from Walt Disney’s Club Penguin, and national safeguarding advice on sexting in schools and colleges.

More recently Charlotte has been working with high-profile organisations including the National Cyber Security Centre (NCSC), NCA-CEOP (National Crime Agency - Child Exploitation and Online Protection Centre), The Prince’s Trust, Girlguiding, the Mayor of London and the NSPCC, to develop leading-edge safer platforms, advice and resources for professionals working with children to keep them safer online.

Alan Mackenzie – E-safety Adviser

Alan is a consultant with extensive experience working in the public, private and third sectors, specifically in relation to online safety and the use of technology by children, young people and adults. He has a local authority background: after retiring from the Royal Navy in 2005 he was the Service Manager for 367 schools on behalf of Children’s Services, with responsibility for county-wide online safety, working in partnership with the Safeguarding Children’s Board, the Police, the third sector and others to fulfil national and county council priorities in relation to policy, education and awareness. In 2011 Alan became an independent consultant with a focus on the education of children, young people, staff, parents, governing bodies and trustees, as well as ensuring that schools are fulfilling their statutory obligations by conducting comprehensive audits.

Alan is regularly commissioned for projects by charities such as the NSPCC and by organisations related to online safety and online harms, including writing position papers and white papers, risk assessments, educational resources and briefings for a wide range of organisations.