U.S. Surgeon General Dr. Vivek Murthy released a report in May calling for various stakeholders, including policymakers, technology companies, parents, researchers and youths themselves, to talk more about the adverse effects social media can have on children and teens. What exactly that looks like is still being debated, but the consensus is that tech companies need to do more to provide transparency about the possible harm their products can cause youth and to build in stronger safeguards.
“The most common question parents ask me is, ‘Is social media safe for my kids?’ The answer is that we don’t have enough evidence to say it’s safe, and in fact, there is growing evidence that social media use is associated with harm to young people’s mental health,” said Dr. Murthy in the report.
Dr. Murthy highlighted that children are experiencing harm on social media ranging from exposure to violent and sexual content to cyberbullying and harassment. He also noted that social media usage is linked to disturbances in children’s sleep and to reduced time spent with family and peers. He emphasized, however, that the amount of time children spend on social media matters more than whether they use it at all.
Youth are spending more time online than ever before. A 2023 Pew Research Center survey found one-third of U.S. teens ages 13 to 17 report using at least one of the big five social media platforms (YouTube, TikTok, Snapchat, Instagram and Facebook) almost constantly, and more than nine in 10 say they use the internet at least daily.
According to the 2023 C.S. Mott Children’s Hospital National Poll on Children’s Health, the overuse of devices/screen time (67%), social media (66%) and internet safety (62%) ranked among the top concerns parents reported regarding their children’s well-being. Though these concerns rose to prominence during the COVID-19 pandemic, they have not abated since.
In January, the Senate Judiciary Committee’s hearing with CEOs of Meta, X, TikTok, Snap and Discord addressed concerns about children’s safety online. Though the hearing was focused on child sexual exploitation online, it opened a bigger conversation about the overall safety of children online and what tech companies are doing to address the issue.
Since that hearing, the Committee has reported the following bipartisan bills to prevent online exploitation of children:
- The STOP CSAM Act supports victims and increases accountability and transparency for online platforms.
- The EARN IT Act removes tech’s blanket immunity from civil and criminal liability under child sexual abuse material laws and establishes a National Commission on Online Child Sexual Exploitation Prevention.
- The SHIELD Act ensures that federal prosecutors have appropriate and effective tools to address the nonconsensual distribution of sexual imagery.
- The Project Safe Childhood Act modernizes the investigation and prosecution of online child exploitation crimes.
- The REPORT Act combats the rise in online child sexual exploitation by establishing new measures to help strengthen reporting of those crimes to the Cyber Tipline.
Tech companies are still grappling with this issue, and what decisive actions they will take to protect kids online remains to be seen. These businesses are also dealing with many other challenges, such as the proposed U.S. ban of TikTok, but hopefully safety for youth will remain a top priority. Among the solutions stakeholders want tech platforms to adopt are exercising more control over algorithms so inappropriate content is not presented to underage users, improving content moderation and expanding user age-verification capabilities.
Age verification has been at the center of debate regarding access to sites presenting adult content. As of March, nine states including Texas, North Carolina, Virginia, Indiana and Louisiana have passed laws mandating age verification for accessing adult content. Several other states, such as Florida, Idaho and South Dakota, are considering similar bills.
As technology continues to advance, other considerations must be made regarding children’s safety online. Emerging technologies, such as generative AI, are opening up additional dialogue on handling the potential dangers these tools can bring.
Generative AI opens a whole new Pandora’s box of technology issues to grapple with on civil and ethical grounds, a challenge nearly all of society is scrambling to address. Chief among the concerns is that cybercriminals and scammers can use AI to carry out illicit activity more easily: criminals can use voice-cloning technology to solicit money or personal information, or use deepfake technology to alter pictures and videos to impersonate loved ones. For young people, these dangers include the distribution of sexualized images of minors and the use of the technology to coerce minors into illicit acts.
One thing is certain — legislators, families, researchers and technology companies have a Herculean task ahead of them, as laws and people must adapt to the rapid pace at which technology advances. Everyone involved has their own responsibilities. Our leaders must continue to pass legislation that holds tech companies accountable and advocates for children’s safety online. Caregivers must stay tech-savvy, know what their kids do online and talk with them about cyber safety. Tech platforms must continue to build child safety measures into their products. But we can all agree that for children’s sake, any effort is better than none.
Featured image: Photo by Creative Christians on Unsplash
Edited by: James Sutton & Steven London