In July 2014, the Ice Bucket Challenge raised both global awareness and over $220M in the fight against ALS. Since that summer, several other businesses and charities have tried to replicate its success, but no challenge has harnessed quite the same viral attention.
In the years since the ALS Ice Bucket Challenge, the “social media challenge” has strayed from its original spirit of lighthearted fun toward meaningless and sometimes dangerous dares and pranks. Many are harmless, but a growing number place social media users, many of whom are kids, in real danger of harming themselves and others. Some of these “challenges” have seen willing participants coat their heads in superglue, ingest dangerous substances, or shoot Orbeez at strangers.
As the successful ALS challenge showed, users embrace opportunities to engage with causes and brands in a big way. The seemingly “rules-free” social media environment, however, presents a liability for brands and deters them from fully tapping the power of the online challenge to authentically connect with their target audiences.
How do we better ensure the safety of our online community while fostering an environment that supports creativity and self-expression through video and content creation? The current strategy used by social media platforms is not working. Apps and websites tend to rely on self-policed environments where community members tag or flag questionable content, which is then reviewed by moderators. This approach, however, is fundamentally flawed. While social media platforms have made strides in enforcing their community guidelines by removing offending content, there is still a significant chance of harmful videos reaching users. In its most recent enforcement report, TikTok shared that it removed more than 113M videos from April to June 2022, which accounts for less than 1% of the total number of videos published. Approximately 43% of the videos taken down were reported out of concern for minor safety. Considering the staggering number of videos published every minute, even TikTok’s removal record leaves room for dangerous videos to be uploaded and viewed (TikTok’s proactive removal rate is 89.1%, with 74.7% removed at zero views). The dangerous “Blackout Challenge” has resurfaced time and time again under new names, bypassing security measures. The rapid spread of online content can quickly turn a single post “ember” into an online brushfire that can’t be controlled or extinguished.
With this in mind, it is understandable that brands avoid the challenge space for fear it will become warped and harmful. Tide saw the risk firsthand when social media users started daring each other to eat Tide pods; while the company had no part in creating the challenge, the risk to the brand and the concern for the safety of its customers and audience outweighed any potential reward from the challenge’s virality. Similarly, NyQuil is now responding to reports of its medicine being used as a marinade for cooking chicken in a social media challenge known as “Sleepy Chicken.” Some brands have stepped back from social media entirely in recent years due to the toxicity and bullying that occur on these platforms. Cosmetic company Lush, for example, refocused its use of social media due in part to the sites’ destructive effects, leaving four major platforms: TikTok, Facebook, Instagram, and Snapchat.
These dangerous challenges and other harmful behavior occur on platforms that are policed by algorithms and self-reporting functions. While these mechanisms remove videos after the fact, the only way to stop the spread of malicious and dangerous content is to review everything that is submitted, by both challenge creators and challenge entrants, before it is available to the general public.
What consumers want is a social media challenge app that lets brands connect safely with their thriving online communities of fans: a platform where every challenge and video submission is reviewed by a human before it appears on the app. To protect authenticity and user safety, these human reviewers check each piece of content against predetermined, clearly defined community standards before any viewer sees it. This gives the brand more control over its online experiential marketing campaign and helps insulate it from potential reputational damage.
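For readers who think in terms of systems, the human-review gate described above amounts to a simple pre-publication queue: submissions start hidden, a moderator records a decision, and only approved content ever reaches the public feed. The sketch below is purely illustrative; the `ReviewQueue` class, its status names, and its methods are assumptions for this example, not any real platform’s API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Status(Enum):
    PENDING = auto()    # invisible to viewers until a human decides
    APPROVED = auto()   # passed review against community standards
    REJECTED = auto()   # failed review; never shown publicly


@dataclass
class Submission:
    video_id: str
    creator: str
    status: Status = Status.PENDING


class ReviewQueue:
    """Pre-publication gate: nothing reaches the feed without human sign-off."""

    def __init__(self) -> None:
        self._submissions: dict[str, Submission] = {}

    def submit(self, video_id: str, creator: str) -> None:
        # Every new entry starts PENDING, i.e. hidden from the public feed.
        self._submissions[video_id] = Submission(video_id, creator)

    def review(self, video_id: str, meets_standards: bool) -> None:
        # A human moderator records the decision for one submission.
        sub = self._submissions[video_id]
        sub.status = Status.APPROVED if meets_standards else Status.REJECTED

    def public_feed(self) -> list[str]:
        # Only approved content is ever served to viewers.
        return [s.video_id for s in self._submissions.values()
                if s.status is Status.APPROVED]
```

The design choice is the inversion of the usual moderation model: instead of publishing by default and removing on report, content is unpublished by default and released only on approval.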
While it will never be possible to safeguard all users from bad actors and trolls on social media, simple, fundamental actions can restore the reputation of the online challenge. Doing so would unlock its experiential marketing potential for companies in a protected environment, one free of brand damage and much safer (and more fun) for online community members of all ages.