Alt-tech social networks have exploded in popularity in recent years. And thanks to lax content moderation rules, these communities have become havens for criminal groups and other bad actors.
This creates new challenges for security teams.
On the one hand, many professionals may not even be aware of some of the new channels where people congregate online. As a result, organizations gathering open-source intelligence could overlook serious threats to their operations.
On the other hand, alt-tech social networks present new opportunities to discover and investigate threat intelligence. This is especially true in forums and communities with relaxed content moderation guidelines.
So where should investigators and security practitioners get started?
In a recent webinar, our very own Director of Strategy and Partnerships Neil Spencer hosted a panel exploring how security teams can use alt-tech social networks to collect intelligence, investigate threats, and mitigate risks facing their organizations. In particular, the group focused on some of the largest and fastest-growing of these communities: Telegram, chan boards, and TikTok.
He was joined by Head of Intelligence at Securitas SIU, Mike Evans; Assistant Deputy Director, Intelligence Operations at Secure Community Network, Nathan Otto; and the Director of Research and Analytics at 2430 Group, Lindsay Wright.
Let’s dive in.
1. Security teams should pay more attention to TikTok
TikTok is a relative newcomer on the social media scene. But at the time of the webinar, over one billion people used the platform worldwide, and it ranked as the most downloaded app of 2022 – well ahead of mainstream rivals like Facebook, Instagram, and Reddit.
Such rapid growth, as you might expect, has attracted the attention of OSINT analysts and security teams. Like any other social media app, TikTok has become a home for problematic activities like bullying, racism, misogyny, conspiracy theories, potentially dangerous acts, identity-based hate, misinformation, criminal activity, and coordinated inauthentic behavior. But the nature of the app makes it difficult to search for and locate relevant content for an investigation. So how do analysts get around this problem?
“For each topic that I like to investigate, I have a different phone,” explains Lindsay Wright, Director of Research and Analytics at 2430 Group.
“This is because on TikTok they use a special algorithm based on your interests that puts more posts in your feed that you will interact with.”
“So if I’m looking for COVID misinformation, for example, I will have one phone dedicated to just querying COVID and vaccine themes. Then I will interact with any videos that could start populating more of that type of content into my feed.”
Once you have identified a target of interest, you can review their profile for relevant information such as hashtags, accounts they follow, previously published videos, and so on. From there, you can use more traditional OSINT collection techniques, such as keyword queries, to uncover threat indicators.
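To make that concrete, here is a minimal sketch of keyword matching against post metadata you have already exported from a profile review. Everything in it – the account name, captions, hashtags, and watchlist terms – is an illustrative assumption, not a real TikTok API response:

```python
# Illustrative post metadata gathered during a profile review (not a real API response).
posts = [
    {"user": "example_account", "caption": "Planning the protest for Saturday", "hashtags": ["rally", "downtown"]},
    {"user": "example_account", "caption": "New vlog is up!", "hashtags": ["daily", "vlog"]},
]

# Terms tied to your collection requirements (assumed for this example).
watchlist = {"protest", "rally", "dox", "raid"}

def flag_posts(posts, watchlist):
    """Return posts whose caption or hashtags contain a watchlist term."""
    flagged = []
    for post in posts:
        caption = post["caption"].lower()
        tags = {tag.lower() for tag in post["hashtags"]}
        if any(term in caption for term in watchlist) or tags & watchlist:
            flagged.append(post)
    return flagged

for hit in flag_posts(posts, watchlist):
    print(hit["user"], "-", hit["caption"])
```

The same pattern works whether the metadata comes from manual review, a scraper, or a commercial collection tool – the watchlist just needs to reflect your own requirements.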
2. Chan boards represent havens for extremists
Chan boards are the wild, wild west of the internet. Also referred to as imageboards, these anonymous forums exist for users to share images and discuss topics of interest. These communities often feature little to virtually no content moderation or posting guidelines.
While chan boards are not necessarily malicious in and of themselves, they have evolved far beyond what they were originally designed for. Anonymity and limited moderation have created breeding grounds for hate speech, child sexual abuse material, and other illicit activities. Chan boards also represent popular places to dox individuals or publish stolen information.
“We have found [doxxings of] congress members across the United States with information like home addresses, family members. Information that is definitely put out there for further harm,” explains the Secure Community Network’s Assistant Deputy Director of Intelligence Operations, Nathan Otto.
“We’ve been able to pass that information along to federal authorities that are not aware of it at that time and that information has actually been removed.”
Perhaps even more concerning, chan boards have also become hotbeds for violent extremist activities.
“Another success story we had on one of the larger chan boards a couple of months ago, there was a board that was dedicated to committing mass attack plots and talking about the steps leading up to that,” Nathan explains.
“We were able to get in there and locate some pretty concerning information that was in reference to the Jewish community – conspiracy theories, beliefs about the Jewish community, anti-semitism, anti-Israel – being the main blame for a lot of this stuff.”
“We were able to gather all of this intel and information and pass that along to authorities. Shortly after, that entire chan board was taken down. That was a huge win for us!”
So where should security teams get started with monitoring chan boards?
Most analysts have probably heard of the largest of these platforms – 2chan and 4chan, for instance – and keeping tabs on them remains a valuable source of threat intelligence. But increasingly, extremists are getting pushed out of these larger communities and migrating to smaller chan boards. Dozens of these alternative platforms exist, so teams will want to learn about and start monitoring them, too.
Additionally, practicing good tradecraft is especially important when monitoring chan boards. These sites are not as anonymous as you might think, so practice smart operational security when visiting them: use a VPN, work from a virtual or cloud-based machine, disguise your device settings with a managed attribution service, and use platforms that protect you against counter-surveillance (like Navigator). And given the ephemeral nature of the content on these sites, be sure to take screenshots of relevant content before it disappears or gets deleted – especially if your suite of tools doesn’t have file or image saving capabilities.
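If your toolset can’t preserve pages automatically, even a small script can help capture ephemeral threads before they vanish. Below is a minimal sketch using Python’s requests library; the URL is a placeholder, and any VPN or managed-attribution routing happens outside the script itself:

```python
# Minimal sketch: save a timestamped local copy of a thread before it disappears.
import datetime
import pathlib

import requests

def archive_page(url: str, out_dir: str = "archive") -> pathlib.Path:
    """Fetch a page and save its raw HTML with a UTC timestamp in the filename."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    safe_name = url.replace("://", "_").replace("/", "_")
    path = pathlib.Path(out_dir)
    path.mkdir(parents=True, exist_ok=True)
    target = path / f"{stamp}_{safe_name}.html"
    target.write_bytes(response.content)
    return target

# Example with a placeholder URL:
# archive_page("https://example-board.example/thread/12345")
```

A saved copy like this complements screenshots: it preserves the full text of a thread for later keyword searching, while the screenshot documents exactly what you saw.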
3. Telegram is a gateway drug to extremist activity
Telegram has seen a surge in popularity over the past 18 months. In 2021, the instant-messaging platform WhatsApp announced a change in its terms of service, revealing the app would start sharing user information with its parent company Meta. Shortly after that, other social media platforms announced stricter content moderation guidelines in response to the January 6th attack at the U.S. Capitol. These dual catalysts triggered a mass migration of new users to Telegram, which promised better privacy and more free speech.
Such features, however, have also attracted less scrupulous characters to the platform. Criminals love Telegram’s enhanced privacy features, such as end-to-end message encryption. Limited content moderation has also brought in a wave of ideological extremists, who exploit the platform to recruit new followers and plan their activities.
“In regards to what sort of intelligence you can find on Telegram? In short: everything,” explains Mike Evans, Head of Intelligence at Securitas SIU.
“Telegram is full of criminal activity (both physical and cybercrime), crypto scams (particularly with people posing as legitimate companies), and of course extremist views.”
Perhaps most concerning about Telegram is the proximity ordinary users have to criminals and extremists on the platform. It only takes a few clicks to go from a channel discussing birds or local news to an underground community discussing conspiracy theories or selling stolen credit card information. That makes it easier for leaders of these fringe groups to recruit new members they might not otherwise reach on more mainstream social networks. As a result, some analysts have started describing Telegram as a gateway drug for malicious activity.
4. Employ both passive and active collection techniques
Users upload terabytes of content to alt-tech channels each day, so finding the tiny pieces of relevant intelligence amid mountains of irrelevant data can be a challenge for security teams. To maximize the effectiveness of your collection, OSINT analysts need to employ both passive and active intelligence-gathering techniques.
Active collection, for instance, provides near real-time collection capabilities. But such manual monitoring techniques are expensive and difficult to scale. Active techniques can also leave your team so overwhelmed by the collection phase of the intelligence cycle that it lacks the time and resources to actually process and analyze the data collected.
Passive collection, by contrast, allows you to cast a wide net when gathering data. In this respect, social media monitoring tools such as LifeRaft's Navigator platform can be quite helpful. But if you rely exclusively on passive collection, you can miss new and emerging threat indicators or find yourself relying on out-of-date collection requirements.
Additionally, don’t just look for standard key phrases like ‘crime’ and ‘terrorism.’ Malicious threat actors, if they have much sense at all, typically don’t use these words when discussing their activities. So savvy analysts must consider what other terms or codewords they might use and monitor for those.
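One way to operationalize this is to maintain a watchlist that maps community codewords and slang back to the topics they signal. The sketch below is illustrative only: the example terms are assumptions, and real mappings should come from analyst knowledge of the specific communities you cover.

```python
# Illustrative codeword-to-topic mappings (assumptions for this example);
# real mappings come from analysts who know the communities involved.
codewords = {
    "carding": ["fullz", "cvv", "dumps"],
    "doxxing": ["drop his info", "home addy"],
}

def expand_watchlist(codewords):
    """Map every codeword back to the topic it signals."""
    return {term.lower(): topic for topic, terms in codewords.items() for term in terms}

def classify(message, lookup):
    """Return the set of topics whose codewords appear in a message."""
    text = message.lower()
    return {topic for term, topic in lookup.items() if term in text}

lookup = expand_watchlist(codewords)
print(classify("selling fullz and cvv, dm me", lookup))  # -> {'carding'}
```

Keeping the mapping in one place also makes it easy to retire stale codewords and add new ones as the slang in these communities shifts.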
Users on alt-tech social channels also increasingly use non-text media formats to discuss their activities, such as videos, images, and audio clips. This is especially true on Telegram and chan boards. Such non-text formats, however, may not get picked up by most social media monitoring tools.
“Just because your keyword search returns a nil result doesn’t mean there isn’t a valuable intelligence indicator tucked away in a non-text-based format,” Mike explains.
“Spend the time getting to know your collection sources and agencies and learn how to best collect from them. Don’t overlook a nil return.”
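One way to reduce those nil returns is to run collected images through OCR before keyword matching. The minimal sketch below assumes the open-source pytesseract and Pillow packages plus a local Tesseract install, and the directory and watchlist are placeholders; many teams use commercial tooling for the same step instead.

```python
# Minimal sketch: extract text from collected images so keyword searches
# don't silently miss indicators hidden in non-text formats.
from pathlib import Path

import pytesseract           # requires a local Tesseract installation
from PIL import Image

def extract_text_from_images(image_dir: str) -> dict:
    """OCR every PNG in a directory and return {filename: extracted text}."""
    results = {}
    for path in Path(image_dir).glob("*.png"):
        results[path.name] = pytesseract.image_to_string(Image.open(path))
    return results

watchlist = {"fullz", "dox", "raid"}  # placeholder terms for this example
for name, text in extract_text_from_images("collected_images").items():
    if any(term in text.lower() for term in watchlist):
        print("Possible indicator in", name)
```

The same principle applies to audio and video: transcribe first, then run the transcript through the same watchlists you use for text.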
5. Don’t overlook mental health
Whether it involves watching graphic imagery or reading extremist material, open source investigators often have to review large quantities of unfiltered, upsetting content. Material surfaced on alternative social media platforms, particularly Telegram and chan boards, can be especially graphic and disturbing.
While the work we do plays an important role in keeping people and property safe, our community must understand the impact that viewing traumatic media can have on the mental health of analysts. Repeated exposure to extremist content and graphic imagery can lead to secondary, or vicarious, trauma – and that distress can cause a variety of mental health problems for both ourselves and our team members.
To address this problem, Mike Evans provided a collection of techniques analysts can use to reduce the impact upsetting content can have on our mental health:
Create a safe environment to conduct your work. Try to only view graphic and upsetting content in a well-lit room with colleagues around you. Additionally, view such imagery in black-and-white with the audio at a low volume on a smaller screen. Studies show these techniques can minimize the impact such content can have on our mental health.
Have a safe process to conduct your work. Have a plan for how you’re going to conduct collection and process upsetting material. Be sure to schedule a specific time for this work so it doesn’t dominate your day, and put a time limit on how long you’ll review the material, with breaks in between where possible.
Create an organizational safety net. Learn how to spot the signs of vicarious trauma in yourself and your colleagues. Consider using whatever mental health support programs are available in your organization. As a team lead or manager, it’s especially important to check in with team members – often by simply asking the three most important words in the English language: “How are you?”
Mike also shared three resources for learning more about vicarious trauma and mental health:
- The DART Center for Journalism and Trauma
- Vicarious trauma and OSINT – a practical guide
- How to Prevent, Identify and Address Vicarious Trauma — While Conducting Open Source Investigations in the Middle East