
The advent of Artificial Intelligence (AI) and AI-generated content has created a new set of challenges for open-source intelligence (OSINT) research, as user-friendly AI tools are increasingly employed to manipulate search engine results and, by extension, due diligence platforms and even human-led research and investigations.

 

Dragos Becheru

Intelligence Analyst

A critical component of customer due diligence (CDD) today depends on search engine results, obtained by compliance teams through manual keyword searches or through tools that conduct similar searches at scale. This reputational check relies heavily on media coverage, database indexing, and the efficiency of search algorithms to connect what the user is searching for with what the content creator was trying to convey.

Over the past decade, however, this research has been complicated by “filler content”: ads, paid content posing as “articles”, broken links, dead landing pages, and fake or misleading material, as well as (particularly important for due diligence professionals) amateur journalism appearing as social media posts, blog posts, and, increasingly, AI-generated content.

Search engines rank results using a combination of perceived relevance (based on the prevalence of keywords) and the popularity of sources (based on their click-through rates).

This creates a feedback loop of self-reinforcing rankings: higher-placed results attract more clicks and thus remain highly ranked. In response to the Search Engine Optimization (SEO) industry, due diligence specialists have to leverage their OSINT expertise to dig beyond the top-ranked content, while simultaneously identifying when individuals or businesses are generating positive content to misrepresent themselves or obscure other red flags.
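To make the feedback loop concrete, here is a minimal, hypothetical sketch (not any search engine’s actual algorithm): results are scored on a blend of keyword relevance and accumulated clicks, and each round of clicks flows disproportionately to whatever is already ranked first, entrenching it.

```python
# Minimal sketch of a self-reinforcing ranking loop (illustrative only;
# real search engines use far more complex, proprietary signals).
results = [
    {"url": "promo-piece.example",  "relevance": 0.6, "clicks": 500},
    {"url": "court-record.example", "relevance": 0.9, "clicks": 20},
]

def score(r):
    # Blend keyword relevance with popularity (accumulated clicks).
    return 0.5 * r["relevance"] + 0.5 * (r["clicks"] / 1000)

for _ in range(5):
    ranked = sorted(results, key=score, reverse=True)
    # Users click mostly on whatever is already ranked first...
    ranked[0]["clicks"] += 90
    ranked[1]["clicks"] += 10

# ...so the well-promoted but less relevant page stays on top.
for r in sorted(results, key=score, reverse=True):
    print(r["url"], round(score(r), 3))
```

Even with a lower relevance score, the heavily promoted page keeps accumulating clicks and never surrenders the top position, which is exactly why an analyst cannot stop at the first page of results.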

What are the Red Flags for Manipulated Media?

In the past, curating or manipulating an online persona was time-consuming and costly, with those seeking to do so often engaging PR companies with the expertise to monitor sentiment, craft a realistic persona, and ensure it appeared at the top of search results.

The widespread availability of low-cost AI content-generation tools has made this process easier, while also enabling increasingly complex attempts to manipulate KYC screening tools and methodologies. Indeed, AI makes it possible to create large quantities of tailor-made content in increasingly plausible tones, as its imitation of human writing improves with every user prompt, something that human-led PR companies could previously achieve only at considerable cost and effort.

AI-created “news” or social media posts can serve to raise doubts about the true identity of an individual. For example, when investigating connections between a wealthy individual and their discreet investments in the Persian/Arabian Gulf, abundant AI-generated content could “flood” the internet with reports of the individual’s passion for Persian cuisine, their review of “Prince of Persia”, or even their purchase of a Gulfstream private jet, thereby burying the relevant results. This might sound bizarre, but such simple approaches are not uncommon, and the speed of content generation can easily outpace the capacities of screening tools or of smaller compliance departments with limited resources and funding.

Fake Personas

The majority of private financial and government institutions use screening tools designed to automatically filter individuals based on identification data, name variations, and declared jurisdictional exposure (countries of residence, company registration information, business sectors, etc.). Such tools rely on pattern recognition and can therefore be misled by even routine variations in identification information: alternative name spellings, including transliterated versions; dates in differing formats or with slight adjustments to the month or year; as well as the purpose-built misleading content discussed above.
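As a rough illustration of this fragility, the hypothetical sketch below compares name variants using a simple string-similarity ratio; transliterated or reordered forms of the same name can fall below a typical match threshold and slip past the filter. (Commercial screening tools use more sophisticated matchers, but the failure mode is the same in kind, and the names, threshold, and helper here are illustrative assumptions.)

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1], standing in for the fuzzy
    matchers used by screening tools."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

watchlist_name = "Mohammed Al-Rashid"   # hypothetical listed name
candidates = [
    "Mohammed Al-Rashid",   # exact match
    "Muhammad al Rashid",   # transliteration variant
    "M. Rashid",            # initials + surname
    "Rashid Mohammed",      # reordered components
]

THRESHOLD = 0.85  # illustrative cut-off; variants below it slip through
for c in candidates:
    s = name_similarity(watchlist_name, c)
    flag = "HIT" if s >= THRESHOLD else "missed"
    print(f"{c!r}: {s:.2f} -> {flag}")
```

Run against this toy watchlist entry, only the exact spelling scores above the cut-off; every transliterated, abbreviated, or reordered form of the very same name goes unflagged.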

One clear indication of suspicious activity is a set of results involving several individuals with identical or near-identical profiles: matching name variations, professional backgrounds, countries and/or cities of origin, and so on. Such largely fake profiles, tailor-made to match any negative online content, exist to create plausible deniability for the real subject of the search query.

One example would be Irish citizen James Bernard Smith, wanted in the US in 1990 for drug trafficking. AI could be used to create personas with the same or a similar name, making it unclear who the actual person on the wanted list was. In other words, instead of James Bernard Smith, it could perhaps be Jim Smith, Jimmy Bernie Smith, or another variation mentioned in intentionally misleading blog posts, social media accounts, forums, or even dubious news articles criticizing Washington’s “War on Drugs”, calling for legalization, alluding to the persona’s criminal record, or even pretending to solicit the purchase of drugs online.

Such entirely fabricated posts can be, and are, used to confuse KYC checks, which is obviously far easier when the subject has a common name. Someone like James Bernard Smith can thus use fake online personas to sow doubt and claim misidentification in a database entry or a negative news item, which, if sufficiently convincing, could prevent further digging.
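The hypothetical sketch below turns the same similarity idea into a simple red-flag detector: given profiles harvested from search results, it counts how many are near-duplicates of the subject. A cluster of look-alike personas sharing a background is itself suspicious. The profiles, field names, and thresholds are illustrative assumptions, not real data.

```python
from difflib import SequenceMatcher

def similar(a: str, b: str) -> float:
    # Crude string-similarity ratio in [0, 1].
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical profiles harvested from search results on the subject.
profiles = [
    {"name": "James Bernard Smith", "origin": "Ireland", "sector": "logistics"},
    {"name": "Jim Smith",           "origin": "Ireland", "sector": "logistics"},
    {"name": "Jimmy Bernie Smith",  "origin": "Ireland", "sector": "logistics"},
    {"name": "James B. Smith",      "origin": "Ireland", "sector": "logistics"},
    {"name": "Mary O'Connor",       "origin": "Ireland", "sector": "retail"},
]

subject = profiles[0]
lookalikes = [
    p for p in profiles[1:]
    if similar(subject["name"], p["name"]) >= 0.55   # illustrative cut-off
    and p["origin"] == subject["origin"]
    and p["sector"] == subject["sector"]
]

# Several near-identical personas matching the subject's background is a
# classic sign of deliberate "plausible deniability" flooding.
if len(lookalikes) >= 3:
    print(f"Red flag: {len(lookalikes)} look-alike profiles found")
```

Note the inversion: the same name variations that help a subject evade a watchlist match become, in aggregate, a detectable signature of manipulation.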

Moreover, in extreme cases, this type of media manipulation can be taken a step further and used as part of a legal argument that there is sufficient doubt about the allegations levied against an individual. For example, evidence from manipulated media can be incorporated into an affidavit claiming that a person was never in a particular country, despite evidence or allegations to the contrary. Such affidavits can sometimes be deemed sufficient to overcome good-faith KYC checks in jurisdictions with less stringent regulations and requirements.

Unflagged Promotional Content

Another way of slipping through the cracks of automated screening and the “first line” of defense is via legitimate positive or neutral publicity about business activity or other innocuous reputational context, including philanthropy, family news, and interviews given by an individual with an otherwise negative media profile. The aim is to skew the proportion of search results in favor of positive coverage and to create doubt about the objectivity, accuracy, or credibility of the negative information. In this context, the positive publicity need not be false, as many mainstream and specialized outlets sell marketing space for promotional or “op-ed” content.

Moreover, the credibility or popularity of such outlets lends credence to promotional content disguised as news (save for an often tiny disclaimer at the end, if one is present at all), helping it outweigh or at least balance out the negative coverage. Business and economic news outlets are the most frequent targets of such efforts. Taking an analytical look at the tone of the “article” and the identity of its author can help flag paid material, which is useful both for recognizing bias in a day-to-day setting and for analysts digging into the media profile of a particular individual.
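A crude first pass at this can be automated. The hypothetical sketch below scans an article’s text for the small sponsor disclaimers described above; the marker list is an illustrative assumption and wording varies by outlet, and the absence of a hit proves nothing, which is why the tone-and-author analysis remains a human task.

```python
import re

# Common wordings of the small disclaimers that paid "articles" carry
# (illustrative list; outlets vary and some omit the label entirely).
SPONSOR_MARKERS = [
    r"sponsored (content|post|feature)",
    r"paid (partnership|promotion|post)",
    r"advertorial",
    r"in (partnership|association) with",
    r"promoted by",
]

def flag_promotional(text: str) -> list[str]:
    """Return the disclaimer patterns found in an article's text."""
    hits = []
    for pattern in SPONSOR_MARKERS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            hits.append(pattern)
    return hits

article = """Acme Holdings has transformed regional logistics...
This article was produced in partnership with Acme Holdings."""
print(flag_promotional(article))  # ['in (partnership|association) with']
```

A heuristic like this only catches the honest cases, where a disclaimer exists somewhere in the text; the pieces with no label at all are precisely the ones that demand an analyst’s judgment.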

Who might undertake these activities and why?

Capital flight has always been common in the context of shifting geopolitical fault lines, leading high and ultra-high net worth individuals (HNWIs and UHNWIs) to move their assets and funds to jurisdictions deemed safer or more tax-friendly, such as Switzerland, the UAE, the British Virgin Islands, Cyprus, and Panama. Thus, HNWIs and UHNWIs from places facing political instability, insecurity, or increasing regulatory pressure and public scrutiny look to move their funds to more opaque but “safer” jurisdictions.

Such individuals who also have questionable activity in their pasts are the most likely to engage in this kind of disinformation, in order to reinvent themselves as transparent and legitimate and to increase the likelihood of being onboarded. Indeed, regulatory frameworks across the globe, including in the common destination countries mentioned above, are tightening. Individuals who recognize that they may be rejected therefore sometimes seek to manipulate their reputational profile, using everything from the tactics discussed above to changing their names, adopting alternative spellings, or employing aliases to put distance between themselves and their past.

Nonetheless, the compliance and due diligence industries have also adapted to tackle these and other attempts at tampering with reputational verification, as well as the innovative and increasingly complex methods behind them. Keeping up to date with the tactics employed and the challenges they pose, as well as the best ways to identify and address them, is essential. Combined with a risk-based approach, compliance and due diligence professionals should aim to balance screening tools with human expertise and in-depth analysis to identify the kinds of red flags that technology continues to miss. Indeed, with more such examples seen every year, professionals rely on Sqope Intelligence in precisely these scenarios, trusting our team to help identify the facts and support their KYC process.