Facebook’s Mark Zuckerberg has probably not had the year he had hoped for. In late spring, New York Times reporters Sheera Frenkel and Cecilia Kang made waves with the release of their new book, An Ugly Truth, which took readers deep inside Facebook’s cutthroat corporate culture and revealed that internal concerns about the spread of hate and misinformation on the social media titan’s massive platform were routinely sidelined in the pursuit of pure profit. A torrent of worse news ensued: A Wall Street Journal series on the company, based on internal documents leaked by whistleblower Frances Haugen, revealed, among other things, that Facebook was well aware that Instagram, which it had acquired in 2012, was linked to a slew of mental health issues in teenage girls—a revelation the company strained to downplay.
Facebook’s terrible 2021 culminated in ex-employee Haugen’s congressional testimony about the social networking conglomerate’s role in stoking the January 6 Capitol riot and its turning a blind eye to harmful content. Suddenly, Facebook’s culture was being debated on Capitol Hill.
But Zuckerberg is nothing if not an innovator. Reeling from fever-pitch whistleblower drama, he responded to the allegations of harm his company had done to this world by announcing his intention to build a shiny new one to conquer. Facebook’s opportunistic rebrand as Meta, a “social technology company” focused on what’s been popularly anointed as “the metaverse,” a nebulous concept christened 30 years ago in the cyberpunk fiction of American novelist Neal Stephenson, may signify a threshold moment for virtual reality and the internet as we know it. That Facebook went so far as to change its name to Meta highlights the vast potential for innovation presented by metaverse technologies. But whoever invents the metaverse is also simultaneously inventing all the things that could go wrong inside this brave new virtual world. And Zuckerberg’s audacious pivot portends a similar dynamic for organized crime.
As an official Meta statement published during Facebook’s Connect 2021 conference last month put it, “The metaverse will feel like a hybrid of today’s online social experiences, sometimes expanded into three dimensions or projected into the physical world. It will let you share immersive experiences with other people even when you can’t be together—and do things together you couldn’t do in the physical world.” That is to say, the key to Facebook’s metaverse is meshing its core social applications and those of its subsidiaries for virtual reality–enhanced activities like gaming, chatting, video-conferencing, entertainment, content creation, learning, shopping, and, of course, digital advertising, given its established and oligopolistic monetization model.
The primary technologies and platforms comprising this mesh are zazzy stuff: 3D VR, augmented reality, open-source development, artificial intelligence, the Internet of Things, edge computing, e-commerce, programmatic advertising, the blockchain, peer-to-peer payments, nonfungible tokens, and creator hubs all play an interrelated role in the world building that’s afoot. There’s money to be made and Fifth Industrial Revolution, or 5IR, technologies to be summoned into being.
But virtually every one of these components presents significant risks for malign exploitation as well. Threats that immediately stand out include fraud, money laundering, child exploitation, disinformation, and cyberattacks. That the virtual world Zuckerberg wants to invent might open this Pandora’s box of deviant digitization should come as no surprise. In fact, the metaverse was literally born out of a criminal conceit: Stephenson used the metaverse as the backdrop for the plot of his breakout novel, Snow Crash, in which a mobbed-up, pizza-delivering hacker hero navigates a dystopian, virtual reality–fused world to unravel the mystery behind the synthetic drug after which the book is named.
Dr. David Utzke, now the senior director of cryptoeconomic technology at Mastercard, started raising alarms about the emergence of a malevolent metaverse a year ago, while with his previous employer, IRS Criminal Investigation’s cybercrimes headquarters. His concerns have only grown since then: “Companies like Facebook—now Meta—Microsoft, Roblox, Epic are all planning their version of the metaverse,” Utzke tells The New Republic. “But this concept, a term originated by Neal Stephenson … is the idea of a singular virtual reality–based environment with the idea of escaping reality—much like what was portrayed in the movie version of the novel Ready Player One.”
“Because this environment is completely unpoliced, just as with A.R./V.R. environments today, terrorists, cartels, and other bad actors have the capability to act on real-world events,” Utzke said. “This is just the tip of the spear.”
While several media pundits have cast aspersions on the pivot by the company formerly known as Facebook, it isn’t the only firm gearing up to go spelunking in virtual worlds. Microsoft is also plunging into the metaverse, according to a company press release and accompanying marketing video. Unlike Meta, which appears to be prioritizing social connectivity, gaming, and entertainment, Microsoft is focusing on purposing its vision of the metaverse for enterprise use cases involving “collaborative and shared holographic experiences, with the productivity tools of Microsoft Teams.”
The key word here is “enterprise.” As Covid-19 has virtualized the office and made Zoom and Teams the standard for coordinating employee meetings, Microsoft wants to enrich workplace engagement. The company has branded its metaverse product as Microsoft Mesh.
Meta spokesperson Jennifer Martinez noted that her company is “not going to build, own, or run the metaverse on its own. We are starting conversations about our vision for the metaverse early, before some of the technologies even exist. Many of the things we’re envisioning will only be fully realized in five to 10 years. We’re discussing it now to help ensure that any terms of use, privacy controls, or safety features are appropriate to the new technologies and effective in keeping people safe. This won’t be the job of any one company alone. It will require collaboration across industry and with experts, governments, and regulators to get it right.”
But while the innovation potential presented by the holistic mesh of metaverse tech—and all the value transfer embedded within it—is tantalizing for digital entrepreneurs, investors, and developers alike, so is its appeal to cybercriminals, who “quickly adopt and integrate new technologies into their modi operandi or build brand-new business models around them,” according to an unclassified Europol report from 2017.
But how will next-generation “metacrime” threats manifest exactly? Mark Zuckerberg’s vision for Meta lays out an intuitive road map for how the underworld might exploit his social network’s reincarnation in the virtual realm.
In a related video posted by Meta at the recent Connect conference, Zuckerberg identified gaming, an annualized $178-billion-and-growing industry, as a central focus of his vision for deploying Meta’s metaverse. “The people who actually follow the space would say the metaverse is about gaming,” said Zuckerberg in the promo video, which is just over an hour long. The gaming industry, which currently counts over 2.8 billion users globally, according to market researcher Newzoo, is at the heart of the metaverse because it “provides the most immersive experiences and it is the biggest entertainment industry by far,” added Zuckerberg.
Meta is well positioned to pilot its ambitious virtual reality initiative, having acquired V.R. headset manufacturer Oculus, which released its fifth-generation V.R. headset in October. Last week, Meta also announced its “progress in researching and developing haptic gloves,” wearable hand gear that simulates the sense of touching virtual objects.
But gaming platforms are wired for next-generation crime, as well. Video games have already proven to be an “easy channel for money launderers,” thanks to the growing use of in-game virtual currency, according to various media reports. The problem first gained mainstream attention in 2019 when an investigation by The Independent revealed that credit card scammers were using fraud proceeds to buy “V-Bucks,” a type of in-game currency used by players of the Epic Games–produced multiplayer shoot-’em-up Fortnite.
These so-called “carders” were then reselling their V-Bucks at a discount on dark-net markets, or DNMs, effectively cleaning the illicit traces of their scores, The Independent found. Investigators have observed similar virtual currency scams perpetrated by state-sponsored, Chinese cyber-espionage groups, and by criminal fraud networks reselling Counter-Strike: Global Offensive “container keys” on underground forums.
Most recently, gaming news site Kotaku exposed credit card scammers who laundered some $10 million by transferring fraud proceeds to little-known gamers on the highly popular live-streaming platform Twitch. After receiving the dirty money, these live streamers, who happened to be domiciled in Turkey, then refunded 80 percent of the payment by wiring funds to bank accounts controlled by fraudsters.
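The red flag in that scheme is blunt: accounts whose outgoing “refunds” consume most of their incoming donations. Here is a minimal sketch of how an analyst might surface that refund-ratio pattern; the transaction fields and the 70 percent threshold are invented for illustration and are not any platform’s actual anti-fraud logic.

```python
# Hypothetical illustration of the refund-ratio red flag described above.
# Fields and thresholds are assumptions for the sketch, not any
# platform's real anti-money-laundering tooling.
from dataclasses import dataclass

@dataclass
class Payment:
    account: str       # recipient (e.g., a streamer's account)
    amount: float      # USD value
    direction: str     # "donation_in" or "refund_out"

def flag_mule_accounts(payments: list[Payment],
                       refund_ratio_threshold: float = 0.7) -> set[str]:
    """Flag accounts whose outgoing 'refunds' consume most of their
    incoming donations -- the pattern Kotaku described, in which
    streamers wired back roughly 80 percent of dirty inflows."""
    inflow: dict[str, float] = {}
    outflow: dict[str, float] = {}
    for p in payments:
        bucket = inflow if p.direction == "donation_in" else outflow
        bucket[p.account] = bucket.get(p.account, 0.0) + p.amount
    return {
        acct for acct, total_in in inflow.items()
        if total_in > 0
        and outflow.get(acct, 0.0) / total_in >= refund_ratio_threshold
    }

# Example: $10,000 in "donations," $8,000 wired back out.
txns = [Payment("streamer_1", 10_000, "donation_in"),
        Payment("streamer_1", 8_000, "refund_out")]
print(flag_mule_accounts(txns))  # {'streamer_1'}
```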
Live streaming and competitive gaming, known as e-sports, constitute another fast-growing subsector of the video game industry, pegged by consultants Juniper Research to be a $2.1 billion market. By 2025, this market will top $3.5 billion in turnover and engage an audience of one billion people globally, according to Juniper.
Amid this hypergrowth, the jurisdiction in which the complicit Twitch streamers were based points to another dimension of gaming risk: terrorism. Turkey was highlighted earlier this year in Office of Foreign Assets Control sanctions as a key regional stronghold for Islamic State–linked “money services business operators … that allow ISIS to obfuscate its involvement in transactions.” While the recent Twitch investigation has not yet revealed any sign of a terrorism-financing nexus—and is unlikely to yield one—the explosive growth of the video game industry and the growing appeal of relatively frictionless cyber-fraud funding streams create opportunities for traditional threat actors like criminals and terrorists to use these virtual networks as a medium for moving money down the line.
“Given ISIS’s and other terrorist groups’ documented use of video games to facilitate covert communications, recruit vulnerable youths to their cause, and to move illicit funds,” said Ron Teicher, the founder and president of EverC, a tech firm that identifies money laundering in e-commerce transactions, “AI-driven content moderation, user authentication, and transaction monitoring will be essential for ecosystem safety.”
“The resurgence of extremist networks emboldened by the chaotic Afghanistan withdrawal means the good guys can’t let adversaries entrench themselves in the metaverse before they do,” said Teicher.
The anticipated integration of blockchain-based platforms adds another dimension to the illicit-financing risks posed by Meta’s gaming and related marketplace applications. Amid a manic bull run that has seen the aggregate market cap of some 6,000 cryptocurrencies top $3 trillion, Meta is using its rebrand to relaunch its maligned stablecoin Diem and pilot a crypto-wallet application the company has branded as “Novi.”
Unlike the vast majority of highly speculative cryptocurrencies, stablecoins peg their value to relatively nonvolatile and predictable financial assets like the dollar or gold. Because they are currently unregulated and are not issued by banks, Securities and Exchange Commission Chairman Gary Gensler warned in a recent statement, they present risks around depositor protection, money laundering, tax evasion, and sanctions violations. It follows that Meta’s proprietary virtual currency may evade regulatory oversight and facilitate these types of financial crimes when its virtual economy formally launches.
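The peg mechanism itself is simple to state, which is part of what worries regulators: the hard part is proving the reserves exist. The sketch below models a generic fiat-collateralized stablecoin, an illustration of the general design rather than Diem’s actual architecture; the reserve-coverage invariant in the mint function is precisely the claim Gensler wants subjected to oversight.

```python
# Minimal sketch of a fiat-collateralized stablecoin peg -- a generic
# illustration of the mechanism, not Diem's actual design.
class CollateralizedStablecoin:
    """Each token is redeemable for $1; the issuer's dollar reserves
    must always cover the outstanding supply."""
    def __init__(self) -> None:
        self.reserves_usd = 0.0   # dollars held by the issuer
        self.supply = 0.0         # tokens in circulation

    def mint(self, usd_deposited: float) -> float:
        """Deposit dollars, receive an equal number of tokens."""
        self.reserves_usd += usd_deposited
        self.supply += usd_deposited
        assert self.reserves_usd >= self.supply  # the peg invariant regulators want audited
        return usd_deposited

    def redeem(self, tokens: float) -> float:
        """Burn tokens, withdraw an equal number of dollars."""
        if tokens > self.supply:
            raise ValueError("cannot redeem more than outstanding supply")
        self.supply -= tokens
        self.reserves_usd -= tokens
        return tokens

coin = CollateralizedStablecoin()
coin.mint(1_000_000)        # $1M deposited -> 1M tokens outstanding
coin.redeem(250_000)        # burn 250k tokens -> $250k withdrawn
print(coin.supply, coin.reserves_usd)  # 750000.0 750000.0
```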
Moreover, as Barron’s reported earlier this month, many “online gaming platforms and marketplaces that use nonfungible tokens, or NFTs, as in-game tokens and collectibles” viewed Facebook’s metaverse pivot as “validation of their efforts.” An NFT is a unique set of data stored as a crypto-asset on a blockchain ledger, used to represent the provenance and ownership of a digital or real-world object. Unlike units of typical cryptocurrencies such as Bitcoin and Ether, NFTs are not interchangeable with one another, as each token is tied to a specific underlying asset.
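“Nonfungible” has a concrete technical meaning, which the following sketch tries to make visible. It is a simplified, hypothetical registry loosely patterned on the ERC-721 standard, with invented names and none of a real contract’s complexity: each token carries a distinct identifier mapped to exactly one owner, which is what makes NFTs both useful for provenance and traceable by investigators.

```python
# Simplified, hypothetical sketch of the ownership registry underlying
# most NFTs. Loosely patterned on ERC-721; names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class NFT:
    token_id: int        # unique on-chain identifier
    metadata_uri: str    # points at the asset the token represents

class NFTRegistry:
    """Maps each unique token to its current owner. Because every
    token_id is distinct, tokens are not interchangeable the way
    units of Bitcoin or Ether are."""
    def __init__(self) -> None:
        self._owners: dict[int, str] = {}
        self._tokens: dict[int, NFT] = {}

    def mint(self, token: NFT, owner: str) -> None:
        if token.token_id in self._tokens:
            raise ValueError("token_id already minted")
        self._tokens[token.token_id] = token
        self._owners[token.token_id] = owner

    def transfer(self, token_id: int, sender: str, recipient: str) -> None:
        if self._owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owners[token_id] = recipient  # provenance: one owner per token

registry = NFTRegistry()
registry.mint(NFT(token_id=1, metadata_uri="ipfs://example-artwork"), owner="alice")
registry.transfer(token_id=1, sender="alice", recipient="bob")
```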
In aggregate, the addressable NFT market across all art, collectibles, luxury goods, gaming, and gambling assets is projected to top $1 trillion, according to 2020 research from venture capital firm Loup Ventures. Through the third quarter of this year, NFT sales have exceeded $13 billion, up roughly 40 times from all NFT sales in 2020. Loup projects that the NFT game space alone will total $4.3 billion in volume this year. The Barron’s report cites “Decentraland, an online community where users can create avatars of themselves and interact,” as an example of the type of NFT community that could integrate with Meta.
But just as non-crypto video game currencies have been exploited by criminals, blockchain-based assets have been increasingly used by threat actors across the spectrum to launder money, as well. Amid these growing concerns, regulators are beginning to scrutinize the skyrocketing, speculative, and often opaque trade in NFTs more closely.
Further illustrating NFT risks are recent Office of Foreign Assets Control sanctions against Chatex, a Latvia-based cryptobank, issued in conjunction with a global law enforcement operation against the REvil ransomware group. The sanctions announcement revealed malign addresses holding over $530,000 in NFTs. In a blog post discussing this OFAC enforcement action, blockchain forensics firm Elliptic said the NFTs collected by this sanctioned account “include digital magazine covers, superhero figures and powers, digital land parcels and relatively little-known digital art collections.”
Liat Shetret, a senior adviser for crypto policy and regulation at Elliptic, said that “builders of blockchain-based metaverse real estate or gaming platforms that make use of NFTs should consider getting ahead of inevitable regulatory attention that will come their way. OFAC, as a U.S.-based enforcement agency with extraterritorial power, has already demonstrated the NFT space will not be excluded from hefty sanctions penalties.
“New businesses venturing into the metaverse should take a page from the crypto-compliance book and incorporate compliance practices as part and parcel of their business model,” said Shetret.
Blockchain forensics firm TRM Labs, an Elliptic competitor, has also flagged the NFT market as an increasingly popular target for criminals preying on victims via sophisticated phishing attacks and exit scams, the latter of which entail platform operators absconding with investor funds and leaving victims holding the bag.
While Meta is going all in on “open standards and interoperability,” per Zuckerberg’s founder’s letter, he also noted that “privacy and safety need to be built into the metaverse from day one.” Yet Meta’s only apparent safeguard to mitigate risks associated with the malign use of its platform by various types of cybercriminals and predators was its previous requirement that everyone using an Oculus device for the first time log in with a Facebook account. Complicating matters further, Facebook announced last month that it was retiring the Facebook account authentication mandate for Oculus users.
Notably, even with user-account-powered authentication, internal Facebook documents leaked by the whistleblower Haugen revealed that as many as 56 percent of the social network’s accounts could be attributed to preexisting users opening new accounts, according to a recent Wall Street Journal report.
Single users with multiple accounts, or SUMAs, are considered inauthentic users, per Facebook’s platform integrity standards. In 2017, Facebook reported that it had purged its platform of 270 million fake accounts.
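How SUMAs might be surfaced at scale is, at bottom, a clustering problem. As a purely hypothetical illustration, not Facebook’s actual integrity tooling (whose methods are undisclosed), one crude heuristic is to group accounts that share a device fingerprint:

```python
# Hypothetical heuristic for surfacing SUMA candidates: cluster
# accounts seen on the same device. An illustration only, not
# Facebook's actual platform-integrity logic.
from collections import defaultdict

def suma_candidates(accounts: dict[str, str]) -> list[set[str]]:
    """accounts maps account_id -> device fingerprint; returns groups
    of distinct accounts observed on the same device."""
    by_device: defaultdict[str, set[str]] = defaultdict(set)
    for account_id, fingerprint in accounts.items():
        by_device[fingerprint].add(account_id)
    return [group for group in by_device.values() if len(group) > 1]

print(suma_candidates({"acct_a": "dev_1", "acct_b": "dev_1", "acct_c": "dev_2"}))
# [{'acct_a', 'acct_b'}]
```

In practice, such signals would only flag candidates for review; shared devices are common in households, which is one reason inauthentic-account counts are so hard to pin down.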
Beyond money laundering risks, Facebook’s inability to enforce robust user integrity and trust throughout its existing platform also raises concerns about pedophiles exploiting the metaverse to abuse children via “sextortion” schemes. A New York Times report from 2019 identified games as a “common target” for pedophiles, citing their increasingly integrated social features that make it easy for predators to chat up and exploit children, generally using fake accounts to obscure their real identities and ages.
Rochelle Keyhan, an attorney and the CEO of Collective Liberty, a Washington, D.C.–based anti-human-trafficking advocacy group, said, “Predators befriend and groom their victims often and successfully across many virtual platforms, including social media, gaming spaces, and any other interactive space. This is often via video or text chat. These relationships are mediated through technology until the predators are able to convince their victims to meet in person.” But the metaverse, Keyhan says, “proposes to remove the mediation and bring the digital into real life.
“This would allow for the ‘reality’ of a relationship to happen much sooner, increasing the speed with which bonds develop between potential victims and abusers. This means the abuse can happen more quickly, as well,” said Keyhan.
The British charity National Society for the Prevention of Cruelty to Children published a recent study that found that 53 percent of all cases where children were groomed by pedophiles occurred on Facebook platforms, including Instagram and WhatsApp, “amounting to 24 incidents per week.” Given that the study was geo-fenced within the U.K., the charity noted in a press release that its findings were likely just the “tip of the iceberg.” Last year, Facebook reported over 20 million child sexual abuse images across its platforms to the National Center for Missing and Exploited Children. That’s more than 35 times as many NCMEC reports as the next company on the list, Google.
Broader sexual exploitation networks discovered by Facebook’s content moderators have also been known to use fake profiles. This is illustrated by a 2019 case that exposed a “transnational human trafficking network that used Facebook apps to facilitate the sale and sexual exploitation of at least 20 potential victims,” according to a recent CNN report. This network used over 100 fake Facebook and Instagram accounts to recruit female victims from various countries, leveraging Messenger and WhatsApp to coordinate the transportation of trafficked women to Dubai, where they were forced to work in facilities disguised as massage parlors, according to CNN.
In Meta’s metaverse, “sex slaves could potentially be sold surreptitiously via e-commerce avatars involving NFTs for virtual merchandise or other fake postings for marketplace products,” Teicher warned. Thus, determining product authenticity and provenance will be just as crucial to Meta’s platform integrity as validating the legitimacy of user accounts.
Joseph Weinberg, the chief executive of Shyft Network, an opt-in data-exchange platform to help blockchain apps comply with financial services and privacy regulations, said there are existing solutions that could help mitigate illicit-finance and child exploitation risks. “In theory, decentralized data-exchange systems like Veriscope, which we launched last year to help VASPs [virtual asset services providers] comply with global anti–money laundering regulations, can also promote trust in the metaverse by facilitating the transfer of attributable identity elements from Oculus users transacting across Meta’s anticipated crypto-gaming and other decentralized marketplace applications,” said Weinberg. “But the effectiveness of any KYC [know your customer] transmission system hinges on how consistently and robustly Meta implements a virtual economy throughout its platform.”
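To make Weinberg’s point concrete, the sketch below shows the kind of originator and beneficiary record that FATF “Travel Rule” systems pass between VASPs alongside a crypto transfer. The field names are illustrative assumptions, not Veriscope’s actual schema.

```python
# Minimal sketch of the originator/beneficiary record that FATF
# Travel Rule systems transmit between VASPs alongside a crypto
# transfer. Field names are illustrative assumptions, not Veriscope's
# actual schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class TravelRulePayload:
    originator_name: str
    originator_vasp: str        # sending exchange or wallet provider
    originator_wallet: str      # on-chain address
    beneficiary_name: str
    beneficiary_vasp: str
    beneficiary_wallet: str
    amount: float
    asset: str

payload = TravelRulePayload(
    originator_name="Alice Example",
    originator_vasp="ExampleVASP A",
    originator_wallet="0xabc...",     # truncated for the sketch
    beneficiary_name="Bob Example",
    beneficiary_vasp="ExampleVASP B",
    beneficiary_wallet="0xdef...",
    amount=250.0,
    asset="USDC",
)
# The receiving VASP screens these identity elements against sanctions
# and watch lists before crediting the transfer.
print(json.dumps(asdict(payload), indent=2))
```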
Beyond gaming, these risks extend to another key dimension of Meta’s metaverse: the creator economy. True to Meta’s ethos of open standards and platform interoperability, Zuckerberg’s founder’s letter also highlighted the role of “creators,” a nod to the $104 billion creator economy, per social media research firm Influencer Marketing Hub’s estimates.
Distinct from the developers Meta has spotlighted as key to building new apps for its open-innovation V.R. and A.R. ecosystems, the creator economy comprises social media influencers, marketers, filmmakers, visual artists, and journalists who sidestep traditional corporate media organizations and monetize their proprietary content on digital platforms. Over the last year in particular, this creator economy has exploded, with the pandemic acting as a tailwind, inspiring underemployed young people to seek out new peer-to-peer platforms to monetize their “talents.”
But with the influx of some 50 million new creators since 2020, and the capture of $1.5 billion in venture funding, criminals ranging from human traffickers and pedophiles to ransomware actors and carders have also flocked to these platforms to exploit victims and launder their illicit proceeds.
The primary risk that has emerged in the mainstream creator economy is “money muling.” As with the Turkish Twitch streamers, this money laundering typology generally entails young people being duped into receiving and transferring funds for criminal networks in exchange for a cut of the proceeds. According to an online notice from EU-wide police force Europol, “more than 90 percent of money mule transactions identified through the European Money Mule Actions are linked to cybercrime.” During the pandemic, researchers in the U.K. noticed a 5 percent uptick in mule scams, with over 17,000 cases involving young people aged 21 to 30, according to a report co-authored by UK Finance, a trade association for the banking and financial services sector, and Cifas, a British anti-fraud advocacy group.
Illustrating the harmfulness of this scheme, this past July a 22-year-old model, social media influencer, and university psychology student in Ireland pleaded guilty to letting a man she met on Snapchat use her bank account to transfer crime-linked funds. It follows that Meta creators across the popularity spectrum, streaming their gaming videos or other content, could be co-opted into similar mule scams.
Another particularly high-risk subset of the creator economy is influencer marketing. This industry entails popular social media stars partnering with brands and marketing agencies that seek to leverage influencers’ “platforms” and follower bases to sell products or promote campaigns. A report from Influencer Marketing Hub found that this creator niche will top $13.8 billion in total revenue this year. However, it can be difficult to correlate an influencer’s sway over throngs of followers with any legitimate commerce.
Perhaps the most prolific example of deviant influencing is Ramon “Hushpuppi” Abbas, a Nigerian social media star accused by the FBI of being “one of the leaders of a transnational network that facilitates computer intrusions, fraudulent schemes, and money laundering, targeting victims around the world in schemes designed to steal hundreds of millions of dollars,” according to a complaint unsealed following Abbas’s arrest in June 2020.
Abbas eventually pleaded guilty to money laundering charges in Los Angeles federal court this past July. But apart from reportedly receiving occasional gifts from high-end fashion brands like Gucci and Fendi, there is no indication that he actively used his account to promote any kind of brand-marketing-related commerce. The FBI declined to elaborate on any commercial cover story that Hushpuppi may have used as an influencer.
Digital-advertising-fraud expert Dr. Augustine Fou said illicit-finance schemes involving influencers should be considered another emerging area of concern in the metaverse. Fou has previously called digital advertising “the mother of all money laundries”; ad fraud is projected to become a $100 billion problem by 2023, according to consultants Juniper Research. Fou recently noted that “digital ads, purchased through programmatic channels, are directly funding fake and fraudulent websites.” Much of the traffic flowing through these programmatic, or high-frequency and algorithm-based, ad exchange platforms is actually artificial bot traffic, he says.
The key problem enabling fraudulent websites to capture digital ad spend while simulating human engagement is that it’s easy to exploit so-called retargeting scripts—digital advertising processes that broadcast website promotions to users based on their previous online activity. Fou says that fraudsters exploit retargeting applications “by sending their bots to e-commerce sites first, to collect cookies, and then visiting ‘cash-out’ sites to cause retargeted ads to appear on those sites.” That way, when e-commerce brand ads populate on these scam websites, fraudsters generate illicit ad revenue.
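To make those mechanics concrete, here is a schematic of the bot flow Fou describes, stripped to its moving parts. The site names, cookie format, and payout figures are invented for illustration and do not reflect any real ad exchange’s logic.

```python
# Schematic of the retargeting exploit Fou describes, reduced to its
# moving parts. Site names and bid values are invented for illustration.

ECOMMERCE_SITES = {"shoes-brand.example"}

def visit(site: str, cookies: set[str]) -> None:
    """A bot 'visits' a site; e-commerce sites drop retargeting cookies."""
    if site in ECOMMERCE_SITES:
        cookies.add(f"retarget:{site}")

def serve_ad(cookies: set[str]) -> tuple[str, float]:
    """A toy ad exchange: browsers carrying retargeting cookies attract
    high-value retargeted ads; others get cheap run-of-network ads."""
    for cookie in cookies:
        if cookie.startswith("retarget:"):
            brand = cookie.split(":", 1)[1]
            return (f"retargeted ad for {brand}", 5.00)  # premium impression
    return ("generic ad", 0.10)

bot_cookies: set[str] = set()
visit("shoes-brand.example", bot_cookies)   # step 1: collect cookies
ad, payout = serve_ad(bot_cookies)          # step 2: visit the "cash-out" site
print(ad, f"-> ${payout:.2f} to the fraudulent publisher")
```

The economics hinge on the premium that advertisers pay for retargeted impressions, which is why the cookie-collection step has to precede the cash-out visit.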
With Facebook’s advertising revenue topping $84 billion in 2020, or just over 25 percent of the digital ad market, questions linger about the legitimacy of this revenue capture, given the ecosystem’s inherent susceptibility to fraud. On Meta, the risk is amplified by the fact that Facebook previously reported having hundreds of millions of fraudulent users. The company counts over 2.9 billion monthly active users today; it’s unclear how many of them are illegitimate, but the internal documents leaked by Haugen do not inspire confidence that the social technology firm has corrected the problem. In the rapidly evolving metaverse, Meta could be unlocking a plethora of new channels to obfuscate illicit-origin commerce.
While this overview covers only the most significant and immediate metaverse attack vectors, given the parameters Meta has laid out since its rebrand, there are obviously other significant threats to consider. In the era of complex, SolarWinds-style supply chain attacks, relentless ransomware threats, disinformation campaigns waged by nation-state cyber-espionage actors, and unanswered questions about the impact of V.R. headsets on brain and emotional health, what kind of resilience and safety can Meta’s platform assure consumers? And how soon will such assurances come? Facebook’s users may soon take their first, halting steps into an unknown virtual world. A small universe of seasoned and enterprising cybercriminals will be waiting for them.