Security and Privacy in the Metaverse: How to Keep Your Virtual Life Safe?
On October 28, 2021, Mark Zuckerberg, the CEO of Facebook, introduced a radical change to the biggest social network and its other services at the Connect 2021 conference.
That change was the shift from the digital world and social media platforms to augmented reality and virtual reality, or the metaverse (a term first coined by Neal Stephenson in his 1992 sci-fi novel "Snow Crash").
Oh, and the company is no longer called Facebook. It is now Meta.
The announcement clearly has huge technological implications, and for many companies it will be a big boon. Big tech companies are already salivating at all the data they could gather in virtual spaces and use for new technologies, targeted advertising and more.
But at what cost will all this come?
One price that we might have to pay is our privacy.
Does Data Privacy Exist in the Virtual World?
So a company that already has its hands deep in data collection, and that has been at the center of several massive privacy scandals, including a breach in which the personal data of 1.5 billion Facebook users was put up for sale on a hacker forum in October last year, now wants to harvest even more of your personal information?
What can go wrong?
Well, according to privacy experts, a lot.
We can certainly expect companies and organizations in the metaverse to collect personal information for individual identification, advertising and tracking through multiple channels, just as they do on the regular Internet today. They will surely gather information from wearable devices, microphones, heart and respiratory monitors, and user interactions to an extent we have never seen before.
The good news is that people are already concerned about what will happen to their data security and privacy should Zuckerberg's vision of Meta and the metaverse's potential become a reality (and we don't mean a "virtual reality").
According to a survey by NordVPN, 87% of participants said they have privacy concerns when it comes to the metaverse, with 50% worried about identity theft and impersonation, 47% that their identities will not be sufficiently protected by law, and 45% that they will have to share even more personal data, which can then be abused.
And Facebook isn't the only big tech company that wants to dive into the virtual reality world.
Other organizations are also playing with the same idea.
For example, in November last year, Microsoft introduced virtual meeting software called Mesh for Microsoft Teams, which allows organizations to build virtual spaces within Teams and lets users create customized avatars.
The idea of Mesh for Microsoft Teams is to make online meetings "personal, engaging and fun." It will work on smartphones, laptops and mixed reality headsets, and the technology is expected to roll out in early 2022.
How Will the Privacy Regulators Respond to the Metaverse?
Right now, data collection and privacy laws differ greatly around the world. On one side we have the EU's GDPR, with strict fines for any company that violates the data protection rights of EU citizens; on the other, the US has different laws in each state, like California's CCPA.
And then comes the metaverse.
Speaking for Cybernews, the Director of Forensics at Secure Data Recovery Services, Allan Buxton said:
Regulators will have their hands full. The EU has been the most aggressive in enforcing any limits to Big Data's skirting of laws or even their own terms and conditions, yet no fine appears to curb their behavior. Big Data does not fear fines. What Big Data fears is the abandonment of its services by its users for new competitors.
And then there is the question of children and consent in the metaverse. With hyper-realistic avatar creators like Union, it's possible for children to appear as adults (you only need to be over 14 to use it).
What Do Companies Need to Figure Out Before Venturing Into the Metaverse?
While they wait for governments to wrap their heads around the virtual world (that may take a while), companies will need to take steps of their own to protect their users' data privacy in the metaverse.
In particular, they will have to:
- Self-regulate (at least in the beginning)
Data protection and privacy laws remain largely inconsistent around the world, and the metaverse brings yet another problem into the mix.
That's why companies will have to self-regulate, especially when it comes to user consent and children, at least until we get some actual laws here in the physical world.
- Users must know if they are talking to a real person or artificial intelligence
Artificial intelligence technology has reached a level where it is becoming harder and harder to distinguish an AI bot from an actual human.
And things will only get murkier as humans share more data with AI bots, including their biometric information. The lines here need to be clearly defined.
- Developers must keep vulnerabilities to a minimum
Take Meta, for example. The whole idea is to have all of its users' data in one place, and this is a company that has already suffered some massive data breaches in its history.
That's why devs will have an extra responsibility to make their virtual worlds airtight when it comes to data security vulnerabilities.
The privacy implications of the metaverse are abundant, and if companies like Meta, Microsoft and others focus more on data collection than on data protection in the virtual reality world, we have a serious problem.