NARRATIVE/SYSTEMATIC REVIEWS/META-ANALYSIS
Kristin M. Kostick-Quenet, PhD
and Vasiliki Rahimzadeh, PhD 
Assistant Professor, Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas, USA
Keywords: Immersive virtual spaces, metaverse, role-playing games, social implications, virtual spaces
The metaverse is heralded by some as the next iteration of the internet. It offers three-dimensional, immersive virtual spaces in which users, represented as avatars, can synchronously work, play, interact, and transact. Initially developed within the realm of massively multiplayer online role-playing games, the metaverse now extends into sectors such as music, entertainment, retail, real estate, and, more recently, healthcare. Users engage in the metaverse using augmented reality or virtual reality (AR/VR) headsets that interoperate with other sensory devices to integrate biofeedback and multimodal data streams.1 In this article, we offer a conceptual synthesis and anticipatory policy analysis grounded in a narrative review of current trends reported in public-facing news reports as well as interdisciplinary sources from bioethics, law, digital-health policy, and science-and-technology studies. We identify seven key ethical, legal, and social issue areas for consideration in the delivery of metaverse-enabled healthcare and additionally issue a call to action to explore the metaverse through a bioethics lens. We begin by providing a primer on the metaverse, including the various forces shaping its development. Next, we describe what distinguishes the metaverse from other digital-health technologies and illustrate emerging use cases for the metaverse. We then outline what we consider the most pressing ethical issues raised as the metaverse matures in parallel with (or in advance of) policy guidance and regulation. Finally, we conclude with a research agenda that treats the metaverse as a serious topic of normative and empirical inquiry and argue for sustained engagement from diverse user communities to support its ethical design and development.
Citation: Blockchain in Healthcare Today 2025, 8: 456.
DOI: https://doi.org/10.30953/bhty.v8.456
Copyright: © 2025 This is an open-access article distributed in accordance with the Creative Commons Attribution Non-Commercial (CC BY-NC 4.0) license, which permits others to distribute, adapt, enhance this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See http://creativecommons.org/licenses/by-nc/4.0. The authors of this article own the copyright.
Submitted: September 15, 2025; Accepted: November 10, 2025; Published: January 17, 2026
Corresponding Author: Kristin M. Kostick-Quenet; Email: kristin.kostick@bcm.edu
Competing interests and funding: Dr. Rahimzadeh holds personal investments in cryptocurrencies and other blockchain-based infrastructure tokens. These holdings are maintained as part of personal investment portfolios and are not directly related to the research presented in this manuscript. No author has received financial compensation, funding, or advisory roles from cryptocurrency issuers, blockchain companies, cryptocurrency exchanges, or related financial entities in connection with this work. The research was conducted independently, and no external party influenced the study design, data collection, analysis, interpretation, or the decision to submit this manuscript for publication.
Funding in part by NIMH Grant 3R01MH125958: “Ethical Perspectives Towards Using Smart Contracts for Patient Consent and Data Protection of Digital Phenotype Data in Machine Learning Environments”.
The focus of this article is on emerging trends and ethical issues raised by metaverse-enabled healthcare. Healthcare is anticipated to become a significant driver of the metaverse economy as more investors, users, and industries engage with and invest in this technology. The metaverse is being envisioned as a new frontier for virtual patient consultation and therapy, personalized healthcare,2 medical education,3 and biomedical research and testing.4 Developers are also seeking to shape the metaverse as a forum for gamifying health and wellness, delivering real-time care, and providing an immersive experience for patients, among other envisioned applications.
A valuable and novel feature that separates the metaverse from existing digital technology systems is its virtual immersion of the patient experience, anytime and anywhere. In this way, the metaverse promises an immersive, shared, and synchronous experience that exceeds the two-dimensional (2D), screen-mediated experience of contemporary telehealth services. This distinction reflects what classic social presence theory identifies as the degree to which users feel “real” others are present and responsive within a mediated environment.5
Later work on presence and co-presence in immersive virtual environments expanded this idea, describing how multisensory cues, spatial embodiment, and synchronous interaction enhance the feeling of “being there together.”6 In a healthcare context, these mechanisms could help recover aspects of empathy, trust, and relational immediacy that are often diminished in 2D telehealth encounters. Real-time multisensory social interactions can generate greater value and interaction quality than what was previously possible.7 A metaverse equipped with new spatial computing capacities8 might also be more impactful than telehealth in that it allows for deep phenotyping, a wider range of motion (e.g. during clinical examinations), and broader cognitive focus.9
Comprehensive governance for the metaverse is only beginning to emerge. Existing frameworks, such as the U.S. Food and Drug Administration’s Digital Health and Software as a Medical Device (SaMD) guidance,10 the European Commission’s Artificial Intelligence (AI) Act11 and General Data Protection Regulation (GDPR), and the U.S. National Institute of Standards and Technology’s AI Risk Management Framework,12 offer partial scaffolding for regulating immersive, data-intensive technologies. However, these frameworks were designed for specific contexts (e.g. telehealth, AI risk, or data protection) and do not yet fully address the hybrid clinical, commercial, and experiential dimensions of the metaverse. More fundamentally, the metaverse raises core ethical questions about the nature of human health and disease (e.g. can patients be sick in the real world but healthy in the metaverse or vice versa?), the scope of the patient-provider relationship (e.g. what are the processes for licensure in the metaverse?), and how privacy rights, informed consent, and data protections translate to virtual healthcare spaces (e.g. who owns an avatar’s health data, and what protections, if any, do real patients have if their avatars are harmed in the metaverse?). These and other pressing questions compel embedded ethical, legal, and social implications research and anticipatory governance of the metaverse. Its ethical analysis benefits from science and technology studies and human-computer interaction frameworks, for example, sociotechnical imaginaries,13 value-sensitive design,14 and contextual integrity,15 alongside presence and embodiment theories from virtual reality (VR).16,17 Normative and empirical work is needed if the metaverse is to help us reimagine a state-of-the-art healthcare system that does not reproduce the systemic inequities and inefficiencies that persist in the real world. We intend for this article to launch such a research agenda.
The modern internet dates to January 1, 1983, when the ARPANET adopted the TCP/IP protocol suite. Since then, the internet has altered the course of human existence and transformed the way information is transmitted, managed, and exchanged. A comparable evolution appears to be underway as developers lay the foundation for a faster, three-dimensional platform for information exchange and processing by creating immersive virtual spaces. This next phase of socio-technical evolution reflects not only advances in computing and interface design but also the emergence of new sociotechnical imaginaries: collective visions of how technologies will shape and be shaped by society.
These imaginaries inform the design of immersive platforms, the values embedded in their infrastructures, and the forms of embodiment and presence they enable. Neal Stephenson first introduced the concept of the metaverse in his 1992 novel Snow Crash,18 describing it as a vast virtual environment that coexists with the physical world. Matthew Ball proposed a more technologically detailed definition of the metaverse as a “massively scaled and interoperable network of real-time rendered 3D virtual worlds that can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications and payments.”19
A key feature of Ball’s definition is its focus on synchronous experience between many users in a shared virtual environment, but it fails to capture the nuances of more intimate interactions between, say, a patient and their provider in a clinical encounter. Herman Narula’s conception of the metaverse takes account of this experience as a “collection of realities, including the real world or ‘home reality’ and a series of other worlds that a society imbues with meaning,”20 and emphasizes how “events, objects, and identities can exist in and be modified by multiple worlds in the metaverse”20 where the “interplay between worlds, and the associated ongoing creation and transfer of value, is the foundation for virtual society.”20
Experts believe that the metaverse is on track to become a “corporate internet,” and the race to commercialize is underway. Web 3.0 development has been driven primarily by private equity and built using proprietary data ecosystems. Some reports estimate that private equity investments in the metaverse have totaled between $38 billion and $120 billion (€33–103 billion) since 2020 and are expected to balloon to $5 trillion (€4.3 trillion) by 2030.21 Despite these massive investments, mainstream development and adoption of the metaverse remain marginal.
There are practical and policy reasons for this. The prohibitive costs of the compute and hardware needed to functionally scale Web 3.0 (e.g. graphics processing units and microchips) limit participation in metaverse development to the largest and most powerful tech companies. Indeed, Tim Sweeney, CEO and founder of Fortnite maker Epic Games, commented, “This metaverse is going to be far more pervasive and powerful than anything else. If one central company gains control of this, they will become more powerful than any government and be a God on Earth.”22
Participation in the metaverse also relies on users acquiring costly enabling technologies, such as augmented reality (AR) or VR devices, AI-based sensory systems, and robust broadband internet connections. These infrastructural requirements inevitably limit participation to those users able to afford the enabling technologies needed to engage fully in the metaverse (see Access and Equity below). These challenges contribute to concerns about the consolidation of infrastructural power in digital economies that potentially exclude large swaths of the population.23,24
New government regulations in the U.S. and European Union (EU) are also shaping how industry builds out the metaverse. Specifically, the EU Digital Markets Act (DMA) expands existing antitrust principles to restrict companies from controlling platform service delivery in ways that inhibit market competition.25 These actions reflect long-standing concerns about the concentration of infrastructural power in digital economies. As Zuboff26 observes, contemporary platform models derive much of their value from the continuous extraction and monetization of behavioral data, positioning users as data sources rather than active participants. In the context of the metaverse, such dynamics could give rise to new forms of “digital enclosure,” in which access to virtual health environments, data streams, and interaction standards is restricted by proprietary architectures. Ensuring an open and trustworthy metaverse will therefore require deliberate attention to interoperability, open protocols, and participatory models of governance.23,24 Absent these safeguards, the metaverse risks reproducing, if not amplifying, the monopolistic and opaque structures that characterize much of today’s Web 2.0 landscape. The DMA’s restrictions on designated “gatekeeper” platforms will likely also affect how companies are legally permitted to develop the metaverse. In the U.S., the Federal Trade Commission (FTC) has introduced new rules governing mergers and acquisitions among technology companies. Proposed changes to the Hart-Scott-Rodino Act, which mandates the reporting of large-scale transactions to the FTC and the U.S. Department of Justice, could challenge efforts by tech firms to monopolize metaverse infrastructure and its underlying data networks.
Despite these challenges, metaverse services are rapidly emerging. The following sections detail four emerging use cases for the metaverse; we comment on their common ethical, legal, and social issues in later sections.
AR and VR now enable immersive health and wellness experiences. For example, emerging platforms offer virtual exercise classes (see, for example, Play Innovation UK https://playinnovation.co.uk/), meditation and mindfulness sanctuaries, and therapeutic healing sessions.27 These spaces allow users to engage in health-promoting activities and to monitor health states, often with the aid of wearable sensor technologies. A wide range of consumer digital devices now effectively analyze real-time physiological data, including blood sugar, respiration rate, blood pressure, temperature, and pulse oximetry. Skin patches, rings and earbuds, and intelligent clothing (see, for example, Wearable X and Sensoria Fitness) provide digital and haptic feedback used to support health and wellness monitoring.28,29 Furthermore, these devices allow diverse types of health information from the real world to be bundled and transmitted through digital representations to better understand how wellness activities impact health. As data streams from wearable sensors, intelligent clothing, and avatars become increasingly granular, they enable inferences about mood, cognition, and physiology that may extend beyond the data individuals knowingly share. Such inferential privacy risks highlight the need for governance mechanisms that limit, or at least make transparent, the kinds of conclusions that can be drawn from combined data sources.30
Gamification, or the application of game-design elements in non-game contexts, can be effective for promoting behavior change,31 healthy eating,32,33 and medication adherence.34 Studies have found that receiving points and badges,35 pursuing quests and challenges, storytelling,36 and cultivating social connections with other players37,38 can also positively impact individuals’ engagement in health behaviors. The metaverse can heighten these motivating elements.39,40 Individuals can also earn monetary rewards in applications that allow for “play-to-earn,” “exercise-to-earn,” or “move-to-earn” schemes,41 typically through the use of tokenized incentives or credits redeemable for cash prizes or products. This integration of “tokenomics” establishes not only a set of economic incentives for participation but also a set of micro-economies within the metaverse that keep players earning and spending in a circular fashion within games or platforms (see Gamefi.org). When properly orchestrated, these virtual economies may spur members to achieve health and wellness goals and potentially make better health choices in the real world. However, gamification has also been critiqued for its overreliance on extrinsic versus intrinsic incentives for health and behavior change.42 Insights from Self-Determination Theory43 emphasize that lasting behavior change depends on internalized motivation (autonomy, competence, and relatedness) rather than external rewards. Integrating evidence from frameworks like the Behavior Change Technique taxonomy44 or lessons from the persuasive technology literature45 may help designers identify mechanisms that support intrinsic motivation while avoiding manipulative or coercive design practices.
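The circular earn-and-spend loop of “move-to-earn” tokenomics can be made concrete with a minimal sketch. The class name, exchange rate, and daily cap below are illustrative assumptions, not any real platform’s design:

```python
"""Illustrative sketch (not a real platform's API) of a "move-to-earn"
micro-economy: physical activity is converted to tokens, which can then
be spent inside the platform, keeping value circulating in a loop."""

class MoveToEarnLedger:
    STEPS_PER_TOKEN = 1000   # assumed exchange rate (hypothetical)
    DAILY_TOKEN_CAP = 50     # assumed cap to discourage gaming the system

    def __init__(self):
        self.balances = {}   # user -> token balance

    def record_activity(self, user: str, steps: int) -> int:
        """Convert a day's steps into tokens, subject to the daily cap."""
        earned = min(steps // self.STEPS_PER_TOKEN, self.DAILY_TOKEN_CAP)
        self.balances[user] = self.balances.get(user, 0) + earned
        return earned

    def spend(self, user: str, cost: int) -> bool:
        """Redeem tokens inside the platform (e.g. for a class or product)."""
        if self.balances.get(user, 0) >= cost:
            self.balances[user] -= cost
            return True
        return False

ledger = MoveToEarnLedger()
ledger.record_activity("alice", 12_500)   # 12 tokens earned
ledger.spend("alice", 5)                  # redeemed within the platform
print(ledger.balances["alice"])           # prints 7: value stays in circulation
```

A deployed scheme would also need fraud detection (e.g. distinguishing real from simulated movement) and a settlement layer; the sketch shows only the incentive loop that keeps players earning and spending inside the platform.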
Developers envision that future patients will be able to pursue treatments and therapies for various mental health conditions with licensed professionals inside the metaverse.27 Virtual Reality Exposure Therapy (VRET) and Augmented Reality Exposure Therapy (ARET), for example, can be supported in the metaverse by simulating contact with fear-inducing stimuli through visual and aural sensory channels. V/ARET technologies have demonstrated promise in managing post-traumatic stress disorder (PTSD), notably combat-related trauma among military veterans, and other sources of distress, such as agoraphobic avoidance and social anxiety in patients with psychosis.46 There is growing recognition that people treat virtual experiences as real enough (akin to those produced by real-world scenarios47) to have observable effects on their emotional and physiological responses. This phenomenon aligns with the Proteus effect, in which users’ behaviors and self-perceptions are shaped by the attributes of their avatars,48 and with neuroscientific evidence on body ownership and embodiment in virtual environments.49 This suggests that the metaverse, and its enabling technologies, may offer therapeutic mental health interventions for addressing paranoia and cravings,50 anxiety,51 and fear,52 among other conditions. Virtual technologies have also been shown to offer benefits for pain management, for example, using simulated environments to help manage chronic pain.53 AR also shows promise for enhancing patient education54 and motivation.55 As the metaverse continues to develop, these technologies offer immense therapeutic promise but hinge on collecting sensitive psychological and biometric data.
AR/VR systems are also being used for preoperative planning, enabling surgeons from across the globe to collaborate on surgical cases using synchronous virtual imaging systems and real-time communication. Holographic software can help surgeons and patients visualize a procedure together.56 For example, the holographic stereotactic neurosurgery research tool56 integrates patient-specific brain models into virtual environments, where clinicians can collaboratively plan approaches to therapeutic neuromodulation. Some researchers57 have shown that importing personalized 3D models of organ systems into the metaverse (e.g. using https://www.anothereality.io/) can help surgeons consult with one another as avatars and collaboratively analyze a patient’s surgical needs. These applications reflect a growing integration of digital twin technologies (high-fidelity, data-driven models that mirror patient anatomy or device performance)58 and their convergence with in silico clinical trials that simulate therapeutic outcomes before real-world intervention.59 They foreshadow a metaverse capable of uniting personalized modeling, surgical rehearsal, and predictive analytics within a shared virtual environment.
The metaverse is also emerging as a forum for in silico clinical trials and simulation. Wang et al.60 present the metaverse as a virtual space “enriched by effectively unlimited data” and capable of driving significant innovations in medical technology and AI/machine learning (ML). The authors envision that the metaverse will be populated with data based on patient avatars that will support dynamic evaluation of AI-based SaMD and innovations in medical device development tools. The metaverse may offer virtual environments in which clinical researchers can collaboratively learn via virtual simulation and in silico modeling and testing, using digital twins.61 Digital twins may eventually allow for seamless connection and real-time data exchange between digital and physical entities using data fusion techniques, high-dimensional data processing, big data analytics, and cloud computing to store and elaborate voluminous data for the purpose of monitoring, maintaining, and optimizing the performance of physical systems (including human systems). By integrating these multiple, diverse technologies, the metaverse promises to advance clinical knowledge and research as well as personalized medicine.
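The monitoring loop that digital twins are expected to support (streaming measurements from a physical counterpart fused into a virtual model, which is then queried for decision support) can be sketched in miniature. The class, smoothing method, and alert thresholds below are illustrative assumptions, not a clinical system:

```python
"""Minimal sketch of a digital-twin feedback loop: real-world sensor
readings are fused into a virtual model's state, and the modeled state
is monitored against a target range. Illustrative only."""

from dataclasses import dataclass, field
from statistics import mean

@dataclass
class CardiacTwin:
    patient_id: str
    window: list = field(default_factory=list)  # recent heart-rate samples
    window_size: int = 5

    def ingest(self, heart_rate: float) -> None:
        """Fuse a new real-world measurement into the twin's state."""
        self.window.append(heart_rate)
        if len(self.window) > self.window_size:
            self.window.pop(0)  # keep only the most recent samples

    @property
    def modeled_rate(self) -> float:
        """Smoothed estimate standing in for a richer physiological model."""
        return mean(self.window)

    def alert(self, low: float = 50, high: float = 110) -> bool:
        """Monitoring step: does the modeled state fall outside the range?"""
        return not (low <= self.modeled_rate <= high)

twin = CardiacTwin("patient-001")
for sample in (72, 75, 74, 118, 121):  # streamed from a wearable sensor
    twin.ingest(sample)
print(round(twin.modeled_rate), twin.alert())  # prints: 92 False
```

A real twin would replace the moving average with a validated physiological model and close the loop back to the physical system; the sketch only shows the ingest-model-monitor cycle the text describes.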
Healthcare education is another trending use case for the metaverse. Together with VR/AR, the metaverse can help train practitioners to improve their skills at lower cost and with more opportunities to practice risky or highly specialized procedures without causing physical or psychological harm to real patients. Applications like Medical Holodeck’s Dissection Master XR (https://www.medicalholodeck.com/en/) offer digital twin renderings of a human body that allow medical students to perform virtual dissections. Other systems (e.g. FundamentalVR) add haptic feedback to facilitate motor skill acquisition, such as suturing, knot tying, and fine dissection, by simulating the sensation of cutting into soft tissue, muscle, and bone.62 While the field is still nascent, a recent systematic review63 found that AR-based surgical teaching improves trainee competency and user ratings and reduces post-operative complication rates compared with traditional techniques. The metaverse would allow multiple trainees and educators to be present in the same virtual space at once, facilitating knowledge exchange.
Through gamification and realistic imagery, the metaverse also allows healthcare professionals to reverse roles and put themselves in the position of their patients64 as an empathy-building intervention. The immersive experience of the metaverse along with high-quality graphics using VR/AR can also enhance simulation-based training and pre-post intervention knowledge for different types of healthcare professional learners.65
We argue that the metaverse raises distinct ethical considerations that deserve focused attention from bioethicists and highlight seven high-priority areas for consideration: access and equity, avatar rights, data privacy and protection, data control and ownership, consent, medical licensure and liability, and medical billing.
The COVID-19 pandemic demonstrated that telehealth can effectively overcome many of the physical barriers to clinical care. Remote monitoring capabilities made possible through telehealth were also shown to improve both care quality and access for many geographically isolated populations during lockdown periods. The telehealth proof of concept ushered in by the pandemic set the stage for the metaverse. However, new barriers to entry are emerging. Unequal access to the costly technologies that enable individuals to access the metaverse (e.g. VR/AR headsets and powerful broadband internet) risks compounding existing inequities linked to a “digital divide.”66 Moreover, the lack of equitable access to privacy-preserving technologies, such as encrypted data sharing tools or dynamic consent models, could exclude vulnerable populations from meaningful participation in the metaverse. These health inequities may be further exacerbated if public and private investments in the metaverse compete with funding for public health initiatives in the real world.
On the other hand, treatment seeking in the metaverse in the form of an avatar might also offer novel opportunities to mitigate provider bias and other forms of discrimination linked to health inequities. A patient who experienced discrimination at the point of care based on their physical appearance may find appeal in the pseudonymous nature of healthcare interactions in the metaverse. The extent to which the pursuit of healthcare services might be liberated from bias or other forms of socioeconomic or cultural stigma will also remain constrained by practical considerations such as the ability to pay for virtual care (see section on Billing and Payment).
If the metaverse indeed emerges as a desirable and clinically effective platform for care delivery, corporations may selectively curate public engagement in ways that do not benefit all users equitably. For example, companies might offer free or discounted services in exchange for user data to expand metaverse services, raising significant ethical concerns. As Prainsack and Forgó67 have argued, paying for personal data puts poor communities at disproportionate risk for data disclosure and exploitation compared to those who can afford to pay more for greater data protections. This phenomenon is already observed in publicly available large language models such as OpenAI’s ChatGPT, where only paid subscribers are granted access to higher-performance tools and the option to delete their data. These trends in consumer generative AI tools foretell possible equity issues for the metaverse. Policymakers and developers must therefore actively strategize avenues for responsible and equitable access to the metaverse, particularly in these early stages before financial incentive structures are established.
Altundas and Karaarslan68 suggest that in the future, avatars will be “our entry point of our data in the digital world,” helping to integrate and convey information from wearables, the Internet of Things, and other bioinformatics data streams. This convergence amplifies ethical concerns about the governance, ownership, and security of the real patient data used to generate avatars. Photorealistic “deepfakes” further complicate these dynamics. Avatars generated using real-world user data blur the boundaries between personal identity and virtual representation. An avatar empowers users to operate under the guise of a persona other than themselves. Avatars can provide access to otherwise inaccessible spaces and introduce a degree of distance between the user and their virtual identity. However, this distance does not absolve users—or the platforms hosting them—of the responsibility for protecting user data. For instance, avatars that conceal the identity of their operator may still enable unethical actions, such as unauthorized data collection or breaches of medical confidentiality. It may be argued that avatars representing certain roles in the metaverse (e.g. doctors) should never be fully anonymous. This might be especially important in the domain of healthcare, where a code of medical ethics is well established.
Many avatars will serve as digital representations of real patients, tethered to real-world individuals via their health data. Extant rights, protections, and due processes owed to patients and consumers in the real world should serve as default guidelines for how to treat avatars in the metaverse. For instance, breaches of privacy or misuses of the health data used to generate avatars in metaverse applications could result in significant real-world harm to individuals. Our position that basic rights should be extended to avatars in the metaverse, and that these rights in turn create ethical responsibilities, is further supported by Martinez and Cho,69 who specifically caution that, to the extent digital simulacra are seen as posing low or minimal risk because they operate on data as opposed to humans, the moral distance they potentially afford could reduce researchers’ sense of responsibility to the people and groups behind the data. While the use of digital simulacra provides a benefit in minimizing physical risks to individuals from medical research, there remains a concern that enduring responsibilities to promote social benefit and minimize group-level risks may become less visible to researchers.
As a general point, we propose that ethical obligations toward avatars should carry over to the metaverse, including obligations to act in their best interests, to do no harm, and to ensure fair treatment and just allocation of goods and healthcare services. The degree of twinning between actual patients and their avatars through bioinformatics data amplifies these ethical obligations. What happens to an avatar might increasingly be felt in physical and psychological ways by real human users. Indeed, reports of harms experienced in virtual worlds (e.g. Second Life)70 offer evidence that distress inflicted in the metaverse is likely to have real-world negative impacts.
While a primary goal of extending rights to avatars is to protect their human counterparts, even avatars created from entirely simulated data deserve basic protections. At least two types of avatars are relevant here: those that are fully simulated (digitally designed, as with other forms of digital art) and those synthesized using data from real-world patients. This distinction has important implications for moral status, given that avatars are tethered by varying degrees to real humans whose data should be protected. Avatars developed from simulated data may have symbolic links to real humans but do not contain any patient-related data. Synthesized avatars, on the other hand, represent inferences derived from real-world patient data and are composed not just of a surface image but of both data points and models. Wagg and colleagues provide a helpful explanation and schematic of the differences.71 While the question concerning the ethical treatment of simulated versus synthetic avatars deserves further normative exploration, we propose that the scope of rights and ethical obligations afforded to avatars should correspond to the degree of “tethering” to humans and their bioinformatic data. In particular, the greater the tethering, the stronger the ethical obligations to safeguard privacy and data integrity. Given that avatars operate as informational bodies, that is, digital composites that materialize personal, biometric, and behavioral data,72 breaches of these representations can produce both individual and group harms, as patterned inferences reveal collective traits or vulnerabilities.73 Addressing such risks requires revisiting how moral standing and accountability apply to hybrid digital entities that represent persons, even when not sentient.74 Furthermore, maintaining verifiable attribution for professional avatars is essential to prevent impersonation, misinformation, and harm.
Technical measures for auditability and traceability within socio-technical systems,75 as well as privacy-preserving design strategies in augmented and mixed reality,76 can help balance transparency with appropriate protections for personal identity and contextual privacy.
A second justification for extending rights to “non-tethered” or “distantly tethered” avatars is that codes of conduct serve both to protect those who experience harm and to preserve social order. The lack of moral standing of avatars in the real world does not justify treating them with violence or contempt in the virtual world.77 Permitting dehumanizing behavior in the metaverse may invite negative or harmful habits that undermine respectful interactions among entities with moral status in the real world. Similar moral reasoning has been adopted in robotics: encouraging respectful behavior toward robots (entities without recognized moral status) is supported,78 not only for the robot’s sake but also for modeling respect toward other humans.77 These issues raise compelling questions about distinctions between patient-as-avatar and avatar-as-patient and their candidacy for rights. While many of the philosophical debates around the digitized ‘self’ are beyond the scope of this paper, they are discussed in a robust body of legal (Day 2009) and medical internet (Graber and Graber 2010) literature.
The metaverse frustrates at least three regulatory mechanisms we depend on in the real world to ensure patient safety, protect health information, and maintain affordability of care. These mechanisms include legislation and policies around data privacy and protection, medical licensure, and healthcare payment and billing. For-profit entities principally involved in building out the metaverse will likely establish rules and standards that best serve their interests and may not be easily overturned by post-hoc regulations, policies, or advocacy if challenged in the future. We therefore advocate for anticipatory governance of the metaverse, broadly, in ways that prevent nefarious actors from exploiting future users.
Efforts to scale the metaverse introduce new risks to patient privacy and data security. It remains unclear whether data generated in the metaverse are protected in the same ways that data are governed by privacy regulations in the real world. The multisensory, real-time data collection and new spatial computing capabilities of the metaverse8 enable continuous behavioral and physiological monitoring, supporting highly personalized care and deep phenotyping. However, these same features also pose ethical challenges related to the storage, ownership, and secondary use of highly sensitive health data.
Entirely simulated or synthetic data can be generated in the metaverse (e.g. using Nvidia’s Omniverse platform) alongside real-world patient data for analysis, exchange, or training of clinical models. In both instances, the metaverse functions as a jurisdiction-less computing environment, complicating the application of existing privacy laws and enforcement mechanisms. Although synthetic data are often assumed to be de-identified, recent research79,80 shows that the identities of real people can still be inferred through data triangulation and linkage. Avatars (embodied, interactive representations of individuals within immersive environments19) and digital twins (i.e. data-driven, computational models that replicate patient anatomy, physiology, or device performance for simulation and decision support61) maintain bidirectional ties with real-world patients. As such, insights from virtual care can directly shape clinical decisions, compelling regulators to clarify what ethical and legal protections ought to extend to data generated in virtual environments and who bears oversight responsibility.
Traditional data protection models assume clear boundaries between contexts of data collection and use. The principle of contextual integrity15 is instructive, emphasizing that privacy depends on maintaining appropriate information flows relative to social roles and expectations. As metaverse data traverse national and regulatory borders, transborder data governance frameworks81 will be essential to harmonize protections and prevent jurisdictional gaps in user rights and accountability. Adapting Europe’s GDPR82 principle of applying protections where data are processed, rather than where they originate, could provide an early operational model. Comparable precedents in cloud-based data sharing suggest that transparent encryption standards, secure storage verification, and informed consent across points of virtual care will be key to maintaining user trust.
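The contextual integrity principle invoked above can be made concrete by modeling an information flow as a tuple of actors, information type, and transmission principle, and checking it against the norms of the context in which the data were collected. The following is a minimal sketch; the norm vocabulary (`"with-consent"`, `CLINIC_NORMS`, and so on) is hypothetical, not drawn from any existing standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """An information flow described by the parameters contextual
    integrity uses: actors, information type, transmission principle."""
    sender: str
    recipient: str
    info_type: str
    principle: str  # e.g. "with-consent", "required-by-law"

# Hypothetical norms for a "clinical care" context: each entry names a
# flow pattern that the context's informational norms permit.
CLINIC_NORMS = {
    ("clinician", "specialist", "vitals", "with-consent"),
    ("clinician", "insurer", "billing-codes", "required-by-contract"),
}

def respects_context(flow: Flow) -> bool:
    # A flow maintains contextual integrity only if it matches a norm
    # of the context in which the data were originally collected.
    return (flow.sender, flow.recipient,
            flow.info_type, flow.principle) in CLINIC_NORMS
```

Under this framing, a clinician forwarding vitals to a specialist with consent is permitted, while routing the same patient's gaze data to an advertiser fails the check because no clinical norm licenses that flow, regardless of where the recipient's servers sit.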
Beyond protection, the metaverse also redefines ownership. Avatars in the metaverse might be twinned to varying degrees with real patients, whose multimodal datasets are sourced together from molecular, phenotypic, and other human behavioral data that are personally identifying. Because avatars that are partially or wholly the product of synthetic data can still preserve characteristics of real patients, unwanted disclosure can result in individual or group harms. Continuous, multimodal data collection yields clinically relevant information, drawing insights from the field of digital phenotyping83 and enabling the creation of “digital twins”61 that can inform real-world decision making.84 Improved data provenance and permissions tracking invite alternative ownership models, for example those based on blockchain-enabled data licensure or “data loans,”85 in which users temporarily grant others access to their data while retaining underlying ownership rights, analogous to lending artwork to a museum. These proposed governance mechanisms align closely with emerging Web 3.0 principles of decentralization, verifiability, and user-centric control. Distributed-ledger technologies could technically enable these models by providing immutable audit trails for data provenance, programmable smart contracts for dynamic consent management, and tokenized licensing frameworks that allow data sharing without centralized intermediaries. These models rely on dynamic consent and create opportunities for users to benefit materially from data sharing.
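The “data loan” idea can be illustrated with a toy append-only ledger: time-limited, revocable access grants recorded in a hash-chained log whose integrity can be verified after the fact. This is a sketch under stated assumptions, not a real blockchain or smart-contract API; all names (`DataLoanLedger`, `grant`, `revoke`) are hypothetical.

```python
import hashlib
import json

class DataLoanLedger:
    """Toy append-only ledger sketching a 'data loan': time-limited,
    revocable access grants with a hash-chained audit trail."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def _append(self, record):
        record = dict(record)
        record["prev_hash"] = (self.entries[-1]["hash"]
                               if self.entries else self.GENESIS)
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

    def grant(self, owner, borrower, dataset, expires_at):
        # Owner lends access to a dataset until `expires_at` (logical clock).
        self._append({"op": "grant", "owner": owner, "borrower": borrower,
                      "dataset": dataset, "expires_at": expires_at})

    def revoke(self, owner, borrower, dataset):
        # Owner withdraws access early; the original grant stays on the log.
        self._append({"op": "revoke", "owner": owner, "borrower": borrower,
                      "dataset": dataset})

    def has_access(self, borrower, dataset, now):
        allowed = False
        for e in self.entries:
            if e["borrower"] == borrower and e["dataset"] == dataset:
                if e["op"] == "grant":
                    allowed = now < e["expires_at"]
                elif e["op"] == "revoke":
                    allowed = False
        return allowed

    def verify(self):
        # Recompute the hash chain to detect tampering with past entries.
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

The design choice here mirrors the text: access is always a loan (it expires or can be revoked, so underlying ownership never transfers), while the hash chain supplies the immutable audit trail for provenance that a distributed ledger would provide in a production system.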
However, paying individuals for their data risks creating exploitative or inequitable markets67 (as discussed under Access and Equity). Emerging governance mechanisms such as data trusts86,87 and data commons88 offer collective approaches to ownership that balance individual control with shared stewardship. Embedding these principles into metaverse design could shift data from being private commodities toward shared public goods that support innovation, transparency, and equity.
Despite the potential sensitivity of data generated in the metaverse and the associated need for protections, obtaining informed consent for the collection or use of avatar data is complicated.
First, it is unclear whether avatars or their human controllers should provide this consent. There are no legal precedents to support the enforceability of consent contracts signed by avatars (pseudonyms) unless they are also signed with a person’s legal name. There might be exemptions under contract law if an avatar has a verifiable public affiliation with a legal entity, but further consideration is needed. Second, where spatial computing or ambient intelligence89 enables passive collection of avatar data from collective spaces without direct consent from avatars or their human controllers, what consent processes, if any, should apply before entering shared virtual spaces? For example, a commercial retailer might collect eye-tracking and other behavioral data from avatar shoppers and use these data for marketing, sale, or exchange. Human users might be required to accept the retailer’s terms and agreements before their avatar can enter the virtual store. A metaverse healthcare equivalent might require a human patient to give broad consent to the use of any data collected during their virtual clinical visit before receiving care. Such data may be actively or passively collected, vary in sensitivity, and enable inferences not directly related to the patient’s care. Should patients therefore be given the opportunity to offer distinct, granular consent to sharing passively collected data? And if they decline to share, will they be denied service?
This is one example of how informed consent procedures employed in real-world clinical visits may not easily transpose to the metaverse. Consent agreements provided to patients during clinical visits in the physical world distinguish neither among the different types of health data collected nor the means through which they were collected. However, ambient health data collection and analysis are likely to be embedded in the very fabric of the metaverse, raising the question of whether new approaches to consent are more appropriate for metaverse visits. Ensuring privacy and safety when ambient technologies are involved will also depend on aligning emerging metaverse standards with existing governance frameworks such as the NIST AI Risk Management Framework (2023) and ISO/IEC 23894:2023, which outline concrete processes for identifying, measuring, and mitigating AI-related risks across the system life cycle, and the American Medical Association’s (2022) Privacy Principles for Health Data Outside The Health Insurance Portability and Accountability Act of 1996 (HIPAA), which highlight accountability gaps in non-HIPAA contexts.
Indeed, the metaverse and other Web3-enabled platforms present opportunities to develop novel consent models.90 New data tagging capabilities will be mainstream in the metaverse, meaning users could have more granular control over the scale and scope of their data sharing permissions. In so doing, the metaverse could help to catalyze a shift toward dynamic consent as the default model over explicit or broad consent that currently provides only narrow choices for data contributors.
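The tagging-plus-dynamic-consent model described above can be sketched as a default-deny permission filter over tagged data items. The tag vocabulary (`"active"`/`"passive"`, category names) and class names below are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass, field

@dataclass
class TaggedDatum:
    """A data item carrying machine-readable tags describing how it was
    collected and what it contains (hypothetical vocabulary)."""
    value: str
    collection: str  # "active" (volunteered) or "passive" (ambient)
    category: str    # e.g. "vitals", "gaze", "chat"

@dataclass
class DynamicConsent:
    """Per-tag permissions the user can update at any time; default-deny,
    so nothing is shareable until explicitly allowed."""
    allowed: dict = field(default_factory=dict)

    def set_permission(self, collection, category, permitted):
        self.allowed[(collection, category)] = permitted

    def permits(self, datum):
        return self.allowed.get((datum.collection, datum.category), False)

def shareable(data, consent):
    """Release only the items the current consent profile permits."""
    return [d for d in data if consent.permits(d)]
```

Because permissions are keyed on tags rather than on a one-time blanket agreement, a patient could, for instance, allow actively reported vitals while withholding passively captured eye-tracking data, and change either choice mid-relationship, which is the granularity that broad consent cannot express.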
According to a 2023 report by PricewaterhouseCoopers,91 the Emirates Health Service became the first healthcare authority to scale a metaverse platform, called MetaHealth.92 A fully functional metaverse hospital has been under development by the Thumbay Group, whose leaders promise a metaverse-enabled hospital where patients can seek care through their avatars.93 With the potential for hospital-based service delivery on the horizon, proper licensing and the translation of clinical standards of care into the metaverse are highly consequential issues for patient safety. However, credentialing bodies that mandate appropriate licensure of all clinicians are likely to be inoperable in the metaverse, at least in the near term.
Requirements for medical licensure are jurisdiction-specific and unlikely to apply uniformly to virtual spaces that are geographically (and geopolitically) unconstrained. Whether licensure is required in the metaverse, and how it would be verified, is at present ambiguous, heightening concerns about medical negligence and patient safety. Regulators and licensing bodies must consider new standards that encompass not only clinical competencies but also adherence to data protection protocols, ensuring that digital patient records are maintained with the same rigor as traditional health records.
Ethical principles of beneficence and nonmaleficence rest on treating patients according to the best interest standard. Healthcare professionals are thus held to a standard of care that would need to be adapted to the metaverse environment and accommodated to the specifications of virtual patient treatment, given that no evidence-based standards of care yet exist for virtual environments. Ensuring equivalence in these quality-of-care standards will be important if clinicians practicing in the metaverse are to avoid liability for injury or medical negligence.
Many questions surrounding licensure and accountability in the metaverse mirror those in telemedicine. The Interstate Medical Licensure Compact established by the Federation of State Medical Boards (2023) offers a partial model for cross-jurisdictional practice, though variation across states continues to obscure standards of care. As telemedicine scholarship94 notes, malpractice in virtual settings often hinges on whether clinicians meet the professional expectations appropriate to the mode of delivery. Following the expansion of telehealth during COVID-19, some95 have called for adaptive standards that reflect how digital environments mediate clinical judgment, consent, and documentation. Standards may have to adapt to changing notions of harm (bodily, psychological, and dignity-related) in line with new understandings of human-avatar relationships and avatar rights in the metaverse. This evolution of standards must explicitly integrate data ethics concerns, addressing the risks associated with unauthorized data access or the ethical implications of cross-border data flows inherent to a decentralized virtual healthcare environment.
The metaverse also invites new payment models and approaches for assessing actuarial risk for virtual patients, assets, and services. Coverage and reimbursement for metaverse-based care will likely follow the precedents set by telehealth policy. The Centers for Medicare & Medicaid Services continues to expand and refine the list of covered Medicare telehealth services through annual updates to the physician fee schedule. However, valuation frameworks have not kept up with technological capability, as most payment models were designed for episodic, in-person encounters rather than continuous or hybrid virtual interactions. Evidence from recent telemedicine studies96–98 suggests that while remote care can achieve comparable quality outcomes, cost savings depend on integration with existing clinical pathways and appropriate triage mechanisms. New reimbursement models for digital therapeutics and virtual interventions are beginning to emerge,99 but translating these precedents to the metaverse will require more granular definitions of “virtual service” and clearer standards for documenting time, supervision, and medical necessity.
As virtual health assets, such as digital medical equipment and virtual clinics, become valuable and routinely used in the metaverse, insurers are likely to need new tools to protect these assets against loss, damage, or cyber threats. To mitigate financial risk, insurers (either government or private) might evaluate encryption methods and access controls in an effort to safeguard transaction data. Metaverse-enabled hospitals, like their real-world counterparts, could require protections against financial liabilities. This might involve using dedicated secure platforms for managing financial records alongside clinical data to prevent data-related discrepancies in billing. Insurers might also consider adapting their coverage models to include virtual healthcare services, virtual diagnostics, and other forms of remote medical care, as well as developing new methods for assessing provider error in virtual diagnostics or treatment provided within the metaverse. Drawing on lessons learned in telehealth during the COVID-19 pandemic, health services sought out of state or out of network might not be covered, even if accessible to patients in the metaverse. This situation suggests that evaluating cross-jurisdictional data sharing protocols, such as verifying secure data transfer agreements between providers, may be useful in mitigating risks.
As Narula describes, a “true” metaverse can only be realized if and when the activities that take place in virtual spaces generate value for those in the real/physical world.20 If treatment seeking in the metaverse becomes routine, insurers will undoubtedly want to evaluate whether health services rendered in the metaverse demonstrate measurable cost savings, value, efficacy, and/or necessity, similar to how coverage of telemedicine services was negotiated during the COVID-19 pandemic.
Metaverse-enabled healthcare constitutes a new virtual paradigm for healthcare delivery that evokes both technological optimism and skepticism. The metaverse is in part powered by technologies like AI, AR/VR, and computational modeling and simulation (e.g. data synthesis), which have separately garnered empirical and policy attention from bioethicists, but not in the aggregate in the context of the metaverse. In this article, we have examined what we view to be pressing ethical considerations raised by a convergence of these enabling technologies for pursuing healthcare and research in the metaverse (Appendix A). Our synthesis is not, however, exhaustive. We outlined aspects that distinguish the metaverse from other digital-health technologies available to date and highlighted promising applications and ethical challenges associated with designing, experiencing, and operating in these new virtual spaces. We recommend developing standardized protocols for data handling in virtual environments, including measures such as regular audits of data usage and strict verification of data provenance to better protect real and synthetic patient data from misuse.
In our view, some of the most pressing ethical issues pertain to the protection of patients in the real world who may turn to the metaverse as an alternative to existing forms of care. A better understanding of these patients’ needs and why they may be better met in virtual rather than real-world clinical environments will be essential for designing patient-centered services in the metaverse. This could involve pilot studies that assess patient outcomes in virtual care settings and the use of secure data collection tools to monitor patient feedback in real time.
In parallel, bioethicists and policymakers should view the metaverse as an opportunity to build more effective and equitable systems of care, integrating innovative models of consent, data ownership, and protection that improve upon rather than replicate long-standing inequities and inefficiencies of many healthcare systems in the real world. Concrete recommendations include designing dynamic consent frameworks that allow patients granular control over how their digital-health data are shared and used, as well as implementing interoperable data standards that ensure secure, cross-platform data transfers. Importantly, we issue a call to action for bioethicists to proactively engage in both normative and empirical research needed to address these evolving ethical issues as the metaverse landscape is actively being constructed.
The authors equally conceptualized the article, conducted background research and literature reviews, and drafted earlier versions of this manuscript. Dr. Kostick-Quenet responded to peer review comments and prepared the article for journal submission. All authors approved the final draft for publication.
For information regarding aspects of this publication, contact the corresponding author.
Generative AI was used to develop an initial outline of relevant ethical and legal obligations and to improve language clarity in response to peer review comments.
We would like to acknowledge the helpful editorial comments and conceptual contributions of faculty peers from the Center for Medical Ethics and Health Policy at Baylor College of Medicine.
Copyright Ownership: This is an open-access article distributed in accordance with the Creative Commons Attribution Non-Commercial (CC BY-NC 4.0) license, which permits others to distribute, adapt, enhance this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See http://creativecommons.org/licenses/by-nc/4.0. The authors of this article own the copyright.