ORIGINAL RESEARCH
Lipsa Sadath, MSc, MCA1, Deepti Mehrotra, PhD2, Anand Kumar, PhD3
1Computer Science Engineering Department, School of Engineering, Amity University, Dubai, UAE; 2Computer Science Engineering Department, Amity School of Engineering and Technology, Amity University; Uttar Pradesh, Noida, India; 3Electronics Engineering Department, School of Engineering, Amity University, Dubai, UAE
Keywords: blockchain technology, database management, hyperledger, Hyperledger Caliper, network performance, privacy, security, scalability
Blockchain technology has become crucial in improving the privacy and security of enterprise applications in the cyber world. However, scalability has become a significant concern for researchers in large organizations, especially those with complex hierarchies and access privileges. As a result, the existing models and consensus algorithms suffer from various issues. Medical centers and healthcare providers are particularly affected by this problem due to the vast amount of data, making it a critical weakness of traditional database management systems. To address this issue, the authors propose a hierarchical model within the Hyperledger Fabric enterprise application, focusing on the healthcare sector as a use case. This model includes multiple organizations at different levels of the hierarchy, such as hospitals, hospital governance, and insurance companies. The initial implementation of this model includes two levels of hierarchy, demonstrating networks of hospitals joining an insurance company. The primary objective of the experiment is to test and improve the network’s performance using this model. The model’s performance is evaluated by manipulating and scaling environmental factors such as the number of organizations, transaction numbers, channels, block intervals, and block sizes. The benchmarking tool used for this assessment is Hyperledger Caliper, which measures indicators such as success and failure rates, throughput, and latency. Currently, the research focuses only on testing the model’s scalability using patient data.
Citation: Blockchain in Healthcare Today 2024, 7: 295 - https://doi.org/10.30953/bhty.v7.295
DOI: https://doi.org/10.30953/bhty.v7.295
Copyright: © 2024 The Authors. This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, adapt, enhance this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0.
Submitted: December 5, 2023; Accepted: January 20, 2024; Published: April 17, 2024
Funding: There was no funding for the preparation of the article.
Financial and Non-Financial Relationships and Activities: None are reported by the authors.
Corresponding Author: Lipsa Sadath; Email: lsadath@amitydubai.ae
The healthcare sector suffers from data privacy, security, and integrity issues as each patient’s data flows through different hospitals under a specific hospital network (HN). The biggest challenge is faced when insurance claims are rejected due to mismatches in data. The prime reason could be tampered data submitted to insurance companies by the HNs. Such database registries can seriously affect claims submitted to insurance companies.
When a private organization’s application works with blockchain, a guarantee for better transparency and security exists, but issues related to scalability and latency arise. This is seen mainly in supply chain systems, healthcare applications, etc. One of the main reasons is the number of organizations participating in transactions and different contracts running through the network. Many enterprise applications are considered sensitive to network delays because of the latency caused by the vast number of transactions. Current studies revolve around changing the batch time or the network’s block size in a permissioned environment. This problem in the healthcare sector is more complex and requires more attention due to substantial patient data.
The primary interest of the research in this paper is to understand scalability performance and to create a model for the healthcare sector using Hyperledger Fabric, an enterprise-grade open-source framework, on the Linux platform. The contributions of the research are listed here:
The flow diagram in Figure 1 shows the order of topics discussed. A glossary of key terms used throughout the article is presented in the Appendix.
Blockchain combines technological features, including cryptography, peer-to-peer networking, distributed systems, transparency, access identity permissions, open source, autonomy, and immutability. This makes the technology secure, as transactions cannot be tweaked or tampered with, and all the entities/parties that make transactions are kept anonymous.1 Thus, blockchain offers integrity, protected distributed ledgers, and transparency of the entire process by upholding a set of global states.
The participating nodes agree upon the existence, value, and histories of all states. Each state contains multiple transactions. Hence, blockchain is a way to manage distributed transactions. The concerned entities preserve replicas of the data and jointly agree on a transaction execution order. Such a high-performance blockchain is a means to monitor a system completely, as if under the surveillance of a legal third party. This is very important in healthcare systems, as the amount of patient data that flows from laboratories to insurance claims is immense. This data transparency is required to verify genuine patient claims. The network does not provide an opportunity for data tampering by any organization.
At the same time, concerns about data replication2 exist as immutable records are stored with every peer. This affects the throughput of the blockchain network.3 Many frameworks are designed to handle this at production levels and industry applications. However, research is underway to understand the effect of different consensus mechanisms and their role in maintaining the immutability, stability, privacy, and security of the system.4
Changing the block size causes performance issues, as it changes the maximum message count the blocks can hold.5 In contrast, a scalability use case in the Ethereum network discusses sharding (the process of separating large databases into smaller, faster, more easily managed parts)6 as a solution that reduces the load on the entire network.
Machine learning through proof-of-information consensus is discussed by Kuo et al.,7 but the model lacks a discussion on scalability. In contrast, a model by Zhang et al.8 discusses data transparency between owners and users of clinical data and addresses, to an extent, scalability and security issues. Another model that discussed scalability, published by Ylonen and Lonvick,9 has an architecture based on the Secure Shell Protocol (SSH)9 with an identity-preservation method and an absolute model-view-controller pattern with pointers for data access from the organization’s data pool. Smart contracts on the web and data immutability are the highlight of a genomic study by Glicksberg et al.10 that attempts to identify late cancer stages using blockchain, but it lacked clear scalability research evidence. Other research by Lee et al.11 using blockchain to handle patient records studied data sensitivity, privacy, integrity, and authorization, but it did not address scalability issues. This challenge of scalability exists especially when dealing with large clinical data records12 and other types of records that need privacy and consistency. All of this research speaks to careful data handling with reliable smart contracts.
The Hyperledger Fabric framework has peer machines with the same data. Technically, there is no administrator in a blockchain system, but enterprise applications already have authenticated peers. The blocks hold the data, otherwise known as the immutable ledger. The respective chaincode, or smart contract, is common to a particular channel and the peers who join that transaction. The world state stores the latest values derived from the block data.13 Scalability issues are discussed in various models, such as hierarchical models described through different abstractions.14–20
This basic network architecture is shown in Figure 2, where the organizations are part of the data layer, which includes the fabric. The client application part is the business layer, which interacts with the fabric blockchain and hosts the application for the user. The Membership Service Provider (MSP) supplies the credentials used by the peers (P) and committers to participate in the Hyperledger Fabric network, as these credentials are the organization’s identity. The genesis block contains all the MSPs and the policies. Clients authenticate transactions, and peers authenticate or endorse the results using these credentials. The orderers have a shared communication channel with peers and clients.
Fig. 2. Hyperledger Fabric Network Architecture. MSP: Membership Service Provider; O: orderers; Org: organization 1,2…n; P: Peers.
The orderers provide the broadcast service for messages and transactions. For scalability, they are implemented as Docker containers. The peers maintain the read/write operations. The ordering service is a collection of nodes that order transactions into a block. The service is common to the overall network and supports pluggable implementations. Each member has cryptographic identity material that is contained within the orderers.
Whenever the client has a transaction request (Figure 3), the respective endorsers endorse it according to the policy. The orderer then orders the block and distributes it to the committing peers so that all peers get a copy of the latest transaction.
Fig. 3. Hyperledger Transaction Flow. MSP: Membership Service Provider; O: orderers; P: peers (1, 2,….n).
This section describes the use case in a healthcare sector that involves an insurance company (IC) handling patient data from multiple HNs.
When ICs need data on patients during a claim, there is a lot of missing information or gaps in the data provided by hospitals. This is due to the traditional relational database management systems used to store information in hospitals and related networks. Data tampering and human-caused data errors are common problems in such situations. Most literature thus far presents conceptual ideas that could be implemented in blockchain. Another related issue is scalability as more organizations or hospitals join the network. Certain experiments in healthcare21 have only a very basic blockchain implementation, with a maximum of two medical organizations joining a single channel. Therefore, there is no evidence of a thorough latency test performed on networks implemented in this sector.
Hence, the main aim of this paper is to design a model to test the integration of medical records between insurance companies and a large network of hospital groups where patient data mount quickly.
The key terminologies of the fabric framework are described through the use case. The experiment consists of five organizations: four HNs (HN1, HN2, HN3, HN4) and one IC (Figure 4). The benchmarking tool used for the tests is Hyperledger Caliper. Ideally, many data transfers take place between the insurance company and the hospitals independently.
Fig. 4. Healthcare sector use case diagram.
We consider the basic details of patients that insurance companies need from hospitals (Figure 5). The basic data structure consists of patient ID, department, age, patient name, address, phone number, and bill for services.
Fig. 5. Presentation of patient data structure in smart contract.
For ease of use and populating the data, the chaincodes are renamed for deployment into the channels. The patient data are duplicated (Figure 6) to generate up to 15,000 transactions at a time to check the scalability and performance of the network.
Fig. 6. Presentation of data types and data in smart contract.
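To make the data structure in Figures 5 and 6 concrete, the following is a minimal, hypothetical chaincode sketch written with the Node.js fabric-contract-api. The field names mirror the description above (patient ID, department, age, patient name, address, phone number, and bill for services); the contract and function names (PatientContract, CreatePatient, ReadPatient) are illustrative assumptions, not the exact code deployed in the experiment.

```typescript
// Minimal sketch of a patient chaincode (hypothetical; names are illustrative).
import { Context, Contract } from 'fabric-contract-api';

// Patient record as described in Figure 5.
interface Patient {
    patientId: string;
    department: string;
    age: number;
    patientName: string;
    address: string;
    phoneNumber: string;
    billForServices: number;
}

export class PatientContract extends Contract {
    // Create (write) a patient record keyed by patient ID.
    public async CreatePatient(
        ctx: Context, patientId: string, department: string, age: string,
        patientName: string, address: string, phoneNumber: string, bill: string
    ): Promise<void> {
        const patient: Patient = {
            patientId, department, age: Number(age), patientName,
            address, phoneNumber, billForServices: Number(bill),
        };
        // The world state (CouchDB in this model) stores the serialized record.
        await ctx.stub.putState(patientId, Buffer.from(JSON.stringify(patient)));
    }

    // Read a patient record by patient ID.
    public async ReadPatient(ctx: Context, patientId: string): Promise<string> {
        const data = await ctx.stub.getState(patientId);
        if (!data || data.length === 0) {
            throw new Error(`Patient ${patientId} does not exist`);
        }
        return Buffer.from(data).toString('utf8');
    }
}
```

In the experiment, a contract of this kind is renamed and deployed separately on each channel, as noted above.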
3. Endorsement Policy: Only a few peers are eligible to endorse the transaction proposal. The endorsement policy specifies the peers required for endorsement. In our business logic, we have implemented an “OR” endorsement, where not all peers are required to endorse the request. Figure 7 shows this policy: either Org1 or Org3 can endorse through Channel 2 (a sample policy expression appears in the sketch following this list).
Fig. 7. Presentation of endorsement policy in smart contract.
4. Channel: Four channels connect the peers of the five organizations (the IC and the four HNs). Each channel has a chaincode deployed on it. Hence, each channel is governed by the respective pre-agreed business logic or policy as per the smart contract. The ledger functions are then initiated on these smart contracts. When a specific organization wants to conduct business transactions privately, its peers join the channels separately.
5. Ordering Service: The model uses three orderers for the ordering service to give more capacity to the network. This is to prevent any unforeseen situation where the ordering service of one orderer stops. In such situations, the other orderers take over to avoid any delay in the network.
6. Certificate Authority (CA): Each organization (the IC and each HN) has its own certificate authority. The certificate authority service is the first component brought up in the network to establish all the organizations.25
7. Ledger: The data are stored in CouchDB for the respective organizations. According to the GDPR (General Data Protection Regulation),25 personal data should not be revealed in transactions; the advantage of blockchain is that it is the hash of the data that is stored in the blocks (see the hashing sketch following this list). Other data that the transactions can pass on are the treatment and diagnostic details of the patient that need to be checked by the IC.
8. Workers: The current experiment considers workers as the clients. The clients can be from the IC or any of the HNs. They could be doctors or insurance company agents who create or read transactions as permitted by the business logic and the endorsement policy.
9. Fabric Gateway: The Fabric Gateway is a set of libraries that helps invoke business transactions through smart contracts and route them to the peers for endorsement. After endorsement, the transaction is saved and distributed to the other peers through the fabric gateway and the channels (a client sketch follows this list).
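Two short sketches relate to items 3 and 7 above; both are written under stated assumptions rather than taken from the authors’ implementation. The first shows the kind of Fabric signature-policy expression that expresses an “OR” endorsement (the MSP IDs Org1MSP and Org3MSP are assumed names mirroring Figure 7); such an expression is normally supplied when the chaincode definition is approved and committed. The second illustrates the GDPR point in item 7: hashing personal fields with Node’s built-in crypto module so that only a digest, not the raw identity data, ends up on the ledger.

```typescript
import { createHash } from 'crypto';

// (1) An "OR" endorsement policy expression: either Org1's peer or Org3's peer
// may endorse a transaction on this channel (MSP IDs are illustrative).
// The expression is supplied to the chaincode lifecycle when the definition is approved.
const endorsementPolicy = "OR('Org1MSP.peer', 'Org3MSP.peer')";

// (2) Hash personal data before it is written to the ledger, so the chain holds
// only a digest while the full record stays off-chain (a sketch of the idea only).
function hashPersonalData(personalFields: Record<string, string>): string {
    return createHash('sha256')
        .update(JSON.stringify(personalFields))
        .digest('hex');
}

// Example: the digest, not the raw identity data, would be stored on-chain.
const digest = hashPersonalData({ patientName: 'Jane Doe', phoneNumber: '0500000000' });
console.log(endorsementPolicy, digest);
```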
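For item 9, the following is a minimal sketch of how a worker (client) application might submit and evaluate patient transactions through the Fabric Gateway using the @hyperledger/fabric-gateway Node.js library. The peer endpoint, MSP ID, credential file paths, channel name (channel1), and chaincode name (patient) are assumptions chosen to match the use case, not the experiment’s actual configuration.

```typescript
import * as grpc from '@grpc/grpc-js';
import * as crypto from 'crypto';
import { promises as fs } from 'fs';
import { connect, signers } from '@hyperledger/fabric-gateway';

async function main(): Promise<void> {
    // Assumed connection details for one HN peer (illustrative values only).
    const tlsCert = await fs.readFile('tls/ca.crt');
    const client = new grpc.Client('localhost:7051', grpc.credentials.createSsl(tlsCert));

    // Assumed client identity for the hospital network organization.
    const credentials = await fs.readFile('msp/signcerts/cert.pem');
    const privateKeyPem = await fs.readFile('msp/keystore/key.pem');
    const privateKey = crypto.createPrivateKey(privateKeyPem);

    const gateway = connect({
        client,
        identity: { mspId: 'HN1MSP', credentials },
        signer: signers.newPrivateKeySigner(privateKey),
    });

    try {
        // Channel 1 connects the IC and HN1; the patient chaincode is deployed on it.
        const network = gateway.getNetwork('channel1');
        const contract = network.getContract('patient');

        // Create (write) a patient record, then read it back.
        await contract.submitTransaction(
            'CreatePatient', 'P0001', 'Cardiology', '45',
            'Jane Doe', 'Dubai', '0500000000', '1200');
        const result = await contract.evaluateTransaction('ReadPatient', 'P0001');
        console.log('Patient record:', Buffer.from(result).toString('utf8'));
    } finally {
        gateway.close();
        client.close();
    }
}

main().catch(console.error);
```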
There are two ways to handle scalability: vertical scaling and horizontal scaling. In horizontal scaling, the load at peak hours is distributed from the application interfaces to different temporary servers, whereas in vertical scaling, the capacity of the network is increased as more organizations join and more transactions are incorporated (TPS, or transactions per second). In this experiment, we use vertical scaling to understand the performance of the network.
The fabric blockchain test bed is set up with five organizations carrying one peer each. The Raft consensus algorithm is used with three orderers to avoid any delay even when one orderer is down. The endorsement policy requires at least one peer from each organization to be an endorser. The network is set up on a Linux Ubuntu 20.04 virtual machine with 6 CPU cores and 16 GB RAM, and performance is checked at different instances.
To check scalability and performance, our fabric experiment focuses on the number of organizations, channels, TPS, clients or workers, block intervals, and block sizes, checked against latency and throughput (number of successful transactions). The benchmarking tool used is Hyperledger Caliper, which measures the time to create and read the transactions. Most experiments22,23 consider latency analysis using transaction arrival rates with Hyperledger Caliper as the benchmarking tool.
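Hyperledger Caliper drives each benchmark round through a user-supplied workload module. The following is a minimal sketch of such a module for the create operation; the contract ID (patient), function name (CreatePatient), and argument values are assumptions consistent with the patient-contract sketch earlier, and the module is written in TypeScript for consistency with the other examples (Caliper loads it as a compiled Node.js module).

```typescript
import { WorkloadModuleBase } from '@hyperledger/caliper-core';

// Workload module that creates patient records; Caliper instantiates one copy
// per worker and calls submitTransaction() at the rate set by the rate controller.
class CreatePatientWorkload extends WorkloadModuleBase {
    private txIndex = 0;

    async submitTransaction(): Promise<void> {
        this.txIndex++;
        // Unique key per worker and per transaction to avoid write conflicts.
        const patientId = `P_${this.workerIndex}_${this.txIndex}`;
        await this.sutAdapter.sendRequests({
            contractId: 'patient',           // assumed chaincode name
            contractFunction: 'CreatePatient',
            contractArguments: [patientId, 'Cardiology', '45',
                'Jane Doe', 'Dubai', '0500000000', '1200'],
            readOnly: false,                 // a read workload would set this to true
        });
    }
}

// Factory function that Caliper looks for in every workload module.
export function createWorkloadModule(): WorkloadModuleBase {
    return new CreatePatientWorkload();
}
```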
Table 1 lists the channel distribution in the network with the IC and HN (HN1, HN2, HN3, HN4).
| Channel Number | Channel Distribution |
| --- | --- |
| 1 | IC-HN1 |
| 2 | IC-HN2 |
| 3 | IC-HN3 |
| 4 | IC-HN4 |

IC: insurance company; HN: hospital network.
The performance of the network is measured by varying one parameter as a rate controller and keeping the other parameters constant. This is done to understand the maximum capacity of the network. The experiment is benchmarked against two popular experiments by Xu et al.22 and Al-Sumaidaee et al.21 Al-Sumaidaee et al.21 performed a Hyperledger Fabric experiment using just two medical institutions and performed a latency analysis of the network.
Their experiment did not consider features such as the inclusion of more than two organizations in the network, varying the number of channels, or varying other parameters like block size and block interval. The main benchmarking we followed from Al-Sumaidaee et al.21 is varying the number of clients or workers and steadily increasing the number of transactions and the TPS to analyze the network capacity. Additionally, our experiment follows other criteria set by Xu et al.22 to check latency while varying the number of channels, block size, and block interval.
The benchmarking is performed using Hyperledger Caliper, which gives four performance indicators in terms of TPS. The indicators include the success and failure rates of transactions, latency (average time taken to complete the response), and throughput (average number of transactions per second). This section has the complete experimental setup and analysis with varying parameters. The readings are shown in table format, and the results are compared with Al-Sumaidaee et al.21 for the first three experiments. From Experiment 4 onwards, we do not compare the results with any benchmarking, as we are observing our own implementation with five organizations. The impact of each parameter is also shown below through analysis graphs. Each experiment is performed at least three times, and an average is taken to get a final reading.
The performance of the network is measured in terms of three indicators: latency, throughput, and send rate. We record the latency, throughput, and send rate for creating and reading records. In the experiments below, to first test the network, we keep one channel connected, that is, Channel 1 between the IC and HN1, to test the impact of the other parameters on the network. Each experiment is described with its fixed and variable rate controllers, and the latency, throughput, and send rate are recorded to measure the success and failure of the network.
In the first set of three experiments, we fixed the Block Interval = 1s and Block Size = 50, Number of Channels = 1, TPS = 75, Workers = 5. From Experiment 1 to Experiment 3, we have only one channel with variable rate controllers such as transaction numbers, number of workers, and the TPS. For the remaining experiments, we increase the number of channels to understand the channel impact along with TPS, block size, and block interval.
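For reference, a Caliper run of this kind is described by a benchmark configuration (normally a YAML file) that fixes the number of workers and, per round, the transaction count, rate controller, and workload module, while the block interval and block size correspond to the orderer’s BatchTimeout and BatchSize settings in the Fabric channel configuration. The sketch below mirrors the structure of such a round in a TypeScript literal purely for illustration; the name, label, and module path are hypothetical, and the values reflect the settings above (5 workers, fixed rate of 75 TPS) with txNumber = 1000 as an example value, not the authors’ actual files.

```typescript
// Illustrative mirror of a Caliper benchmark round (the real configuration is YAML).
// Values follow the fixed settings of Experiments 1-3: 5 workers and a
// fixed-rate controller at 75 TPS; the workload module path is hypothetical.
const benchmarkConfig = {
    test: {
        name: 'patient-contract-benchmark',
        workers: { number: 5 },
        rounds: [
            {
                label: 'create-patient',
                txNumber: 1000,
                rateControl: { type: 'fixed-rate', opts: { tps: 75 } },
                workload: { module: 'workload/createPatient.js' },
            },
        ],
    },
};

console.log(JSON.stringify(benchmarkConfig, null, 2));
```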
Only the transaction numbers (Table 2 for create operations and Table 3 for read operations) are changed as the variable rate controller, with five clients or end users as workers.
The latency is higher for the create operation. It is observed that both the throughput and the latency are higher as the transaction number increases (Figure 8).
Fig. 8. Evaluation of the impact of transaction numbers, TPS (transactions per second) = 75 from Tables 2 and 3, Org (organization) numbers = 5: Latency and Throughput in Create and Read operations.
Benchmarking is against experiments conducted using a similar platform by Al-Sumaidaee et al.21 with similar indicators but with two organizations only. It is observed that when the number of organizations increases, throughput and latency (Figure 8) are affected more when compared to results shown in Table 4.
Create Record
| Workers (n) | txNumber | Transactions (per second) | Latency (ms) | Throughput (TPS) | Send Rate (TPS) |
| --- | --- | --- | --- | --- | --- |
| 5 | 1000 | 75 | 0.1 | 75.1 | 75.4 |
| 5 | 5000 | 75 | 0.09 | 75 | 75.1 |
| 5 | 15,000 | 75 | 0.09 | 75.1 | 75 |

txNumber: the total number of transactions that must be sent.
The variable rate controller is the number of workers or clients that join to process the transactions. Table 5 shows create operations, and Table 6 shows read operations.
There is a steady decrease in throughput and latency as the number of clients or workers increases for both create and read operations (Figure 9). This is a typical situation when more end users are on the network and write and read operations are requested by all 5 to 50 clients at the same time.
Fig. 9. Evaluation of the impact of the number of workers, TPS = 75 from Tables 5 and 6, Org numbers = 5: Latency and Throughput in Create and Read operations.
We compare our results with the experiments performed by Al-Sumaidaee et al.,21 where the number of organizations (Table 7) is just two, whereas our network supports five organizations. Hence, the resources and services are shared among all five organizations rather than two. It is observed that the latency and throughput of our experiment are severely affected due to the greater number of organizations and workers.
Create Record
| Workers (n) | txNumber | Transactions (per second) | Latency (ms) | Throughput (TPS) | Send Rate (TPS) |
| --- | --- | --- | --- | --- | --- |
| 5 | 1000 | 75 | 0.1 | 75.1 | 75.4 |
| 25 | 1000 | 75 | 0.11 | 75.6 | 75.8 |
| 50 | 1000 | 75 | 0.76 | 76.4 | 76.7 |

txNumber: the total number of transactions that must be sent.
From Experiment 1 and Experiment 2, we found that throughput and latency are affected as more organizations join the network. Experiment 3 is conducted by increasing the TPS and keeping the other parameters steady. Send rate refers to the number of transactions that are actually sent, although we set the TPS to 75, 150, and 250, as in Tables 8 and 9, during the experiment.
As the TPS increases, the latency increases (Figure 10), mainly for write operations. Read operations give a good throughput and a latency that is almost negligible compared to write operations.
Fig. 10. Evaluation of the impact of TPS, txNumber = 1000 from Tables 8 and 9, Org numbers = 5: Latency and Throughput in Create and Read operations.
The experiment is benchmarked against Al-Sumaidaee et al.,21 and it is observed that the throughput and send rate decrease when the number of organizations increases, compared to the two organizations in the benchmarked experiment.
The throughput in transaction creation fell drastically compared to the benchmark of Al-Sumaidaee et al. (Table 10).21
| Transactions (per second) | txNumber | Workers (n) | Latency (ms) | Throughput (TPS) | Send Rate (TPS) |
| --- | --- | --- | --- | --- | --- |
| 75 | 1000 | 5 | 0.1 | 75.1 | 75.4 |
| 150 | 1000 | 5 | 0.08 | 149.6 | 150.8 |
| 250 | 1000 | 5 | 0.1 | 164.6 | 251.1 |

txNumber: the total number of transactions that must be sent.
Here, the rate controller is set as the number of channels. We add more channels step by step to make sure the IC communicates in parallel with HN1, HN2, HN3, and HN4, respectively. For this, we do not benchmark against Al-Sumaidaee et al.,21 as they do not have multiple channels. From Experiment 4 onwards, we escalate the experiment (Tables 11 and 12) to check performance according to our experimental model only.
Though all readings were taken each time a channel joined the IC, we observed the difference in the performance of Channel 1 each time for create (Table 11) and read (Table 12) operations. In this scenario, we check whether the addition of more channels affects the performance of Channel 1, though any channel can slow down in the process. It is observed that as Channel 2 (HN2) joined the IC, a sudden latency (28.24 ms) was experienced in the network (Table 11) and the throughput fell to 49.0. This could be due to several clients’ requests being processed at the same time and the network suddenly sharing resources in parallel. As always, the latency is higher in create than in read operations.
The next effort is to understand how the addition of channels affects the network as the TPS changes from 75 to 100 (Experiment 4). We conduct the experiment for both create (Table 13) and read (Table 14) operations.
Similar to the pattern observed for TPS = 75 (Table 11), it is noted that during the write operation, a sudden latency (16.43 ms) is experienced in the network while the second channel joins. Overall latency increases as the number of channels increases from one through four. It is also observed that a higher TPS affects the network (TPS = 75 to TPS = 100) (Figure 11).
Fig. 11. Evaluation of latency for TPS (transactions per second) = 75 and TPS = 100 (Tables 11 and 13) in Write operation with the number of channels increased from 1 through 4.
Block interval is the time taken to create a new block. With all the changes being observed, it is essential now to learn whether there is any impact from the block interval. Therefore, we set the block interval to 2s from 1s, with the block size kept at 50, for create (Table 15) and read (Table 16) operations.
Compared to Experiment 4 with BT = 1s, it is observed that there is less latency in the network, even with more channels joining, when the batch time or block interval increases to 2s (Figure 12). However, there is no evidence of a substantial change in throughput. Figure 12 shows a comparison of latencies for the write operation for BT = 1s and BT = 2s, with the number of channels increasing from 1 through 4.
Fig. 12. Evaluation of block interval (1s and 2s) for TPS = 75 (Tables 11 and 15) in Write operation with the number of channels increase from 1 through 4. TPS: transactions per second.
It is essential to understand the impact of block size on the network. Hence, we keep block size as the rate controller in Experiment 7 for create (Table 17) and read (Table 18) operations.
It is observed that with an increase in block size (BS = 100), the throughput is steady, and the latency initially reduces compared to BS = 50. Even when more channels join, the throughput is not much affected by the increased block size, but the latency increases as the block size increases (Figure 13).
Fig. 13. Evaluation of block size (50 and 100) for TPS = 75 (Tables 13 and 15) in Write operation with the number of channels increase from 1 through 4. TPS: transactions per second.
This section completes a detailed analysis of the proposed two-layer hierarchical model. We experiment with the entire network with a total of five organizations. The prime organization is the IC, and all four HNs connect to it. Each channel is deployed with its own smart contract. The performance and latency analysis of the network is benchmarked and evaluated against a two-organization network. The study varies the parameters and uses different rate controllers during the experiment. It is observed that as the number of organizations, channels, workers, and transactions increases, the latency of the network increases and the performance gradually falls. Effectively, we conclude that vertical scaling is recommended for any organization that needs to improve the scalability of its network.
While we evaluate the performance of the entire model in terms of scalability, we would like to comment on how this exercise is useful for the healthcare community. As more users, clients, transactions, and patient data enter the network, it becomes important to understand the capacity of the organization’s network (the IC in the experiment). This study does not cover trust issues, as blockchain technology (BCT) already guarantees the security of the patient data that flows abundantly between insurance companies and the different HNs. Hence, we deal only with scalability issues in such systems, which require vertical scaling, that is, increasing the capacity of the network. So, if more clients (workers in the experiment) post more patient transactions, the network can be scaled to handle such situations. Otherwise, as shown in the experiment, there could be severe latency as more hospital groups join, and the network might crash.
The technology is being widely tested across many use cases, such as supply chains for agricultural sustainability21 and the semiconductor industry,26 and other areas27 for performance analysis. Bag et al. conducted a study on how BCT can be effective in small and medium enterprises (SMEs).28 Not all of this work is performance analysis, but as research delves further into this area, organizations are now showing confidence in implementing blockchain technology. Another sector that recently made an impact is research in the automotive industry.29 While most studies of this technology are still progressing, keeping track of sustainability and supply chains, interesting use cases from different parts of the world, such as Jordan, are impressive.30 Numerous studies have also been conducted on healthcare supply chain systems using blockchain technology.31 Though all sectors are trying to learn how to implement blockchain, it is the supply chain domain that is researching from all ends to check the resilience of the technology.32
In this work, a two-level hierarchical model is implemented using Hyperledger Fabric as a solution to the scalability issues faced by BCT in the healthcare sector. A detailed performance analysis of the model is done by varying the number of transactions, workers or clients, channels, block size, and block intervals. When more channels are added, the main latency analysis is performed on record-creation operations rather than on read operations, because the read operations mostly show a negligible amount of latency. The benchmarking tool used for the analysis is Hyperledger Caliper. Each channel has a patient contract deployed on it. The entire experiment tries to analyze how more patient data can be managed at hospitals while interacting with insurance companies for insurance claims. When the block interval increases, less latency is experienced, but increasing the block size is not recommended, as higher latency is experienced. Though more readings and experiments were conducted in this study, limited tables are included to show the performance of the network across worker size, channels, block size, and block intervals. The current model has only one smart contract implemented per channel.
The main limitation of the experiment is memory capacity. During the experiment, at different stages, we scaled our network vertically, and it was observed that as we scale further, more organizations can be added to the network. Hence, the study emphasizes conducting further experiments to check the maximum capacity of a healthcare system network.
One major challenge that might occur in a running real-time network is that the network might crash if scalability issues during heavy transaction loads are not foreseen. The initial overhead and memory usage during major transactions could be high. However, this is required to protect patient data and to make sure smooth and effective implementations are deployed to handle healthcare data well in advance.
Future work will extend to testing the scalability of the network to its maximum and implementing more organizations in the proposed hierarchical model to maintain more levels of HNs.
All authors contributed to this paper. Lipsa Sadath developed the prototype, performed the research, and wrote the article. Deepti Mehrotra supervised the research and the article and gave feedback. Anand Kumar gave feedback.
Copyright Ownership: This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, adapt, enhance this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0.
| Terminology | Defined | Application |
| --- | --- | --- |
| Node | | |
| Chaincode | | |
| Endorsement policy | | |
| Channel | | |
| Ordering Service | | |
| Certificate Authority | | |
| Ledger | | |
| Workers | | |
| Fabric Gateway | | |
*The authors propose a hierarchical model within the Hyperledger Fabric enterprise application, focusing on the healthcare sector as a use case. **Patient data structure in a smart contract is illustrated in Figure 5. CA: certificate authority; CouchDB: a clustered database that allows running a single logical database server on any number of servers; GDPR: General Data Protection Regulation; HN: hospital network; IC: insurance company; ID: identification; OR: endorsement where not all peers are required to endorse the request; Org: organization number (e.g., Org1). Figure 7 shows the policy: either Org1 or Org3 can endorse through Channel 2. SC: smart contract.