An investigation into the applicability of Blockchain technology for Digiata’s clients
In response to being approached to provide blockchain-based solutions, Digiata researched the feasibility and applicability of various offerings.
It was determined that public, open blockchains such as Ethereum and Bitcoin were inappropriate for the client base because concerns over security, confidentiality, performance and compliance were not adequately addressed. We then investigated the growing list of enterprise blockchains designed to fit business needs for privacy, closed membership and high transaction throughput, and found that they were either adjustments to the source code of Ethereum or designed from the ground up so as not to inherit the ideological architectural debt of the open blockchains. We initially pursued the latter option in the hope that it would match our needs more closely than an altered Ethereum; the most popular choices were IBM’s Hyperledger Fabric and Intel’s Hyperledger Sawtooth. After encountering a great deal of operational difficulty, related mostly to these projects still being fairly new, we revisited enterprise Ethereum for a more mature development experience and, after a number of iterations, settled on JP Morgan’s Quorum, which intelligently layers privacy onto an otherwise unaltered Ethereum node. This allows us to benefit from the well established, mature development tools of the large and vibrant Ethereum ecosystem while designing products whose throughput and privacy guarantees are more than up to the task of meeting our clients’ needs.
With the advent of blockchain technology and the architectural disruption it promises, Digiata has been approached on more than one occasion by clients interested in applying blockchain, also known as distributed ledger technology (DLT), to their software stacks in order to realize the gains from decentralization (Trautman (2016)). In particular, clients are interested in improved performance from distributing the workload, tighter automated auditing procedures, reduced vulnerability from spreading the points of failure from one to many, and the promise of instant transaction settlement (Caytas (2016)). Given the sensitive nature of financial technology in particular, and in light of the open, transparent ethos of the first generation of blockchain solutions, we initially judged that, while promising, the technology might force us to make architectural tradeoffs informed more by ideology than by market needs. In particular, the most advanced public smart contracting platform, Ethereum, was considered an inappropriate fit for the privacy-centric, high throughput uses we usually cater to.
However, recent developments in the space have given rise to privacy-oriented, configurable, high throughput DLTs (Foundation (2015)), leading us to reconsider our initial criticisms. Upon thorough reinvestigation, we have concluded that the technology has finally caught up to the needs of the industry. The purpose of this article is to outline the constraints faced by the industries our clients occupy, why first generation public blockchains were unsuited to operating under these conditions, and why a new generation of enterprise-ready blockchains is currently the best purpose-built product for private, consortium blockchain solutions.
Decentralization: what is it good for?
The rise of fast, digital networking technology has ironically been accompanied by increasing centralization of clearing facilities across the financial industry, even as the cost of and barriers to adopting fast computing have declined. Rather than engaging in direct peer-to-peer (p2p) transacting, specialized service providers have emerged which act as hubs through which unempowered peers transact, giving rise to such large international organizations as SWIFT, Calastone in Europe and FUND/SERV in the United States, which benefit from such large network effects that they resemble fundamental institutions rather than private organizations. To understand why financial networks have organized around hubs, it is important to understand what counterparties in financial transactions require and why a distributed topology has been unable to meet the requirements of the financial industry as well as trusted, centralized intermediaries have. Consider the list of essential features of any system that supports transacting financial institutions:
1. Speed of transacting
2. Privacy
3. Security
4. Auditability
5. Controlled reversibility
6. Fidelity
In a distributed setting, each party maintains their own ledger of transactions. When a purchase is made, funds move in one direction and assets in the other. However, with no central source of truth, the possibility exists for the seller to alter the terms of trade after receiving the funds, or for the buyer to claim that the asset was never truly sent. One response is to create an escrow contract overseen by a trusted third party. However, while this is adequate for slow-clearing real estate transactions, it would introduce too many delays to be appropriate for rapid equity trading. To ameliorate this risk, parties can make their ledgers public so that foul play can be identified through discrepancies. This would allow transactions to be audited and render sensitive data essentially immutable, since the public can make personal copies, negating the ability of parties to censor their records. However, this sacrifices the need for privacy. If ledgers are kept private, auditors can detect fraud if they have access to the accounts of both parties, but a deep forensic process can’t happen with the regularity required to enforce good behaviour on a day-to-day basis. The essential missing ingredient in meeting all of the above requirements in a p2p electronic trading system is trust, and for this reason the rise of clearing house solutions should be unsurprising.
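For concreteness, the kind of third-party escrow described here can later be expressed directly as a smart contract (a concept introduced in the sections that follow). A minimal Solidity sketch, with all names and the arbiter role being illustrative assumptions of ours rather than anything prescribed in this document:

```solidity
pragma solidity ^0.5.0;

// Minimal escrow sketch: a trusted third party (the arbiter) either
// releases the buyer's funds to the seller or refunds them. All names
// are illustrative; delivery of the asset itself happens off-chain.
contract SimpleEscrow {
    address payable public buyer;
    address payable public seller;
    address public arbiter;

    // The buyer deploys the contract and locks the funds in it.
    constructor(address payable _seller, address _arbiter) public payable {
        buyer = msg.sender;
        seller = _seller;
        arbiter = _arbiter;
    }

    // Only the arbiter can settle the trade, in either direction.
    function release() external {
        require(msg.sender == arbiter, "arbiter only");
        seller.transfer(address(this).balance);
    }

    function refund() external {
        require(msg.sender == arbiter, "arbiter only");
        buyer.transfer(address(this).balance);
    }
}
```

Note that the delay problem remains: the arbiter is still a trusted bottleneck, which is why escrow alone does not make rapid equity trading viable.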
Consensus and the Byzantine Generals Problem
In market settings, transacting parties have asymmetric incentives, which creates the temptation to cheat or at the very least to underreport. The inability of networked counterparties to coordinate a trustless flow of transactions is an instance of the Byzantine Generals Problem, a class of networking situations that has traditionally been addressed by instituting a centralized coordinator to detect and prevent cheating. In the absence of a central coordinator, communicating parties have to agree on a single version of the truth for all states in the network; this process is known as network consensus. While maintaining consensus without any possibility of tampering is mathematically proven to be unachievable, blockchain technology provides a mechanism for achieving consensus so long as fewer than one third of all participants are malicious. Settings where all participants are known (rather than anonymous) can tolerate a larger share of faulty nodes. For this reason, blockchain technology has proven to be of particular interest to industries which wish to disintermediate central clearinghouses, particularly those in which the identities of participants are known.
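For concreteness, the classical bounds from the distributed systems literature (not stated explicitly here) make these thresholds precise: tolerating $f$ Byzantine (arbitrarily malicious) participants requires a network of at least $3f + 1$ nodes, whereas tolerating $f$ merely crashed or unresponsive nodes requires only $2f + 1$:

$$ n \geq 3f + 1 \ \text{(Byzantine faults)}, \qquad n \geq 2f + 1 \ \text{(crash faults)} $$

The one-third figure above follows from the first bound; the more generous tolerance available in known-identity settings corresponds to protocols that only need to survive the weaker fault class.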
Financial Industries and Public Blockchains: an Imperfect Fit
A database by any other name
Software technologies can be classified by what they do and by how they are implemented, and the naming of software leans in one of these directions depending on the target audience. For the purposes of attracting customers or end users, names and descriptions that illustrate usefulness are chosen. This is often the case with proprietary software, where the implementation details are not only unnecessary for the end user to know but confidential. For instance, the Windows family of operating systems was named after the metaphor used to improve user experience, with no mention in the name of the underlying DOS technology or the coupling to a shell. On the other hand, open source software is often intended by its nature to attract not only end users but also introspection and contribution from developers, and for this reason the naming and descriptions often allude to the implementation of the ideas presented. For instance, the name Unix was coined in contrast to the more complex Multics, alluding to its simpler, unified design. The Bitcoin White Paper (Nakamoto (2009)) is a hybrid of these approaches, since we can assume the author intended both the adoption of the p2p internet cash (bitcoin) and that developers understand the implementation details of the underlying consensus technology that makes it double-spend proof (blockchain).

For this reason, the name “blockchain” is misleading and obfuscating for prospective adopters because it speaks to the algorithm used to achieve Byzantine fault tolerance and says nothing about the features it offers users who may wish to adopt the technology. From the perspective of the user, a blockchain is a distributed (as opposed to centralized) database that accepts data as an ongoing append-only ledger. Each participant maintains a copy of the database, but no user can alter the state without the explicit consent and knowledge of every other user. This is why blockchain technology, when adopted in financial settings, is often referred to as distributed ledger technology (DLT), and for the rest of this paper the two are considered equivalent.
Public Blockchains
The first generation of blockchains took the form of publicly accessible, permissionless ledgers. In order to update the state of the blockchain, a consensus protocol specifies how and under what conditions users can add information. On top of this base protocol, blockchains provide a secondary scripting layer in which application-specific rules, known as smart contracts, can be created to further restrict and control access to the state of the blockchain. This architecture provides a perfect audit trail, an immutable ledger and a trustless rule-based environment for transacting parties. Instead of a trusted third party, a logical abstraction in the form of a centralized ledger exists, but the ledger isn’t maintained by one party; it is maintained by all transacting parties in unison. This approach to guaranteeing fidelity through consensus is known as triple entry bookkeeping: in addition to the double entry bookkeeping that each party engages in when maintaining separate ledgers, the forced consensus of every ledger with every other node in the network provides, in the abstract sense, a third record which acts to prevent fraud by either party (Sunder (2017)).
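As an illustration of this secondary scripting layer, an application-level append-only ledger can be enforced in a few lines of Solidity. The sketch below is ours, stores only hashes of off-chain records, and uses hypothetical names throughout:

```solidity
pragma solidity ^0.5.0;

// Sketch of an application-level append-only ledger: entries can be
// added but never edited or removed, so the full history is itself
// the audit trail. Contract and field names are hypothetical.
contract AppendOnlyLedger {
    struct Entry {
        address author;
        bytes32 dataHash;   // hash of the off-chain record
        uint256 timestamp;
    }

    Entry[] public entries;

    event Appended(uint256 indexed index, address indexed author, bytes32 dataHash);

    function append(bytes32 dataHash) external {
        entries.push(Entry(msg.sender, dataHash, block.timestamp));
        emit Appended(entries.length - 1, msg.sender, dataHash);
    }

    function count() external view returns (uint256) {
        return entries.length;
    }
}
```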
Privacy Tradeoffs
In order to discuss privacy tradeoffs, confidentiality must be distinguished from privacy. When dataflows are confidential, the actual data can be either publicly or privately accessible, but the author of the data is hidden. When dataflows are private, the author can be either confidential or publicly known, but the data being sent is encrypted or otherwise hidden.
In traditional online clearinghouses, user registration procedures define who can access the central database. Once registered, users are given roles and permissions which determine what they can do to the database and what they can see in it. By restricting access in this manner, privacy between transacting parties can be maintained at the cost of the clearinghouse knowing all. While certain hashing and encryption techniques can protect participants from surveillance by the clearinghouse, the risk of data loss and data breach is completely centralized in one institution. When Bitcoin was developed, user registration was abandoned as an approach to managing and controlling access to the blockchain, since one of the chief aims of Bitcoin was that unknown parties should be able to transact and retain anonymity (a feature of paper cash). Instead, all access to and control of the database was given to the class of peers known as miners, who act as decentralized clearinghouses for pending transactions, subject to the rules of the protocol. Transacting users pay the miners to clear their transactions, and the payment token is the native token, bitcoin. While this achieves immutability, auditability and security, it comes at the cost of privacy, since by design all transactions are logged to the publicly distributed blockchain so that nodes can verify that the rules of the protocol were followed and reject blocks produced by malicious miners. While transaction privacy can’t be achieved, confidentiality can be, since transactions on the blockchain are not between human identities but between public keys. Users need only keep their ownership of keys secret, which is why we know how much bitcoin Satoshi Nakamoto owns but not who they are in reality.
When a central clearinghouse stores all private data, a single data breach can be catastrophic to the industry. With public blockchains, the risk of data breach simply shifts from unauthorized access to the database to unauthorized knowledge of key ownership. For instance, suppose two parties are using a public, permissionless blockchain to transact confidentially by revealing to each other which keys they own. If the identities behind the keys were leaked, the entire set of transactions would switch instantly from anonymous and confidential to entirely public. Clearly, neither the centralized solution to privacy nor the public blockchain approach to confidentiality is ideal for sensitive use cases where maximum privacy and confidentiality are required.
Speed
Due to the emphasis placed on widespread decentralization, the location and number of nodes in a public, permissionless blockchain are unknown and should not be restricted by the protocol. To achieve Byzantine fault tolerance, the entire network is given a time window in which to agree on the existing set of transactions to date. Transactions are bundled together in epochs of a duration equal to this time window. These bundles are called blocks and the epoch duration is known as the block time. A transaction is not considered valid until it has been bundled into a block and that block has been accepted by the network, so users experience a delay in transaction verification approximately equal to the block time. In most financial industry settings, centralized online clearinghouses allow transacting parties to achieve transaction times on the order of seconds, and in some cases milliseconds. A blockchain replacement would therefore need to achieve similar throughput to meet current expectations, which means that block times cannot be too high (on the order of minutes, say).

To elucidate the difficulty with fast block production, consider a fictional blockchain with a block time of 0.1 seconds whose most popular locations for nodes and miners are New Zealand and Sweden. Due to the distance, the internet latency between New Zealand and Sweden is 0.15 seconds. A miner in New Zealand discovers a block 0.05 seconds after the previous block and proposes it to the network. At the same time, a miner in Sweden discovers a block and proposes it to the network. The nodes in New Zealand hear nothing but silence from Sweden over the duration of the block window, since it would take longer than the block window to hear anything from Sweden; similarly, Swedish nodes hear nothing from New Zealand. As a result, the New Zealand nodes update their copies of the blockchain with the New Zealand miner’s block and the Swedish nodes update their copies with their miner’s block. The blockchain has now forked and universal consensus is lost. The Bitcoin whitepaper accounts for this scenario by introducing a contest between competing chains, but the process of re-establishing consensus is disruptive and can take some time. To prevent these disruptions from clogging up the blockchain, Bitcoin’s block time was set to approximately 10 minutes. Ethereum has managed to bring this down to the order of seconds, but pushing much lower risks regularly producing “orphaned” chains. In order to achieve planetary consensus, public blockchains have had to sacrifice speed by design.
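The tradeoff can be made quantitative under a standard simplifying assumption (ours, not the author’s): if block discovery is a Poisson process with mean interval $T$ and blocks take $\delta$ seconds to propagate across the network, the probability that a competing block is produced before the first one arrives is approximately

$$ P(\text{fork}) \approx 1 - e^{-\delta / T}. $$

For the fictional parameters above ($T = 0.1\,\mathrm{s}$, $\delta = 0.15\,\mathrm{s}$) this gives $1 - e^{-1.5} \approx 0.78$, i.e. the chain forks on roughly three out of four blocks, whereas Bitcoin’s $T = 600\,\mathrm{s}$ against a propagation delay of a few seconds keeps the fork rate below one percent.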
Immutability: a double-edged sword
While the append-only immutability of the blockchain is perfect for establishing an audit trail and removing the need for trust, existing financial service providers often require controlled reversibility in the event that mistakes are made or servers are hacked, or to comply with regulatory frameworks. Public blockchains are use-case agnostic, and so special measures for reversibility could not be added without losing generality of application. Instead, nodes are encouraged to fork when they wish to move away from an existing set of blocks, and miners then have to decide which fork to mine on. In the event that most miners reject a fork, the fork is vulnerable to double spend and 51% attacks. For this reason, public blockchains are strongly subject to Metcalfe’s Law, which is why we don’t see many forks of the same chain in the wild (Peterson (2018)): Bitcoin has a handful and Ethereum only two. If an entire industry operating on a public blockchain such as Ethereum wishes to alter the history of the blockchain, it has to appeal to the entirety of the network, the majority of whom might be entirely unsympathetic to its cause.
Private Blockchains
In response to the problems identified in the previous section with using a public blockchain to disintermediate private clearinghouses, a number of attempts have been made to create private, permissioned blockchains. The two dominant tracks taken have been to either re-engineer blockchain technology from the ground up or to tweak existing public blockchains to be more suitable for private industrial settings.
Repurposed Public Blockchains
Until recently, the dominant track in private blockchain solutions was to alter the Ethereum platform for private use. The main reasons are that Ethereum is the most mature smart contracting platform and that ConsenSys, a software studio devoted to Ethereum development, actively engages with private industry to build private consortium implementations of Ethereum through the Enterprise Ethereum Alliance. In response to the growing demand for Ethereum, and in cooperation with ConsenSys, Microsoft recently launched private Ethereum blockchains as a service on its Azure platform. This follows the more established Quorum project, hosted and actively maintained by JP Morgan.
Private Consortium Ethereum Design Tradeoffs
Role Management
A private blockchain with privacy features, also known as a permissioned blockchain, is one where the participants are known. The industry term for such a setup is a consortium blockchain. Because users are known, the possibility exists for setting fine-grained permissions and roles. However, the Ethereum blockchain does not distinguish between users, so roles have to be programmed into and managed by the smart contracts themselves, potentially mixing business logic with role management.
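A minimal sketch of what this looks like in practice: because the platform has no native roles, access control is hand-rolled in Solidity modifiers and inevitably sits next to the business logic. All names below are illustrative assumptions of ours:

```solidity
pragma solidity ^0.5.0;

// Sketch of hand-rolled role management: Ethereum has no native user
// roles, so permissions live in contract storage and are checked by
// modifiers, right next to the business logic.
contract RoleManagedLedger {
    address public admin;
    mapping(address => bool) public isOperator;

    event TradeRecorded(bytes32 tradeHash);

    constructor() public {
        admin = msg.sender;
    }

    modifier onlyAdmin() {
        require(msg.sender == admin, "admin only");
        _;
    }

    modifier onlyOperator() {
        require(isOperator[msg.sender], "operator only");
        _;
    }

    // Role management...
    function grantOperator(address account) external onlyAdmin {
        isOperator[account] = true;
    }

    // ...interleaved with business logic in the same contract.
    function recordTrade(bytes32 tradeHash) external onlyOperator {
        emit TradeRecorded(tradeHash);
    }
}
```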
Speed and consensus
Since public blockchains are designed to be open and transparent, even privately encrypted transactions on a consortium Ethereum have to be broadcast to and mined by all participants. It would be desirable if only the blockchains of the transacting parties in a private transaction were updated, since the updates are meaningless to everyone else. Although a consortium is much smaller and more localized than the public Ethereum network, the requirement for all nodes to be in constant consensus does place an upper limit on how fast transactions can be processed.
Speed and state management
Ethereum is a state machine that is updated through the execution of smart contracts written in a Turing-complete language, meaning that they can run without terminating (for instance through an infinite loop). Since smart contracts executed on Ethereum must be run by every node for verification, the possibility exists to seize up the operation of the entire network by running an infinite loop. To prevent this outcome, execution is metered on the Ethereum Virtual Machine (EVM) according to the relative CPU impact of each opcode executed. The metering is in a relative unit known as gas, such that the ratio between the gas prices of opcodes reveals how computationally expensive each opcode is. For instance, if the gas cost of assigning an integer a value is 1 gas, then multiplying 2 integers and assigning the result might be 8 gas. Nodes on the network prevent scripts that never terminate by voting on a maximum gas limit per block. If a smart contract exceeds the gas limit during execution, it is terminated and any state changes made during execution are reversed. The cost of this approach is that smart contracts have to be small and precise in their operations in order to avoid consuming their gas quota; long running loops and sorting algorithms are out of the question. This also means that if the equivalent of a schema update is required, it has to happen over the course of many blocks in a non-transactional manner, meaning that the state of the application might remain inconsistent for an extended period of time.
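The following sketch (ours, with hypothetical names) illustrates both halves of the point: an unbounded loop that can exceed the block gas limit, and the bounded-slice workaround that forces a logical update to span many transactions:

```solidity
pragma solidity ^0.5.0;

// Illustration of why unbounded loops are avoided on the EVM: cost
// grows with the data, and a call that exceeds the block gas limit
// is reverted entirely.
contract GasBound {
    uint256[] public balances;

    function addAccount(uint256 initial) external {
        balances.push(initial);
    }

    // Risky: gas cost grows with balances.length, so once the array is
    // large enough this always breaches the block gas limit and reverts.
    function creditAllUnsafe(uint256 amount) external {
        for (uint256 i = 0; i < balances.length; i++) {
            balances[i] += amount;
        }
    }

    // Common workaround: process a bounded slice per transaction. A
    // logical "schema update" therefore spans many blocks and is not
    // atomic, which is the weakness discussed above.
    function creditRange(uint256 from, uint256 to, uint256 amount) external {
        for (uint256 i = from; i < to && i < balances.length; i++) {
            balances[i] += amount;
        }
    }
}
```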
Hyperledger: A private, permissioned blockchain designed from first principles
A consortium Ethereum chain brings privacy to bear but the overall architecture of the blockchain is still one geared for public, decentralized access. As such, unless careful re-engineering is employed, attempts to squeeze Ethereum into a setting appropriate for the financial sector might be sacrificing too much architecturally in the name of privacy and decentralization. In researching the most suitable technology for our clients, we initially found the Linux Foundation’s Hyperledger project to be envisioned from the ground up to address every concern listed in the previous sections. Hyperledger starts from the assumption that the participants will be arranged in a consortium of known organizations. As such, certain assumptions about consensus can be made.
Fabric
The most popular implementation of Hyperledger is IBM’s Hyperledger Fabric and this was our first port of call in assessing enterprise blockchains in practice. This section outlines our motivations in starting the investigation here.
Speed of Transacting
In contrast to Ethereum, Fabric does not require state changes to be validated by every node on the network. Instead, special endorser nodes execute smart contracts and broadcast the resultant state change to the network for inclusion. In this way, computationally intensive smart contracts do not risk disrupting network consensus and so do not need to be arbitrarily curtailed through gas limitations. A further upside is that schema upgrades can be achieved in the span of one block, protecting data integrity. In addition to lazy smart contract evaluation, Hyperledger allows for many different consensus algorithms and defaults to practical Byzantine fault tolerance (PBFT), a consensus algorithm that is orders of magnitude faster than the proof of work used in both Ethereum and Bitcoin. Finally, since the nodes in the consortium are known, the risk of centralization attacks from cartels of hostile miners is eliminated. Instead of encouraging unlimited miners, consortiums can be restricted to just a handful of miners, in the extreme just one, resulting in a large improvement in network throughput.
Privacy
A Fabric consortium is one in which all participants are known, and by default the consortium maintains one global distributed ledger. However, individual organizations in the consortium can optionally open private channels of communication where a distributed ledger is maintained and visible only to the participating individuals. Transactions on private channels are never reconciled with the main ledger, and as such privacy can be achieved without costing overall network performance. Ethereum has an analogous concept in Plasma sidechains, but the technology is still immature and unsuitable for adoption in private consortiums, particularly because privacy isn’t a first class citizen of sidechains.
Security
Fabric consortiums are secured by complying with the X.509 standard of certificate management, the same technology that secures HTTPS traffic. Unauthorized access is only possible when consortium members leak private keys, but the chain of authority can quickly revoke permission on those keys.
Auditability
Although private channels are invisible to the rest of the consortium, they are visible and transparent to the participating peers and to the orderer and endorser nodes (miners) who maintain the ledger in the channel. Compliance with auditing or regulatory requirements, without violating confidentiality to the rest of the consortium, can be achieved by giving regulators access to the ledger through one of the aforementioned peer types. In many instances, the job of ordering or endorsing the ledger can be delegated to an outside regulator so that compliance is achieved in real time.
Controlled reversibility
Although irreversibility is essential under the adversarial conditions of internet commerce, financial transactions are often required to be reversible under certain conditions. While Ethereum and Fabric offer the same interface in the form of an append-only ledger, Hyperledger allows very large sets of data to be reversed in short periods of time, which is often necessary in the event of a shock to the economy such as a financial crisis.
Fidelity
The triple entry bookkeeping mentioned previously is what gives blockchain technology its trustless quality and is a significant upgrade over intermediated escrow relationships. Fabric provides the application developer the ability to control access to data in a fine-grained manner, protecting privacy while ensuring integrity without running up against regulatory frameworks. The fidelity created by Fabric isn’t just between transacting parties but also between them and the standards setters and regulators in the ambient industry, because these governance players are all valid members and can be given specific powers in a consortium setting.
Critical Concerns
In practice, there were a number of challenges with deploying Fabric which proved particularly unappealing in a lean and agile development environment. In particular, the stress the Linux Foundation places on granular design in Hyperledger implementations meant that there were a great many operational hats to wear, spanning network administration, certificate management, the orchestration of microservice devops and pure low level development. This greatly delayed the development lifecycle because the environment lacks a sandpit in which developers can quickly iterate over ideas. In addition to being fairly cumbersome to orchestrate, the modular nature of the implementation details results in every organization running its own implementation of each component, leading to a scattered online community of enclaves. The highly granular and distributed nature of Fabric came at the cost of raising the surface area of hidden fault vulnerability, further increasing the operational risk and cost of deploying in a financial setting. Finally, the development tools were mostly experimental and constantly changing, and it was felt that until the project stabilizes sufficiently, it would prove too volatile for low risk deployment in the short term.
Sawtooth
While Fabric might be appropriate for a very R&D intensive environment, we still found the overall ideals of the Hyperledger family appealing, and so the next implementation we researched was Intel’s Hyperledger Sawtooth. In contrast to Fabric, Sawtooth’s core architecture is focused and more coherent, leading to quicker development cycles and simpler orchestration. The concept of smart contracts is replaced with Transaction Families, which are only vaguely defined in the documentation but appear to be akin to individual virtual machines running on top of the consensus layer. Because Ethereum is architecturally divided into a virtual machine and a consensus layer, an Ethereum Virtual Machine transaction family was recently introduced to the Sawtooth space, allowing developers to orchestrate enterprise, privacy-centric consortium chains while still making use of the mature Ethereum ecosystem of tools and support. This combination yielded the fastest development cycle up to this point.
Ethereum revisited
With a growing list of practical concerns surrounding the adoption of Hyperledger, and prompted by the growing success of using the Ethereum transaction family in Sawtooth, a brief detour was conducted into the second branch of enterprise blockchains, namely altered versions of the Ethereum blockchain. After testing three implementations, we settled on the most mature offering in the space, JP Morgan’s Quorum.
Quorum: Enterprise Ethereum without the tradeoffs
It was mentioned previously that Ethereum in an enterprise setting carries the ideological baggage of being designed for maximum openness and transparency, which conflicts directly with the common enterprise requirement for closed consortiums and private transactions. In addition, because attackers of the public Ethereum blockchain could potentially jam the network by forcing all nodes to execute an infinite loop, computation is metered and capped. The Quorum project manages to sidestep these constraints by adopting a layered approach that places an unaltered Ethereum client at the kernel.
Development
Since the core of Quorum is an unaltered version of Ethereum, developers are free to tap into the vast ecosystem of mature development tools designed for the public Ethereum blockchain, increasing the security and reliability of the final deployment over alternative enterprise blockchain offerings. The Truffle framework, incubated by ConsenSys, is a suite of tools for developing and unit testing smart contracts against a local, in-memory blockchain using Ethereum-ready versions of Chai and Mocha. In contrast to Hyperledger contracts, rapid test-driven development is possible in a Quorum environment.
Consortium
The first layer above the central Ethereum core is the permission layer, which allows nodes to configure a list of IP addresses that form their logical consortium. As such, the permissioning of nodes is separated from internal organizational user management, which is performed on chain in smart contracts. The addition of a thin permission layer makes it possible to decide how much permissioning to allocate between the smart contracting layer and the network administration layer, which means that each implementation can be tailored to the needs of the client in question.
Privacy
The next layer up introduces a secondary “private” blockchain, the concept of an Intel SGX enclave (Anati (2014)) and a transaction manager. When a node wishes to transact privately with another node in the consortium, their respective transaction managers perform a secure cryptographic handshake. The details of the transaction are only recorded on the private blockchains of the participating parties, while the rest of the network is left oblivious. Finally, so that the public blockchains remain synchronized (an important security feature of blockchain consensus), the sending node broadcasts to the network that a private transaction was conducted, but without revealing the details or the list of recipients. From the developer’s perspective, nothing about the Ethereum Virtual Machine or the programming language, Solidity, needs to be altered to enable private transactions. Instead, when broadcasting the transaction to the network using JSON-RPC, an additional “privateFor” parameter is added with the list of recipients. When the transaction manager of the broadcasting node detects a non-empty “privateFor” parameter, it orchestrates a private connection to the recipients. Otherwise, it simply allows the transaction to be broadcast to the public chain as usual.
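Assuming Quorum’s standard JSON-RPC transport, a private transaction request might look like the following. The addresses, payload and base64 key are placeholders of ours; the only departure from a vanilla Ethereum eth_sendTransaction call is the “privateFor” field, which carries the public keys of the recipients’ transaction managers:

```json
{
  "jsonrpc": "2.0",
  "method": "eth_sendTransaction",
  "id": 1,
  "params": [{
    "from": "0x<sender account address>",
    "to": "0x<contract address>",
    "data": "0x<encoded contract call>",
    "privateFor": ["<base64 public key of recipient transaction manager>"]
  }]
}
```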
Speed and integrity
It was previously mentioned that, in contrast to Fabric’s lazy smart contract evaluation, Ethereum’s metered computation would force a schema upgrade to span multiple blocks non-transactionally. The solution for Quorum comes both from the design of Quorum itself and from the growing list of best practices in the public Ethereum space. The first adjustment made by Quorum is to set the gas price to zero and the gas limit very high, eliminating the need for Ether, the native cryptocurrency. In addition to allowing for large computations, this dispenses with the unnecessary complication of distributing otherwise meaningless Ether to clients just so that they can participate. With regard to schema upgrades, while small upgrades are possible, the fact that smart contracts are analogous to both tables and classes simultaneously means that new patterns have been developed to enable safe, efficient, atomic schema upgrades, such as the transparent proxy pattern, which knits contracts together via indirection: proxy contracts can be “pointed” at new contract upgrades in the span of one transaction. In this way, though the deployment of a new schema can span many blocks, the act of bringing it all online at once can happen in the span of one block. This pointer flipping mechanism is similar to the double-buffer pattern employed by graphics engines to smooth the transition between frames by pre-rendering the next frame in a buffer and simply flipping the active pointer when the previous frame is complete.
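A minimal sketch of the indirection idea in Solidity (ours; production proxies handle storage layout and admin separation far more carefully): callers always address the proxy, which forwards calls to the current implementation, and an upgrade is a single pointer flip:

```solidity
pragma solidity ^0.5.0;

// Minimal proxy sketch: callers always address the proxy, which
// delegates every call to the current implementation while keeping
// storage in the proxy. Upgrading is one pointer flip.
contract Proxy {
    address public implementation;
    address public admin;

    constructor(address _implementation) public {
        implementation = _implementation;
        admin = msg.sender;
    }

    // The pointer flip: a new "schema" goes live in one transaction.
    function upgradeTo(address newImplementation) external {
        require(msg.sender == admin, "admin only");
        implementation = newImplementation;
    }

    // Forward any other call to the implementation via delegatecall.
    function () external payable {
        address impl = implementation;
        assembly {
            calldatacopy(0, 0, calldatasize)
            let result := delegatecall(gas, impl, 0, calldatasize, 0, 0)
            returndatacopy(0, 0, returndatasize)
            switch result
            case 0 { revert(0, returndatasize) }
            default { return(0, returndatasize) }
        }
    }
}
```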
Applicability to Digiata
Digiata services clients across a broad spectrum of industries but has particular competencies in financial and business processing. In response to queries from clients, Digiata has conducted an investigation into the applicability of blockchain to the dominant use cases experienced by our clients, and in particular into which blockchain solutions (if any) are adequate to meet their needs. When identifying a suitable technology, general platforms were favoured over specific solutions and configurability was given precedence over monoliths. Given the sensitive nature of financial transactions, mature, stable offerings were given precedence where possible. For these and other reasons enumerated above, we eventually settled on JP Morgan’s Quorum implementation of the Ethereum protocol. The next step was to investigate exactly how impactful blockchain technology would be for our particular clients. To illustrate the benefits of adopting DLT, a list of fictional case studies is presented in this section, each highlighting core features of the architecture.
Settlement
The task of settlement in asset trade rests on reconciling asset transfers with bank transfers. While DLT allows asset representations to be traded back and forth on a shared ledger, without reconciliation with bank flows the transactions are fictional until confirmed by the banking institutions which control the corresponding matching funds. Settlement is the process of matching transfers in assets to transfers in funds. Since bank transfers can take far longer than the digital representation of the exchange, settlement is sometimes delayed until transaction batches are reconciled days later.

To make settlement instantaneous, we require asset debits to be instantly matched to currency credits. One way to achieve this with a blockchain is to have a depository institution issue tokens on the consortium that track the national currency 1:1. Parties trade digital assets for tokens at the speed of blockchain consensus. When a party wishes to access the underlying fiat, they can redeem the token with the issuing authority in return for a bank transfer, effectively creating a fiat withdrawal offramp. A less centralized approach is to have multiple issuing organizations, each representing a different bank in the country; token purchases and redemptions can then take place between institutions at different banks. The advantage of this approach over traditional settlement is that it doesn’t require two transacting parties to belong to the same bank in order to experience fast settlement. Instead, transacting parties need not be concerned about transacting across banking institutions when settling asset transfers; the token is all that matters in trade. This method allows the role of the bank to be decoupled from the act of settlement (a code sketch of such a token follows the summary at the end of this section). The decoupling has a differential effect on high volume traders versus low volume, infrequent traders. For infrequent trade between two parties, the number of steps to settlement was initially:
1. Conduct asset trade on platform (centralized or blockchain).
2. Transfer funds or wait for fund transfer in traditional banking system.
3. Once funds have cleared, reconcile.
The duration between steps 2 and 3 depends on whether the parties belong to the same bank. After adopting a settlement token approach, the number of steps for an infrequent trader is:
1. Purchase settlement token (from an issuer representing the bank of the purchaser)
2. Exchange token for asset (reconciliation is instant)
3. Withdraw token (from an issuer representing the bank of the seller)
4. Wait for bank to process withdrawal
The benefit for an infrequent trader isn’t large, but if there exist token issuers for each bank then withdrawal and purchase are not subject to interbank delays. So for transacting parties at different banks, settlement tokens offer an improvement in speed of settlement. Now consider two transacting parties at different banks who make multiple transactions per day. The number of steps to settlement might look like this:
1. Conduct asset trade on platform
2. Initiate bank payment
3. Conduct asset trade on platform
4. Initiate another bank payment
5. Wait up to two days for transactions to be reconciled while still requiring more transactions in the interim.
Consider this same relationship on a settlement token setup. Once the initial tokens have been purchased (with no interbank delay), settlement can occur immediately. Instead of being forced to wait for each transaction to reconcile before concluding safe settlement, parties can assume that the settlement is valid on trade, assuming they can trust the issuing institution. If a party needs access to some fiat, they can redeem only the tokens they need. In summary, a settlement system based on a fiat token has the following benefits:
1. Counterparty risk is replaced with token issuer risk (likely to be lower if underwritten by a large financial player)
2. Settlement is decoupled from bank transfer waiting times
3. No interbank delays
4. Redemption of the fiat token is granular rather than subject to discrete bank payments
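To make the fiat-token mechanism concrete, the sketch below (ours; names hypothetical, overflow checks and transfer events omitted for brevity) shows the three operations the case study relies on: issuance against a fiat deposit, instant on-chain transfer, and granular redemption for a bank withdrawal:

```solidity
pragma solidity ^0.5.0;

// Sketch of a 1:1 fiat-backed settlement token. The issuer (a bank or
// depository) mints tokens against fiat deposits and burns them on
// withdrawal; the transfer is the instant settlement leg of a trade.
contract SettlementToken {
    address public issuer;
    mapping(address => uint256) public balanceOf;

    event Issued(address indexed to, uint256 amount);
    event Redeemed(address indexed from, uint256 amount);

    constructor() public {
        issuer = msg.sender;
    }

    // Called by the issuer once a fiat deposit clears off-chain.
    function issue(address to, uint256 amount) external {
        require(msg.sender == issuer, "issuer only");
        balanceOf[to] += amount;
        emit Issued(to, amount);
    }

    // Settlement at the speed of blockchain consensus.
    function transfer(address to, uint256 amount) external {
        require(balanceOf[msg.sender] >= amount, "insufficient balance");
        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;
    }

    // Granular fiat offramp: redeem only what is needed; the issuer
    // then makes a matching bank transfer off-chain.
    function redeem(uint256 amount) external {
        require(balanceOf[msg.sender] >= amount, "insufficient balance");
        balanceOf[msg.sender] -= amount;
        emit Redeemed(msg.sender, amount);
    }
}
```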
Auditing
The previous settlement example demonstrates how a blockchain lends itself to automatic audit trails, since the entire transaction can be captured on an immutable, distributed ledger. The role of nodes (or validators) in a consortium also allows for real-time control and monitoring by auditing agencies. Specifically, an auditor can take the role of a node and be a party to private transactions. If a client submits a state change that conflicts with regulations, the auditor can issue a challenge via a smart contract in real time. In this way, the lines between auditing and regulating are blurred, as business rules and national regulations can be enforced in real time.
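A minimal sketch of such a challenge contract (ours; all names illustrative): the auditor, running a node that is party to the relevant transactions, records an objection on the ledger the moment a violating state change appears:

```solidity
pragma solidity ^0.5.0;

// Sketch of a real-time regulatory challenge: an auditor running a
// consortium node records an objection against a transaction the
// moment it appears on the ledger.
contract AuditChallenge {
    address public auditor;
    mapping(bytes32 => string) public challenges;  // tx hash => reason

    event Challenged(bytes32 indexed txHash, string reason);

    constructor(address _auditor) public {
        auditor = _auditor;
    }

    function challenge(bytes32 txHash, string calldata reason) external {
        require(msg.sender == auditor, "auditor only");
        challenges[txHash] = reason;
        emit Challenged(txHash, reason);
    }
}
```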
Conclusion
After experimenting with a number of distributed ledger technologies, it was determined that JP Morgan’s Quorum implementation of the Ethereum protocol was the most suitable for Digiata, both from a development perspective and in light of the sensitive nature of our clients’ needs. Given the rapidly advancing pace of enterprise blockchains and the gradual maturing of the space, it is the belief of Digiata that many of our clients could benefit from upgrading their stacks to a distributed blockchain architecture.