Big questions for IAT1
11. Blockchain Tools and frameworks
The following list introduces several blockchain toolchains and frameworks that are popular
with both developers and the associated solution community:
• Truffle: The Truffle framework was developed by ConsenSys as an open source
project, offering a pipeline for the development, testing, and deployment of smart contracts
targeting the EVM.
• Embark: The Embark framework was developed by Status as an open source project,
offering a debugging and integration environment for Ethereum smart contract developers.
Notably, Embark offers tighter integration with IPFS for the decentralized storage of
contract data.
• Hyperledger Composer: This is an open source effort from the Linux Foundation,
which offers tools that help developers convert requirements into a proof of concept and support
the DevOps process of spinning up a new network as required.
• MetaMask: This is middleware that bridges an application running in the browser
with the Ethereum blockchain. It is an open source initiative that is widely supported and used by
Ethereum developers. Users can perform transactions in a web application through MetaMask.
• Ethers.js: This is a JavaScript-based library that provides a full implementation of the Ethereum
wallet as per the specification. Developers use this open source library to create user
wallets, perform transactions, and much more. The library is also well known for its support for the
Ethereum Name Service (ENS); a short usage sketch follows this list.
• Nethereum: This is an open source library used to build Ethereum-based blockchain
solutions in .NET environments. Nethereum is distributed as NuGet packages that integrate with the
Visual Studio Integrated Development Environment (IDE), exposing web3 functionality to web and
mobile applications.
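As an illustration of the developer workflow these libraries enable, here is a minimal Ethers.js sketch (assuming the v5 API); the RPC endpoint, ENS name, and amount are placeholders only:

import { ethers } from "ethers";

async function main() {
  // Connect to an Ethereum JSON-RPC endpoint (placeholder URL).
  const provider = new ethers.providers.JsonRpcProvider("https://example-rpc.invalid");

  // Create a new random wallet and attach it to the provider.
  const wallet = ethers.Wallet.createRandom().connect(provider);
  console.log("New wallet address:", wallet.address);

  // Resolve an ENS name to an address (placeholder name).
  const recipient = await provider.resolveName("alice.eth");

  // Send 0.01 ETH to the resolved address (requires a funded wallet).
  if (recipient) {
    const tx = await wallet.sendTransaction({
      to: recipient,
      value: ethers.utils.parseEther("0.01"),
    });
    console.log("Transaction hash:", tx.hash);
  }
}

main().catch(console.error);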
12. Ethereum and Bitcoin
Ethereum
Ethereum is a public blockchain that was designed by Vitalik Buterin in 2013 as an
enhancement to the incumbent Bitcoin blockchain, by including transaction-based state
management with business logic scripting using a special-purpose programming language
and a virtual machine called the Ethereum Virtual Machine (EVM).
The following points outline the key aspects of Ethereum:
• Motivation: The main motivation behind Ethereum was to support building
decentralized applications on the powerful medium of blockchain. Unable to convince the
Bitcoin community of the need for a scripting language, Vitalik and a like-minded group of
people created Ethereum.
• Facts: The project was open sourced, with an initial release on July 30, 2015. Research and
development upgrades to the Ethereum network are managed under the Ethereum Foundation,
financially supported by the initial crowd sale of the Ether (ETH) token from July to August 2014.
Around 105 million ETH has been minted so far. Ethereum has one major fork, called Ethereum
Classic (the original Ethereum blockchain that rejected the DAO hard fork and retained the original,
unaltered state of the Ethereum network). At the time of writing, each ETH is valued at around
USD 156.00. The Ethereum blockchain network also incentivizes the validating nodes by charging
users who make transactions on DApps or transfer ETH a small fee, which is awarded to the winning
block producer. The rules for creating and accepting blocks are specified by consensus algorithms
such as Proof of Work (PoW) and Proof of Stake (PoS). We will explore PoW and PoS in
more detail in the upcoming sections.
• Criticism: The Ethereum community faced some of its earliest criticism over the hard-fork
decision taken by the team, which contradicted some of the ideology and values of blockchain, such
as immutability and immunity from human political dynamics. The network was later criticized and
heavily scrutinized by regulatory authorities due to alleged Ponzi schemes offered through Initial
Coin Offerings (ICOs) that lacked a stable product or service.
A hard fork is defined as a radical change made to the protocol, thereby rendering some of the
previous blocks and their transactions invalid.
Bitcoin
Bitcoin is a virtual currency on a peer-to-peer network, with users and validators distributed
across the network. With the help of the Bitcoin blockchain network, users can transfer
cryptocurrency in a truly decentralized manner, without the need for a central bank, a clearing
house, or an intermediary. The transfer of Bitcoin between users is recorded in the form of a
transaction, which is later verified, mined, and added to a canonical chain of blocks.
Bitcoin is believed to have been created by a person or group working under the pseudonym
Satoshi Nakamoto, with most of its features and functionality derived from existing techniques in
cryptographic hashing, peer-to-peer network communication, and immutable data structures.
Bitcoin mining works both in a single-node (solo) setting and in pool environments, as
described below.
Bitcoin Mining
Mining adds new blocks to the block chain, making transaction history hard to modify.
Introduction
Mining today takes on two forms:
(i) Solo mining, where the miner attempts to generate new blocks on his own, with the
proceeds from the block reward and transaction fees going entirely to himself, allowing him
to receive large payments with a higher variance (longer time between payments)
(ii) Pooled mining, where the miner pools resources with other miners to find blocks
more often, with the proceeds being shared among the pool miners in rough correlation to
the amount of hashing power they each contributed, allowing the miner to receive small
payments with a lower variance (shorter time between payments).
Solo Mining
Solo miners typically use bitcoind to get new transactions from the
network. Their mining software periodically polls bitcoind for new transactions using the
“getblocktemplate” RPC, which provides the list of new transactions plus the public key to
which the coinbase transaction should be sent.
The mining software constructs a block using the template and creates a block header. It then
sends the 80-byte block header to its mining hardware (an ASIC) along with a
target threshold (difficulty setting). The mining hardware iterates through every possible value
for the block header nonce and generates the corresponding hash.
If none of the hashes are below the threshold, the mining hardware gets an updated block header
with a new merkle root from the mining software; this new block header is created by adding extra
nonce data to the coinbase field of the coinbase transaction.
On the other hand, if a hash is found below the target threshold, the mining hardware returns the
block header with the successful nonce to the mining software. The mining software combines the
header with the block and sends the completed block to bitcoind to be broadcast to the network for
addition to the block chain.
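The nonce search described above can be sketched as follows. This is purely illustrative rather than real mining software: the header prefix and target are dummy values, and production miners perform this search in ASIC hardware at vastly higher rates:

import { createHash } from "crypto";

// Bitcoin hashes the 80-byte block header twice with SHA-256.
function doubleSha256(buf: Buffer): Buffer {
  return createHash("sha256").update(createHash("sha256").update(buf).digest()).digest();
}

// headerPrefix holds the first 76 bytes of the header (version, previous block
// hash, merkle root, timestamp, bits); the final 4 bytes are the nonce.
function mine(headerPrefix: Buffer, target: bigint, maxNonce = 0xffffffff): number | null {
  const header = Buffer.alloc(80);
  headerPrefix.copy(header, 0);
  for (let nonce = 0; nonce <= maxNonce; nonce++) {
    header.writeUInt32LE(nonce, 76); // nonce occupies bytes 76..79
    const hash = doubleSha256(header);
    // The hash is interpreted as a little-endian 256-bit integer.
    const value = BigInt("0x" + Buffer.from(hash).reverse().toString("hex"));
    if (value < target) {
      return nonce; // found a header hashing below the target
    }
  }
  // Nonce range exhausted: the software must change the extra nonce in the
  // coinbase transaction, rebuild the merkle root, and try again.
  return null;
}

// Toy usage: a dummy 76-byte prefix and an artificially easy target.
const prefix = Buffer.alloc(76, 0x01);
const easyTarget = BigInt(1) << BigInt(240);
console.log("winning nonce:", mine(prefix, easyTarget));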
Pool Mining
Pool miners follow a similar workflow, which allows mining pool operators to
pay miners based on their share of the work done. The mining pool gets new transactions from
the network using bitcoind. Each miner's mining software connects to the pool and requests the
information it needs to construct block headers.
In pooled mining, the mining pool sets the target threshold a few orders of magnitude higher
(less difficult) than the network difficulty. This causes the mining hardware to return many
block headers which don’t hash to a value eligible for inclusion on the block chain but which
do hash below the pool’s target, proving (on average) that the miner checked a percentage of
the possible hash values.
The miner then sends to the pool a copy of the information the pool needs to validate that the
header will hash below the target and that the block of transactions referred to by the header
merkle root field is valid for the pool’s purposes. (This usually means that the coinbase
transaction must pay the pool.)
The information the miner sends to the pool is called a share because it proves the miner did a
share of the work. By chance, some shares the pool receives will also be below
the network target—the mining pool sends these to the network to be added to the block
chain.
The block reward and transaction fees that come from mining that block are paid to the mining
pool. The mining pool pays out a portion of these proceeds to individual miners based on how
many shares they generated. For example, if the mining pool's target threshold is 100 times
higher (that is, 100 times easier to meet) than the network target threshold, 100 shares will need
to be generated on average to create a successful block, so the mining pool can pay 1/100th of its
payout for each share received. Different mining pools use different reward distribution systems
based on this basic share system.
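The pay-per-share arithmetic above can be made concrete with a short sketch; the reward and fee figures are examples only:

// If the pool's share target is 100 times easier than the network target,
// roughly 100 shares are submitted per network-valid block on average, so each
// share is worth about 1/100th of the block reward plus fees.
const blockRewardBtc = 6.25;   // example block subsidy
const feesBtc = 0.5;           // example total transaction fees in the block
const difficultyRatio = 100;   // pool target is 100x easier than the network target

const expectedSharesPerBlock = difficultyRatio;
const payoutPerShare = (blockRewardBtc + feesBtc) / expectedSharesPerBlock;

console.log(`Expected shares per block: ~${expectedSharesPerBlock}`);
console.log(`Pay-per-share: ~${payoutPerShare.toFixed(4)} BTC`); // ~0.0675 BTC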
13. Hyperledger Tools and frameworks (DLT: Distributed Ledger Technology)
Hyperledger Fabric is a blockchain framework initially developed by IBM and Digital Asset. Fabric is
a DLT that aims to provide a modular architecture so that developers can use only what is needed.
The framework supports the execution of business logic abstracted into containers called
chaincode. Adopting Fabric is made easy by the plethora of documentation, tutorials, and tools
available for deploying business networks without much hassle.
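As an illustration of the chaincode concept, here is a minimal sketch using the Node.js fabric-contract-api SDK; the contract name and keys are purely illustrative:

import { Context, Contract } from "fabric-contract-api";

export class AssetContract extends Contract {
  // Store a simple key/value pair in the channel's world state.
  public async createAsset(ctx: Context, id: string, value: string): Promise<void> {
    await ctx.stub.putState(id, Buffer.from(value));
  }

  // Read a value back from the world state by key.
  public async readAsset(ctx: Context, id: string): Promise<string> {
    const data = await ctx.stub.getState(id);
    if (!data || data.length === 0) {
      throw new Error(`Asset ${id} does not exist`);
    }
    return Buffer.from(data).toString("utf8");
  }
}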
Hyperledger Sawtooth is a blockchain framework that offers enterprises a secure leadership
election of nodes in the network, with special modes for executing instructions. Sawtooth offers a
powerful, developer-friendly Software Development Kit (SDK) for a majority of languages to write
and deploy smart contracts. Notably, Sawtooth is one of the early live projects to experiment
with WebAssembly (WASM) as a virtual medium for the execution of smart contracts.
• Hyperledger Indy: A blockchain platform built specially to handle decentralized identities from
internal or external systems.
• Hyperledger Grid: A WASM-based project for building supply chain solutions.
• Hyperledger Quilt: A blockchain tool to connect blockchain realms of different protocols using the
Interledger Protocol (ILP) specifications.
• Hyperledger Caliper: A blockchain benchmarking tool to assess the performance of a specific
blockchain with specific parameters such as Transactions Per Second (TPS), transaction latency,
resource utilization, and so on.
14. IPFS or InterPlanetary File System
IPFS is a distributed filesystem that allows users to host and receive content in a peer-to-peer
(P2P) manner, eliminating the need for intermediaries when storing or accessing data from any
corner of the world. IPFS allows users to store and serve data in a censorship-resistant manner.
Data remains persisted in the network as long as somebody in the network values it. Although
there may not be a monetary incentive for users who persist the data on their computers, the
data may be valuable and reusable for other users in the network. Hence, data on IPFS can be
hosted virtually forever, as long as the need for it exists in the network. Notably, IPFS is
considered one of the de facto decentralized databases by DApp developers for their
applications. Content accessed by anybody on IPFS is cryptographically verified, ensuring
that the information has not been tampered with.
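The cryptographic verification works because content on IPFS is addressed by a hash of its bytes. The following is a simplified sketch of that content-addressing idea, not the actual IPFS CID format (which layers multihash and multibase encodings on top of the raw digest):

import { createHash } from "crypto";

// Derive an address from the content itself.
function contentAddress(content: Buffer): string {
  return createHash("sha256").update(content).digest("hex");
}

// Anyone fetching the content can recompute the hash and compare it to the
// address they asked for; a peer serving tampered bytes fails this check.
function verify(content: Buffer, expectedAddress: string): boolean {
  return contentAddress(content) === expectedAddress;
}

const original = Buffer.from("decentralized document");
const addr = contentAddress(original);

console.log("intact copy verifies:", verify(Buffer.from("decentralized document"), addr)); // true
console.log("tampered copy verifies:", verify(Buffer.from("tampered document"), addr));    // false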
IPFS has been used in many cases to circumvent censorship, where access to global
information was inhibited. Some of the notable uses of IPFS as a global database are listed as
follows:
• The 2017 Catalan independence referendum document was hosted on IPFS, bypassing the
High Court of Justice of Catalonia's order to block the original website.
• Filecoin, a decentralized storage network, uses IPFS to leverage unused storage space in
computers and incentivizes users to host data for a fee. Fees are collected from users in the form
of micropayments to serve smaller chunks of data from the hosted computer using the Filecoin
token. These tokens are paid to the owner of the computer hosting the data with the help of
smart contracts. Reportedly, Filecoin was one of the largest Initial Coin Offerings (ICOs) ever,
grossing USD 250 million.
• Wikipedia was mirrored on IPFS when access to the online encyclopedia was restricted in
Turkey on April 29, 2017. This circumvention provided access to Wikipedia without visiting the
official website, which was blocked.
Internally, IPFS is made up of several components that depend on one another to provide
storage and access to data.
15. Data management in a DAO, Aragon, Bisq
A Decentralized Autonomous Organization (DAO) is a computer program that represents a group of
stakeholders and entities and is not influenced by external environments. A DAO is programmed with
a set of rules and governance protocols to ensure that transactions occur between parties without
the chance of any conflict. Dash and BitShares are some of the earliest implementations of a DAO. In
the past few months, many more DAOs have been launched on blockchains such as Ethereum and
Bitcoin.
Aragon
Aragon is an open source DAO running on the Ethereum blockchain network. Aragon leverages
Solidity smart contracts for business logic, and IPFS for decentralized files and governance record
management, thereby creating a truly P2P operating system for a whole new generation of
organizations, called aragonOS. Users can perform operations and govern their DAOs using the
Aragon Network Token (ANT).
Aragon has integrated IPFS very closely into its command-line interface (CLI) program. The IPFS
daemon can be initiated through Aragon's CLI and manages pinning operations to ensure the reliable
storage of critical components and files.
Bisq
Bisq is a DAO running on the Bitcoin blockchain network. It offers a P2P cryptocurrency exchange
service with no company or institution in control. Users and contributors vote on proposed updates.
Governance decisions for the DAO are made by the BSQ token holders. A Bisq DAO is made
sustainable by the following two approaches:
• Revenue distribution, by sharing trading fees with contributors through the BSQ token. This
makes the contributors partial owners of the network and balances the power between traders and
contributors.
• Decision-making through voting by the owners of the BSQ tokens (the traders and
contributors). Since they are both in charge of the issuance collectively, there are no centralization
vectors that can help either side achieve a majority.
At the time of writing, the Bisq community was making initial efforts to use decentralized storage
on IPFS for hosting software binaries, documentation, and network and trading statistics. Proposals
were also made to store governance proposal data on IPFS so that access to DAO information
remains consistently available without a single point of failure.
16. Consensus algorithms
Proof of work
Proof of work (PoW) is a consensus algorithm introduced by the anonymous founder of
Bitcoin—Satoshi Nakamoto. The PoW consensus algorithm is one of the earliest consensus
algorithms used in a blockchain environment. It leverages a combination of cryptography, P2P
network communications, and a Merkle data structure to offer a distributed, immutable, and
cumulative state of accounts in the Bitcoin blockchain. The solution computed by the first node is
verified by the remaining nodes, and the winning block is broadcast across the network:
• Merit: The PoW algorithm has been time tested in the Bitcoin blockchain network, and there
has not been a single hack or compromise of the account states in the network leading to a double spend.
• Demerit: As the PoW algorithm needs to find a solution to a mathematical problem,
significant CPU cycles are required to generate hashes and so it is an energy-intensive technique.
Proof of stake
Proof of Stake (PoS) is a newer consensus algorithm designed and developed to address some of the
trade-offs of the PoW algorithm. The block-producing node is determined by applying a
mathematical function involving a few determining factors, such as the stake (for example, ETH), the
age of the node, and the randomization of eligible node candidates:
• Merit: The PoS algorithm is energy-efficient as there are fewer computational requirements
and it does not select a block-producing node based on a solution-verification model.
• Demerit: Although the PoS algorithm is efficient in its block times and is environmentally
friendly, there have been criticisms about the algorithm's vulnerability to capitalistic attacks, where a
wealthy node owner tries to out-compete the other candidates by putting a stupendous amount of
cryptocurrency at stake, higher than all the other candidates.
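A generic, simplified sketch of stake-weighted selection is shown below; real PoS protocols (including Ethereum's) add randomness beacons, committees, and slashing on top of this basic weighting idea:

interface Candidate {
  id: string;
  stake: number; // e.g. staked ETH
}

// Pick a block producer with probability proportional to stake.
function selectProducer(candidates: Candidate[], random: () => number = Math.random): Candidate {
  const totalStake = candidates.reduce((sum, c) => sum + c.stake, 0);
  let point = random() * totalStake; // a point on [0, totalStake)
  for (const c of candidates) {
    point -= c.stake;
    if (point < 0) {
      return c; // the candidate whose stake interval contains the point
    }
  }
  return candidates[candidates.length - 1]; // guard against floating-point edge cases
}

// Example: a 64-ETH validator is selected twice as often as a 32-ETH one on average.
const producer = selectProducer([
  { id: "validator-a", stake: 32 },
  { id: "validator-b", stake: 64 },
]);
console.log("selected:", producer.id);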
Proof of burn
Proof of Burn (PoB) is a consensus algorithm with an interesting approach to solving the problem of
transitioning from one version of a cryptocurrency to another. Through the PoB algorithm, the old
cryptocurrency (or its preceding version) is burnt in order to reduce its supply and gradually
increase the supply of the new cryptocurrency (or its succeeding version). This consensus
algorithm is practiced in various forms, including a method wherein users transfer the old
cryptocurrency to an unspendable wallet address in exchange for new coins:
• Merit: The PoB algorithm is convenient during the transition of cryptocurrencies and
network upgrades if the system trusts the participating entities.
• Demerit: The PoB algorithm is usually applicable only to PoW-based blockchains and so has
limited applicability. This is due to the requirement for verifiable proofs and the ability to decay
the burnt coins over time, which comes naturally to PoW algorithms.
Delegated Proof of Stake
Delegated Proof of Stake (dPOS) is a consensus algorithm developed and used by the Block.one EOS
platform. Under dPOS, the token holders reserve the right to nominate the validators (also called
block producers). The selection of block producers is a continuous process, and the elected
producers perform the duty of packaging user transactions into blocks with Byzantine fault-tolerant safety:
• Merit: dPOS is Byzantine Fault Tolerance (BFT)-ready and scales easily in a public network
environment.
• Demerit: Although dPOS is efficient, it is prone to capitalistic efforts to supersede other
minor token stakeholders.
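The delegation step can be sketched as a token-weighted vote tally; the candidate names and weights below are made up, and real dPOS systems add vote decay, producer rotation schedules, and BFT finality on top of this:

interface Vote {
  voterTokens: number; // voting weight of the token holder
  candidate: string;   // candidate (block producer) being voted for
}

function electProducers(votes: Vote[], producerCount: number): string[] {
  const tally = new Map<string, number>();
  for (const v of votes) {
    tally.set(v.candidate, (tally.get(v.candidate) ?? 0) + v.voterTokens);
  }
  // Sort candidates by total token-weighted votes and keep the top N.
  return [...tally.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, producerCount)
    .map(([candidate]) => candidate);
}

console.log(
  electProducers(
    [
      { voterTokens: 500, candidate: "bp-alpha" },
      { voterTokens: 300, candidate: "bp-beta" },
      { voterTokens: 250, candidate: "bp-beta" },
      { voterTokens: 100, candidate: "bp-gamma" },
    ],
    2
  )
); // ["bp-beta", "bp-alpha"]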
Proof of authority
As the name suggests, the Proof of Authority (PoA) algorithm facilitates distributed consensus with
a few eligible, verifiable nodes preserving the right to add transactions to blocks if certain criteria are
met. There are many variants of the PoA algorithm, with or without the reputations of the validating
nodes, used in public, private, and permissioned blockchains:
• Merit: The PoA algorithm is energy-efficient and not prone to capitalistic pitfalls, as the
validator nodes are authorized to add transactions to blocks based on their reputation. If a node is
observed to malfunction, its reputation is severely affected and it cannot continue as a validator.
• Demerit: The PoA algorithm is partially centralized as the authority of adding or rejecting
transactions lies in the purview of very few nodes in the network.
Practical Byzantine fault tolerance
Practical Byzantine Fault Tolerance (PBFT) is one of the replication algorithms brought to light by
academic research. Authored by Miguel Castro and Barbara Liskov in 1999
(http://pmg.csail.mit.edu/papers/osdi99.pdf), this algorithm was primarily aimed at solving
Byzantine faults caused by arbitrary points of failure in the nodes of a network.
Notably, the PBFT algorithm is used by the Hyperledger Fabric blockchain framework:
• Merit: The PBFT algorithm is efficient, with fast transaction processing and scalable to
hundreds of nodes in a private network.
• Demerit: The algorithm is based on a gatekeeper technique and is hence criticized for its
centralized approaches. PBFT is not suitable for public blockchains.
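The paper's fault-tolerance bound can be made concrete with a small calculation: a network of n replicas tolerates f Byzantine nodes only if n >= 3f + 1, with quorums of 2f + 1 matching responses. The replica counts below are arbitrary examples:

// Largest number of Byzantine faults f that n replicas can tolerate (n >= 3f + 1).
function maxByzantineFaults(n: number): number {
  return Math.floor((n - 1) / 3);
}

// A decision requires 2f + 1 matching responses.
function quorumSize(n: number): number {
  return 2 * maxByzantineFaults(n) + 1;
}

for (const n of [4, 7, 10, 100]) {
  console.log(`replicas=${n}: tolerates f=${maxByzantineFaults(n)}, quorum=${quorumSize(n)}`);
}
// replicas=4: f=1, quorum=3; replicas=7: f=2, quorum=5; replicas=100: f=33, quorum=67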
Proof of elapsed time
Proof of Elapsed Time (PoET) is a consensus algorithm developed and used by the Hyperledger
Sawtooth blockchain framework. The PoET algorithm ensures the security and randomness of
validator-node leader election using special CPU instructions available in many advanced
processors that feature secure (trusted) execution environments:
• Merit: PoET allows anyone with eligible hardware to participate as a validator node, allowing
legitimate ways of verifying the leader election.
• Demerit: Although PoET does not involve staking cryptocurrencies to run a validator node,
the specialized hardware it requires does not come cheap. So, there have been criticisms
highlighting this as an unfair bar to entering the network.
RAFT
RAFT is a consensus algorithm designed and developed by Diego Ongaro and John Ousterhout with
the main motivation of producing a distributed consensus algorithm that is much easier to
understand than Paxos. Notably, RAFT ensures safe leader election, appending log entries in a
distributed manner, and state machine consistency. The RAFT consensus is implemented in the
Quorum blockchain to inherit the previously described safety features:
• Merit: RAFT is one of the fastest algorithms in processing complex transaction payloads with
the security of leadership and state machine consistency.
• Demerit: RAFT is suitable for permissioned or private blockchains only.
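RAFT's safe leader election can be sketched through its vote-granting rule; the simplified fragment below omits timeouts, log replication, and persistence, and is only meant to illustrate the election safety property mentioned above:

interface NodeState {
  currentTerm: number;
  votedFor: string | null;
  lastLogTerm: number;
  lastLogIndex: number;
}

interface VoteRequest {
  term: number;
  candidateId: string;
  lastLogTerm: number;
  lastLogIndex: number;
}

// A follower grants its vote at most once per term, and only to a candidate
// whose log is at least as up to date as its own.
function handleRequestVote(state: NodeState, req: VoteRequest): boolean {
  if (req.term < state.currentTerm) {
    return false; // stale candidate term
  }
  if (req.term > state.currentTerm) {
    state.currentTerm = req.term; // move to the newer term
    state.votedFor = null;
  }
  const candidateLogUpToDate =
    req.lastLogTerm > state.lastLogTerm ||
    (req.lastLogTerm === state.lastLogTerm && req.lastLogIndex >= state.lastLogIndex);

  if ((state.votedFor === null || state.votedFor === req.candidateId) && candidateLogUpToDate) {
    state.votedFor = req.candidateId; // at most one vote per term
    return true;
  }
  return false;
}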
Ternary augmented RAFT architecture
Ternary Augmented RAFT Architecture (TARA) is a consensus algorithm designed for large-scale
Byzantine distributed networks. It is an enhanced version of the RAFT consensus algorithm that
addresses heterogeneous transactions, identifiable by their asset classes, by leveraging PBFT
hardening and cryptographic message exchanges. TARA introduces a dynamic hierarchy to networks
to ensure that authority is not concentrated among a few nodes:
• Merits: TARA offers service clusters to ensure high availability, throughput, and scale. Hardware
of all form factors with the ability to compute and store transactions can participate. TARA can be
applied in all three environments: public, private, and permissioned blockchain networks.
• Demerit: Leadership election is not inherently dependent on the node's reputation, thereby
allowing a potential attack on systems. These constraints must be implemented explicitly.
Avalanche
Avalanche is a consensus protocol for distributed systems that introduces leaderless Byzantine fault
tolerance, using a metastable mechanism to achieve the same level of security and consistency
among the nodes. Avalanche builds on the Snowball family of protocols and stores user
transaction data in a DAG instead of blocks:
• Merit: Avalanche guarantees liveness and is immune to race conditions in the network.
• Demerit: The leaderless consensus may not be applicable to all blockchain environments, as
there is not a carefully analyzed set of heuristics to ensure consistency.
With this detailed analysis of consensus algorithms, let's now go through the development tools
available to blockchain developers.