
PIP-0009: typo fixes, minor edits

Herman Schoenfeld 7 years ago
parent commit 11d8f9e39a
1 changed file with 30 additions and 24 deletions
PIP/PIP-0009.md (+30 −24)

@@ -16,7 +16,7 @@ A GPU and ASIC resistant hashing algorithm change is proposed in order to resolv
 
 ## Motivation
 
-PascalCoin is currently experiencing 99% mining centralization by a single pool which has severely impacted ecosystem growth and adoption. Exchanges are reticent to list PASC due to the risk of double-spend attacks and infrastructure providers are reticent to invest due to low-volume and stunted price-growth. 
+PascalCoin is currently experiencing 99% mining centralization by a single pool, which has severely impacted ecosystem growth and adoption. Exchanges are reticent to list PASC due to the risk of double-spend attacks, and infrastructure providers are reticent to invest further due to low volume and stunted price-growth. 
 
 ### Background
 PascalCoin is a 100% original Proof-of-Work coin offering a unique value proposition focused on scalability. After the initial launch, a healthy decentralized mining community emerged and became active in the coin's ecosystem, as expected. However, after 9 months a single pool (herein referred to as Pool-X) managed to centralize mining over a short period of time. At the time, it was believed that a technical exploit was being employed by Pool-X, but this possibility was ruled out after exhaustive analysis and review by the developers and 3rd parties. It is now understood why and how this centralization occurred, and how it can be fixed.
@@ -25,13 +25,13 @@ PascalCoin is a 100% original Proof-of-Work coin offering a unique value proposi
 
 Ordinarily, a coin's mining ecosystem grows organically with interest and centralization does not occur. This is due to the "hash-power follows price" law. As price grows organically due to interest, so does the number of miners. If there are too many miners, the coin becomes unprofitable, and some miners leave. This homeostasis between mining, price and ecosystem size is part of the economic formula that makes cryptocurrencies work.
 
-With dual-mining, this is broken. Dual-mining has led to coins with small user-base to have totally disproportionate number of miners who mine the coin even when "unprofitable". In the case of PascalCoin, miners are primarily on Pool-X to mine Ethereum, not PascalCoin. So the number of PascalCoin miners are a reflection of Ethereum's ecosystem, not PascalCoin's. Also, these miners mine PascalCoin because they have latent computing power, so it technically costs them nothing to mine PascalCoin. As a result, they mine PascalCoin even when it's unprofitable thus forcing out ordinary miners who are not dual-mining. 
+With dual-mining, this is broken. Dual-mining has led to coins with a small user-base having a totally disproportionate number of miners who mine the coin even when "unprofitable". In the case of PascalCoin, miners are primarily on Pool-X to mine Ethereum, not PascalCoin. So the number of PascalCoin miners is a reflection of Ethereum's ecosystem, not PascalCoin's. Also, these miners mine PascalCoin because they have latent computing power, so it technically costs them nothing to mine PascalCoin. As a result, they mine PascalCoin even when unprofitable, thus forcing out ordinary miners who are not dual-mining. 
 
 **These mis-aligned economic incentives result in a rapid convergence to 99% centralization, even though no actor is malicious.**
 
 ## Specification
 
-A low-memory, GPU and ASIC-resistant hash algorithm called **Random Hash** is proposed to resolve and prevent dual-mining centralization. Random Hash, defined first here, is a "high-level cryptographic hash" algorithm that combines other well-known hash primitives in a highly serial manner. The distinguishing feature is that calculations for a nonce are dependent on partial calculations of other nonces, selected at random. This allows a serial hasher (CPU) to re-use these partial calculations in subsequent nonce-mining saving 50% or more of the work-load. Parallel hashers (GPU) cannot benefit from this optimization since the optimal nonce-set cannot be pre-calculated as it is determined on-the-fly. As a result, parallel hashers (GPU) are required to perform the full workload for every nonce. Also, the algorithm results in 10x memory bloat for a parallel implementation. In addition to it's serial nature, it is branch-heavy and recursive making in optimal for CPU-only mining.
+A low-memory, GPU and ASIC-resistant hash algorithm called **Random Hash** is proposed to resolve and prevent dual-mining centralization. Random Hash, defined first here, is a "high-level cryptographic hash" algorithm that combines other well-known hash primitives in a highly serial manner. The distinguishing feature is that calculations for a nonce are dependent on partial calculations of other nonces, selected at random. This allows a serial hasher (CPU) to re-use these partial calculations in subsequent mining, saving 50% or more of the work-load. Parallel hashers (GPU) cannot benefit from this optimization since the optimal nonce-set cannot be pre-calculated as it is determined on-the-fly. As a result, parallel hashers (GPU) are required to perform the full workload for every nonce. Also, the algorithm results in 10x memory bloat for a parallel implementation. In addition to its serial nature, it is branch-heavy and recursive, making it optimal for CPU-only mining.
 
 ### Overview
 
@@ -40,7 +40,7 @@ A low-memory, GPU and ASIC-resistant hash algorithm called **Random Hash** is pr
 3. The input at round ```x``` depends on the output from round ```x-1```
 4. The input at round ```x``` depends on the output from another previous round ```1..x-1```, randomly selected
 5. The input at round ```x``` depends on the output from round ```x-1``` **of a different nonce**
-6. The input at round ```x``` is a compression of (3), (4) and (5) to ```100 bytes```.
+6. The input at round ```x``` is a compression of (3), (4) and (5) to ```100 bytes```
 7. The output of every round is expanded for memory-hardness
 8. Randomness is generated using ```Mersenne Twister``` algorithm
 9. Randomness is seeded via ```MurMur3``` checksum of previous round
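+
+To make the round structure above concrete, the following is a minimal, non-normative Python sketch of the data-flow it describes (the actual RandomHash specification follows below). Several stand-ins are assumed: ```zlib.crc32``` replaces the ```MurMur3``` checksum, four ```hashlib``` digests replace the 16 hash primitives, the expansion and compression steps are simplified toys, and the "different nonce" is derived ad-hoc. Python's ```random.Random``` is a genuine Mersenne Twister (MT19937).
+```
+import hashlib, random, zlib
+
+# Stand-ins for the 16 hash primitives used by RandomHash.
+PRIMITIVES = [hashlib.sha256, hashlib.sha512, hashlib.sha3_256, hashlib.blake2b]
+M = 16  # toy memory-expansion unit, in bytes
+
+def expand(data, factor, gen):
+    # Feature 7: grow the output by appending randomly sampled bytes of itself.
+    out = bytearray(data)
+    while len(out) < len(data) + factor * M:
+        out.append(data[gen.randrange(len(data))])
+    return bytes(out)
+
+def compress(parts, gen):
+    # Feature 6: reduce the combined inputs to 100 bytes by random sampling.
+    blob = b"".join(parts)
+    return bytes(blob[gen.randrange(len(blob))] for _ in range(100))
+
+def random_hash(block_header, rounds):
+    outputs, input_ = [], block_header
+    for x in range(1, rounds + 1):
+        # Features 8 and 9: Mersenne Twister seeded from a checksum of the input.
+        gen = random.Random(zlib.crc32(input_))
+        if x > 1:
+            prior = outputs[gen.randrange(len(outputs))]     # feature 4
+            neighbour = block_header + x.to_bytes(2, "big")  # ad-hoc "different nonce"
+            other = random_hash(neighbour, x - 1)            # feature 5
+            input_ = compress([input_, prior, other], gen)   # feature 6
+        digest = PRIMITIVES[gen.randrange(len(PRIMITIVES))](input_).digest()
+        input_ = expand(digest, rounds - x, gen)             # feature 7
+        outputs.append(input_)                               # round x feeds round x+1 (feature 3)
+    return hashlib.sha256(outputs[-1]).digest()              # final digest (stand-in)
+
+print(random_hash(b"example block header", 5).hex())
+```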
@@ -81,8 +81,8 @@ A low-memory, GPU and ASIC-resistant hash algorithm called **Random Hash** is pr
 
         Function RandomHash(blockHeader : ByteArray, Round : Integer) : ByteArray
         begin
-            let RoundOutputs = array [1..Round] of RawBytes;
-            let seed = Checksum(blockHeader)
+            let RoundOutputs = array [1..Round] of ByteArray;
+            let seed = Checksum(blockHeader)      // can hash blockHeader first, but not required
             let gen = RandomNumberGenerator(seed)
             let input = blockHeader
             
@@ -108,9 +108,9 @@ A low-memory, GPU and ASIC-resistant hash algorithm called **Random Hash** is pr
 
         function Expand(input : ByteArray, ExpansionFactor : Integer, gen : RandomNumberGenerator) : ByteArray
         begin
-            let Size = Length(randomBytes) + ExpansionFactor*M;
+            let Size = Length(input) + ExpansionFactor*M;
             let output = input.Clone
-            let bytesToAdd = Size - Length(RandomBytes)
+            let bytesToAdd = Size - Length(input)
             while Length(output) < Size do
                 let nextChunk = output.Clone
                 if Length(output) + Length(nextChunk) > Size then
@@ -130,7 +130,7 @@ A low-memory, GPU and ASIC-resistant hash algorithm called **Random Hash** is pr
 
         function Compress(input1, input2, input3 : ByteArray, gen : RandomNumberGenerator) : ByteArray
         begin
-            let output = ByteArray[0..99]
+            let output = Byte[0..99]
 
             for i = 0 to 99 do
                 var random = gen.NextDWord
@@ -161,7 +161,7 @@ A low-memory, GPU and ASIC-resistant hash algorithm called **Random Hash** is pr
 
 #### Memory transform methods
 
-These methods are iteratively and randomly applied to a hash output in order to rapidly expand it to M bytes
+These methods are iteratively and randomly applied to a hash output in order to rapidly expand it for compression in the next round
 ```
      - Method 1: No-Op         (e.g. input = 123456   output = 123456)
      - Method 2: Swap-LR       (e.g. input = 123456   output = 456123)   
@@ -185,19 +185,21 @@ RandomHash is memory-light in order to support low-end hardware.  A CPU will onl
 
 #### GPU Resistance 
 
-GPU performance is generally driven by parallel execution of identical non-branching code-blocks across private regions of memory. Due to the inter-dependence on of hashing rounds, the slower global memory will need to be used. Also, due to the highly serial nature of RandomHash's algorithm, GPU implementations will be inherently inefficient. In addition, the use of Mersenne Twister to generate random numbers and the use of recursion will result in executive decision making further degrading GPU performance.  Most importantly, since nonce's are inter-dependent on other random nonces, attempts to buffer many nonces for batch hashing will result in high memory-wastage and 200% more work than a CPU. This occurs because each buffered nonce will require calculation of many other non-buffered nonces, rapidly consuming the available memory. A CPU implementation does not suffer this since the optimal nonce-set to mine are always the previous random nonces it's already partially calculated. Another important feature is the pattern of memory expansion factors chosen for each round. These were deliberately chosen to hinder GPUs by amplifying the memory needed for their wasted calculations.
+GPU performance is generally driven by parallel execution of identical non-branching code-blocks across private regions of memory. Due to the inter-dependence between hashing rounds, the slower global memory will need to be used. Also, due to the highly serial nature of RandomHash's algorithm, GPU implementations will be inherently inefficient. In addition, the use of Mersenne Twister to generate random numbers and the use of recursion will result in executive decision making, further degrading GPU performance. Most importantly, since nonces are inter-dependent on other random nonces, attempts to buffer many nonces for batch hashing will result in high memory-wastage and 200% more work than a CPU. This occurs because each buffered nonce will require calculation of many other unbuffered dependent nonces, rapidly consuming the available memory. A CPU implementation does not suffer this since the optimal nonce-set to mine is enumerated on-the-fly as each nonce completes. Another important feature is the pattern of memory expansion factors chosen for each round. These were deliberately chosen to hinder GPUs by amplifying the memory needed for their wasted calculations.
 
-As a result, it's expected that GPU performance will at best never exceed CPU performance or at worst perform linearly better (not exponentially as is the case now).
+As a result, it's expected that GPU performance will at best never exceed CPU performance or at worst perform only linearly better (not exponentially as is the case now with SHA2-256D).
 
 #### ASIC Resistance 
 
-ASIC-resistance is fundamentally achieved on an economic basis. Since 16 hash algorithms are employed the R&D costs of a RandomHash ASIC are equivalent to that of 16 ordinary mining ASICS. Furthermore, due to the non-deterministic branching and executive decision making arising from Mersenne Twister, expansion and contraction, an ASIC implementation will inevitably result in dense highly inter-connected cells, impacting performance. It is the opinion of the author that such an ASIC design would, in some ways, require "re-creating a CPU" inside the ASIC, defeating its purpose. However, fundamentally it is expected that since the costs to develop will far exceed the ROI, no rational economic actor will undertake ASIC development of RandomHash.
+ASIC-resistance is fundamentally achieved on an economic basis. Since 16 hash algorithms are employed, the R&D costs of a RandomHash ASIC are equivalent to that of 16 ordinary mining ASICs. Furthermore, due to the non-deterministic branching and executive decision making arising from Mersenne Twister, expansion and contraction, an ASIC implementation will inevitably result in dense and highly inter-connected cells, impacting performance. It is the opinion of the author that such an ASIC design would, in some ways, require "re-creating a CPU" within the ASIC, defeating its purpose. However, fundamentally it is expected that since the costs to develop will far exceed the ROI, no rational economic actor will undertake ASIC development of RandomHash.
 
 #### RandomHash Variations
 
 Variations of RandomHash can be made by varying N (the number of rounds required) and M (the memory expansion). For non-blockchain applications, the dependence on other nonces can be removed, providing a cryptographically secure, general-purpose, albeit slow, hasher.
 
-It is also possible to change the depdendence graph between nonces. For example, requiring the initial rounds to depend on more than one nonce and the higher rounds on no nonces at all, could allow further CPU vs GPU optimization. Similarly, for memory expansion factors.
+It is also possible to change the dependence graph between nonces for stronger CPU bias. For example, requiring the lower rounds to depend on more than one nonce and the upper rounds on no nonces at all, may allow further CPU vs GPU optimization. The same applies to the memory expansion factors.
+
+Extra, albeit unnecessary, strengthening can be added in the initial rounds of hashing by using the hash of the block header for seeding, instead of the block header itself. In the analysis of the author, this is unnecessary and has subsequently been removed.
 
 ### Formal Proofs
 
@@ -221,6 +223,9 @@ Since a hash at round x is the hash of the previous round **and** of round ```x-
     F(x) = 1 + F(x-1) + F(x-1)  
 ```
 
+**NOTE** 
+The dependence on a previous random round ```1..x-1``` is omitted above since it is computationally inconsequential: that round is always known for all ```x```. It is only a salt needed to prevent certain GPU optimizations, and does not change the number of hashes in ```F```. 
+
 Simplifying
 ```
     F(x) = 1 + 2 F(x-1) 
@@ -254,7 +259,7 @@ It follows that the total memory for the round is calculated as follows
     TotalMemoryAtRound(x) = (N-x) * TotalHashesAtRound(x)
                           = 2^(N-x) * (N-x)
 ```
-This can be seen by observing the memory-expansion factors in the diagram. Notice it starts at ```N-1``` for the first round and decrease every subsequent round. 
+This can be seen by observing the memory-expansion factors in the diagram. Notice it starts at ```N-1``` for the first round and decreases every subsequent round (for ```N=5```, the factors are ```4, 3, 2, 1, 0``` for rounds ```1..5```). 
 
 The total memory, ```G(N)``` is simply the sum of all the memory at each round
 ```
@@ -268,14 +273,15 @@ Thus,
     G(N) = 2^N (N-2) + 2
 ```
 
-**NOTE**: For PascalCoin ```N=5``` which means ```98``` units of memory are required for a single nonce. Choosing memory unit ```M=10kb``` means that approximately ```1MB``` will be required. Quite low for a CPU, but bloats quickly for a GPU as mentioned below.
+**NOTE**: For PascalCoin, ```N=5``` which results in ```98``` units of memory for every single nonce. Choosing memory unit ```M=10kb``` results in approximately ```1MB``` per nonce. Quite low for a CPU, but bloats quickly for a GPU as mentioned below.
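+
+As a quick, non-normative check of the arithmetic above (plain Python, using only the constants given in this section):
+```
+# Check the closed form G(N) = 2^N (N-2) + 2 against the per-round sum.
+def total_memory_units(N):
+    # TotalMemoryAtRound(x) = 2^(N-x) * (N-x), summed over rounds x = 1..N
+    return sum(2 ** (N - x) * (N - x) for x in range(1, N + 1))
+
+N = 5
+units = total_memory_units(N)
+assert units == 2 ** N * (N - 2) + 2  # closed form holds: 98
+print(units)                          # 98 units of memory per nonce
+print(units * 10 / 1024)              # with M = 10kb: ~0.96, i.e. approximately 1MB
+```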
 
 #### CPU Bias
 
 To show that a CPU does 50% of the hashing work of a GPU, consider that
- - N rounds are required to trial a single nonce 
- - After the completion of any nonce, another nonce is known and computed to round ```N-1```
- - Almost all nonce computations resume the previous ```N-1``` nonce, requiring onlY ```F(N-1)``` work. This is true for serial mining (CPU), not for parallel mining (GPU)
+ - N rounds are required to complete a single nonce 
+ - After the completion of any nonce, another nonce is known and pre-computed to round ```N-1```
+ - For serial mining (CPU), almost all nonce computations are simply the resumption of a previous nonce pre-computed to round ```N-1```, and thus require only ```F(N-1)``` work.  
+ - For parallel mining (GPU), all the work ```F(N)``` must be performed for every nonce.
 
 Thus the work a CPU does is
  
@@ -287,7 +293,7 @@ However GPU does the entire work for every nonce
     GPU Work = F(N)
              = 2^N - 1
 
-The efficiency is thus
+The efficiency is 
 
     Efficiency = (CPU Work) / (GPU Work)
                = (2^(N-1)-1) / (2^N - 1)
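+
+As a quick, non-normative sanity check (plain Python, using the recurrence ```F(x) = 1 + 2 F(x-1)``` established earlier), for ```N=5``` a CPU performs ```15``` hashes per nonce versus ```31``` for a GPU, i.e. roughly 50%:
+```
+# Check of the CPU-vs-GPU work ratio for N = 5.
+def F(x):
+    # F(x) = 1 + 2*F(x-1) with F(1) = 1, which unrolls to 2^x - 1
+    return 1 if x == 1 else 1 + 2 * F(x - 1)
+
+N = 5
+cpu_work, gpu_work = F(N - 1), F(N)  # CPU resumes a pre-computed nonce; GPU cannot
+print(cpu_work, gpu_work)            # 15 31
+print(cpu_work / gpu_work)           # 0.4838..., i.e. ~50%
+```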
@@ -312,17 +318,17 @@ Since this is a significant change, the PascalCoin community will be asked to vo
 
 #### Implementation
 
-If after a period of time and consensus is reached, RandomHash will be merged into the PascalCoin code-base by the PascalCoin developers. After thorough testing on TestNet, a suitable activation date will be chosen to allow ecosystem to adopt this mandatory upgrade. A release will be made and notifications provided of activation within the time-frame.
+If, after a period of time, consensus is reached, RandomHash will be merged into the PascalCoin code-base by the PascalCoin developers. After thorough testing on TestNet, a suitable activation date will be chosen to allow the ecosystem to adopt this mandatory upgrade. A release will be made and notifications provided of activation within the time-frame.
 
 #### Difficulty Reset
 
-On activation, the block difficulty will be reset to an appropriately low value. During this period, he block time will be highly unstable but quickly stabilize over approximately 200 blocks. Exchanges are recommended to pause deposits and withdrawals 1 hour before activation and 10 hours after.
+On activation, the block difficulty will be reset to an appropriately low number. During this period, the block times will be highly unstable but will stabilize over approximately 200 blocks. Exchanges are recommended to pause deposits and withdrawals 1 hour before activation and 10 hours after.
 
 ## Rationale
 
-Aside from a hash algorithm change, the only other known option to resolve 99% mining centralization is to encourage other large Ethereum mining pools to also offer PascalCoin dual-mining. Even if this were achieved, it would still price-out ordinary pools and solo-miners, which is undesirable. Efforts to encourage other dual-miners were undertaken but have failed. As a result, this option is no longer considered viable. Changing the hash algorithm is now the only known option to resolve centralization.
+Aside from a hash algorithm change, the only other known option to resolve 99% mining centralization is to encourage other large Ethereum mining pools to duplicate Pool-X's features, thus incentivizing decentralized ETH-PASC dual-mining. Even if this were achieved, it would still price-out ordinary PASC-pools and solo-miners, which is undesirable. It would also fundamentally link the two ecosystems together for no good reason. Efforts to encourage other dual-miners were undertaken but have failed. As a result, this option is no longer considered viable. Changing the hash algorithm is now the only known option to resolve this centralization.
 
-Within the scope of changing hash algorithm, other possible hash algorithms like Equihash were considered. However, these were ruled out due to their excessive memory consumption contradicting. PascalCoin's requirements to run on low-end hardware without voluminous amounts of fast memory available to validate block hashes.
+Within the scope of changing the hash algorithm, other hash algorithms such as Equihash were considered. However, these were ruled out due to their excessive memory consumption, contradicting PascalCoin's vision of a globally decentralized network that runs well on low-end hardware available anywhere in the world. Requiring voluminous amounts of fast memory to validate blocks is not consistent with this vision.
 
 ## Backwards Compatibility