|
@@ -128,8 +128,93 @@ if LNeighbourWasLastRound then begin
|
|
|
end;
|
|
|
```
|
|
|
|
|
|
-### Full Code
|
|
|
+### Analysis
|
|
|
|
|
|
+#### Cryptographic Strength
|
|
|
+
|
|
|
+Since hashing starts and ends with ```SHA2-256```, the cryptographic strength of RandomHash2 is **at least** that of ```SHA2-256D``` as used in Bitcoin. Even if the data transformations in between were assumed to be cryptographically insecure, this minimum security guarantee would not change.
|
|
|
+
|
|
|
+However, the transformations in between are not weak; they involve the use of 18 other cryptographically strong hash algorithms. As a result, RandomHash2 is orders of magnitude stronger than standard cryptographic hash algorithms, since these algorithms are combined in random, non-deterministic ways. This strength is paid for by a significant performance overhead, which is intentional.
|
|
|
+
|
|
|
+Within the 18 hash algorithms used, some, such as MD5, are considered "cryptographically weak". The use of these weak algorithms is inconsequential to overall security, since their purpose is not to add security but to add computational complexity in order to prevent ASIC manufacturing.
|
|
|
+
|
|
|
+To get a grasp of the minimum security provided by RandomHash2, consider its high-level algorithmic structure, which is essentially a set of nested hashes:
|
|
|
+```
|
|
|
+RandomHash2(Data) = SHA2_256( H1( H2( ... H_N( SHA2_256( Data ) ) ... ) ) )
|
|
|
+where
|
|
|
+ H_i = a randomly selected hash function based on the output of H_(i-1)
|
|
|
+    N = a random number determined by the nonce and neighbouring nonces (indeterminable but bounded)
|
|
|
+```
|
|
|
+
|
|
|
+It follows that the weakest possible RandomHash2 evaluation of some ```WeakestDigest``` would comprise 2 levels of evaluation (```MIN_N = 2```), with each of those 2 levels having 0 neighbouring nonce dependencies (```MIN_J```). Also, assume the hash algorithm selected at every level is ```MD5```, since it is considered the weakest of the 18 possible algorithms. In this case,
|
|
|
+```
|
|
|
+RandomHash2(WeakestDigest) = SHA2_256( MD5 ( MD5 ( SHA2_256( WeakestDigest ) ) ) )
|
|
|
+```
|
|
|
+
|
|
|
+Clearly, the above is still far stronger than the typical ```SHA2-256D``` used in almost all cryptocurrencies:
|
|
|
+```
|
|
|
+SHA2-256D(WeakestDigest) = SHA2-256 ( SHA2-256 ( WeakestDigest ) )
|
|
|
+```
|
|
|
+
|
|
|
+
|
|
|
+In addition to the above, RandomHash2 internally transforms data using expansions and compressions which are themselves cryptographically secure. As a result, it is clear that RandomHash2's cryptographic strength is at least that of Bitcoin's ```SHA2-256D```, and likely orders of magnitude stronger.
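+
+To illustrate the nested structure described above, the following minimal sketch mirrors the ```SHA2_256( H1( ... H_N( SHA2_256(Data) ) ... ) )``` form. It is illustrative only: ```SHA2_256```, ```SubHash``` and ```SelectHash``` are placeholder names standing in for real digest routines and for the actual selection logic of the reference implementation below.
+
+```pascal
+program NestedHashSketch;
+{$mode objfpc}{$H+}
+
+uses
+  SysUtils;
+
+type
+  THashFunc = function(const AInput: TBytes): TBytes;
+
+{ Placeholder standing in for a real SHA2-256 routine; an actual
+  implementation would call a proper crypto library. }
+function SHA2_256(const AInput: TBytes): TBytes;
+begin
+  Result := Copy(AInput, 0, Length(AInput));
+end;
+
+{ Placeholder standing in for one of the other cryptographically strong
+  sub-hash algorithms. }
+function SubHash(const AInput: TBytes): TBytes;
+begin
+  Result := Copy(AInput, 0, Length(AInput));
+end;
+
+{ Selects H_i based on the output of H_(i-1); here every selection maps
+  to the same placeholder. }
+function SelectHash(const APrevOutput: TBytes): THashFunc;
+begin
+  Result := @SubHash;
+end;
+
+{ Mirrors the structure SHA2_256( H1( ... H_N( SHA2_256(Data) ) ... ) ). }
+function NestedHash(const AData: TBytes; N: Integer): TBytes;
+var
+  LOutput: TBytes;
+  LHash: THashFunc;
+  i: Integer;
+begin
+  LOutput := SHA2_256(AData);       // innermost SHA2-256
+  for i := 1 to N do
+  begin
+    LHash := SelectHash(LOutput);   // H_i chosen from the previous output
+    LOutput := LHash(LOutput);
+  end;
+  Result := SHA2_256(LOutput);      // outermost SHA2-256
+end;
+
+var
+  LData: TBytes;
+  j: Integer;
+begin
+  SetLength(LData, 4);
+  for j := 0 to High(LData) do
+    LData[j] := j + 1;
+  WriteLn('Digest length: ', Length(NestedHash(LData, 3)));
+end.
+```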
|
|
|
+
|
|
|
+#### Nonce Scanning Attack
|
|
|
+
|
|
|
+In RandomHash2, the number of levels ```N``` required to mine a nonce is now random and varies per nonce in a non-deterministic manner. The randomization of ```N``` introduces a new level of randomness and executive decision-making into the core algorithm in order to enhance GPU and ASIC resistance. However, it also introduces a new attack vector, called a "Nonce Scanning Attack". In this attack, a miner implements a simplified miner that only tries to mine "simple nonces" requiring few levels to evaluate, whilst rejecting "complex nonces" that require more levels to evaluate. By reducing the number of computations required to evaluate a nonce and simplifying the algorithm implementation, a higher hashrate could be achieved and an ASIC implementation made viable.
|
|
|
+
|
|
|
+To thwart this attack, RandomHash2 restricts ```N``` to values between ```MIN_N = 2``` and ```MAX_N = 4```, inclusive.
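+
+As an illustration, a bounded level count can be derived from nonce-dependent data along the following lines. This is a sketch only; ```LevelCount``` and its seed-byte input are hypothetical, and the exact derivation used by the reference implementation below may differ.
+
+```pascal
+program LevelBoundSketch;
+{$mode objfpc}
+
+const
+  MIN_N = 2;   // minimum number of levels (per this PIP)
+  MAX_N = 4;   // maximum number of levels (per this PIP)
+
+{ Sketch: map a nonce-dependent seed byte onto a level count in
+  [MIN_N, MAX_N], so no nonce can be evaluated with fewer than MIN_N levels. }
+function LevelCount(ASeedByte: Byte): Integer;
+begin
+  Result := MIN_N + (ASeedByte mod (MAX_N - MIN_N + 1));
+end;
+
+begin
+  // Always prints values in 2..4, regardless of the seed byte.
+  WriteLn(LevelCount(0), ' ', LevelCount(7), ' ', LevelCount(255));
+end.
+```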
|
|
|
+
|
|
|
+By forcing a nonce evaluation to have at least 2 levels of computation, the miner necessarily requires the full algorithm implementation, which prevents simplified ASIC miners. Also, since each nonce requires at least 2 levels of evaluation, and each of those levels depends on at least 2 more nonces evaluated to at least 1 level each, the computations saved by nonce-scanning are offset by the larger set of pre-computed cached nonces a miner accumulates by mining honestly without nonce-scanning (due to the higher number of dependent neighbouring nonces).
|
|
|
+
|
|
|
+To test whether this holds, an empirical nonce-scanning attack was conducted. The table below shows the results of nonce-scanning from ```N = MIN_N``` to ```N = MAX_N```.
|
|
|
+
|
|
|
+| N       | Mean Hashrate (H/s)       | Mean Mem Per Hash (bytes) | Min Mem Per Hash (bytes) | Max Mem Per Hash (bytes) | Sample Std Dev. (bytes) |
|
|
|
+| :------ | :------------------------ | :-------------------- | :------------------- | :------------------- | :------------------ |
|
|
|
+| 2 (min) | 240 | 4,175 | 1,312 | 7,184 | 1,854 |
|
|
|
+| 3 | 651 | 5,984 | 1,312 | 49,436 | 6,293 |
|
|
|
+| 4 (max) | 1,051 | 16,693 | 1,312 | 251,104 | 29,374 |
|
|
|
+
|
|
|
+_**Machine**: AMD FX-8150 8-core @ 3.60 GHz, utilizing 1 thread_
|
|
|
+
|
|
|
+As the above table shows, nonce-scanning (via CPU) yields a hashrate penalty, not a benefit. In the opinion of the author, it is unlikely that a future implementation optimization would change this result, since it would improve all scanning levels proportionally. However, an open line of inquiry is whether the reduced memory-hardness may yield a benefit for GPU-based nonce-scanning.
|
|
|
+
|
|
|
+#### CPU Bias
|
|
|
+
|
|
|
+The RandomHash2 algorithm, like its predecessor, is inherently biased towards CPU mining due to its highly serial nature and its use of non-deterministic recursion and executive decision-making. In addition, RandomHash2 can now evaluate many nonces while evaluating one, allowing CPU miners to enumerate the optimal nonce-set on the fly. Testing shows a 300%-400% advantage for serial mining over batch mining, which indicates a proportional CPU bias.
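+
+The following minimal sketch illustrates why serial mining benefits from this property. It is hypothetical: ```TCachedNonce```, ```GCache``` and ```EvaluateNonce``` are illustrative names, not part of the reference implementation; a real miner would enqueue the neighbour-nonce by-products produced while evaluating each nonce and check them before spending work on a fresh nonce.
+
+```pascal
+program SerialMiningSketch;
+{$mode objfpc}{$H+}
+
+uses
+  SysUtils, Generics.Collections;
+
+type
+  { Hypothetical record for a nonce whose hash was obtained as a by-product
+    of evaluating another nonce (via the neighbouring-nonce dependencies). }
+  TCachedNonce = record
+    Nonce: Cardinal;
+    Hash: TBytes;
+  end;
+
+  TNonceQueue = specialize TQueue<TCachedNonce>;
+
+var
+  GCache: TNonceQueue;
+
+{ Placeholder for the full RandomHash2 evaluation; a real miner would call
+  the reference implementation and enqueue its neighbour-nonce by-products
+  into GCache. }
+function EvaluateNonce(ANonce: Cardinal): TBytes;
+begin
+  SetLength(Result, 32);
+end;
+
+{ Sketch of one step of a serial mining loop: cached neighbour results are
+  consumed first (effectively "for free") before a fresh nonce is evaluated. }
+procedure MineOneStep(var ANextNonce: Cardinal);
+var
+  LCached: TCachedNonce;
+  LHash: TBytes;
+begin
+  if GCache.Count > 0 then
+  begin
+    LCached := GCache.Dequeue;      // already evaluated as a by-product
+    LHash := LCached.Hash;
+  end
+  else
+  begin
+    LHash := EvaluateNonce(ANextNonce);
+    Inc(ANextNonce);
+  end;
+  WriteLn('Candidate digest length: ', Length(LHash));
+  // ... compare LHash against the target difficulty here ...
+end;
+
+var
+  LNonce: Cardinal;
+begin
+  GCache := TNonceQueue.Create;
+  try
+    LNonce := 0;
+    MineOneStep(LNonce);
+  finally
+    GCache.Free;
+  end;
+end.
+```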
|
|
|
+
|
|
|
+#### Memory Complexity
|
|
|
+
|
|
|
+RandomHash2 is memory-light in order to support low-end hardware. A CPU needs only 300KB of memory to verify a hash. Unlike RandomHash, mining does not consume additional memory, since the cached nonces are fully evaluated.
|
|
|
+
|
|
|
+#### GPU Resistance
|
|
|
+
|
|
|
+GPU performance is generally driven by the parallel execution of identical non-branching code-blocks across private regions of memory. RandomHash2 is a highly serial and recursive algorithm requiring a lot of executive decision-making, with decisions driven by a Mersenne Twister random number generator. These characteristics make GPU implementations tedious and inefficient. Since the predecessor algorithm was shown to be GPU resistant, and this algorithm only exacerbates those characteristics (except for memory hardness), it is expected that GPU resistance is maintained, although this has not been confirmed as of the writing of this PIP.
|
|
|
+
|
|
|
+#### ASIC Resistance
|
|
|
+
|
|
|
+ASIC-resistance is fundamentally achieved on an economic basis. Due to the use of 18 sub-hash algorithms and the use of recursion in the core algorithm, it is expected that the R&D costs of a RandomHash ASIC will mirror those of building 18 independent ASICs rather than 1. This moves the economic-viability goal-posts away by an order of magnitude. For as long as the costs of general ASIC development remain in relative parity with today's costs of consumer-grade CPUs, a RandomHash ASIC will always remain "not worth it" for a "rational economic actor".
|
|
|
+
|
|
|
+Furthermore, RandomHash offers a wide ASIC-breaking attack surface. This is due to its branch-heavy, serial, recursive nature and its heavy dependence on sub-algorithms. By making minor tweaks to the high-level algorithm, or changing a sub-algorithm, an ASIC design can be mostly invalidated and sent back to the drawing board with only minimal updates to the CPU miner.
|
|
|
+
|
|
|
+This holds because ASIC designs tend to mirror the assembly structure of an algorithm rather than the high-level algorithm itself. Thus, by making relatively minor tweaks at the high level that necessarily result in significant low-level assembly restructuring, an ASIC design is made obsolete. So long as this "tweak-to-break-ASIC" policy is maintained by the PascalCoin Developers and Community, ASIC resistance is guaranteed.
|
|
|
+
|
|
|
+### Hard-Fork Activation
|
|
|
+
|
|
|
+This PIP requires a hard-fork activation involving the various aspects discussed below.
|
|
|
+
|
|
|
+## Rationale
|
|
|
+
|
|
|
+Aside from a hash algorithm change, the only other known option to resolve the slow validation times is to ship the client with precomputed lookup tables to speed up verification. This has already been done for the RandomHash1 periods, but it is not a viable long-term option.
|
|
|
+
|
|
|
+## Backwards Compatibility
|
|
|
+
|
|
|
+This PIP is not backwards compatible and requires a hard-fork activation. The previous hashing algorithm must be retained in order to validate blocks mined prior to the hard-fork.
|
|
|
+
|
|
|
+## Reference Implementation
|
|
|
+
|
|
|
+A reference implementation of RandomHash2 can be found [here][2]. A full implementation is provided below.
|
|
|
+
|
|
|
```pascal
|
|
|
|
|
|
TRandomHash2 = class sealed(TObject)
|
|
@@ -575,92 +660,7 @@ end.
|
|
|
|
|
|
```
|
|
|
|
|
|
-### Analysis
|
|
|
-
|
|
|
-#### Cryptographic Strength
|
|
|
-
|
|
|
-Since hashing starts and ends with a ```SHA2-256``` the cryptographic strength of RandomHash2 is **at least** that of ```SHA2-256D``` as used in Bitcoin. Even if one were to assume the data transformations in between the start/end were cryptographically insecure, it wouldn't change this minimum security guarantee.
|
|
|
-
|
|
|
-However, the transformations in between are not weak and involve the use of 18 other cryptographically strong hash algorithms. As a result, RandomHash2 is orders of magnitude more stronger than standard cryptographic hash algorithms since they are combined in random, non-determinstic ways. However this achievement is paid for by significant performance overhead (which is intentional).
|
|
|
-
|
|
|
-However, within the 18 hash algorithms used, some are considered "cryptographically weak" such as MD5. The use of some weak algorithms is inconsequential to overall security since their purpose is not to add to security but to computational complexity to prevent ASIC manufacturing.
|
|
|
-
|
|
|
-In order to get a grasp of the minmum security provided by RandomHash2, consider it's high-level algorithmic structure as essentially a set of nested hashes as follows:
|
|
|
-```
|
|
|
-RandomHash2(Data) = SHA2_256( H1( H2( .... H_N( SHA2_256(DATA)) ...) )
|
|
|
-where
|
|
|
- H_i = a randomly selected hash function based on the output of H_(i-1)
|
|
|
- N = a random number determined by the nonce and neighbouring nonces (indeterminable but bound)
|
|
|
-```
|
|
|
-
|
|
|
-It follows that the weakest possible RandomHash2 for some ```WeakestDigest``` would comprise of 2 levels of evaluation (``MIN_N=2```) with each of those 2 levels having 0 nieghbouring nonce dependencies (```MIN_J```). Also, we assume the hash algorithms are ```MD5``` for all levels, as it is considered weakest of the 18 possible algorithms. In this case,
|
|
|
-```
|
|
|
-RandomHash2(WeakestDigest) = SHA2_256( MD5 ( MD5 ( SHA2_256( WeakestDigest ) ) ) )
|
|
|
-```
|
|
|
-
|
|
|
-Clearly the above is still far stronger than the typical SHA2-256D used in almost all cryptocurrencies
|
|
|
-```pascal
|
|
|
-SHA2-256D(WeakestDigest) = SHA2-256 ( SHA2-256 ( WeakestDigest ) )
|
|
|
-```
|
|
|
-
|
|
|
-
|
|
|
-In addition to the above, RandomHash2 internally transforms data using expansions and compressions which are themselves cryptographically secure. As a result, it's clear that RandomHash2's cryptographic strength is at least as strong as Bitcoin's ```SHA2-256D``` with likelihood of being orders of magnitude stronger.
|
|
|
-
|
|
|
-### Nonce Scanning Attack
|
|
|
-
|
|
|
-In RandomHash2, the number of levels ```N``` required to mine a nonce is now random and varies per nonce in a non-deterministic manner. The randomization of ```N``` introduces new level of randomness and executive decision-making into the core algorithm in order to enhance GPU and ASIC resistivity. However, it introduces a new attack vector called "Nonce Scanning Attack". In this attack, a miner can implement a simplified miner that only tries to mine "simple nonces" that require few levels to evaluate whilst rejecting "complex nonces" that require more levels to evaluate. By reducing the number of computations required to evaluate a nonce and simplifying the algorithm implementation, a higher hashrate could be achieved and an ASIC implementation made viable.
|
|
|
-
|
|
|
-To thwart this attack, RandomHash2 restricts the range of values ```N``` can take to be between ```MIN_N = 2``` and ```MAX_N = 4```, inclusive.
|
|
|
-
|
|
|
-By forcing a nonce evaluation to have at least 2 levels of computation, the miner necessarily requires the full algorithm implementation which prevents simplified ASIC miners. Also, since each nonce requires at least 2 levels of evaluation, and each of those levels depends on at least 2 more nonces evaluated to at least 1 level each, the number of computations saved by nonce-scanning is balanced by the higher pre-computed cached nonces a miner has by honestly mining without nonce-scanning (due to higher number of dependent neighboring nonces).
|
|
|
-
|
|
|
-In order to determine if this is true, an empirical nonce-scanning attack was conducted. The below table shows empirical results from nonce-scanning ```N=MIN_N``` to ```N=MAX_N```.
|
|
|
-
|
|
|
-| N | Mean Hashrate (H/s) | Mean Mem Per Hash (b) | Min Mem Per Hash (b) | Max Mem Per Hash (b) | Sample Std Dev. (b) |
|
|
|
-| :------ | :------------------------ | :-------------------- | :------------------- | :------------------- | :------------------ |
|
|
|
-| 2 (min) | 240 | 4,175 | 1,312 | 7,184 | 1,854 |
|
|
|
-| 3 | 651 | 5,984 | 1,312 | 49,436 | 6,293 |
|
|
|
-| 4 (max) | 1,051 | 16,693 | 1,312 | 251,104 | 29,374 |
|
|
|
-
|
|
|
-_**Machine**: AMD FX-8150 8 Core 3.60 Ghz utilizing 1 thread_
|
|
|
-
|
|
|
-As the above table shows, nonce-scanning (via CPU) yields a hashrate penalty, not a benefit. In the opinion of the author, it is unlikely a future implementation optimization would necessarily change this result since it would improve all scanning levels proportionally. However, a line of inquiry is to investigate whether or not the reduced memory-hardness may yield a benefit for GPU-based nonce-scanning.
|
|
|
-
|
|
|
-#### CPU Bias
|
|
|
-
|
|
|
-The RandomHash2 algorithm, like it's predecessor, is inherently biased towards CPU mining due to it's highly serial nature, use of non-deterministic recursion and executive-decision making. In addition, RandomHash2 can now evaluate many nonces when evaluating one, allowing CPU miners to enumerate the optimal nonce-set on the fly. Testing shows a 300% - 400% advantage for serial mining over batch mining, which indicates a proportional CPU bias.
|
|
|
-
|
|
|
-#### Memory Complexity
|
|
|
-
|
|
|
-RandomHash is memory-light in order to support low-end hardware. A CPU will only need 300KB of memory to verify a hash. Unlike RandomHash, mining does not consume additional memory since the cached nonces are fully evaluated.
|
|
|
-
|
|
|
-#### GPU Resistance
|
|
|
-
|
|
|
-GPU performance is generally driven by parallel execution of identical non-branching code-blocks across private regions of memory. RandomHash2 is a highly serial and recursive algorithm requiring a lot of executive-decision making, and decisions driven by Mersenne Twister random number generator. These characteristics make GPU implementations quite tedious and inefficient. Since the predecessor algorithm was shown to be GPU resistant, and this algorithm only exarcerbates these characteristics (except for memory hardness), it is expected that GPU resistance is maintained, although not confirmed as of the writing of this PIP.
|
|
|
-
|
|
|
-#### ASIC Resistance
|
|
|
-
|
|
|
-ASIC-resistance is fundamentally achieved on an economic basis. Due to the use of 18 sub-hash algorithms and the use of recursion in the core algorithm, it is expected that the R&D costs of a RandomHash ASIC will mirror that of building 18 independent ASICs rather than 1. This moves the economic viability goal-posts away by an order of magnitude. For as long as the costs of general ASIC development remain in relative parity to the costs of consumer grade CPUs as of today, a RandomHash ASIC will always remain "not worth it" for a "rational economic actor".
|
|
|
-
|
|
|
-Furthermore, RandomHash offers a wide ASIC-breaking attack surface. This is due to it's branch-heavy, serial, recursive nature and heavy dependence on sub-algorithms. By making minor tweaks to the high-level algorithm, or changing a sub-algorithm, an ASIC design can be mostly invalidated and sent back the drawing board with minimal updates to the CPU miner.
|
|
|
-
|
|
|
-This is true since ASIC designs tend to mirror the assembly structure of an algorithm rather than the high-level algorithm itself. Thus by making relatively minor tweaks at the high-level that necessarily result in significant low-level assembly restructuring, an ASIC design is made obsolete. So long as this "tweak-to-break-ASIC" policy is maintained by the PascalCoin Developers and Community, ASIC resistance is guaranteed.
|
|
|
-
|
|
|
-### Hard-Fork Activation
|
|
|
-
|
|
|
-The PIP requires a hard-fork activation involving various aspects discussed below.
|
|
|
-
|
|
|
-## Rationale
|
|
|
-
|
|
|
-Aside from a hash algorithm change, the only other known option to resolve slow validation time is to ship the client with precomputed lookup tables to speed up verification. This has already been done for RandomHash1 periods, but is not a viable option long-term.
|
|
|
-
|
|
|
-## Backwards Compatibility
|
|
|
-
|
|
|
-This PIP is not backwards compatible and requires a hard-fork activation. Previous hashing algorithm must be retained in order to validate blocks mined prior to the hard-fork.
|
|
|
-
|
|
|
-## Reference Implementation
|
|
|
|
|
|
-A reference implementation of RandomHash can be found [here][2].
|
|
|
|
|
|
## Links
|
|
|
|