
Just some thoughts for the JITer:

General issues:
===============

We are designing a JIT compiler, so we have to consider two things:

- the quality of the generated code
- the time needed to generate that code

The current approach is to keep the JITer as simple as possible, and thus as
fast as possible. The quality of the generated code will suffer from that.

We do not map local variables to registers at the moment, and this makes the
whole JIT much easier: for example, we do not need to identify basic block
boundaries or the lifetimes of local variables, or select the variables that
are worth putting into a register.
Register allocation is thus done only inside the trees of the forest, and each
tree can use the full set of registers. We simply split a tree if we run out
of registers. For example, the following tree:

                  add(R0)
                 /       \
                /         \
            a(R0)       add(R1)
                       /       \
                      /         \
                  b(R1)       add(R2)
                             /       \
                            /         \
                        c(R2)       b(R3)

can be transformed to:

      stloc(t1)               add(R0)
         |                   /       \
         |                  /         \
      add(R0)           a(R0)       add(R1)
     /       \                     /       \
    /         \                   /         \
  c(R0)     b(R1)             b(R1)      t1(R2)

Please notice that the split trees use fewer registers than the original
tree.
Triggering JIT compilation:
===========================

The current approach is to call functions indirectly. The address to call is
stored in the MonoMethod structure. For each method we create a trampoline
function. When called, this function does the JIT compilation and replaces the
trampoline with the compiled method address.
Register Allocation:
====================

With lcc you can assign a fixed register to a tree before register
allocation. For example, this is needed by call, which on x86 always returns
the value in EAX. The current implementation works without such a system, due
to special forest generation.
X86 Register Allocation:
========================

We can use 8 bit or 16 bit registers on the x86. If we used that feature we
would have more registers to allocate, which might prevent some register
spills. We currently ignore that ability and always allocate 32 bit
registers, because I think we would gain very little from that optimisation
and it would complicate the code.
Different Register Sets:
========================

Most processors have more than one register set, at least one for floating
point values and one for integers. Should we support architectures with more
than two sets? Does anyone know of such an architecture?
64bit Integer Values:
=====================

I can imagine two different implementations. One possibility would be to
treat long (64 bit) values simply like any other value type. This implies
that we call class methods for ALU operations like add or sub. Sure, this
method would be a bit inefficient.

The more performant solution is to allocate two 32 bit registers for each
64 bit value. We add a new non terminal to the monoburg grammar called
long_reg. The register allocation routine takes care of this non terminal and
allocates two registers for it.
Forest generation:
==================

It seems that trees generated from the CIL language have some special
properties, i.e. the trees already represent basic blocks, so there can be no
branches into the middle of such a tree. All results of those trees are
stored to memory.

One idea was to drive the code generation directly from the CIL code, without
generating an intermediate forest of trees. I think this is not possible,
because you always have to gather some attributes and attach them to the
instructions (for example the register allocation info). So I think
generating a tree is the right thing, and that also works perfectly with
monoburg. IMO we would not get any benefit from trying to feed monoburg
directly with CIL instructions.
DAG handling:
=============

Monoburg can't handle DAGs; instead we need real trees as input for
the code generator. So we have two problems:

1.) DUP instruction: This one is obvious - we need to store the value
into a temporary variable to solve the problem.

2.) function calls: Chapter 12.8, page 343 of "A Retargetable C Compiler"
explains that: "because listing a call node will give it a hidden reference
from the code list". I don't understand that (can someone explain it?), but
there is another reason to save return values to temporaries. Consider the
following code:

x = f(y) + g(z); // all functions return integers

We could generate such a tree for this expression: STLOC(ADD(CALL,CALL))

The problem is that both calls return their value in the same register,
so it is non trivial to generate code for that tree. We must copy one
register into another one, which makes register allocation more complex.

The easier solution is to store the results of function calls to
temporaries. This leads to the following forest:

STLOC(CALL)
STLOC(CALL)
STLOC(ADD (LDLOC, LDLOC))

This is what lcc is doing, if I understood 12.8, pages 342-343, correctly.
Value Types:
============

The only CLI instructions which can handle value types are loads and stores,
either to local variables, to the stack or to array elements. Value types
with a size smaller than sizeof(int) are handled like any other basic type.
For other value types we load the base address and emit block copies to store
them.
Possible Optimisations:
=======================

Miguel said ORP does some optimisations on the IL level, for example moving
array bounds checking out of loops:

for (i = 0; i < N; i++) { check_range (a, i); a [i] = X; }

is transformed to:

if (in_range (a, 0, N)) { for (i = 0; i < N; i++) a[i] = X; }
else for (i = 0; i < N; i++) { check_range (a, i); a [i] = X; }

The else is only there to keep the original semantics (exception handling).