Basic testing suite for the Löve APIs, based on this issue.
Currently written for Löve 12, which is still in development.
This is the current status of all the module tests:
| Module | Done | Todo | Skip |
| ----------------- | ---- | ---- | ---- |
| 🟢 audio | 28 | 0 | 0 |
| 🟢 data | 12 | 0 | 0 |
| 🟢 event | 4 | 0 | 2 |
| 🟢 filesystem | 29 | 0 | 2 |
| 🟢 font | 7 | 0 | 0 |
| 🟡 graphics | 99 | 5 | 1 |
| 🟢 image | 5 | 0 | 0 |
| 🟢 math | 20 | 0 | 0 |
| 🟡 physics | 22 | 6 | 0 |
| 🟢 sound | 4 | 0 | 0 |
| 🟢 system | 6 | 0 | 2 |
| 🟢 thread | 5 | 0 | 0 |
| 🟢 timer | 6 | 0 | 0 |
| 🟢 video | 2 | 0 | 0 |
| 🟢 window | 34 | 0 | 2 |
The following modules are not covered, as we can't really emulate input nicely: joystick, keyboard, mouse, and touch.
The testsuite aims to keep things as simple as possible, and just runs all the tests inside Löve to match how they'd be used by developers in-engine. To run the tests, download the repo and then run main.lua as you would a Löve game, i.e.:
WINDOWS: & 'c:\Program Files\LOVE\love.exe' PATH_TO_TESTING_FOLDER --console
MACOS: /Applications/love.app/Contents/MacOS/love PATH_TO_TESTING_FOLDER
LINUX: ./love.AppImage PATH_TO_TESTING_FOLDER
By default all tests will be run for all modules.
If you want to run only specific modules you can use:
--runSpecificModules filesystem,audio
If you want to run only one specific method you can use:
--runSpecificMethod filesystem write
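For example, combining the Windows command above with one of these flags (a sketch; the flags are simply appended after the folder path):
WINDOWS: & 'c:\Program Files\LOVE\love.exe' PATH_TO_TESTING_FOLDER --console --runSpecificMethod filesystem write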
All results will be printed in the console per method as PASS, FAIL, or SKIP with total assertions met on a module level and overall level.
When finished, the following files will be generated in the /output directory with a summary of the test results:
- an XML file in the style of JUnit XML
- an HTML file that shows any visual test results
- a Markdown file for use with this github action
> An example of all types of output can be found in /examples/output/actual
Each method and object has its own test method written in /tests under the matching module name.
When you run the tests, a single TestSuite object is created which handles the progress + totals for all the tests.
Each module has a TestModule object created, and each test method has a TestMethod object created which keeps track of assertions for that method. Assertions are made by calling methods like test:assertEquals and test:assertNotNil on the test object passed into each test method, as shown in the example below.
Example test method:
-- love.filesystem.read test method
-- all methods should be put under love.test.MODULE.METHOD, matching the API
love.test.filesystem.read = function(test)
-- setup any data needed then run any asserts using the passed test object
local content, size = love.filesystem.read('resources/test.txt')
test:assertNotNil(content)
test:assertEquals('helloworld', content, 'check content match')
test:assertEquals(10, size, 'check size match')
content, size = love.filesystem.read('resources/test.txt', 5)
test:assertNotNil(content)
test:assertEquals('hello', content, 'check content match')
test:assertEquals(5, size, 'check size match')
-- no need to return anything or clean up, garbage collection is called after each method
end
Each test is run inside its own coroutine - you can use test:waitFrames(frames)
to pause the test for a given number of frames if you need to check things that won't happen for a few seconds.
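A hypothetical sketch of this (the resources/click.ogg file and the test body are illustrative, not taken from the suite; only test:waitFrames and the assertion helpers come from the API described above):
-- illustrative sketch: pausing mid-test with test:waitFrames()
love.test.audio.play = function(test)
  local source = love.audio.newSource('resources/click.ogg', 'static')
  source:play()
  -- give playback a few frames to actually start before asserting
  test:waitFrames(10)
  test:assertEquals(true, source:isPlaying(), 'check source playing')
  source:stop()
end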
After each test method is run, the assertions are totalled up, printed, and we move on to the next method! Once all methods in the suite are run, a total pass/fail/skip is given for that module and we move on to the next module (if any).
For sanity-checking, if a method is currently not covered or it's not possible to test it, we can set the test to be skipped with test:skipTest(reason)
- this way we still see the method listed in the test output without it affecting the pass/fail totals
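For example (a hypothetical sketch, not an actual test from the suite):
-- illustrative sketch: skipping a method we can't verify
love.test.system.vibrate = function(test)
  -- no way to check the device actually vibrated
  test:skipTest('cannot verify vibration output')
end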
Things still left to do: