Upload results to tfb-status after each failure (#5289)

This should fix an issue we're seeing internally with false-positive
'Benchmarking environment "Citrine" crashed?' emails.  When one framework
has many variants failing in a row, and each of them takes an hour to
fail, there may be several hours between successful tests.  Currently
that means several hours between uploads, which triggers tfb-status's
"crashed?" email.  With this change, uploads should be at most a little
more than an hour apart.

Also added what looked like a missing call to write_intermediate, which
writes the "completed" key in results.json, in the case of a database
failure.

Michael Hixson, 5 years ago
parent commit 5c37036a0b
1 changed file with 6 additions and 0 deletions:
    toolset/benchmark/benchmarker.py

+ 6 - 0
toolset/benchmark/benchmarker.py

@@ -115,6 +115,7 @@ class Benchmarker:
             message = "Test {name} has been added to the excludes list. Skipping.".format(
                 name=test.name)
             self.results.write_intermediate(test.name, message)
+            self.results.upload()
             return self.__exit_test(
                 success=False,
                 message=message,
@@ -130,6 +131,8 @@ class Benchmarker:
                     test.database.lower())
                 if database_container is None:
                     message = "ERROR: Problem building/running database container"
+                    self.results.write_intermediate(test.name, message)
+                    self.results.upload()
                     return self.__exit_test(
                         success=False,
                         message=message,
@@ -144,6 +147,7 @@ class Benchmarker:
                 message = "ERROR: Problem starting {name}".format(
                     name=test.name)
                 self.results.write_intermediate(test.name, message)
+                self.results.upload()
                 return self.__exit_test(
                     success=False,
                     message=message,
@@ -165,6 +169,7 @@ class Benchmarker:
             if not accepting_requests:
                 message = "ERROR: Framework is not accepting requests from client machine"
                 self.results.write_intermediate(test.name, message)
+                self.results.upload()
                 return self.__exit_test(
                     success=False,
                     message=message,
@@ -223,6 +228,7 @@ class Benchmarker:
             tb = traceback.format_exc()
             self.results.write_intermediate(test.name,
                                             "error during test: " + str(e))
+            self.results.upload()
             log(tb, prefix=log_prefix, file=benchmark_log)
             return self.__exit_test(
                 success=False,
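
The pattern added on every failure path above can be sketched as follows. This is a minimal, self-contained illustration, not the real toolset code: the `Results` and `Benchmarker` classes here are simplified stand-ins, `upload()` is stubbed to count calls instead of POSTing to tfb-status, and the test/database names are hypothetical.

```python
# Sketch of the failure-path pattern introduced by this commit:
# on every early exit, write the intermediate result locally AND
# upload it immediately, so tfb-status sees progress at least once
# per failed test rather than only after the next success.

class Results:
    """Simplified stand-in for the toolset's results object."""

    def __init__(self):
        self.intermediate = {}
        self.uploads = 0

    def write_intermediate(self, test_name, message):
        # Record the per-test status (the "completed" key in results.json).
        self.intermediate[test_name] = message

    def upload(self):
        # The real toolset sends results.json to tfb-status here;
        # this stub just counts how often an upload would happen.
        self.uploads += 1


class Benchmarker:
    """Simplified stand-in showing one failure path from the diff."""

    def __init__(self):
        self.results = Results()

    def __exit_test(self, success, message):
        return success

    def run_test(self, test_name, database_ok):
        if not database_ok:
            message = "ERROR: Problem building/running database container"
            # Before this commit, this path returned without either call;
            # now every failure path writes and uploads before exiting.
            self.results.write_intermediate(test_name, message)
            self.results.upload()
            return self.__exit_test(success=False, message=message)
        return True
```

With long-running failures (each taking up to an hour), this keeps the gap between uploads bounded by roughly one test's duration instead of accumulating across a whole run of failing variants.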