Jenkins Pipeline is the mechanism used to orchestrate validations, builds, and tests for pull requests to the O3DE repo.
Jenkins server URL: https://jenkins.build.o3de.org/
Add sig-build as an approver for all changes to the pipeline.
When merging changes to the Jenkinsfile, it's important to be aware of how Jenkins loads the pipeline files and how that can affect other users.
Here is a summary of what happens after a user submits a pull request:
Determining the source of the files used throughout the pipeline can be confusing. Below is a summary of where Jenkins pulls the files used during the AR runs:
File(s) | Description | Source |
---|---|---|
Jenkinsfile | Pipeline definition file | Source branch with latest merged from Target branch |
Bootstrap files | Required during pipeline setup steps to load pipeline options, build configs, and setup scripts | Source branch |
Workspace contents | Repo contents pulled during a git checkout to run builds and tests | Source branch |
One important consideration is that all script references will be based on the state of the source branch. If you need to update the pipeline to use a new file, other pull requests will fail if they do not have that file in their branch.
For non-blocking scripts, add a check to prevent the pipeline from attempting to load the script if it does not exist or only load the script if the associated pipeline is running. This will avoid blocking the AR on other pull requests if the file is not yet merged into their branch.
Example:
```groovy
if (fileExists(SCRIPT_PATH)) {
    stash name: 'script_name',
        includes: SCRIPT_PATH
}
```
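The second approach can be combined with the same check. Here is a sketch only; `pipelineName` follows the variable used in the Jenkinsfile excerpt further below, and 'nightly' is just an example pipeline name:

```groovy
// Sketch of the second approach: only handle the script when the pipeline
// that needs it is the one running. 'nightly' is just an example pipeline name.
if (pipelineName == 'nightly' && fileExists(SCRIPT_PATH)) {
    stash name: 'script_name',
        includes: SCRIPT_PATH
}
```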
If you have any questions about the pipeline, reach out to the Build SIG by using the resources below:
If you are new to Jenkins Pipeline, it's recommended to review the info on these pages:
Below are references you can return to later if you need to look up additional pipeline details:
You will need a Jenkins multi-branch pipeline pointing to your fork in order to test changes in your branch prior to submitting a pull request.
The replay feature is your friend here for quickly testing pipeline changes. This allows you to update the Jenkinsfile and run builds without having to make multiple commits to the repo.
How to use replay:
1. Open a completed build for your branch.
2. Select Replay in the left sidebar.
3. Edit the Jenkinsfile in the editor that appears, then select Run.

The pipeline will trigger a build with the updated Jenkinsfile.
Our Jenkinsfile is based on the scripted pipeline syntax. More info here: Jenkinsfile Syntax
This file is located in: scripts/build/Jenkins/Jenkinsfile
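If you are more familiar with declarative pipelines, scripted syntax looks roughly like this (a minimal illustrative sketch, not an excerpt from the O3DE Jenkinsfile; the node label is a placeholder):

```groovy
// Minimal scripted pipeline sketch: plain Groovy wrapped in node/stage blocks
// instead of a declarative pipeline {} block.
node('example-label') {        // placeholder agent label
    stage('Checkout') {
        checkout scm           // check out the branch that triggered the build
    }
    stage('Build') {
        echo 'Run the build entry point here'
    }
}
```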
There are two types of configs that are loaded at the start of the pipeline: Platform options and Pipeline options.
Jobs for the pipeline are generated by loading the configs defined in the build_config.json files for each platform, located in scripts/build/Platform/<platform>/build_config.json.
Configs tagged with "default" are included in the pipeline used to test pull requests. Configs can also be grouped together to run on the same node, with the desired configs listed under steps.
Example config:
"profile_vs2019_pipe": {
"TAGS": [
"default"
],
"steps": [
"profile_vs2019",
"test_impact_analysis_profile_vs2019",
"asset_profile_vs2019",
"test_cpu_profile_vs2019"
]
}
There are also tags for other pipelines:
"TAGS":[
"default",
"metric",
"nightly",
"packaging",
"weekly"
],
Pipeline options include build labels, workspace paths, etc. These settings are located in the .json files in scripts/build/Jenkins
Example pipeline options:
"BUILD_ENTRY_POINT": "scripts/build/ci_build.py",
"PIPELINE_CONFIGS": [
"scripts/build/Platform/*/pipeline.json",
"restricted/*/scripts/build/pipeline.json"
],
"BUILD_CONFIGS": [
"scripts/build/Platform/*/build_config.json",
"restricted/*/scripts/build/build_config.json"
],
"PYTHON_DIR": "python"
To run the builds, the pipeline loads the configs from the build_config.json files for each platform, then adds each config tagged for that pipeline into the buildConfigs map.
Example from our Jenkinsfile:
```groovy
// Build Config map to be used for the parallel command
def buildConfigs = [:]

// This section iterates on the configs and loads them into the map if the builds are enabled for that platform
pipelineConfig.platforms.each { platform ->
    platform.value.build_types.each { build_job ->
        if (IsJobEnabled(branchName, build_job, pipelineName, platform.key)) { // User can filter jobs, jobs are tagged by pipeline
            def envVars = GetBuildEnvVars(platform.value.PIPELINE_ENV ?: EMPTY_JSON, build_job.value.PIPELINE_ENV ?: EMPTY_JSON, pipelineName)
            envVars['JOB_NAME'] = "${branchName}_${platform.key}_${build_job.key}" // backwards compatibility, some scripts rely on this
            def nodeLabel = envVars['NODE_LABEL']
            someBuildHappened = true
            buildConfigs["${platform.key} [${build_job.key}]"] = {
                node("${nodeLabel}") {
                    // Add build steps here
                }
            }
        }
    }
}

stage('Build') {
    parallel buildConfigs // Run parallel builds
}
```
More info can be found here: Pipeline - Parallel execution of tasks
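The tag-based filtering that IsJobEnabled() performs is roughly equivalent to the following. This is a simplified sketch, not the actual implementation, which also applies user-selected job filters:

```groovy
// Simplified sketch only: a config is considered enabled for a pipeline when its
// TAGS list (from build_config.json) contains the pipeline name. The actual
// IsJobEnabled() in the Jenkinsfile also applies user-selected job filters.
def isTaggedForPipeline(build_job, String pipelineName) {
    def tags = build_job.value.TAGS ?: []
    return tags.contains(pipelineName)
}
```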
These are variables that are available to any step within the pipeline. This includes Jenkins configured environment variables, build parameters, and build variables.
The available global variables are listed here: Global Variable Reference
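For example, a step can read these directly (a minimal sketch; the parameter name is only an illustration, not one defined by the O3DE pipeline):

```groovy
// Minimal sketch: global variables can be read from any pipeline step.
echo "Branch: ${env.BRANCH_NAME}"                     // Jenkins environment variable
echo "Build number: ${env.BUILD_NUMBER}"              // build variable exposed through env
echo "Clean workspace requested: ${params.CLEAN_WORKSPACE ?: false}" // hypothetical build parameter
echo "Result so far: ${currentBuild.currentResult}"   // build status global
```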
There are two main notifications defined at the end of the pipeline, email notifications and SNS messages.
Email notifications are configured to send results to the user that triggered the build. They use the Mailer pipeline step: Mailer
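A minimal sketch of what that step looks like in a scripted pipeline (the recipient handling in the actual Jenkinsfile is more involved):

```groovy
// Sketch only: the Mailer step sends the standard build-result email.
// sendToIndividuals routes the mail to the user whose change triggered the build.
step([$class: 'Mailer',
      recipients: '',              // placeholder; the pipeline resolves recipients itself
      notifyEveryUnstableBuild: true,
      sendToIndividuals: true])
```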
SNS messages are published at the end of the pipeline. The message includes the build status and other useful info, and is used to trigger automation downstream of the pipeline.
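Conceptually, the publish step looks something like this. This is a hedged sketch, assuming it runs on a node with the AWS CLI and credentials available; SNS_TOPIC_ARN is a placeholder and the real payload contains more fields:

```groovy
// Sketch only, not the actual payload: publish the build result to an SNS topic
// using the AWS CLI. SNS_TOPIC_ARN is a placeholder for the configured topic.
def message = """{"status": "${currentBuild.currentResult}", "build_url": "${env.BUILD_URL}"}"""
sh "aws sns publish --topic-arn ${SNS_TOPIC_ARN} --message '${message}'"
```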