Reference¶
All of the magic that kube-burner does is described in its configuration file. As previously mentioned, the location of this configuration file is provided by the flag -c. This flag points to a YAML-formatted file that consists of several sections.
Templating the configuration file¶
go-template semantics may be used within the configuration file.
The input for the templates is taken from a user data file (using the --user-data parameter) and/or environment variables.
Environment variables take precedence over those defined in the file when the same variable is defined in both.
For example, you could define the indexers section of your own configuration file, such as:
metricsEndpoints:
{{ if .OS_INDEXING }}
- prometheusURL: http://localhost:9090
indexer:
type: opensearch
esServers: ["{{ .ES_SERVER }}"]
defaultIndex: {{ .ES_INDEX }}
{{ end }}
{{ if .LOCAL_INDEXING }}
- prometheusURL: http://localhost:9090
indexer:
type: local
metricsDirectory: {{ .METRICS_FOLDER }}
{{ end }}
This feature is especially useful for passing secrets, such as the username and password of our indexer, or a token used in pprof collection.
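As an illustrative sketch, the variables consumed by the template above could come from a user-data file like the one below (the server URL and index name are placeholders):

```yaml
# user-data.yml, passed with --user-data. Environment variables with the
# same names take precedence over these values.
OS_INDEXING: true
ES_SERVER: https://opensearch.example.com:9200
ES_INDEX: kube-burner
```

The configuration could then be rendered with something like `kube-burner init -c cfg.yml --user-data user-data.yml`, or by exporting the variables in the environment instead.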
Global¶
This section describes the global job configuration. It holds the following parameters:
| Option | Description | Type | Default |
|---|---|---|---|
| measurements | List of measurements. Detailed in the measurements section | List | [] |
| requestTimeout | Client-go request timeout | Duration | 60s |
| gc | Garbage-collect created namespaces | Boolean | false |
| gcMetrics | Collect metrics during garbage collection | Boolean | false |
| waitWhenFinished | Wait for all pods/jobs (including probes) to be running/completed when all jobs are completed | Boolean | false |
| clusterHealth | Checks if all the nodes are in "Ready" state | Boolean | false |
| timeout | Global benchmark timeout | Duration | 4h |
| functionTemplates | Function template files to render at runtime | List | [] |
| deletionStrategy | Global deletion strategy to apply: default deletes entire namespaces, while gvr deletes objects within namespaces | String | default |
Note
The precedence order to wait on resources is Global.waitWhenFinished > Job.waitWhenFinished > Job.podWait
Warning
Global waitWhenFinished and job gc are mutually exclusive and cannot be enabled at the same time.
kube-burner connects to k8s clusters using the following methods, in this order:

1. KUBECONFIG environment variable
2. $HOME/.kube/config
3. In-cluster config (used when kube-burner runs inside a pod)
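Putting some of these options together, a minimal global section might look like the following sketch (the measurement name is only an example):

```yaml
global:
  gc: true                 # garbage-collect created namespaces
  gcMetrics: false         # skip metric collection during GC
  requestTimeout: 30s      # client-go request timeout
  timeout: 2h              # abort the benchmark after 2 hours
  measurements:
    - name: podLatency     # example measurement; see the measurements section
```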
Function templating example¶
Using function templates, we can define a block of code as a function and reuse it anywhere in our configuration. For the purpose of this example, let's assume our deployment.yaml contains a block like the following:
env:
- name: ENVVAR1_{{.name}}
value: {{.envVar}}
- name: ENVVAR2_{{.name}}
value: {{.envVar}}
- name: ENVVAR3_{{.name}}
value: {{.envVar}}
- name: ENVVAR4_{{.name}}
value: {{.envVar}}
The repeated block above can be replaced by a function defined in a separate template file (envs.tpl in this example):
{{- define "env_func" -}}
{{- range $i := until $.n }}
{{- printf "- name: ENVVAR%d_%s\n value: %s" (add $i 1) $.name $.envVar | nindent $.indent }}
{{- end }}
{{- end }}
Register the template file under global.functionTemplates in the configuration file:
global:
functionTemplates:
- envs.tpl
Then replace the repeated block in deployment.yaml with a call to the function:
env:
{{- template "env_func" (dict "name" .name "envVar" .envVar "n" 4 "indent" 8) }}
DeletionStrategy¶
kube-burner supports multiple deletion strategies that control how resources created during a run are cleaned up.
default¶
- Deletes all namespaced resources created by kube-burner
- Deletes the namespaces created by kube-burner, hence their child objects too
- Deletes cluster-scoped objects created by kube-burner
gvr¶
- Deletes namespaced resources one by one using GVR-based deletion
- After removing those resources, deletes their parent namespaces
- Finally garbage-collects cluster-scoped objects created by kube-burner
Note: The gvr strategy deletes namespaced resources first; namespace deletion occurs after those resources are removed, as part of the overall cleanup flow.
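As a sketch, selecting the gvr strategy is a single global option:

```yaml
global:
  deletionStrategy: gvr   # delete objects within namespaces instead of whole namespaces
```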
Jobs¶
This section contains the list of jobs kube-burner will execute. Each job can hold the following parameters.
| Option | Description | Type | Default |
|---|---|---|---|
| name | Job name | String | "" |
| jobType | Type of job to execute. More details at job types | String | create |
| jobIterations | How many times to execute the job | Integer | 1 |
| namespace | Namespace base name to use | String | "" |
| namespacedIterations | Whether to create a namespace per job iteration | Boolean | true |
| iterationsPerNamespace | The maximum number of jobIterations to create in a single namespace. Important for node-density workloads that create Services | Integer | 1 |
| cleanup | Clean up old namespaces | Boolean | true |
| podWait | Wait for all pods/jobs (including probes) to be running/completed before moving forward to the next job iteration | Boolean | false |
| waitWhenFinished | Wait for all pods/jobs (including probes) to be running/completed when all job iterations are completed | Boolean | true |
| maxWaitTimeout | Maximum wait timeout per namespace | Duration | 4h |
| jobIterationDelay | How long to wait between each job iteration. This is also the wait interval between each delete operation | Duration | 0s |
| jobPause | How long to pause after finishing the job | Duration | 0s |
| beforeCleanup | Bash script to run before the workload is deleted | String | "" |
| gc | Garbage collect job | Boolean | false |
| qps | Limit object creation queries per second | Integer | 0 |
| burst | Maximum burst for throttle | Integer | 0 |
| objects | List of objects the job will create. Detailed in the objects section | List | [] |
| watchers | List of watchers to be created for the job. Detailed in the watchers section | List | [] |
| verifyObjects | Verify object count after running each job | Boolean | true |
| errorOnVerify | Set return code to 1 when object verification fails | Boolean | true |
| skipIndexing | Skip metric indexing on this job | Boolean | false |
| preLoadImages | Create a DaemonSet before triggering the job to pull all of its images | Boolean | true |
| preLoadPeriod | Maximum time to wait for the preload DaemonSet to become ready on all nodes and to clean up the pre-load objects | Duration | 10m |
| preloadNodeLabels | Node selector labels for the resources created in the preload stage | Object | {} |
| namespaceLabels | Custom labels to add to the namespaces created by kube-burner | Object | {} |
| namespaceAnnotations | Custom annotations to add to the namespaces created by kube-burner | Object | {} |
| churnConfig | Configures job churning; only supported for create jobs. See the churning jobs section | Object | {} |
| defaultMissingKeysWithZero | Stops templates from exiting with an error when a missing key is found, meaning users must ensure templates handle missing keys | Boolean | false |
| executionMode | Execution mode for processing objects within a job. Only applies to patch and kubevirt job types (create, delete, and read jobs ignore this setting). More details at execution modes | String | Varies by job type |
| objectDelay | How long to wait between each object in a job | Duration | 0s |
| objectWait | Wait for each object to complete before processing the next one (not for create jobs) | Boolean | false |
| metricsAggregate | Aggregate the metrics collected for this job with those of the next one | Boolean | false |
| metricsClosing | Defines when the metrics collection should stop. More details at MetricsClosing | String | afterJobPause |
| hooks | List of hooks to execute at different job stages. See the hooks section | List | [] |
| incrementalLoad | Enables incremental load behaviour for creation jobs. See Incremental Load | Object | {} |
Note
Both churnCycles and churnDuration act as termination conditions: churning halts as soon as either is met. To control churn exclusively with churnDuration, set churnCycles to 0. Conversely, to prioritize churnCycles, set a sufficiently long churnDuration.
Note
When jobType is set to delete, the following settings are forced:

- jobIterations is set to 1
- waitWhenFinished is set to false
- executionMode is set to sequential

When jobType is set to read, executionMode is forced to sequential.

Any user-specified executionMode value is ignored for these job types.
Our configuration files strictly follow YAML syntax. To clarify the List and Object types: they are simply YAML sequences (lists) and mappings (dictionaries).
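As an illustrative sketch, namespaceLabels is an Object (a YAML mapping), while objects is a List (a YAML sequence); the label key/value here is made up:

```yaml
jobs:
  - name: example
    namespaceLabels:            # Object: a YAML mapping
      environment: benchmark    # illustrative label
    objects:                    # List: a YAML sequence
      - objectTemplate: deployment.yml
        replicas: 1
```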
Examples of valid configuration files can be found in the examples folder.
Incremental Load¶
incrementalLoad enables a gradual increase in the number of iterations for creation jobs. The runner performs create operations for each step window, then runs either a configured health-check script or a built-in cluster health check, and finally waits stepDelay before the next step.
| Option | Description | Type | Default |
|---|---|---|---|
| startIterations | Initial number of iterations to start with. If omitted, the job's jobIterations is used | Integer | jobIterations |
| totalIterations | Total number of iterations to reach. If omitted, no increase beyond startIterations is performed | Integer | same as startIterations |
| stepDelay | Delay between incremental steps (Go duration, e.g. 30s) | Duration | 0s |
| pattern.type | Load pattern: linear or exponential | String | linear |
| pattern.linear.minSteps | Minimum number of steps for the linear pattern | Integer | 0 |
| pattern.linear.stepSize | Fixed step size (iterations) for the linear pattern. When set, steps increment by this value | Integer | 1 |
| pattern.exponential.base | Base of the exponential increase | Float | 2.0 |
| pattern.exponential.maxIncrease | Maximum tolerable increase (absolute iterations) for an exponential bump | Integer | 0 |
| pattern.exponential.warmupSteps | Number of linear warmup steps before applying exponential increases | Integer | 0 |
| healthCheckScript | Optional shell script path (local or remote) executed after each incremental step. If omitted, a built-in API and node-readiness check is used | String | "" |
Note
The linear pattern falls back to a single step when minSteps is not provided, and the implementation guards against division by zero when computing ranges. The runner stops on any health-check error (non-zero script exit or built-in check failure).
Incremental load behavior¶
The incremental load feature increases the number of iterations from a configured start (startIterations) to a configured total (totalIterations) in cumulative fashion. Two growth patterns are supported:
- Linear: iterations increase by a fixed amount each step (configured with pattern.linear.stepSize).
- Exponential: iterations grow multiplicatively using pattern.exponential.base. An optional pattern.exponential.warmupSteps value can apply a few initial linear increases before exponential growth begins.
After each increase the runner performs the configured health check and will stop early on failure. Between successful steps the runner waits the configured stepDelay before applying the next increase.
Simple examples:

- Linear example (startIterations=10, totalIterations=50, pattern.linear.stepSize=10):
  - Step 1 runs 10 iterations, captures metrics, and garbage-collects in preparation for the next step.
  - Step 2 runs the entire cycle again with 20 iterations (+10).
  - Step 3 runs 30 (+10).
  - Step 4 runs 40 (+10).
  - Step 5 runs 50 (+10, target reached).
  - Progression: 10 → 20 → 30 → 40 → 50.
- Exponential example (startIterations=5, totalIterations=100, pattern.exponential.base=2):
  - Step 1 runs 5 iterations, captures metrics, and garbage-collects in preparation for the next step.
  - Step 2 runs the entire cycle again with 10 iterations (×2).
  - Step 3 runs 20 (×2).
  - Step 4 runs 40 (×2).
  - Step 5 runs 80 (×2).
  - Step 6 would be 160, but it is capped at the configured target, so it runs 100.
  - Progression: 5 → 10 → 20 → 40 → 80 → 100.
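The linear example above could be expressed in configuration as the following sketch (job name and template path are illustrative):

```yaml
jobs:
  - name: incremental-create
    jobType: create
    incrementalLoad:
      startIterations: 10
      totalIterations: 50
      stepDelay: 30s         # pause between incremental steps
      pattern:
        type: linear
        linear:
          stepSize: 10       # add 10 iterations per step
    objects:
      - objectTemplate: deployment.yml
        replicas: 1
```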
Watchers¶
Watchers are supported during the benchmark workload. They are configured at the job level and are useful in scenarios where we want to monitor the overhead watchers create on a cluster.
Note
This feature doesn't affect the overall QPS/Burst, as it uses its own client instance.
| Option | Description | Type | Default |
|---|---|---|---|
| kind | Object kind to consider for the watch | String | "" |
| apiVersion | Object apiVersion to consider for the watch | String | "" |
| labelSelector | Objects with these labels will be considered for the watch | Object | {} |
| replicas | Number of watcher replicas to create | Integer | 1 |
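A job using watchers might look like the following sketch (the label selector and template path are illustrative):

```yaml
jobs:
  - name: watcher-job
    jobIterations: 10
    namespace: watched
    watchers:
      - kind: Pod
        apiVersion: v1
        labelSelector: {app: sleep-app}   # illustrative label
        replicas: 5                       # five watcher instances
    objects:
      - objectTemplate: pod.yml
        replicas: 1
```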
Objects¶
The objects created by kube-burner are rendered using golang's default template library.
Each object element supports the following parameters:
| Option | Description | Type | Default |
|---|---|---|---|
| objectTemplate | Object template file path or URL | String | "" |
| replicas | How many replicas of this object to create per job iteration | Integer | - |
| inputVars | Map of arbitrary input variables to inject into the object template | Object | - |
| wait | Wait for the object to be ready | Boolean | true |
| waitOptions | Customize how to wait for the object to be ready | Object | {} |
| runOnce | Create or delete this object only once during the entire job | Boolean | false |
Warning
Kube-burner is only able to wait for a subset of resources, unless waitOptions are specified.
Built-in support for object waiters¶
The following object types have built-in waiters:

- StatefulSet
- Deployment
- DaemonSet
- ReplicaSet
- Job
- Pod
- ReplicationController
- Build
- BuildConfig
- VirtualMachine
- VirtualMachineInstance
- VirtualMachineInstanceReplicaSet
- PersistentVolumeClaim
- VolumeSnapshot
- DataVolume
- DataSource
Info
Find more info about the waiters implementation in the pkg/burner/waiters.go file
Object wait Options¶
If you want to override the default waiter behaviors, you can specify wait options for your objects.
| Option | Description | Type | Default |
|---|---|---|---|
| apiVersion | Object apiVersion to consider for the wait | String | "" |
| kind | Object kind to consider for the wait | String | "" |
| labelSelector | Objects with these labels will be considered for the wait | Object | {} |
| customStatusPaths | List of jq path/value pairs used to verify the readiness of the object | Object | [] |
For example, the snippet below can be used to make kube-burner wait for all pods created by the deployment defined in deployment.yml to be ready.
objects:
- objectTemplate: deployment.yml
replicas: 3
waitOptions:
kind: Pod
labelSelector: {kube-burner-label : abcd}
Additionally, you can use customStatusPaths to specify custom paths to be checked for the readiness of the object. For example, to wait for a deployment to be available
objects:
- kind: Deployment
objectTemplate: deployment.yml
replicas: 1
waitOptions:
customStatusPaths:
- key: '(.conditions.[] | select(.type == "Available")).status'
value: "True"
Note
Currently, the value field expects only strings.
In order to test other types make sure to convert the result to a string in the key.
For example, to verify that a VolumeSnapshot is readyToUse set the customStatusPaths to:
customStatusPaths:
- key: '(.conditions.[] | select(.type == "Ready")).status'
value: "True"
Note
waitOptions.kind, waitOptions.customStatusPaths and waitOptions.labelSelector are fully optional. waitOptions.kind is used when an application has child objects to wait on, and waitOptions.labelSelector is used to wait on objects with specific labels.
Default labels¶
All objects created by kube-burner are labeled with kube-burner.io/uuid=<UUID>, kube-burner.io/job=<jobName> and kube-burner.io/index=<objectIndex>. These labels are used for internal purposes, but they can also be used by users.
Multi-Document YAML Templates¶
Kube-burner supports defining multiple Kubernetes resources in a single object template file using YAML document separators (---). This is useful when you need to create related resources together, such as a Gateway and VirtualService for Istio, or any combination of resources that logically belong together.
When using multi-document YAML templates:

- Each document in the template is parsed and created separately
- All documents share the same replicas count and inputVars
- Template variables like {{.Iteration}}, {{.Replica}}, {{.JobName}}, etc. are available in all documents
- The network policy latency measurement also supports multi-document templates
Example: Istio Gateway and VirtualService¶
Create a file istio-combined.yml:
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
name: my-gateway-{{.Iteration}}
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
name: my-virtualservice-{{.Iteration}}
spec:
hosts:
- "*"
gateways:
- my-gateway-{{.Iteration}}
http:
- route:
- destination:
host: my-service
port:
number: 80
Reference this template in your job configuration:
jobs:
- name: istio-density
jobIterations: 10
qps: 5
burst: 5
namespace: istio-test
objects:
- objectTemplate: ./istio-combined.yml
replicas: 1
This will create both the Gateway and VirtualService for each iteration, with proper templating applied to both resources.
Note
Each document in the multi-document template is treated as a separate object internally, but they share the same replica configuration and input variables from the parent object definition.
Hooks¶
Hooks allow you to execute external commands at various stages of job execution. They support both foreground (blocking) and background (non-blocking) execution modes.
Hook Configuration¶
Hooks are configured as a list under the hooks field in a job:
| Option | Description | Type | Default |
|---|---|---|---|
| cmd | Command and arguments to execute | List | [] |
| when | Execution stage for the hook | String | "" |
| background | Run hook in background (non-blocking) | Boolean | false |
Supported Hook Stages¶
The when field specifies at which stage the hook should execute:
| Stage | Description |
|---|---|
| beforeJobExecution | Before job objects are created |
| afterJobExecution | After job objects are created (before churning) |
| onEachIteration | On each job iteration |
| beforeChurn | Before churn operation starts |
| afterChurn | After churn operation completes |
| beforeCleanup | Before cleanup/deletion begins |
| afterCleanup | After cleanup/deletion completes |
| beforeGC | Before garbage collection |
| afterGC | After garbage collection |
Execution Behavior¶
Foreground Hooks (background: false):
- Execute sequentially in the order defined
- Block job execution until completion
- No timeout by default (respects parent context cancellation only)
- Errors cause job to fail
Background Hooks (background: true):
- All background hooks for a stage start in parallel
- Job execution continues immediately
- Results are collected at the end of the job execution
- Errors are reported but don't block execution
- Properly cleaned up when parent context is cancelled
Execution Order:
- All background hooks for the stage start in parallel
- Foreground hooks execute sequentially after background hooks start
- Background hooks are waited on before proceeding to the next major phase
Example Configuration¶
jobs:
- name: my-workload
jobType: create
jobIterations: 100
namespace: workload-ns
hooks:
# Background monitoring hook - runs throughout deployment
- cmd: ["/bin/bash", "/scripts/monitor-resources.sh"]
when: beforeJobExecution
background: true
# Foreground setup hook - blocks until complete
- cmd: ["/usr/bin/setup-environment.sh", "--mode=production"]
when: beforeJobExecution
background: false
# Per-iteration hook
- cmd: ["/bin/bash", "/scripts/log-iteration.sh"]
when: onEachIteration
background: false
# Cleanup verification
- cmd: ["/scripts/verify-cleanup.sh"]
when: afterCleanup
background: false
objects:
- objectTemplate: deployment.yml
replicas: 10
Use Cases¶
Long-running background monitoring:
hooks:
- cmd: ["/usr/bin/prometheus-monitor", "--output=/metrics"]
when: beforeJobExecution
background: true
VM provisioning and readiness:
hooks:
- cmd: ["/scripts/provision-vm.sh", "--wait-ready"]
when: beforeJobExecution
background: false # No timeout, waits as long as needed
Data collection during churn:
hooks:
- cmd: ["/scripts/collect-churn-metrics.sh"]
when: beforeChurn
background: true
Sequential cleanup verification:
hooks:
- cmd: ["/scripts/check-resources.sh"]
when: afterCleanup
background: false
Best Practices¶
- Use background hooks for monitoring - Start monitoring/data collection in the background while workload runs
- Use foreground hooks for setup - Block execution for critical setup steps
- Handle errors appropriately - Foreground hook failures will fail the job
- Use absolute paths - Specify full paths to executables and scripts
- Keep hooks lightweight for onEachIteration - this stage runs on every iteration
Error Handling¶
- Foreground hooks: Errors stop job execution and are reported immediately
- Background hooks: Errors are collected and reported after job completion
- All hook errors are included in job summary and return code
Job types¶
Configured by the parameter jobType, kube-burner supports five types of jobs, each with different parameters:

- Create
- Delete
- Read
- Patch
- Kubevirt
Create¶
The default jobType is create. It creates the objects listed in the objects list, as described in the objects section. The number of objects created is controlled by jobIterations and replicas. If the object is namespaced and has an empty .metadata.namespace field, kube-burner creates a new namespace named namespace-<iteration> and creates the defined number of objects in it.
Delete¶
This type of job deletes objects described in the objects list. Using delete as job type the objects list would have the following structure:
objects:
- kind: Deployment
labelSelector: {kube-burner.io/job: cluster-density}
apiVersion: apps/v1
- kind: Secret
labelSelector: {kube-burner.io/job: cluster-density}
Where:
- kind: Object kind of the k8s object to delete.
- labelSelector: Deletes the objects with the given labels.
- apiVersion: API version of the k8s object.
This type of job supports the following parameters, described in the jobs section:

- waitForDeletion: Wait for objects to be deleted before finishing the job. Defaults to true.
- name
- qps
- burst
- jobPause
- jobIterationDelay
Read¶
This type of job reads objects described in the objects list. Using read as job type the objects list would have the following structure:
objects:
- kind: Deployment
labelSelector: {kube-burner.io/job: cluster-density}
apiVersion: apps/v1
- kind: Secret
labelSelector: {kube-burner.io/job: cluster-density}
Where:
- kind: Object kind of the k8s object to read.
- labelSelector: Reads the objects with the given labels.
- apiVersion: API version of the k8s object.
This type of job supports the following parameters, described in the jobs section:

- name
- qps
- burst
- jobPause
- jobIterationDelay
- jobIterations
Patch¶
This type of job can be used to patch objects with the template described in the object list. This object list has the following structure:
objects:
- kind: Deployment
labelSelector: {kube-burner.io/job: cluster-density}
objectTemplate: templates/deployment_patch_add_label.json
patchType: "application/strategic-merge-patch+json"
apiVersion: apps/v1
Where:
- kind: Object kind of the k8s object to patch.
- labelSelector: Map with the labelSelector.
- objectTemplate: The YAML or JSON template file to patch with.
- apiVersion: API version of the k8s object.
- patchType: The Kubernetes request patch type (see below).
Valid patch types:
- application/json-patch+json
- application/merge-patch+json
- application/strategic-merge-patch+json
- application/apply-patch+yaml (requires YAML)
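As a sketch of what a patch template might contain, a strategic-merge patch that adds a label could look like the following (the file name and label are purely illustrative; both YAML and JSON are accepted):

```yaml
# deployment_patch_add_label.yml (illustrative content)
# Strategic merge patch: merged into the matched Deployments,
# adding one label without touching the rest of the spec.
metadata:
  labels:
    kube-burner-patched: "true"
```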
As mentioned previously, all objects created by kube-burner are labeled with kube-burner.io/uuid=<UUID>, kube-burner.io/job=<jobName> and kube-burner.io/index=<objectIndex>. Therefore, you can design a workload with one job to create objects and another one to patch or remove the objects created by the previous one.
jobs:
- name: create-objects
namespace: job-namespace
jobIterations: 100
objects:
- objectTemplate: deployment.yml
replicas: 10
- objectTemplate: service.yml
replicas: 10
- name: remove-objects
jobType: delete
objects:
- kind: Deployment
labelSelector: {kube-burner.io/job: create-objects}
apiVersion: apps/v1
- kind: Secret
labelSelector: {kube-burner.io/job: create-objects}
Kubevirt¶
This type of job can be used to execute virtctl commands described in the object list. This object list has the following structure:
objects:
- kubeVirtOp: start
labelSelector: {kube-burner.io/job: cluster-density}
inputVars:
force: true
Where:
- kubeVirtOp: virtctl operation to execute.
- labelSelector: Map with the labelSelector.
- inputVars: Additional command parameters.
Supported Operations¶
start¶
Execute virtctl start on the VMs mapped by the labelSelector.
Additional parameters may be set using the inputVars field:
- startPaused: VM will start in Paused state. Default false.
stop¶
Execute virtctl stop on the VMs mapped by the labelSelector.
Additional parameters may be set using the inputVars field:
- force: Force stop the VM without waiting. Default false.
restart¶
Execute virtctl restart on the VMs mapped by the labelSelector.
Additional parameters may be set using the inputVars field:
- force: Force restart the VM without waiting. Default false.
pause¶
Execute virtctl pause on the VMs mapped by the labelSelector.
No additional parameters are supported.
unpause¶
Execute virtctl unpause on the VMs mapped by the labelSelector.
No additional parameters are supported.
migrate¶
Execute virtctl migrate on the VMs mapped by the labelSelector.
No additional parameters are supported.
add-volume¶
Execute virtctl addvolume on the VMs mapped by the labelSelector.
Additional parameters should be set using the inputVars field:
- volumeName: Name of the already existing volume to add. Mandatory.
- diskType: Type of the new volume (disk/lun). Default disk.
- serial: Serial number to assign to the disk. Defaults to the value of volumeName.
- cache: Caching option controlling the cache mechanism. Default ''.
- persist: If set, the added volume will be persisted in the VM spec (if it exists). Default false.
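An add-volume object entry might look like the following sketch (the volume name and job label are illustrative):

```yaml
objects:
  - kubeVirtOp: add-volume
    labelSelector: {kube-burner.io/job: vm-job}   # illustrative job label
    inputVars:
      volumeName: data-disk   # must reference an already existing volume
      diskType: disk
      persist: true           # keep the volume in the VM spec
```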
remove-volume¶
Execute virtctl removevolume on the VMs mapped by the labelSelector.
Additional parameters should be set using the inputVars field:
- volumeName: Name of the volume to remove. Mandatory.
- persist: If set, the change will be persisted in the VM spec (if it exists). Default false.
Wait for completion¶
Wait is supported for the following operations:
- start: Wait for the Ready state of the VirtualMachine to become True
- stop: Wait for the Ready state of the VirtualMachine to become False with reason equal to VMINotExists
- restart: Wait for the Ready state of the VirtualMachine to become True
- pause: Wait for the Paused state of the VirtualMachine to become True
- unpause: Wait for the Ready state of the VirtualMachine to become True
- migrate: Wait for the Ready state of the VirtualMachine to become True
Note
The waiter makes sure that the lastTransitionTime of the condition is after the time of the command.
This requires that the timestamps on the cluster side are in UTC
Execution Modes¶
The executionMode parameter controls how objects are processed within a job. It is a per-job setting defined under each job entry in the configuration file. There is no global-level executionMode and no CLI flag to override it.
Supported values¶
- parallel: Process all objects across all iterations concurrently, without waiting between objects or iterations.
- sequential: Process each object before moving to the next, with optional delays between objects (objectDelay) and/or between iterations (jobIterationDelay).
Per-job-type behavior¶
| Job Type | executionMode behavior | Default | User-configurable? |
|---|---|---|---|
| create | Not used. Create jobs have their own execution path and ignore this setting | N/A | No |
| patch | Fully supported | parallel | Yes |
| delete | Forced to sequential; user config is overridden | sequential | No |
| read | Forced to sequential; user config is overridden | sequential | No |
| kubevirt | Fully supported | sequential | Yes |
Precedence rules¶
- For delete and read jobs, the implementation unconditionally sets executionMode to sequential, regardless of any user-specified value.
- For patch and kubevirt jobs, the user-specified value takes effect. If omitted, the default shown in the table above is used.
- There is no global executionMode setting and no CLI flag. The value is always resolved per job.
Example¶
jobs:
- name: patch-deployments
jobType: patch
jobIterations: 5
executionMode: sequential # User-configurable; default would be "parallel"
objectDelay: 2s # Only effective when executionMode is "sequential"
objects:
- kind: Deployment
labelSelector: {kube-burner.io/job: create-deployments}
objectTemplate: templates/deployment_patch.json
patchType: "application/strategic-merge-patch+json"
apiVersion: apps/v1
- name: delete-objects
jobType: delete
# executionMode is forced to "sequential" for delete jobs;
# setting it here has no effect.
objects:
- kind: Deployment
labelSelector: {kube-burner.io/job: create-deployments}
apiVersion: apps/v1
Churning Jobs¶
Churning, supported only in create jobs, is the deletion and re-creation of objects, and applies to namespace-based jobs only. It occurs after the job has completed but prior to uploading metrics, if applicable. Churn deletes a percentage of contiguous, randomly chosen namespaces, or objects within those namespaces, and re-creates them with all of the appropriate objects. It then waits for the specified delay (or none, if set to 0) before deleting and recreating the next randomly chosen set. This cycle continues until the churn duration or number of cycles has been reached.
When metrics are collected during churning, any range query datapoints that fall between the churn start and end times will have the churnMetric field set to true in the indexed metrics. This allows for identification of metrics captured during churning periods for analysis purposes.
An example implementation that would churn 20% of the 100 job iterations for 2 hours with no delay between sets:
jobs:
- name: churning-job
jobIterations: 100
namespacedIterations: true
namespace: churning
churnConfig:
percent: 20
duration: 2h
objects:
- objectTemplate: deployment.yml
replicas: 10
- objectTemplate: service.yml
replicas: 10
Supported options¶
Churn supports the following options:
- cycles: Number of churn cycles to execute
- percent: Percentage of the jobIterations to churn each period
- duration: Length of time that the job is churned for
- delay: Length of time to wait between each churn period
- deleteDelay: Length of time to wait after deletion and before recreation within a churn period. Defaults to 0s
- mode: Churning mode, either namespaces (churn entire namespaces) or objects (churn individual cluster-scoped and namespaced objects). Defaults to namespaces.
Note
In order to enable churning for a job, either duration or cycles must be set. It's possible to use both at the same time.
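For instance, a cycle-driven churn configuration might look like the following sketch, which runs 10 churn cycles of 20% each with a one-minute pause between cycles:

```yaml
churnConfig:
  cycles: 10       # stop after 10 churn cycles
  percent: 20      # churn 20% of the job iterations per cycle
  delay: 1m        # wait between cycles
  deleteDelay: 5s  # wait between deletion and recreation
  mode: namespaces
```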
Disable churning on individual objects¶
By default, when the churn mode is set to objects, all namespaced objects in the job's namespaces are churned. However, individual objects can be skipped by setting churn: false in the object definition.
jobs:
- name: churning-job
jobIterations: 100
namespacedIterations: true
namespace: churning
churnConfig:
percent: 20
cycles: 10
objects:
- objectTemplate: deployment.yml
replicas: 10
churn: false
- objectTemplate: service.yml
replicas: 10
Injected variables¶
All object templates are injected with the variables below by default:
- Iteration: Job iteration number.
- Replica: Object replica number. Keep in mind that this number resets to 1 with each job iteration.
- JobName: Job name.
- UUID: Benchmark UUID. (Can also be referenced in the main configuration file.)
- RunID: Internal run ID. Can be used to match resources for metrics collection.
In addition, you can also inject arbitrary variables with the option inputVars of the object:
- objectTemplate: service.yml
replicas: 2
inputVars:
port: 80
targetPort: 8080
The following code snippet shows an example of a k8s service using these variables:
apiVersion: v1
kind: Service
metadata:
name: sleep-app-{{.Iteration}}-{{.Replica}}
labels:
name: my-app-{{.Iteration}}-{{.Replica}}
spec:
selector:
app: sleep-app-{{.Iteration}}-{{.Replica}}
ports:
- name: serviceport
protocol: TCP
port: "{{.port}}"
targetPort: "{{.targetPort}}"
type: ClusterIP
You can also use golang template semantics in your objectTemplate definitions:
kind: ImageStream
apiVersion: image.openshift.io/v1
metadata:
name: {{.prefix}}-{{.Replica}}
spec:
{{ if .image }}
dockerImageRepository: {{.image}}
{{ end }}
Template functions¶
On top of the default golang template semantics, kube-burner supports additional template functions.
External libraries¶
- sprig library which adds over 70 template functions for Go’s template language.
Additional functions¶
- Binomial: returns the binomial coefficient of (n, k)
- IndexToCombination: returns the combination corresponding to the given index
- GetSubnet24: returns a /24 subnet based on the given index
- GetIPAddress: returns the number of addresses requested per iteration from the list of total provided addresses
- ReadFile: returns the content of the file at the provided path
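As an illustrative sketch, a template could combine an injected variable with a sprig function and ReadFile (the file path and ConfigMap layout are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # sprig's lower normalizes the job name
  name: {{ .JobName | lower }}-{{ .Iteration }}
data:
  # ReadFile inlines the content of a local file (hypothetical path);
  # sprig's indent keeps the YAML block valid
  startup.sh: |-
{{ ReadFile "scripts/startup.sh" | indent 4 }}
```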
RunOnce¶
All objects within the job are created iteratively based on the jobIterations number, but an object may need to be created only once during the entire job (for example, a ClusterRole). In such cases, set the optional runOnce field on that particular object. In the example below, the job runs 100 iterations but creates the ClusterRole only once.
jobs:
- name: cluster-density
jobIterations: 100
namespacedIterations: true
namespace: cluster-density
objects:
- objectTemplate: clusterrole.yml
replicas: 1
runOnce: true
- objectTemplate: clusterrolebinding.yml
replicas: 1
runOnce: true
- objectTemplate: deployment.yml
replicas: 10
MetricsClosing¶
This config defines when the metrics collection should stop. The option supports three values:

- afterJob: collect metrics right after the job completes
- afterJobPause: collect metrics after the jobPause duration ends (default)
- afterMeasurements: collect metrics after all measurements are finished