# Kubernetes

## drain_nodes

Drain nodes matching the given label or name, so that no pods are scheduled on them any longer and running pods are evicted.

Below are the details and signature of the activity Python module.
| | |
|---|---|
| **Type** | action |
| **Module** | chaosk8s.node.actions |
| **Name** | drain_nodes |
| **Return** | boolean |
## Usage

**JSON**

```json
{
  "name": "drain-nodes",
  "type": "action",
  "provider": {
    "type": "python",
    "module": "chaosk8s.node.actions",
    "func": "drain_nodes"
  }
}
```
**YAML**

```yaml
name: drain-nodes
provider:
  func: drain_nodes
  module: chaosk8s.node.actions
  type: python
type: action
```
## Arguments

| Name | Type | Default | Required |
|---|---|---|---|
| name | string | null | No |
| label_selector | string | null | No |
| delete_pods_with_local_storage | boolean | false | No |
| timeout | integer | 120 | No |
| count | integer | null | No |
| pod_label_selector | string | null | No |
| pod_namespace | string | null | No |
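As an illustration, these arguments go into the provider's `arguments` map of the activity. The label selector and values below are hypothetical; adapt them to your cluster:

```yaml
name: drain-nodes
type: action
provider:
  type: python
  module: chaosk8s.node.actions
  func: drain_nodes
  arguments:
    label_selector: "role=worker"  # hypothetical node label
    count: 1                       # drain at most one matching node
    timeout: 180                   # wait up to 180s for pod eviction
```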
This action does a similar job to `kubectl drain --ignore-daemonsets`, or to `kubectl drain --delete-local-data --ignore-daemonsets` when `delete_pods_with_local_storage` is set to `True`. There is no equivalent to the `kubectl drain --force` flag.

You probably want to call `uncordon` from your experiment's rollbacks.
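A rollback might look like the sketch below. It assumes the companion `uncordon_node` action from the same module (check the exact function name in your chaosk8s version), and reuses a hypothetical `role=worker` selector matching the nodes that were drained:

```yaml
rollbacks:
- name: uncordon-nodes
  type: action
  provider:
    type: python
    module: chaosk8s.node.actions
    func: uncordon_node
    arguments:
      label_selector: "role=worker"  # same selector as the drain action
```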
## Signature

```python
def drain_nodes(name: str = None,
                label_selector: str = None,
                delete_pods_with_local_storage: bool = False,
                timeout: int = 120,
                secrets: Dict[str, Dict[str, str]] = None,
                count: int = None,
                pod_label_selector: str = None,
                pod_namespace: str = None) -> bool:
    pass
```
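If you generate experiments programmatically, the activity dictionary can be built in Python and serialized to JSON. This is a minimal sketch (not part of the chaosk8s API); the helper name `make_drain_activity` and the `role=worker` selector are illustrative:

```python
import json


def make_drain_activity(label_selector: str,
                        count: int = 1,
                        timeout: int = 120) -> dict:
    """Build a drain-nodes activity dict mirroring the JSON usage block."""
    return {
        "name": "drain-nodes",
        "type": "action",
        "provider": {
            "type": "python",
            "module": "chaosk8s.node.actions",
            "func": "drain_nodes",
            "arguments": {
                "label_selector": label_selector,
                "count": count,
                "timeout": timeout,
            },
        },
    }


# Serialize the activity so it can be pasted into an experiment file.
activity = make_drain_activity("role=worker")
print(json.dumps(activity, indent=2))
```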