Command Executor

Command Executor provides the ability to run operators in various environments, such as Docker.

The sh>, py>, and rb> operators support Command Executor.

Supported environments are AWS ECS (Elastic Container Service), Docker, and local. Kubernetes is under development.

For example, if you define a task with sh>, the task runs locally. If you add a Docker configuration, the task is executed in a Docker container. You can switch the environment in which a task runs without changing the task definition.

Currently, ECS is the default Command Executor. If there is no valid configuration for ECS, Digdag falls back to Docker. If there is no valid configuration for Docker, it falls back to local execution.
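For example, the same sh> task can target different environments just by adding or removing a docker block (a minimal sketch; the task names and image are illustrative):

```yaml
# Runs locally: no executor configuration is present.
+hello_local:
  sh>: echo hello

# Runs in a Docker container: only the docker block is added;
# the sh> command itself is unchanged.
+hello_docker:
  docker:
    image: "ubuntu:22.04"
  sh>: echo hello
```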



ECS Command Executor

The following is an example configuration for ECS Command Executor.

agent.command_executor.ecs.name = digdag-test

agent.command_executor.ecs.digdag-test.access_key_id = <ACCESS KEY>
agent.command_executor.ecs.digdag-test.secret_access_key = <SECRET KEY>
agent.command_executor.ecs.digdag-test.launch_type = FARGATE
agent.command_executor.ecs.digdag-test.region = us-east-1
agent.command_executor.ecs.digdag-test.subnets = subnet-NNNNN
agent.command_executor.ecs.digdag-test.max_retries = 3

agent.command_executor.ecs.temporal_storage.type = s3
agent.command_executor.ecs.temporal_storage.s3.bucket = <Bucket>
agent.command_executor.ecs.temporal_storage.s3.endpoint =
agent.command_executor.ecs.temporal_storage.s3.credentials.access-key-id = <ACCESS KEY>
agent.command_executor.ecs.temporal_storage.s3.credentials.secret-access-key = <SECRET KEY>

The sub keys of agent.command_executor are as follows:

key description
ecs.name ECS Cluster name. The value <name> is used as the key of the following configurations
ecs.<name>.access_key_id (Optional) AWS access key for ECS. The key needs permissions for ECS and CloudWatch. If it is not specified, other credentials are used for authorization.
ecs.<name>.secret_access_key (Optional) AWS secret key
ecs.<name>.launch_type The launch type of the container: FARGATE or EC2
ecs.<name>.region AWS region
ecs.<name>.subnets AWS subnet
ecs.<name>.max_retries Number of retries for the AWS client
ecs.<name>.use_environment_file (Optional) Whether to use environmentFiles or environment when setting variables on ECS. Default is false

The following keys configure temporal storage with AWS S3.

key description
ecs.temporal_storage.type The bucket type. s3 for AWS S3
ecs.temporal_storage.s3.bucket Bucket name
ecs.temporal_storage.s3.endpoint The endpoint URL for S3
ecs.temporal_storage.s3.credentials.access-key-id (Optional) AWS access key for the bucket
ecs.temporal_storage.s3.credentials.secret-access-key (Optional) AWS secret key

The ways of authorizing to the ECS cluster, tasks, and S3 temporal storage are as follows.

On version 0.10.5 or above, DefaultAWSCredentialsProviderChain can be used as a credential for connecting to ECS, in addition to an AWS access key and secret key. As a result, if ecs.<name>.access_key_id is not specified, the digdag server looks for one of the credentials described in the AWS documentation. The digdag server uses the same credentials when connecting to S3 temporal storage; thus, if ecs.temporal_storage.s3.credentials.access-key-id is not specified, the server also looks for credentials the same way as for ECS.
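For example, assuming the server runs on a host with an IAM role attached (or with AWS credentials available in the environment), the key entries can simply be omitted (a hypothetical sketch; the cluster name is illustrative):

```
agent.command_executor.ecs.name = digdag-test
# access_key_id / secret_access_key are omitted, so on 0.10.5+ the
# DefaultAWSCredentialsProviderChain is consulted instead (environment
# variables, Java system properties, the shared credentials file, or an
# instance/task IAM role).
agent.command_executor.ecs.digdag-test.launch_type = FARGATE
agent.command_executor.ecs.digdag-test.region = us-east-1
agent.command_executor.ecs.digdag-test.subnets = subnet-NNNNN
```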

How to use from workflow

In a workflow definition, there are two ways to run a task on ECS.

Set ecs.task_definition_arn

+task:
  ecs:
    task_definition_arn: "arn:aws:ecs:us-east-1:..."
  py>: ...

Set docker.image

+task:
  docker:
    image: "digdag/digdag-python:3.7"
  py>: ...

You need to set a tag digdag.docker.image in the task definition. ECS Command Executor tries to find the tagged task definition.

(This method lists and checks all task definitions until the tagged one is found, so it may take a long time to run the task. See issue #1488.)
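As an illustrative sketch, a task definition registered with such a tag might look like this (the family, container name, and image value are hypothetical; the tag value is the image name that docker.image refers to):

```json
{
  "family": "digdag-python",
  "containerDefinitions": [
    { "name": "digdag", "image": "digdag/digdag-python:3.7" }
  ],
  "tags": [
    { "key": "digdag.docker.image", "value": "digdag/digdag-python:3.7" }
  ]
}
```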



Docker Command Executor

No configuration is required to use Docker Command Executor.

How to use from workflow

The following is an example workflow definition for Docker Command Executor.

+task:
  docker:
    image: "python:3.7"
    docker: "/usr/local/bin/docker"
    run_options: [ "-m", "1G" ]
    pull_always: true
    selinux: true
  py>: ...

The sub keys in docker are as follows.

key description
image Docker image
docker Docker command. Default is docker
run_options Arguments to be passed to docker run
pull_always Digdag caches the docker image. If you want to always pull the image, set to true. Default is false
selinux Set to true when using SELinux. Default is false

You can build the docker image to be used with the build parameter.

+task:
  docker:
    image: "azul/zulu-openjdk:8"
    docker: "/usr/local/bin/docker"
    run_options: [ "-m", "1G" ]
    build:
      - apt-get -y update
      - apt-get -y install software-properties-common
    build_options:
      - --build-arg var1=test1
  py>: ...

Docker Command Executor generates a Dockerfile, builds an image, and then runs a container with that image.

key description
image Base image in the generated Dockerfile
build List of commands written as RUN instructions in the generated Dockerfile
build_options Option list for docker build command
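With the build example above, the generated Dockerfile would look roughly like this (a sketch based on the table; the exact generated output may differ):

```dockerfile
# Base image comes from the image key.
FROM azul/zulu-openjdk:8
# Each entry in the build list becomes a RUN instruction.
RUN apt-get -y update
RUN apt-get -y install software-properties-common
```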