Concurrent Composition

This section explains how to run multiple tasks concurrently using when_all.

Code examples assume using namespace boost::capy; is in effect.

The Problem

Tasks are sequential by default:

task<void> sequential()
{
    int a = co_await fetch_a();  // Wait for A
    int b = co_await fetch_b();  // Then wait for B
    int c = co_await fetch_c();  // Then wait for C
    // Total time: A + B + C
}

For independent operations, this wastes time.

when_all

The when_all function launches tasks concurrently:

#include <boost/capy/when_all.hpp>

task<void> concurrent()
{
    auto [a, b, c] = co_await when_all(
        fetch_a(),
        fetch_b(),
        fetch_c()
    );
    // Total time: max(A, B, C)
}

All three fetches run concurrently (and in parallel if the executor has multiple threads; see Execution Model below). The co_await completes when the slowest one finishes.

Return Value

when_all returns a tuple of results, with void types filtered out:

// All non-void: tuple of all results
auto [x, y] = co_await when_all(
    returns_int(),     // task<int>
    returns_string()   // task<std::string>
);
// x is int, y is std::string

// Mixed: void tasks don't contribute
auto [value] = co_await when_all(
    returns_int(),  // task<int>
    returns_void(), // task<void> — no contribution
    returns_void()  // task<void> — no contribution
);
// value is int (only non-void result)

// All void: returns void
co_await when_all(
    task_void_1(),
    task_void_2()
);
// No tuple, no return value

Results appear in the same order as input tasks.

Error Handling

When a task throws:

  1. The exception is captured

  2. Stop is requested for sibling tasks

  3. All tasks complete (or respond to stop)

  4. The first exception is rethrown

task<void> handle_errors()
{
    try {
        co_await when_all(
            might_fail(),
            another_task(),
            third_task()
        );
    } catch (std::exception const& e) {
        std::cerr << "First error: " << e.what() << "\n";
    }
}

First-Error Semantics

Only the first exception is captured; subsequent exceptions are discarded.

Stop Propagation

When an error occurs, when_all requests stop for siblings:

task<void> cancellable_work()
{
    auto token = co_await this_coro::stop_token;
    for (int i = 0; i < 1000; ++i)
    {
        if (token.stop_requested())
            co_return;  // Exit early
        co_await do_chunk(i);
    }
}

task<void> example()
{
    // If failing_task throws, cancellable_work sees stop_requested
    co_await when_all(
        failing_task(),
        cancellable_work()
    );
}

Parent Stop Token

when_all forwards the parent’s stop token:

task<void> parent()
{
    co_await when_all(
        child_a(),  // Sees parent's stop token
        child_b()   // Sees parent's stop token
    );
}

std::stop_source source;
run_async(ex, source.get_token())(parent());

// Cancel everything
source.request_stop();

Execution Model

Children inherit the parent’s executor:

task<void> parent()  // Running on executor ex
{
    co_await when_all(
        child_a(),  // Runs on ex
        child_b()   // Runs on ex
    );
}

Children are launched via dispatch(), which may run them inline or queue them.

No Parallelism by Default

With a single-threaded executor, tasks interleave but don’t run in parallel:

thread_pool pool(1);  // Single thread
run_async(pool.get_executor())(parent());
// Tasks interleave at suspension points

For true parallelism, use multiple threads:

thread_pool pool(4);  // Four threads
run_async(pool.get_executor())(parent());
// Tasks may run on different threads

Example: Parallel Fetches

task<std::string> fetch(http_client& client, std::string url)
{
    co_return co_await client.get(url);
}

task<void> fetch_all(http_client& client)
{
    auto [home, about, contact] = co_await when_all(
        fetch(client, "https://example.com/"),
        fetch(client, "https://example.com/about"),
        fetch(client, "https://example.com/contact")
    );

    std::cout << "Home: " << home.size() << " bytes\n";
    std::cout << "About: " << about.size() << " bytes\n";
    std::cout << "Contact: " << contact.size() << " bytes\n";
}

When to Use when_all

Use when_all when:

  • Operations are independent

  • You want to reduce total wait time

  • You need all results before proceeding

Do NOT use when_all when:

  • Operations depend on each other—use sequential co_await

  • Memory is constrained—concurrent tasks use more memory

Summary

Feature             Description
when_all(tasks...)  Launch tasks concurrently, wait for all
Return type         Tuple of non-void results in input order
Error handling      First exception propagated, siblings get stop
Affinity            Children inherit parent's executor
Stop propagation    Parent and sibling stop tokens forwarded

Next Steps