dspy.Parallel

```python
dspy.Parallel(num_threads: int | None = None, max_errors: int | None = None, access_examples: bool = True, return_failed_examples: bool = False, provide_traceback: bool | None = None, disable_progress_bar: bool = False, timeout: int = 120, straggler_limit: int = 3)
```
A utility class for parallel, multi-threaded execution of (module, example) pairs.
It supports multiple example formats (e.g., Example, dict, tuple, list), provides robust error handling
and optional progress tracking, and can optionally return failed examples along with their exceptions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `num_threads` | `Optional[int]` | The number of threads to use. | `None` |
| `max_errors` | `Optional[int]` | The maximum number of errors allowed before raising an exception. | `None` |
| `access_examples` | `bool` | Whether to unpack each example's inputs when calling the module. | `True` |
| `return_failed_examples` | `bool` | Whether to also return failed examples and their exceptions. | `False` |
| `provide_traceback` | `Optional[bool]` | Whether to provide a traceback for failed executions. | `None` |
| `disable_progress_bar` | `bool` | Whether to disable the progress bar. | `False` |
Example:

```python
import dspy
from dspy import Parallel

lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

examples = [
    {"question": "What is the capital of Spain?"},
    {"question": "What is 3 * 4?"},
    {"question": "Who wrote Hamlet?"},
]

module = dspy.Predict("question->answer")
exec_pairs = [(module, example) for example in examples]

parallel = Parallel(num_threads=3, disable_progress_bar=False)
results = parallel(exec_pairs)

for i, result in enumerate(results):
    print(f"Result {i + 1}: {result.answer}")

# Expected output:
# Result 1: Madrid
# Result 2: 12
# Result 3: William Shakespeare
```
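Conceptually, `Parallel` fans the (module, example) pairs out across worker threads and collects results in input order. The pattern can be sketched with the standard library alone; this is an illustrative approximation under stated assumptions, not dspy's actual implementation, and `run_pairs` and the toy module below are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def run_pairs(exec_pairs, num_threads=3):
    """Run each (module, example) pair in a thread pool, preserving input order."""
    def call(pair):
        module, example = pair
        # Unpack dict examples into keyword arguments, loosely mirroring
        # the effect of access_examples=True.
        return module(**example) if isinstance(example, dict) else module(example)

    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        # executor.map yields results in the same order as the input pairs.
        return list(executor.map(call, exec_pairs))

# Toy "module" standing in for a dspy.Predict call: returns the question length.
toy = lambda question: len(question)
pairs = [(toy, {"question": q}) for q in ["a", "bb", "ccc"]]
print(run_pairs(pairs))  # [1, 2, 3]
```

A real `Parallel` call additionally handles non-dict example formats, error thresholds (`max_errors`), and progress reporting, which this sketch omits.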