dspy.Refine
dspy.Refine(module: Module, N: int, reward_fn: Callable[[dict, Prediction], float], threshold: float, fail_count: Optional[int] = None)
Bases: Module
Refines a module by running it up to N times with different temperatures and returns the best prediction.
This module runs the provided module multiple times with varying temperature settings and selects either the first prediction that exceeds the specified threshold or the one with the highest reward. If no prediction meets the threshold, it automatically generates feedback to improve future predictions.
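The selection rule described above can be sketched in plain Python. This is a conceptual sketch, not the actual dspy source; `select_best` and the `(args, pred)` candidate pairs are illustrative stand-ins for Refine's internal loop:

```python
# Conceptual sketch of Refine's selection rule (not the dspy implementation):
# return the first candidate whose reward clears the threshold, otherwise
# the highest-reward candidate seen across all runs.
def select_best(candidates, reward_fn, threshold):
    best, best_reward = None, float("-inf")
    for args, pred in candidates:
        reward = reward_fn(args, pred)
        if reward >= threshold:
            return pred  # early exit: good enough
        if reward > best_reward:
            best, best_reward = pred, reward
    return best  # fall back to the best candidate observed
```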
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to refine. | *required* |
| `N` | `int` | The number of times to run the module. | *required* |
| `reward_fn` | `Callable` | The reward function. | *required* |
| `threshold` | `float` | The threshold for the reward function. | *required* |
| `fail_count` | `Optional[int]` | The number of times the module can fail before raising an error. | `None` |
Example
```python
import dspy

dspy.settings.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# Define a QA module with chain of thought
qa = dspy.ChainOfThought("question -> answer")

# Define a reward function that checks for one-word answers
def one_word_answer(args, pred):
    return 1.0 if len(pred.answer.split()) == 1 else 0.0

# Create a refined module that tries up to 3 times
best_of_3 = dspy.Refine(module=qa, N=3, reward_fn=one_word_answer, threshold=1.0)

# Use the refined module
result = best_of_3(question="What is the capital of Belgium?").answer
# Returns: Brussels
```
Source code in dspy/predict/refine.py
Functions
__call__(*args, **kwargs)
Source code in dspy/primitives/program.py
acall(*args, **kwargs)
async
Source code in dspy/primitives/program.py
batch(examples, num_threads: Optional[int] = None, max_errors: int = 10, return_failed_examples: bool = False, provide_traceback: Optional[bool] = None, disable_progress_bar: bool = False)
Processes a list of dspy.Example instances in parallel using the Parallel module.
Parameters:

- `examples`: List of dspy.Example instances to process.
- `num_threads`: Number of threads to use for parallel processing.
- `max_errors`: Maximum number of errors allowed before stopping execution.
- `return_failed_examples`: Whether to return failed examples and exceptions.
- `provide_traceback`: Whether to include traceback information in error logs.

Returns: List of results, and optionally failed examples and exceptions.
Source code in dspy/primitives/program.py
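Conceptually, `batch` maps the module over the examples with a thread pool. A minimal sketch under that assumption (not the actual `Parallel` implementation; `batch_sketch` is an illustrative name):

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch of batch (not the dspy Parallel implementation):
# map a callable module over examples with a thread pool, preserving
# the input order of the results.
def batch_sketch(module, examples, num_threads=4):
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(module, examples))
```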
deepcopy()
Deep copy the module.
This is a tweak to the default Python deepcopy that only deep copies `self.parameters()`; other attributes are shallow-copied.
Source code in dspy/primitives/module.py
dump_state()
forward(**kwargs)
Source code in dspy/predict/refine.py
get_lm()
inspect_history(n: int = 1)
load(path)
Load the saved module. You may also want to check out `dspy.load` if you want to load an entire program, not just the state for an existing program.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the saved state file, which should be a `.json` or a `.pkl` file. | *required* |
Source code in dspy/primitives/module.py
load_state(state)
map_named_predictors(func)
named_parameters()
Unlike PyTorch, handles (non-recursive) lists of parameters too.
Source code in dspy/primitives/module.py
named_predictors()
named_sub_modules(type_=None, skip_compiled=False) -> Generator[tuple[str, BaseModule], None, None]
Find all sub-modules in the module, as well as their names.
Say self.children[4]['key'].sub_module is a sub-module. Then the name will be 'children[4][key].sub_module'. But if the sub-module is accessible at different paths, only one of the paths will be returned.
Source code in dspy/primitives/module.py
parameters()
predictors()
reset_copy()
save(path, save_program=False, modules_to_serialize=None)
Save the module.
Save the module to a directory or a file. There are two modes:

- `save_program=False`: Save only the state of the module to a json or pickle file, based on the value of the file extension.
- `save_program=True`: Save the whole module to a directory via cloudpickle, which contains both the state and architecture of the model.

If `save_program=True` and `modules_to_serialize` are provided, it will register those modules for serialization with cloudpickle's `register_pickle_by_value`. This causes cloudpickle to serialize the module by value rather than by reference, ensuring the module is fully preserved along with the saved program. This is useful when you have custom modules that need to be serialized alongside your program. If None, then no modules will be registered for serialization.

We also save the dependency versions, so that the loaded model can check if there is a version mismatch on critical dependencies or DSPy version.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the saved state file, which should be a `.json` or `.pkl` file when `save_program=False`. | *required* |
| `save_program` | `bool` | If True, save the whole module to a directory via cloudpickle, otherwise only save the state. | `False` |
| `modules_to_serialize` | `list` | A list of modules to serialize with cloudpickle's `register_pickle_by_value`. | `None` |
Source code in dspy/primitives/module.py
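The extension-based dispatch of the `save_program=False` mode can be sketched as follows. This is a conceptual sketch, not the dspy source; `save_state_sketch` is an illustrative name:

```python
import json
import pickle
from pathlib import Path

# Conceptual sketch of extension-based state saving (not the dspy source):
# .json -> JSON text, .pkl -> pickle bytes, anything else is rejected.
def save_state_sketch(state: dict, path: str) -> None:
    p = Path(path)
    if p.suffix == ".json":
        p.write_text(json.dumps(state))
    elif p.suffix == ".pkl":
        p.write_bytes(pickle.dumps(state))
    else:
        raise ValueError("path must end in .json or .pkl")
```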