dspy.RLM

RLM (Recursive Language Model) is a DSPy module that lets LLMs programmatically explore large contexts through a sandboxed Python REPL. Instead of feeding huge contexts directly into the prompt, RLM treats context as external data that the LLM examines via code execution and recursive sub-LLM calls.

This implements the approach described in "Recursive Language Models" (Zhang, Kraska, Khattab, 2025).

When to Use RLM

As contexts grow, LLM performance degrades — a phenomenon known as context rot. RLMs address this by separating the variable space (information stored in the REPL) from the token space (what the LLM actually processes). The LLM dynamically loads only the context it needs, when it needs it.

Use RLM when:

  • Your context is too large to fit in the LLM's context window effectively
  • The task benefits from programmatic exploration (searching, filtering, aggregating, chunking)
  • You need the LLM to decide how to decompose the problem, not you

Basic Usage

import dspy

dspy.configure(lm=dspy.LM("openai/gpt-5"))

# Create an RLM module
rlm = dspy.RLM("context, query -> answer")

# Call it like any other module
result = rlm(
    context="...very long document or data...",
    query="What is the total revenue mentioned?"
)
print(result.answer)

Deno Installation

RLM relies on Deno and Pyodide to create a local WASM sandbox that executes the code the LLM writes.

On macOS and Linux, you can install Deno with:

curl -fsSL https://deno.land/install.sh | sh

See the Deno Installation Docs for more details. Make sure to accept the prompt when it asks to add Deno to your shell profile.

After installing Deno, restart your shell so the deno binary is on your PATH. Then you can run dspy.RLM.

Users have reported issues with the Deno cache not being found by DSPy. We are actively investigating these issues, and your feedback is greatly appreciated.

You can also use an external sandbox provider instead of the local Deno sandbox. We are still working on an official example of using external sandbox providers.
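
As a minimal sketch, assuming you already have a CodeInterpreter implementation backed by an external sandbox (the MyE2BInterpreter class here is hypothetical), you pass it via the interpreter argument:

import dspy

# Hypothetical: MyE2BInterpreter is your own CodeInterpreter implementation
# backed by an external sandbox provider (e.g., E2B or Modal).
rlm = dspy.RLM("context, query -> answer", interpreter=MyE2BInterpreter())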

How It Works

RLM operates in an iterative REPL loop:

  1. The LLM receives metadata about the context (type, length, preview) but not the full context
  2. The LLM writes Python code to explore the data (print samples, search, filter)
  3. Code executes in a sandboxed interpreter, and the LLM sees the output
  4. The LLM can call llm_query(prompt) to run sub-LLM calls for semantic analysis on snippets
  5. When done, the LLM calls SUBMIT(output) to return the final answer

What the LLM sees (step-by-step trace):

Step 1: Initial Metadata (no direct access to full context)

# Step 1: Peek at the data
print(context[:2000])
Output shown to the LLM:
[Preview of the first 2000 characters of the document]

Step 2: Write Code to Explore Context

# Step 2: Search for relevant sections
import re
matches = re.findall(r'revenue.*?\$[\d,]+', context, re.IGNORECASE)
print(matches)
Output shown to the LLM:
['Revenue in Q4: $5,000,000', 'Total revenue: $20,000,000']

Step 3: Trigger Sub-LLM Calls

# Step 3: Use sub-LLM for semantic extraction
result = llm_query(f"Extract the total revenue from: {matches[0]}")
print(result)
Output shown to the LLM:
$5,000,000

Step 4: Submit Final Answer

# Step 4: Return final answer
SUBMIT(result)
Output shown to the user:
$5,000,000

Constructor Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| signature | str \| Signature | required | Defines inputs and outputs (e.g., "context, query -> answer") |
| max_iterations | int | 20 | Maximum REPL interaction loops before fallback extraction |
| max_llm_calls | int | 50 | Maximum llm_query/llm_query_batched calls per execution |
| max_output_chars | int | 100_000 | Maximum characters to include from REPL output |
| verbose | bool | False | Log detailed execution info |
| tools | list[Union[Callable, dspy.Tool]] | None | Additional tool functions callable from interpreter code |
| sub_lm | dspy.LM | None | LM for sub-queries. Defaults to dspy.settings.lm. Use a cheaper model here. |
| interpreter | CodeInterpreter | None | Custom interpreter. Defaults to PythonInterpreter (Deno/Pyodide WASM). |

Built-in Tools

Inside the REPL, the LLM has access to:

| Tool | Description |
|---|---|
| llm_query(prompt) | Query a sub-LLM for semantic analysis (~500K char capacity) |
| llm_query_batched(prompts) | Query multiple prompts concurrently (faster for batch operations) |
| print() | Print output (required to see results) |
| SUBMIT(...) | Submit final output and end execution |
| Standard library | re, json, collections, math, etc. |
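
For example, the code the LLM writes inside the REPL can fan sub-queries out over chunks of the context (a sketch; the chunk size is arbitrary):

# Split the context into chunks and summarize each with batched sub-LLM calls
chunks = [context[i:i + 50_000] for i in range(0, len(context), 50_000)]
summaries = llm_query_batched([f"Summarize the key facts:\n{c}" for c in chunks])
for s in summaries:
    print(s)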

Examples

Long Document Q&A

import dspy

dspy.configure(lm=dspy.LM("openai/gpt-5"))

rlm = dspy.RLM("document, question -> answer", max_iterations=10)

with open("large_report.txt") as f:
    document = f.read()  # 500K+ characters

result = rlm(
    document=document,
    question="What were the key findings from Q3?"
)
print(result.answer)

Using a Cheaper Sub-LM

import dspy

main_lm = dspy.LM("openai/gpt-5")
cheap_lm = dspy.LM("openai/gpt-5-nano")

dspy.configure(lm=main_lm)

# Root LM (gpt-5) decides strategy; sub-LM (gpt-5-nano) handles extraction
rlm = dspy.RLM("data, query -> summary", sub_lm=cheap_lm)

Multiple Typed Outputs

rlm = dspy.RLM("logs -> error_count: int, critical_errors: list[str]")

result = rlm(logs=server_logs)
print(f"Found {result.error_count} errors")
print(f"Critical: {result.critical_errors}")

Custom Tools

def fetch_metadata(doc_id: str) -> str:
    """Fetch metadata for a document ID."""
    # `database` here stands in for your own data-access layer
    return database.get_metadata(doc_id)

rlm = dspy.RLM(
    "documents, query -> answer",
    tools=[fetch_metadata]
)

Async Execution

import asyncio

rlm = dspy.RLM("context, query -> answer")

async def process():
    result = await rlm.aforward(context=data, query="Summarize this")
    return result.answer

answer = asyncio.run(process())

Inspecting the Trajectory

result = rlm(context=data, query="Find the magic number")

# See what code the LLM executed
for step in result.trajectory:
    print(f"Code:\n{step['code']}")
    print(f"Output:\n{step['output']}\n")

Output

RLM returns a Prediction with:

  • Output fields from your signature (e.g., result.answer)
  • trajectory: List of dicts with reasoning, code, output for each step
  • final_reasoning: The LLM's reasoning on the final step

Notes

Experimental

RLM is marked as experimental. The API may change in future releases.

Thread Safety

RLM instances are not thread-safe when using a custom interpreter. Create separate instances for concurrent use, or use the default PythonInterpreter which creates a fresh instance per forward() call.
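
A minimal sketch of safe concurrent use with a custom interpreter, assuming a make_interpreter() factory and a big_document variable (both placeholders):

from concurrent.futures import ThreadPoolExecutor

import dspy

def answer_one(query: str) -> str:
    # One RLM instance (and one interpreter) per call; no shared state.
    rlm = dspy.RLM("context, query -> answer", interpreter=make_interpreter())
    return rlm(context=big_document, query=query).answer

with ThreadPoolExecutor(max_workers=4) as pool:
    answers = list(pool.map(answer_one, ["Q1", "Q2", "Q3"]))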

Interpreter Requirements

The default PythonInterpreter requires Deno to be installed for the Pyodide WASM sandbox.

API Reference

dspy.RLM(signature: type[Signature] | str, max_iterations: int = 20, max_llm_calls: int = 50, max_output_chars: int = 100000, verbose: bool = False, tools: list[Callable] | None = None, sub_lm: dspy.LM | None = None, interpreter: CodeInterpreter | None = None)

Bases: Module

Recursive Language Model module.

Uses a sandboxed REPL to let the LLM programmatically explore large contexts through code execution. The LLM writes Python code to examine data, call sub-LLMs for semantic analysis, and build up answers iteratively.

The default interpreter is PythonInterpreter (Deno/Pyodide/WASM), but you can provide any CodeInterpreter implementation (e.g., MockInterpreter, or write a custom one using E2B or Modal).

Note: RLM instances are not thread-safe when using a custom interpreter. Create separate RLM instances for concurrent use, or use the default PythonInterpreter which creates a fresh instance per forward() call.

Example
# Basic usage
rlm = dspy.RLM("context, query -> output", max_iterations=10)
result = rlm(context="...very long text...", query="What is the magic number?")
print(result.output)

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| signature | type[Signature] \| str | Defines inputs and outputs. String like "context, query -> answer" or a Signature class. | required |
| max_iterations | int | Maximum REPL interaction iterations. | 20 |
| max_llm_calls | int | Maximum sub-LLM calls (llm_query/llm_query_batched) per execution. | 50 |
| max_output_chars | int | Maximum characters to include from REPL output. | 100000 |
| verbose | bool | Whether to log detailed execution info. | False |
| tools | list[Callable] \| None | List of tool functions or dspy.Tool objects callable from interpreter code. Built-in tools: llm_query(prompt), llm_query_batched(prompts). | None |
| sub_lm | LM \| None | LM for llm_query/llm_query_batched. Defaults to dspy.settings.lm. Allows using a different (e.g., cheaper) model for sub-queries. | None |
| interpreter | CodeInterpreter \| None | CodeInterpreter implementation to use. Defaults to PythonInterpreter. | None |
Source code in .venv/lib/python3.14/site-packages/dspy/predict/rlm.py
def __init__(
    self,
    signature: type[Signature] | str,
    max_iterations: int = 20,
    max_llm_calls: int = 50,
    max_output_chars: int = 100_000,
    verbose: bool = False,
    tools: list[Callable] | None = None,
    sub_lm: dspy.LM | None = None,
    interpreter: CodeInterpreter | None = None,
):
    """
    Args:
        signature: Defines inputs and outputs. String like "context, query -> answer"
                  or a Signature class.
        max_iterations: Maximum REPL interaction iterations.
        max_llm_calls: Maximum sub-LLM calls (llm_query/llm_query_batched) per execution.
        max_output_chars: Maximum characters to include from REPL output.
        verbose: Whether to log detailed execution info.
        tools: List of tool functions or dspy.Tool objects callable from interpreter code.
              Built-in tools: llm_query(prompt), llm_query_batched(prompts).
        sub_lm: LM for llm_query/llm_query_batched. Defaults to dspy.settings.lm.
               Allows using a different (e.g., cheaper) model for sub-queries.
        interpreter: CodeInterpreter implementation to use. Defaults to PythonInterpreter.
    """
    super().__init__()
    self.signature = ensure_signature(signature)
    self.max_iterations = max_iterations
    self.max_llm_calls = max_llm_calls
    self.max_output_chars = max_output_chars
    self.verbose = verbose
    self.sub_lm = sub_lm
    self._interpreter = interpreter
    self._user_tools = self._normalize_tools(tools)
    self._validate_tools(self._user_tools)

    # Build the action and extract signatures
    action_sig, extract_sig = self._build_signatures()
    self.generate_action = dspy.Predict(action_sig)
    self.extract = dspy.Predict(extract_sig)

Attributes

tools: dict[str, Tool] property

User-provided tools (excludes internal llm_query/llm_query_batched).

Functions

__call__(*args, **kwargs) -> Prediction
Source code in .venv/lib/python3.14/site-packages/dspy/primitives/module.py
@with_callbacks
def __call__(self, *args, **kwargs) -> Prediction:
    from dspy.dsp.utils.settings import thread_local_overrides

    caller_modules = settings.caller_modules or []
    caller_modules = list(caller_modules)
    caller_modules.append(self)

    with settings.context(caller_modules=caller_modules):
        if settings.track_usage and thread_local_overrides.get().get("usage_tracker") is None:
            with track_usage() as usage_tracker:
                output = self.forward(*args, **kwargs)
            tokens = usage_tracker.get_total_tokens()
            self._set_lm_usage(tokens, output)

            return output

        return self.forward(*args, **kwargs)
forward(**input_args) -> Prediction

Execute RLM to produce outputs from the given inputs.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| **input_args | | Input values matching the signature's input fields | {} |

Returns:

| Type | Description |
|---|---|
| Prediction | Prediction with output field(s) from the signature and 'trajectory' for debugging |

Raises:

| Type | Description |
|---|---|
| ValueError | If required input fields are missing |

Source code in .venv/lib/python3.14/site-packages/dspy/predict/rlm.py
def forward(self, **input_args) -> Prediction:
    """Execute RLM to produce outputs from the given inputs.

    Args:
        **input_args: Input values matching the signature's input fields

    Returns:
        Prediction with output field(s) from the signature and 'trajectory' for debugging

    Raises:
        ValueError: If required input fields are missing
    """
    self._validate_inputs(input_args)

    output_field_names = list(self.signature.output_fields.keys())
    execution_tools = self._prepare_execution_tools()
    variables = self._build_variables(**input_args)

    with self._interpreter_context(execution_tools) as repl:
        history: REPLHistory = REPLHistory()

        for iteration in range(self.max_iterations):
            result: Prediction | REPLHistory = self._execute_iteration(
                repl, variables, history, iteration, input_args, output_field_names
            )
            if isinstance(result, Prediction):
                return result
            history = result

        # Max iterations reached - use extract fallback
        return self._extract_fallback(variables, history, output_field_names)
aforward(**input_args) -> Prediction async

Async version of forward(). Execute RLM to produce outputs.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| **input_args | | Input values matching the signature's input fields | {} |

Returns:

| Type | Description |
|---|---|
| Prediction | Prediction with output field(s) from the signature and 'trajectory' for debugging |

Raises:

| Type | Description |
|---|---|
| ValueError | If required input fields are missing |

Source code in .venv/lib/python3.14/site-packages/dspy/predict/rlm.py
async def aforward(self, **input_args) -> Prediction:
    """Async version of forward(). Execute RLM to produce outputs.

    Args:
        **input_args: Input values matching the signature's input fields

    Returns:
        Prediction with output field(s) from the signature and 'trajectory' for debugging

    Raises:
        ValueError: If required input fields are missing
    """
    self._validate_inputs(input_args)

    output_field_names = list(self.signature.output_fields.keys())
    execution_tools = self._prepare_execution_tools()
    variables = self._build_variables(**input_args)

    with self._interpreter_context(execution_tools) as repl:
        history = REPLHistory()

        for iteration in range(self.max_iterations):
            result = await self._aexecute_iteration(
                repl, variables, history, iteration, input_args, output_field_names
            )
            if isinstance(result, Prediction):
                return result
            history = result

        # Max iterations reached - use extract fallback
        return await self._aextract_fallback(variables, history, output_field_names)
batch(examples: list[Example], num_threads: int | None = None, max_errors: int | None = None, return_failed_examples: bool = False, provide_traceback: bool | None = None, disable_progress_bar: bool = False, timeout: int = 120, straggler_limit: int = 3) -> list[Example] | tuple[list[Example], list[Example], list[Exception]]

Processes a list of dspy.Example instances in parallel using the Parallel module.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| examples | list[Example] | List of dspy.Example instances to process. | required |
| num_threads | int \| None | Number of threads to use for parallel processing. | None |
| max_errors | int \| None | Maximum number of errors allowed before stopping execution. If None, inherits from dspy.settings.max_errors. | None |
| return_failed_examples | bool | Whether to return failed examples and exceptions. | False |
| provide_traceback | bool \| None | Whether to include traceback information in error logs. | None |
| disable_progress_bar | bool | Whether to disable the progress bar. | False |
| timeout | int | Seconds before a straggler task is resubmitted. Set to 0 to disable. | 120 |
| straggler_limit | int | Only check for stragglers when this many or fewer tasks remain. | 3 |

Returns:

| Type | Description |
|---|---|
| list[Example] \| tuple[list[Example], list[Example], list[Exception]] | List of results, and optionally failed examples and exceptions. |

Source code in .venv/lib/python3.14/site-packages/dspy/primitives/module.py
def batch(
    self,
    examples: list[Example],
    num_threads: int | None = None,
    max_errors: int | None = None,
    return_failed_examples: bool = False,
    provide_traceback: bool | None = None,
    disable_progress_bar: bool = False,
    timeout: int = 120,
    straggler_limit: int = 3,
) -> list[Example] | tuple[list[Example], list[Example], list[Exception]]:
    """
    Processes a list of dspy.Example instances in parallel using the Parallel module.

    Args:
        examples: List of dspy.Example instances to process.
        num_threads: Number of threads to use for parallel processing.
        max_errors: Maximum number of errors allowed before stopping execution.
            If ``None``, inherits from ``dspy.settings.max_errors``.
        return_failed_examples: Whether to return failed examples and exceptions.
        provide_traceback: Whether to include traceback information in error logs.
        disable_progress_bar: Whether to disable the progress bar.
        timeout: Seconds before a straggler task is resubmitted. Set to 0 to disable.
        straggler_limit: Only check for stragglers when this many or fewer tasks remain.

    Returns:
        List of results, and optionally failed examples and exceptions.
    """
    # Create a list of execution pairs (self, example)
    exec_pairs = [(self, example.inputs()) for example in examples]

    # Create an instance of Parallel
    parallel_executor = Parallel(
        num_threads=num_threads,
        max_errors=max_errors,
        return_failed_examples=return_failed_examples,
        provide_traceback=provide_traceback,
        disable_progress_bar=disable_progress_bar,
        timeout=timeout,
        straggler_limit=straggler_limit,
    )

    # Execute the forward method of Parallel
    if return_failed_examples:
        results, failed_examples, exceptions = parallel_executor.forward(exec_pairs)
        return results, failed_examples, exceptions
    else:
        results = parallel_executor.forward(exec_pairs)
        return results
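
As a usage sketch (documents is a placeholder list of strings; with_inputs marks which fields are inputs):

import dspy

rlm = dspy.RLM("context, query -> answer")

examples = [
    dspy.Example(context=doc, query="Summarize the key findings").with_inputs("context", "query")
    for doc in documents
]
results = rlm.batch(examples, num_threads=4)
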
deepcopy()

Deep copy the module.

This is a tweak to the default python deepcopy that only deep copies self.parameters(), and for other attributes, we just do the shallow copy.

Source code in .venv/lib/python3.14/site-packages/dspy/primitives/base_module.py
def deepcopy(self):
    """Deep copy the module.

    This is a tweak to the default python deepcopy that only deep copies `self.parameters()`, and for other
    attributes, we just do the shallow copy.
    """
    try:
        # If the instance itself is copyable, we can just deep copy it.
        # Otherwise we will have to create a new instance and copy over the attributes one by one.
        return copy.deepcopy(self)
    except Exception:
        pass

    # Create an empty instance.
    new_instance = self.__class__.__new__(self.__class__)
    # Set attributes of the copied instance.
    for attr, value in self.__dict__.items():
        if isinstance(value, BaseModule):
            setattr(new_instance, attr, value.deepcopy())
        else:
            try:
                # Try to deep copy the attribute
                setattr(new_instance, attr, copy.deepcopy(value))
            except Exception:
                logging.warning(
                    f"Failed to deep copy attribute '{attr}' of {self.__class__.__name__}, "
                    "falling back to shallow copy or reference copy."
                )
                try:
                    # Fallback to shallow copy if deep copy fails
                    setattr(new_instance, attr, copy.copy(value))
                except Exception:
                    # If even the shallow copy fails, we just copy over the reference.
                    setattr(new_instance, attr, value)

    return new_instance
dump_state(json_mode=True)
Source code in .venv/lib/python3.14/site-packages/dspy/primitives/base_module.py
def dump_state(self, json_mode=True):
    return {name: param.dump_state(json_mode=json_mode) for name, param in self.named_parameters()}
get_lm()
Source code in .venv/lib/python3.14/site-packages/dspy/primitives/module.py
def get_lm(self):
    all_used_lms = [param.lm for _, param in self.named_predictors()]

    if len(set(all_used_lms)) == 1:
        return all_used_lms[0]

    raise ValueError("Multiple LMs are being used in the module. There's no unique LM to return.")
load(path, allow_pickle=False)

Load the saved module. You may also want to check out dspy.load, if you want to load an entire program, not just the state for an existing program.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Path to the saved state file, which should be a .json or a .pkl file | required |
| allow_pickle | bool | If True, allow loading .pkl files, which can run arbitrary code. This is dangerous and should only be used if you are sure about the source of the file and in a trusted environment. | False |
Source code in .venv/lib/python3.14/site-packages/dspy/primitives/base_module.py
def load(self, path, allow_pickle=False):
    """Load the saved module. You may also want to check out dspy.load, if you want to
    load an entire program, not just the state for an existing program.

    Args:
        path (str): Path to the saved state file, which should be a .json or a .pkl file
        allow_pickle (bool): If True, allow loading .pkl files, which can run arbitrary code.
            This is dangerous and should only be used if you are sure about the source of the file and in a trusted environment.
    """
    path = Path(path)

    if path.suffix == ".json":
        with open(path, "rb") as f:
            state = orjson.loads(f.read())
    elif path.suffix == ".pkl":
        if not allow_pickle:
            raise ValueError("Loading .pkl files can run arbitrary code, which may be dangerous. Prefer "
                             "saving with .json files if possible. Set `allow_pickle=True` "
                             "if you are sure about the source of the file and in a trusted environment.")
        with open(path, "rb") as f:
            state = cloudpickle.load(f)
    else:
        raise ValueError(f"`path` must end with `.json` or `.pkl`, but received: {path}")

    dependency_versions = get_dependency_versions()
    saved_dependency_versions = state["metadata"]["dependency_versions"]
    for key, saved_version in saved_dependency_versions.items():
        if dependency_versions[key] != saved_version:
            logger.warning(
                f"There is a mismatch of {key} version between saved model and current environment. "
                f"You saved with `{key}=={saved_version}`, but now you have "
                f"`{key}=={dependency_versions[key]}`. This might cause errors or performance downgrade "
                "on the loaded model, please consider loading the model in the same environment as the "
                "saving environment."
            )
    self.load_state(state)
load_state(state)
Source code in .venv/lib/python3.14/site-packages/dspy/primitives/base_module.py
def load_state(self, state):
    for name, param in self.named_parameters():
        param.load_state(state[name])
named_parameters()

Unlike PyTorch, handles (non-recursive) lists of parameters too.

Source code in .venv/lib/python3.14/site-packages/dspy/primitives/base_module.py
def named_parameters(self):
    """
    Unlike PyTorch, handles (non-recursive) lists of parameters too.
    """

    import dspy
    from dspy.predict.parameter import Parameter

    visited = set()
    named_parameters = []

    def add_parameter(param_name, param_value):
        if isinstance(param_value, Parameter):
            if id(param_value) not in visited:
                visited.add(id(param_value))
                named_parameters.append((param_name, param_value))

        elif isinstance(param_value, dspy.Module):
            # When a sub-module is pre-compiled, keep it frozen.
            if not getattr(param_value, "_compiled", False):
                for sub_name, param in param_value.named_parameters():
                    add_parameter(f"{param_name}.{sub_name}", param)

    if isinstance(self, Parameter):
        add_parameter("self", self)

    for name, value in self.__dict__.items():
        if isinstance(value, Parameter):
            add_parameter(name, value)

        elif isinstance(value, dspy.Module):
            # When a sub-module is pre-compiled, keep it frozen.
            if not getattr(value, "_compiled", False):
                for sub_name, param in value.named_parameters():
                    add_parameter(f"{name}.{sub_name}", param)

        elif isinstance(value, (list, tuple)):
            for idx, item in enumerate(value):
                add_parameter(f"{name}[{idx}]", item)

        elif isinstance(value, dict):
            for key, item in value.items():
                add_parameter(f"{name}['{key}']", item)

    return named_parameters
named_predictors()
Source code in .venv/lib/python3.14/site-packages/dspy/primitives/module.py
def named_predictors(self):
    from dspy.predict.predict import Predict

    return [(name, param) for name, param in self.named_parameters() if isinstance(param, Predict)]
parameters()
Source code in .venv/lib/python3.14/site-packages/dspy/primitives/base_module.py
def parameters(self):
    return [param for _, param in self.named_parameters()]
predictors()
Source code in .venv/lib/python3.14/site-packages/dspy/primitives/module.py
def predictors(self):
    return [param for _, param in self.named_predictors()]
reset_copy()

Deep copy the module and reset all parameters.

Source code in .venv/lib/python3.14/site-packages/dspy/primitives/base_module.py
def reset_copy(self):
    """Deep copy the module and reset all parameters."""
    new_instance = self.deepcopy()

    for param in new_instance.parameters():
        param.reset()

    return new_instance
save(path, save_program=False, modules_to_serialize=None)

Save the module.

Save the module to a directory or a file. There are two modes:

  • save_program=False: Save only the state of the module to a json or pickle file, based on the value of the file extension.
  • save_program=True: Save the whole module to a directory via cloudpickle, which contains both the state and architecture of the model.

If save_program=True and modules_to_serialize are provided, it will register those modules for serialization with cloudpickle's register_pickle_by_value. This causes cloudpickle to serialize the module by value rather than by reference, ensuring the module is fully preserved along with the saved program. This is useful when you have custom modules that need to be serialized alongside your program. If None, then no modules will be registered for serialization.

We also save the dependency versions, so that the loaded model can check if there is a version mismatch on critical dependencies or DSPy version.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Path to the saved state file, which should be a .json or .pkl file when save_program=False, and a directory when save_program=True. | required |
| save_program | bool | If True, save the whole module to a directory via cloudpickle, otherwise only save the state. | False |
| modules_to_serialize | list | A list of modules to serialize with cloudpickle's register_pickle_by_value. If None, then no modules will be registered for serialization. | None |
Source code in .venv/lib/python3.14/site-packages/dspy/primitives/base_module.py
def save(self, path, save_program=False, modules_to_serialize=None):
    """Save the module.

    Save the module to a directory or a file. There are two modes:
    - `save_program=False`: Save only the state of the module to a json or pickle file, based on the value of
        the file extension.
    - `save_program=True`: Save the whole module to a directory via cloudpickle, which contains both the state and
        architecture of the model.

    If `save_program=True` and `modules_to_serialize` are provided, it will register those modules for serialization
    with cloudpickle's `register_pickle_by_value`. This causes cloudpickle to serialize the module by value rather
    than by reference, ensuring the module is fully preserved along with the saved program. This is useful
    when you have custom modules that need to be serialized alongside your program. If None, then no modules
    will be registered for serialization.

    We also save the dependency versions, so that the loaded model can check if there is a version mismatch on
    critical dependencies or DSPy version.

    Args:
        path (str): Path to the saved state file, which should be a .json or .pkl file when `save_program=False`,
            and a directory when `save_program=True`.
        save_program (bool): If True, save the whole module to a directory via cloudpickle, otherwise only save
            the state.
        modules_to_serialize (list): A list of modules to serialize with cloudpickle's `register_pickle_by_value`.
            If None, then no modules will be registered for serialization.

    """
    metadata = {}
    metadata["dependency_versions"] = get_dependency_versions()
    path = Path(path)

    if save_program:
        if path.suffix:
            raise ValueError(
                f"`path` must point to a directory without a suffix when `save_program=True`, but received: {path}"
            )
        if path.exists() and not path.is_dir():
            raise NotADirectoryError(f"The path '{path}' exists but is not a directory.")

        if not path.exists():
            # Create the directory (and any parent directories)
            path.mkdir(parents=True)
        logger.warning("Loading untrusted .pkl files can run arbitrary code, which may be dangerous. To avoid "
                      'this, prefer saving using json format using module.save("module.json").')
        try:
            modules_to_serialize = modules_to_serialize or []
            for module in modules_to_serialize:
                cloudpickle.register_pickle_by_value(module)

            with open(path / "program.pkl", "wb") as f:
                cloudpickle.dump(self, f)
        except Exception as e:
            raise RuntimeError(
                f"Saving failed with error: {e}. Please remove the non-picklable attributes from your DSPy program, "
                "or consider using state-only saving by setting `save_program=False`."
            )
        with open(path / "metadata.json", "wb") as f:
            f.write(orjson.dumps(metadata, option=orjson.OPT_INDENT_2 | orjson.OPT_APPEND_NEWLINE))

        return

    if path.suffix == ".json":
        state = self.dump_state()
        state["metadata"] = metadata
        try:
            with open(path, "wb") as f:
                f.write(orjson.dumps(state, option=orjson.OPT_INDENT_2 | orjson.OPT_APPEND_NEWLINE))
        except Exception as e:
            raise RuntimeError(
                f"Failed to save state to {path} with error: {e}. Your DSPy program may contain non "
                "json-serializable objects, please consider saving the state in .pkl by using `path` ending "
                "with `.pkl`, or saving the whole program by setting `save_program=True`."
            )
    elif path.suffix == ".pkl":
        logger.warning("Loading untrusted .pkl files can run arbitrary code, which may be dangerous. To avoid "
                      'this, prefer saving using json format using module.save("module.json").')
        state = self.dump_state(json_mode=False)
        state["metadata"] = metadata
        with open(path, "wb") as f:
            cloudpickle.dump(state, f)
    else:
        raise ValueError(f"`path` must end with `.json` or `.pkl` when `save_program=False`, but received: {path}")
set_lm(lm)
Source code in .venv/lib/python3.14/site-packages/dspy/primitives/module.py
def set_lm(self, lm):
    for _, param in self.named_predictors():
        param.lm = lm
