# koog-agentic-framework
f
Hi. I'm trying to create a simple ReAct (Reason-Act) agent, where an LLM is used in two different nodes. However I want to use a different "system prompt" on each of these two steps. Is there any example available on customising the "system prompt" on each node? I could only find examples that use a single system prompt per agent. Thanks.
a
It's definitely possible! Would you please try `rewritePrompt` inside the `llm.writeSession` in your strategy? Like this:
```kotlin
val strategy = strategy<String, String>("test") {
    val subgraphFirst by subgraph<String, Unit>("first") {
        val definePromptOne by node<Unit, Unit> {
            llm.writeSession {
                rewritePrompt {
                    prompt("system instructions") {
                        system("First instruction")
                    }
                }
            }
        }

        val callLLM by nodeLLMRequest(allowToolCalls = true)
        val callTool by nodeExecuteTool()
        val sendToolResult by nodeLLMSendToolResult()

        edge(nodeStart forwardTo definePromptOne transformed {})
        edge(definePromptOne forwardTo callLLM transformed { agentInput<String>() })
        // <other edges>...
    }

    val subgraphSecond by subgraph("second") {
        val definePromptTwo by node<Unit, Unit> {
            llm.writeSession {
                rewritePrompt {
                    prompt("system instructions updated") {
                        system("Some new task")
                    }
                }
            }
        }

        val callLLM by nodeLLMRequest(allowToolCalls = true)
        val callTool by nodeExecuteTool()
        val sendToolResult by nodeLLMSendToolResult()

        edge(nodeStart forwardTo definePromptTwo)
        edge(definePromptTwo forwardTo callLLM transformed { agentInput<String>() })
        // <...other edges...>
    }

    nodeStart then subgraphFirst then subgraphSecond then nodeFinish
}
```
f
Thanks! I was just reading about `updatePrompt` and `rewritePrompt` in https://docs.koog.ai/sessions/ 🙂 However, if I'm understanding it correctly, this means that I'm constantly mutating the prompt saved in the agent context (i.e. always swapping between two different system prompts). Would it make sense to have the ability to create a custom prompt to use in the LLM interaction, derived from the prompt saved in the context, without storing it back on the context? Kind of a custom disposable prompt just for use in a single LLM interaction.
I noticed you used subgraphs. Does each subgraph have an isolated agent context?
In the Reason-Act strategy, both the Reason and Act nodes need access to the previous message history, but the prompt (i.e. the message history) should start with different system messages.
a
> Does each subgraph have an isolated agent context?
Yes and no at the same time 😅 Normally, each subgraph does have its own isolated context. But the history is passed between them after execution, so the second subgraph is aware of the previous message history (or its TL;DR, if you use a `nodeLLMCompressHistory` between them 🙂).
> In the Reason-Act strategy, both the Reason and Act nodes need access to the previous message history
My bad – I suggested the wrong method. In your case, it's better to use `updatePrompt`: it will add a new system message to the message history, but won't clear the previous history. The `rewritePrompt` method completely rewrites the prompt (including the history).
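To make the difference concrete, here is a small framework-free sketch modeling the two operations on a plain message list. The types and function names are illustrative stand-ins for the Koog concepts, not Koog's actual API:

```kotlin
// Minimal model of a chat message; Koog's real Message type is richer.
data class ChatMessage(val role: String, val content: String)

// updatePrompt-style: appends to the existing history, keeping prior messages.
fun updateHistory(history: List<ChatMessage>, extra: List<ChatMessage>): List<ChatMessage> =
    history + extra

// rewritePrompt-style: discards the stored history and replaces it wholesale.
fun rewriteHistory(history: List<ChatMessage>, replacement: List<ChatMessage>): List<ChatMessage> =
    replacement
```

With `updateHistory`, the original system message and all prior turns survive; with `rewriteHistory`, only the replacement remains, which is why it would erase the shared history a ReAct loop depends on.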
> Would it make sense to have the ability to create a custom prompt to use in the LLM interaction, derived from the context saved prompt, and without storing it back on the context? Kind of a custom disposable prompt just for use in a single LLM interaction.
Oh, like an "incognito" prompt? 🙂 Sounds interesting. Could you please elaborate on the use case? Also, do you mean a prompt as in Koog's `data class Prompt`, or just a single message (like a User/Assistant/System one)?
f
I mean prompt as in `data class Prompt`, i.e., a list of messages. The idea is that the interaction with the LLM could use a custom-built, ephemeral `Prompt` without needing to change the prompt in the context. Something like this is already done in `PromptExecutor.executeStructured`, where a new `Prompt` is created from the context's prompt without being stored back to the context:
```kotlin
executeStructured(
    prompt: Prompt,
    mainModel: LLModel,
    structure: StructuredData<T>,
    retries: Int = 1,
    fixingModel: LLModel = OpenAIModels.Chat.GPT4o
): Result<StructuredResponse<T>> {
    val prompt = prompt(prompt) {
        user {
            markdown {
                StructuredOutputPrompts.output(this, structure)
            }
        }
    }
    // ...
}
```
`AIAgentLLMSession` already has a `preparePrompt`, but it is protected.
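The copy-on-derive pattern itself is easy to sketch outside the framework. Here is a minimal model of the "disposable prompt" idea: a one-off prompt is derived from the stored one for a single call, leaving the stored prompt untouched. `Msg`, `SimplePrompt`, and `withExtraUser` are hypothetical names for illustration, not Koog types:

```kotlin
// Toy stand-ins for Koog's Message and Prompt types.
data class Msg(val role: String, val content: String)
data class SimplePrompt(val messages: List<Msg>)

// Build an ephemeral copy with an extra trailing user message (the pattern
// executeStructured uses), without mutating the original prompt.
fun SimplePrompt.withExtraUser(content: String): SimplePrompt =
    SimplePrompt(messages + Msg("user", content))
```

Because `SimplePrompt` is an immutable data class, the derived prompt can be handed to a single LLM call and then discarded, while the context keeps the original.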
a
btw, we already have a ReAct strategy in Koog, although it does not substitute a different system message – it ensures reasoning using a simpler approach. If you want to use a different system message, I would suggest doing it with `rewritePrompt` (assuming that the system message is the first message in the history):
```kotlin
private fun AIAgentLLMWriteSession.updateSystemPrompt(newSystemPrompt: Message) {
    rewritePrompt { prompt ->
        prompt.withMessages { messages ->
            // Replace the leading system message, keep the rest of the history
            listOf(newSystemPrompt) + messages.drop(1)
        }
    }
}

private suspend fun AIAgentLLMWriteSession.reasoningIteration(reasoningSystemPrompt: Message) {
    val initialSystemPrompt = prompt.messages[0]
    updateSystemPrompt(reasoningSystemPrompt)  // swap in the reasoning instructions
    requestLLMWithoutTools()                   // reasoning step, no tool calls
    updateSystemPrompt(initialSystemPrompt)    // restore the original system message
}
```
Then you can call `reasoningIteration` inside the `llm.writeSession`, and the history will be shared between the reasoning and acting components.
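For clarity, the swap–call–restore round trip above can be modeled without the framework. This sketch uses plain lists and a fake LLM call (all names are illustrative, not Koog's API); the point is that the reasoning response stays in the shared history while the system message returns to its original value:

```kotlin
data class TurnMessage(val role: String, val content: String)

// Replace the leading system message, keeping the rest of the history.
fun replaceSystem(history: List<TurnMessage>, system: TurnMessage): List<TurnMessage> =
    listOf(system) + history.drop(1)

// Model of reasoningIteration: swap the system message, "call" the LLM,
// append its response to the history, then restore the original system message.
fun reasoningIterationModel(
    history: List<TurnMessage>,
    reasoningSystem: TurnMessage,
    callLlm: (List<TurnMessage>) -> TurnMessage,
): List<TurnMessage> {
    val original = history.first()
    var h = replaceSystem(history, reasoningSystem)
    h = h + callLlm(h)
    return replaceSystem(h, original)
}
```

After one iteration the history has grown by the reasoning response, yet the next (acting) step sees the original system message at position 0, which is exactly what the ReAct loop needs.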