Felix
07/02/2025, 11:59 AM
Anastasiia Zarechneva
07/02/2025, 12:24 PM
You can use rewritePrompt inside the llm.writeSession in your strategy, like this:
```kotlin
val strategy = strategy<String, String>("test") {
    val subgraphFirst by subgraph<String, Unit>("first") {
        val definePromptOne by node<Unit, Unit> {
            llm.writeSession {
                rewritePrompt {
                    prompt("system instructions") {
                        system("First instruction")
                    }
                }
            }
        }
        val callLLM by nodeLLMRequest(allowToolCalls = true)
        val callTool by nodeExecuteTool()
        val sendToolResult by nodeLLMSendToolResult()
        edge(nodeStart forwardTo definePromptOne transformed {})
        edge(definePromptOne forwardTo callLLM transformed { agentInput<String>() })
        // <other edges>...
    }
    val subgraphSecond by subgraph("second") {
        val definePromptTwo by node<Unit, Unit> {
            llm.writeSession {
                rewritePrompt {
                    prompt("system instructions updated") {
                        system("Some new task")
                    }
                }
            }
        }
        val callLLM by nodeLLMRequest(allowToolCalls = true)
        val callTool by nodeExecuteTool()
        val sendToolResult by nodeLLMSendToolResult()
        edge(nodeStart forwardTo definePromptTwo)
        edge(definePromptTwo forwardTo callLLM transformed { agentInput<String>() })
        // <...other edges...>
    }
    nodeStart then subgraphFirst then subgraphSecond then nodeFinish
}
```
Felix
07/02/2025, 12:28 PM
Found rewritePrompt in https://docs.koog.ai/sessions/ 🙂 However, if I'm understanding it correctly, this means that I'm constantly mutating the prompt saved in the agent context (i.e. always swapping between two different system prompts). Would it make sense to have the ability to create a custom prompt to use in the LLM interaction, derived from the prompt saved in the context, and without storing it back on the context? Kind of a custom disposable prompt, just for use in a single LLM interaction.
Felix
07/02/2025, 12:31 PM
Does each subgraph have an isolated agent context?
Felix
07/02/2025, 12:32 PM
In the Reason-Act strategy, both the Reason and Act nodes need access to the previous message history.
Anastasiia Zarechneva
07/02/2025, 1:41 PM
> Does each subgraph have an isolated agent context?
Yes and no at the same time 😅 Normally, each subgraph does have its own isolated context, indeed. But the history is passed between subgraphs after execution, so the second subgraph is aware of the previous message history (or its TL;DR, if you use a nodeLLMCompressHistory between them 🙂).
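The hand-off described above can be modeled in a few lines. This is a plain-Kotlin sketch of the idea only; Msg, runSubgraph, and compressHistory are hypothetical stand-ins, not Koog's actual types or API.

```kotlin
// Hypothetical model of context hand-off between subgraphs.
data class Msg(val role: String, val content: String)

// Each subgraph works on its own copy of the history (isolated context)
// and appends the messages it produces.
fun runSubgraph(inherited: List<Msg>, produced: List<Msg>): List<Msg> =
    inherited + produced

// Optional compression step standing in for nodeLLMCompressHistory:
// collapses everything so far into a single TL;DR message.
fun compressHistory(history: List<Msg>): List<Msg> =
    listOf(Msg("assistant", "TL;DR of ${history.size} earlier messages"))

fun pipeline(compressBetween: Boolean): List<Msg> {
    val first = runSubgraph(
        emptyList(),
        listOf(Msg("user", "task"), Msg("assistant", "result")),
    )
    // What the second subgraph inherits: full history, or its TL;DR.
    val passed = if (compressBetween) compressHistory(first) else first
    return runSubgraph(passed, listOf(Msg("user", "next task")))
}
```

Either way, the second subgraph starts from what the first one left behind; compression only changes how much of it travels across the boundary.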
> In the Reason-Act strategy, both the Reason and Act nodes need access to the previous message history
My bad, I suggested the wrong method. In your case it's better to use updatePrompt: it will add a new system message to the message history, but won't clear the previous history. The rewritePrompt method, by contrast, completely rewrites the prompt (including the history).
Anastasiia Zarechneva
07/02/2025, 1:46 PM
> Would it make sense to have the ability to create a custom prompt to use in the LLM interaction, derived from the context-saved prompt, and without storing it back on the context? Kind of a custom disposable prompt just for use in a single LLM interaction.
Oh, like an "incognito" prompt? 🙂 Sounds interesting. Could you please elaborate on the use case for such a situation? Also, do you mean a prompt like the data class Prompt in Koog, or just a message (like a User/Assistant/System one)?
Felix
07/02/2025, 2:02 PM
A data class Prompt, i.e., a list of messages. The idea is that the interaction with the LLM could use a custom-built ephemeral Prompt without needing to change the prompt in the context. Something like this is already done, for instance, in PromptExecutor.executeStructured, where a new Prompt is created from the context's prompt without being stored back to the context:
```kotlin
suspend fun <T> executeStructured(
    prompt: Prompt,
    mainModel: LLModel,
    structure: StructuredData<T>,
    retries: Int = 1,
    fixingModel: LLModel = OpenAIModels.Chat.GPT4o
): Result<StructuredResponse<T>> {
    val prompt = prompt(prompt) {
        user {
            markdown {
                StructuredOutputPrompts.output(this, structure)
            }
        }
    }
    // ...
}
```
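The "disposable prompt" pattern being proposed here can be sketched with plain-Kotlin stand-ins. Msg, Prompt, Context, and withEphemeralPrompt below are all hypothetical names for illustration, not Koog's API: the point is only that the derived prompt is handed to the executor and never written back.

```kotlin
// Hypothetical stand-in types; Koog's real Prompt and sessions are richer.
data class Msg(val role: String, val content: String)
data class Prompt(val messages: List<Msg>)

class Context(var prompt: Prompt)

// Build a derived, ephemeral prompt and hand only that to the executor;
// the prompt stored in the context is never mutated.
fun <T> withEphemeralPrompt(
    ctx: Context,
    extend: (Prompt) -> Prompt,
    execute: (Prompt) -> T,
): T = execute(extend(ctx.prompt))
```

A caller could, for example, append a one-off user message for a single structured-output call while the context keeps only its original system message:

```kotlin
val ctx = Context(Prompt(listOf(Msg("system", "base"))))
val seen = withEphemeralPrompt(
    ctx,
    extend = { p -> Prompt(p.messages + Msg("user", "structured output, please")) },
    execute = { p -> p.messages.size },
)
// seen is 2, but ctx.prompt still has 1 message.
```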
AIAgentLLMSession already has a preparePrompt, but it is protected.
Antonii Belyshev
07/02/2025, 3:52 PM
```kotlin
private fun AIAgentLLMWriteSession.updateSystemPrompt(newSystemPrompt: Message) {
    rewritePrompt { prompt ->
        prompt.withMessages { messages ->
            listOf(newSystemPrompt) + messages.drop(1)
        }
    }
}

private suspend fun AIAgentLLMWriteSession.reasoningIteration(reasoningSystemPrompt: Message) {
    val initialSystemPrompt = prompt.messages[0]
    updateSystemPrompt(reasoningSystemPrompt)
    requestLLMWithoutTools()
    updateSystemPrompt(initialSystemPrompt)
}
```
Then you can call reasoningIteration inside the llm.writeSession, and the history will be shared between the reasoning and acting components.
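The swap-and-restore trick can be checked with a small self-contained simulation. SessionModel below is a hypothetical plain-Kotlin stand-in for AIAgentLLMWriteSession (the real requestLLMWithoutTools call is replaced by a fake response), but it follows the same shape as the snippet above: replace the system message, make the call, append the response to the shared history, restore the original system message.

```kotlin
// Plain-Kotlin simulation of the swap-and-restore pattern above.
data class Msg(val role: String, val content: String)

class SessionModel(var messages: List<Msg>) {
    fun updateSystemPrompt(newSystemPrompt: Msg) {
        // Same shape as the rewritePrompt lambda: new system message + old tail.
        messages = listOf(newSystemPrompt) + messages.drop(1)
    }

    fun reasoningIteration(reasoningSystemPrompt: Msg): Msg {
        val initialSystemPrompt = messages[0]
        updateSystemPrompt(reasoningSystemPrompt)
        // Fake LLM call: the response records which system prompt was active.
        val response = Msg("assistant", "reasoned under: ${messages[0].content}")
        messages = messages + response          // the shared history keeps growing
        updateSystemPrompt(initialSystemPrompt) // acting prompt restored
        return response
    }
}
```

After one iteration the reasoning output sits in the same history the acting prompt sees, and the session is back under the original system message.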