Ofek Teken
07/16/2025, 9:52 PM
I want a node that returns a structured-output sealed class and that has its own system prompt.
At first I thought about using subgraphWithTask, because I can very easily write the task prompt there and supply my own finishTool to achieve this. I looked at ProvideStringSubgraphResult, but I didn't find a way to express sealed hierarchies when writing the ToolDescriptor. I might've missed it, though.
I'd be happy to hear whether this even sounds like the right direction for this use case. I've replied inside the thread with what I currently do to achieve it. Thanks!
Ofek Teken
07/16/2025, 9:53 PM
I could use updatePrompt here instead of rewritePrompt (it would supply the whole history), but this is just an example (a short comparison sketch follows after the snippet):
context(agent: AIAgentStrategyBuilder<String, String>)
fun responseGeneratorNode() = agent.node<SomeSubgraphResult, MySealedResult?> { input ->
    val userInput = storage.getValue(UserInputStorageKey)

    llm.writeSession {
        // Replace the whole prompt with a dedicated system message for this node
        rewritePrompt {
            prompt("response-gen") {
                system(
                    """
                    You are an expert giving insightful yet concise responses.
                    The user wrote: "$userInput"
                    Operation status: ${input.status}
                    Operation result info: ${input.result}
                    SOME MORE INSTRUCTIONS HERE..
                    """.trimIndent()
                )
            }
        }

        // Request a structured response and unwrap it (null if parsing still fails after retries)
        requestLLMStructured(
            structure = JsonStructuredData.createJsonStructure<MySealedResult>(),
            retries = 2,
            fixingModel = GoogleModels.Gemini2_0Flash,
        ).getOrNull()?.structure
    }
}
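The snippet above swaps the prompt with rewritePrompt; updatePrompt, mentioned in the previous message, appends to the existing prompt instead. A minimal comparison sketch, assuming both helpers behave as described in the thread (the message texts are placeholders):
llm.writeSession {
    // updatePrompt keeps the existing history and appends new messages on top of it.
    updatePrompt {
        user("Given the conversation so far, summarize the operation result for the user.")
    }
}

llm.writeSession {
    // rewritePrompt replaces the whole prompt, history included, with a fresh one.
    rewritePrompt {
        prompt("response-gen") {
            system("You are an expert giving insightful yet concise responses.")
        }
    }
}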
Anastasiia Zarechneva
07/17/2025, 12:12 PM
You can do this with a custom ProvideSubgraphResult implementation.
First, define your sealed class hierarchy with the serialization annotations. Let's say there is one common field and a couple of fields that differ:
@Serializable
sealed class MySealedResult {
    abstract val id: String

    @Serializable
    @SerialName("Success")
    data class Success(
        override val id: String,
        val data: String
    ) : MySealedResult()

    @Serializable
    @SerialName("Error")
    data class Error(
        override val id: String,
        val errorCode: Int,
        val errorMessage: String
    ) : MySealedResult()
}
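As a side note, with this hierarchy kotlinx.serialization picks the subtype via a class discriminator (a "type" field carrying the @SerialName value by default). A small standalone sketch of the resulting JSON, assuming the default Json configuration:
import kotlinx.serialization.json.Json

fun main() {
    val success: MySealedResult = MySealedResult.Success(id = "42", data = "done")
    // Prints: {"type":"Success","id":"42","data":"done"}
    println(Json.encodeToString(MySealedResult.serializer(), success))

    // Decoding picks the subtype based on the discriminator field.
    val decoded: MySealedResult = Json.decodeFromString(
        MySealedResult.serializer(),
        """{"type":"Error","id":"42","errorCode":404,"errorMessage":"not found"}"""
    )
    println(decoded) // Error(id=42, errorCode=404, errorMessage=not found)
}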
Then, add a custom implementation of ProvideSubgraphResult for your sealed class (it's also necessary to add a field that helps the LLM decide which subtype of the sealed class to instantiate):
object ProvideMySealedResult : ProvideSubgraphResult<MySealedResult>() {
    override val argsSerializer: KSerializer<MySealedResult> = MySealedResult.serializer()

    override val descriptor: ToolDescriptor = ToolDescriptor(
        name = "finish_task_execution",
        description = "Call this tool when you have completed the task and want to provide the final result.",
        requiredParameters = listOf(
            // Common field for all subtypes
            ToolParameterDescriptor(
                name = "id",
                description = "Request ID",
                type = ToolParameterType.String
            ),
            // A discriminator field to determine which subtype to use
            ToolParameterDescriptor(
                name = "resultType",
                description = "Type of result: 'Success' or 'Error'",
                type = ToolParameterType.String
            )
        ),
        optionalParameters = listOf(
            // Success subtype parameters
            ToolParameterDescriptor(
                name = "data",
                description = "Data for a successful operation (only provide if the operation was successful)",
                type = ToolParameterType.String
            ),
            // Error subtype parameters
            ToolParameterDescriptor(
                name = "errorCode",
                description = "Error code (only provide if the operation failed)",
                type = ToolParameterType.Integer
            ),
            ToolParameterDescriptor(
                name = "errorMessage",
                description = "Error message (only provide if the operation failed)",
                type = ToolParameterType.String
            )
        )
    )

    override suspend fun execute(args: MySealedResult): MySealedResult {
        return args
    }
}
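One caveat worth checking (this is an assumption about how the tool arguments get decoded, not something confirmed above): if the arguments are parsed through argsSerializer, kotlinx.serialization looks for its class discriminator (by default a "type" field) rather than the "resultType" parameter declared in the descriptor, so the two may need to be aligned. Purely as a kotlinx.serialization illustration:
import kotlinx.serialization.json.Json

// A Json instance whose class discriminator matches the descriptor's "resultType" field.
val toolArgsJson = Json { classDiscriminator = "resultType" }

fun main() {
    // Hypothetical arguments shaped like the descriptor above.
    val args = """{"resultType":"Error","id":"42","errorCode":500,"errorMessage":"boom"}"""
    val result: MySealedResult = toolArgsJson.decodeFromString(MySealedResult.serializer(), args)
    println(result) // Error(id=42, errorCode=500, errorMessage=boom)
}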
And then you can use your custom ProvideMySealedResult
with `subgraphWithTask`:
val toolRegistry = ToolRegistry {
    tool(ProvideMySealedResult)
}

val strategy = strategy<String, MySealedResult>("my-strategy") {
    val finishSubgraph by subgraphWithTask<String, MySealedResult>(
        tools = listOf(ProvideMySealedResult),
        finishTool = ProvideMySealedResult,
        llmModel = model
    ) { input ->
        "do something with $input..."
    }

    edge(nodeStart forwardTo finishSubgraph)
    edge(finishSubgraph forwardTo nodeFinish)
}

val agentConfig = AIAgentConfig(...)

val agent = AIAgent<String, MySealedResult>(
    promptExecutor = executor,
    strategy = strategy,
    agentConfig = agentConfig,
    toolRegistry = toolRegistry
)
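For completeness, a short usage sketch (assuming AIAgent exposes a suspending run(...) that returns the strategy's output; the input string is a placeholder):
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // Assumption: run(...) executes the strategy and returns its typed result.
    val result: MySealedResult = agent.run("Summarize the latest operation for request 42")

    when (result) {
        is MySealedResult.Success -> println("OK: ${result.data}")
        is MySealedResult.Error -> println("Failed (${result.errorCode}): ${result.errorMessage}")
    }
}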
@Vadim Briliantov could you please correct me if I accidentally mixed something up? Thanks!
Vadim Briliantov
07/17/2025, 1:50 PM
You don't strictly need subgraphWithTask here, because internally subgraphWithTask runs a whole loop strategy until the task is solved. Although you may use it as @Anastasiia Zarechneva suggested: this way you'll kinda rely on the native tool-calling capability to get a structured result (which might work better for some LLMs).
But if you want to use structured output, your approach is also very much valid. The only thing worth considering is restoring the original prompt after you get what you want from the structured output.
Vadim Briliantov
07/17/2025, 1:55 PM
llm.writeSession {
    // Remember the prompt as it was before this node rewrote it
    val originalPrompt = llm.prompt

    rewritePrompt {
        prompt("response-gen") {
            system(
                """
                You are an expert giving insightful yet concise responses.
                The user wrote: "$userInput"
                Operation status: ${input.status}
                Operation result info: ${input.result}
                SOME MORE INSTRUCTIONS HERE..
                """.trimIndent()
            )
        }
    }

    requestLLMStructured(
        structure = JsonStructuredData.createJsonStructure<MySealedResult>(),
        retries = 2,
        fixingModel = GoogleModels.Gemini2_0Flash,
    ).getOrNull()?.structure.also {
        // Restore the original prompt so later nodes see the history they expect
        rewritePrompt { originalPrompt }
    }
}
Ofek Teken
07/17/2025, 2:10 PM
Vadim Briliantov
07/17/2025, 2:22 PM
You can also check how retrieveFactsFromHistory is implemented here:
https://github.com/JetBrains/koog/blob/develop/agents/agents-features/agents-featu[…]/commonMain/kotlin/ai/koog/agents/memory/feature/AgentMemory.kt
Our ML engineers, while working on an agent for a product, found that:
1. The system message should be changed (and then rolled back) if you want to retrieve something specific from the history right now, otherwise the LLM might keep following the original one.
2. It's better to combine the history into a single message (markdown-style) rather than keeping it as request/response pairs: this reduces the risk of the LLM treating your pattern as a few-shot example and continuing it.
3. After you've retrieved what you want, it works better if:
a. You restore the original system message
b. Then, the original conversation
c. Then, some hand-crafted user question like "Can you give me some information in the following structure: ..."
d. And then, the LLM response
(so that it keeps the natural flow of the conversation and the LLM won't see out-of-thin-air information at the end of the history; a rough sketch of this ordering follows below)
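A rough sketch of how that reassembly could look inside a write session, using the same builder calls that appear earlier in the thread; historyAsMarkdown and the original* variables are placeholders, and the exact requestLLM return shape is an assumption:
llm.writeSession {
    // 1 + 2: temporarily swap in an extraction-focused system message, with the whole
    // conversation combined into a single markdown-style user message.
    rewritePrompt {
        prompt("fact-extraction") {
            system("Extract the requested facts from the conversation below.")
            user(historyAsMarkdown)  // placeholder: history rendered as markdown
        }
    }
    val extracted = requestLLM()

    // 3: rebuild the history in the recommended order: original system message,
    // original conversation, a hand-crafted user question, then the LLM's answer.
    rewritePrompt {
        prompt("restored") {
            system(originalSystemMessage)        // 3a. placeholder
            user(originalUserMessage)            // 3b. placeholder for the original turns
            assistant(originalAssistantReply)    //     (repeat for each original turn)
            user("Can you give me some information in the following structure: ...")  // 3c
            assistant(extracted.content)         // 3d
        }
    }
}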