Didier Villevalois
06/07/2025, 11:08 PM
Using `onMultipleToolCalls { true }` and `transformed { it.first() } onAssistantMessage { true }` (copied from the only example using `onMultipleToolCalls`) is not very solid without adding a lot of prompt to force the LLM to either only make calls or only chat.
What would be the best approach with the current graph/edge system to send calls on one edge and chat messages on another edge?
[Code in the thread]

Didier Villevalois
06/07/2025, 11:09 PM
```kotlin
strategy("qa") {
    val qa by qaWithTools { calls -> vetToolCalls(calls, tools) }
    nodeStart then qa then nodeFinish
}

private fun AIAgentSubgraphBuilderBase<*, *>.qaWithTools(
    vetToolCalls: suspend (List<Message.Tool.Call>) -> List<Boolean>,
) = subgraph {
    val initialRequest by nodeLLMRequestMultiple()
    val processResponses by nodeDoNothing<List<Message.Response>>()
    val vetToolCalls by nodeVetToolCalls(vetToolCalls = vetToolCalls)
    val executeTools by nodeExecuteVettedToolCalls(parallelTools = true)
    val toolResultsRequest by nodeLLMSendMultipleToolResults()

    edge(nodeStart forwardTo initialRequest)
    edge(initialRequest forwardTo processResponses)
    edge(processResponses forwardTo vetToolCalls onMultipleToolCalls { true })
    edge(processResponses forwardTo nodeFinish transformed { it.first() } onAssistantMessage { true })
    edge(vetToolCalls forwardTo executeTools)
    edge(executeTools forwardTo toolResultsRequest)
    edge(toolResultsRequest forwardTo processResponses)
}
```
`nodeVetToolCalls` and `nodeExecuteVettedToolCalls` are custom nodes. `nodeVetToolCalls` does user interaction to vet calls to certain tools, and `nodeExecuteVettedToolCalls` only executes the accepted tool calls and rejects the others.
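For intuition, the accept/reject contract of the vetting step can be sketched in plain Kotlin. The types and the `vetAndPartition` helper below are hypothetical stand-ins for illustration, not the Koog API (the real vet callback is `suspend` and works on `Message.Tool.Call`):

```kotlin
// Hypothetical stand-in for Message.Tool.Call, for illustration only.
data class VettedCall(val tool: String)

// The vet function returns one verdict per call: true = accepted, false = rejected.
// (The real callback in the thread is suspend; omitted here to keep the sketch runnable.)
fun vetAndPartition(
    calls: List<VettedCall>,
    vet: (List<VettedCall>) -> List<Boolean>
): Pair<List<VettedCall>, List<VettedCall>> {
    val verdicts = vet(calls)
    require(verdicts.size == calls.size) { "expected one verdict per call" }
    // Pair each call with its verdict, then split into accepted and rejected.
    val (accepted, rejected) = calls.zip(verdicts).partition { (_, ok) -> ok }
    return accepted.map { it.first } to rejected.map { it.first }
}
```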
I am using llama4:16x17b (aka Scout), and it does want to explain its reasoning before calling tools. I don't want to deprive it of that, because I think that reasoning is important content that has to be added to the context.

Didier Villevalois
06/07/2025, 11:21 PM
With the `nodeVetToolCalls` and `nodeExecuteVettedToolCalls` nodes, you have to manually partition messages and create intermediate data containers. It gets really cumbersome.
Or am I missing something obvious? Any help is greatly appreciated.

Didier Villevalois
06/08/2025, 12:22 AM
`onAnyToolCalls`:
```kotlin
infix fun <IncomingOutput, IntermediateOutput, OutgoingInput>
        AIAgentEdgeBuilderIntermediate<IncomingOutput, IntermediateOutput, OutgoingInput>.onAnyToolCalls(
    block: suspend (List<Message.Tool.Call>) -> Boolean
): AIAgentEdgeBuilderIntermediate<IncomingOutput, List<Message.Tool.Call>, OutgoingInput> {
    return onIsInstance(List::class)
        .transformed { it.filterIsInstance<Message.Tool.Call>() }
        .onCondition { toolCalls -> toolCalls.isNotEmpty() && block(toolCalls) }
}
```
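The difference from `onMultipleToolCalls` can be shown with plain Kotlin stand-ins (hypothetical types, not the Koog `Message` hierarchy): `allToolCalls` mirrors the "every message is a tool call" check, while `anyToolCalls` mirrors the helper above.

```kotlin
// Hypothetical stand-ins for the response types, for illustration only.
sealed interface Resp
data class AssistantText(val text: String) : Resp
data class ToolCallMsg(val tool: String) : Resp

// onMultipleToolCalls-style condition: every message is a tool call.
fun allToolCalls(messages: List<Resp>): Boolean =
    messages.isNotEmpty() && messages.all { it is ToolCallMsg }

// onAnyToolCalls-style condition: at least one tool call is present.
fun anyToolCalls(messages: List<Resp>): Boolean =
    messages.any { it is ToolCallMsg }
```

With Scout's mixed output (reasoning text followed by a call), `allToolCalls` is false while `anyToolCalls` is true, so the tool-execution edge still fires.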
Maybe that should replace/complement `onMultipleToolCalls` (which checks that all messages are tool calls).

Vadim Briliantov
06/10/2025, 10:31 AM
Since your model doesn't support the `toolChoice` parameter (which can tell the LLM to ONLY call tools; for example, see `ai.koog.prompt.params.LLMParams.ToolChoice`) the way bigger models do (OpenAI, Anthropic; in Gemini it's called `functionCallingConfig`), you need to work around this.
There are options:
a) Adding a `giveFeedbackToCallTools` node by hand, something like:
```kotlin
val giveFeedbackToCallTools by node<String, Message.Response> { input ->
    llm.writeSession {
        updatePrompt {
            user("Don't chat with plain text! Call one of the available tools, instead: ${tools.joinToString(", ") { it.name }}")
        }
        requestLLM()
    }
}
```
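Outside the graph DSL, the retry logic of option (a) amounts to a small loop: if the response contains no tool calls, append corrective feedback and ask again. This is a minimal sketch with hypothetical stand-in types and a pluggable `request` function, not the Koog API:

```kotlin
// Hypothetical stand-ins, for illustration only.
sealed interface LlmMsg
data class PlainText(val content: String) : LlmMsg
data class CallRequest(val tool: String) : LlmMsg

// Ask the model until it emits at least one tool call, feeding back a
// corrective user message after each text-only answer.
fun requestUntilToolCall(
    toolNames: List<String>,
    maxRetries: Int = 3,
    request: (feedback: String?) -> List<LlmMsg>
): List<CallRequest> {
    var feedback: String? = null
    repeat(maxRetries) {
        val calls = request(feedback).filterIsInstance<CallRequest>()
        if (calls.isNotEmpty()) return calls
        feedback = "Don't chat with plain text! Call one of the available tools instead: " +
            toolNames.joinToString(", ")
    }
    return emptyList() // gave up; the caller decides how to handle this
}
```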
b) You can write an `onCondition` that checks whether tool calls are present in the response, and make your `vetToolCalls` node receive `List<Message.Response>` and do the filtering itself.
c) Your approach with `onAnyToolCalls` or similar.
Probably we should add this to our API and document it. Please feel free to make a PR. I would suggest a name like `onToolCallsPresent`. Maybe something like this:
```kotlin
infix fun <IncomingOutput, IntermediateOutput, OutgoingInput>
        AIAgentEdgeBuilderIntermediate<IncomingOutput, IntermediateOutput, OutgoingInput>.onToolCallsPresent(
    block: suspend (List<Message.Tool.Call>) -> Boolean
): AIAgentEdgeBuilderIntermediate<IncomingOutput, List<Message.Tool.Call>, OutgoingInput> =
    onToolCallsPresent(filterOutAssistantMessages = true) {
        block(it as List<Message.Tool.Call>)
    }.transformed { it as List<Message.Tool.Call> }

fun <IncomingOutput, IntermediateOutput, OutgoingInput>
        AIAgentEdgeBuilderIntermediate<IncomingOutput, IntermediateOutput, OutgoingInput>.onToolCallsPresent(
    filterOutAssistantMessages: Boolean,
    block: suspend (List<Message.Response>) -> Boolean
): AIAgentEdgeBuilderIntermediate<IncomingOutput, List<Message.Response>, OutgoingInput> {
    return onIsInstance(List::class)
        .transformed { it.filterIsInstance<Message.Response>() }
        .transformed {
            if (filterOutAssistantMessages) it.filterIsInstance<Message.Tool.Call>() else it
        }
        // Require at least one actual tool call: with filterOutAssistantMessages = false,
        // a plain isNotEmpty() would also fire on assistant-only responses.
        .onCondition { messages -> messages.any { it is Message.Tool.Call } && block(messages) }
}
```
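One subtlety worth testing, sketched here with hypothetical stand-in types (not the Koog `Message` hierarchy): when assistant messages are kept, the edge condition should still require an actual tool call in the list, since checking only for non-emptiness would also fire on assistant-only responses.

```kotlin
// Hypothetical stand-ins, for illustration only.
sealed interface RespMsg
data class AssistantMsg(val text: String) : RespMsg
data class ToolCallM(val tool: String) : RespMsg

// Returns whether the edge should fire, plus the list the next node would receive.
fun toolCallsPresent(
    responses: List<RespMsg>,
    filterOutAssistantMessages: Boolean
): Pair<Boolean, List<RespMsg>> {
    val kept = if (filterOutAssistantMessages) responses.filterIsInstance<ToolCallM>() else responses
    // Fire only if at least one genuine tool call survived the (optional) filtering.
    return kept.any { it is ToolCallM } to kept
}
```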
Vadim Briliantov
06/10/2025, 10:32 AM
And probably we also need to add `onSingleAssistantMessage`, so that we can change
```kotlin
edge(processResponses forwardTo nodeFinish transformed { it.first() } onAssistantMessage { true })
```
to
```kotlin
edge(processResponses forwardTo nodeFinish onSingleAssistantMessage { true })
```
WDYT?

Didier Villevalois
06/10/2025, 7:19 PM
> I would suggest a name like `onToolCallsPresent`. Maybe something like this:
I like it. I will submit a tentative PR for it.

Didier Villevalois
06/10/2025, 7:25 PM
> And probably we also need to add `onSingleAssistantMessage`, so that we can change…
I really wonder what we would gain by muting the other assistant messages (and which one to choose: the first, the last, etc.) and only keeping one (even though all of them are added to the prompt by `nodeLLMRequest*`). Also, this seems very related to what you guys will decide about the thinking messages, so I guess I should wait for what you design there first. For now, I just dump all the messages and it is fine for my use case.