caelum19
08/01/2025, 1:35 PM
I want to run the agent in a continuous loop (while (true) { agent.step() }). I want the chat history to persist between steps (windowing or self-redacting is fine). What's the best way to do this? For context, I want the agent to be more autonomous and run continuously, using tools to wait for periods of inactivity.

Sergei Dubov
08/01/2025, 2:35 PM
What do you mean by step() in your example? Koog agents do not have the ability to execute a particular step. You can run the whole agent strategy, and depending on the strategy, you can persist the context between runs or not. If you need to repeat some step until it meets some condition, you can take a look at subgraphWithRetry, which will run as part of the agent itself.

caelum19
08/01/2025, 3:28 PM

caelum19
08/01/2025, 3:30 PM
(<https://docs.camel-ai.org/reference/camel.agents.chat_agent#step>)

caelum19
08/01/2025, 3:34 PM

Aria
08/01/2025, 4:12 PM

Aria
08/01/2025, 4:37 PM
Regarding step() execution: there's a PR in the works for a non-graph Koog strategy API that makes it easier to write agents imperatively for simpler cases that don't need full graph strategies. Example from the PR:
val agent = AIAgent(
    executor = executor,
    llmModel = OpenAIModels.Chat.GPT4o,
    strategy = simpleStrategy("calculator") { input ->
        var responses = requestLLMMultiple(input)
        while (responses.containsToolCalls()) {
            val tools = extractToolCalls(responses)
            if (latestTokenUsage(tools) > 100500) {
                compressHistory()
            }
            val results = executeMultipleTools(tools)
            responses = sendMultipleToolResults(results)
        }
        responses.single().asAssistantMessage().content
    },
    systemPrompt = "You are a calculator.",
    toolRegistry = toolRegistry
)
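The compressHistory() call above is where the windowing from the original question would land: trim or compress the history once it grows past a threshold, then carry on. As a self-contained sketch of one such policy — the Message type and windowedHistory helper are hypothetical illustrations, not Koog's API:

```kotlin
// Hypothetical message type; Koog has its own prompt/history classes.
data class Message(val role: String, val content: String)

// One possible windowing policy: keep the system prompt plus the most
// recent messages, dropping the oldest turns once the history grows.
fun windowedHistory(history: List<Message>, maxMessages: Int): List<Message> {
    if (history.size <= maxMessages) return history
    return history.take(1) + history.takeLast(maxMessages - 1)
}

fun main() {
    var history: List<Message> = listOf(Message("system", "You are a calculator."))
    // Each iteration stands in for one turn of a long-running agent loop:
    repeat(10) { i ->
        history = history + Message("user", "request $i") + Message("assistant", "reply $i")
        history = windowedHistory(history, maxMessages = 7)
    }
    println(history.size)             // stays bounded at 7
    println(history.first().content)  // system prompt survives the window
}
```

The same shape works for self-redacting variants: replace the takeLast with a summarization step over the dropped prefix.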
caelum19
08/01/2025, 4:38 PM

Aria
08/01/2025, 4:48 PM

caelum19
08/01/2025, 4:53 PM
I'm a bit confused by AIAgentNodeDelegate<String, String> right now, though I probably just don't get this graph paradigm well enough yet, and I should let it be until I understand better.
If anyone is free to do some pair programming on that PR where I can ask many questions, I'd love that 🙂

Aria
08/01/2025, 5:11 PM
<String, String> is just the node input/output types used in the examples/tests (representing text prompt -> text LLM output), but the inputs/outputs in Koog's API are generic and can be anything in real use cases (e.g. <UserQueryDataClass, SearchResultsDataClass>).
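To make that genericity concrete, here is a tiny self-contained sketch of typed node inputs/outputs — the Node interface and the data classes below are made up for illustration, not Koog types:

```kotlin
// Hypothetical stand-in: a node is just a typed transformation.
fun interface Node<I, O> {
    fun run(input: I): O
}

data class UserQuery(val text: String)
data class SearchResults(val hits: List<String>)

fun main() {
    // Analogous to the <String, String> nodes in the examples/tests:
    val echo = Node<String, String> { "echo: $it" }
    // ...but nothing forces String; a <UserQuery, SearchResults> node:
    val search = Node<UserQuery, SearchResults> { q ->
        SearchResults(listOf("result for '${q.text}'"))
    }
    println(echo.run("hi"))                      // echo: hi
    println(search.run(UserQuery("koog")).hits)  // [result for 'koog']
}
```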