Sergio Casero
07/10/2025, 4:24 PM
AIAgent through an API with ktor? The idea is to provide an API to the devs so the mobile team can integrate it in the apps.

Sergio Casero
07/10/2025, 4:36 PM
I believe I should call runAndGetResult, right?

Mark Tkachenko
07/10/2025, 4:41 PM
> In fact the idea is yes, share the agent between requests, as a singleton
Okay, this is not the intended approach, because each agent runs one session at a time and you cannot run an already running agent. At the same time, an agent is not something heavy, and it's okay to create an agent per request, or to pool agents if you want to save some memory on agent creation.

> I believe I should call runAndGetResult, right?
Yeah, that's the way to run one.
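A minimal Ktor sketch of the agent-per-request approach described above. The Koog-side names here (the AIAgent constructor, simpleOpenAIExecutor, OpenAIModels.Chat.GPT4o) follow the library's published examples and may differ between Koog versions, and the /agent route and plain-text request body are purely illustrative:

import io.ktor.server.application.*
import io.ktor.server.engine.embeddedServer
import io.ktor.server.netty.Netty
import io.ktor.server.request.receiveText
import io.ktor.server.response.respondText
import io.ktor.server.routing.*
// Plus the Koog imports for AIAgent, simpleOpenAIExecutor and OpenAIModels
// from whatever Koog version you are on.

fun main() {
    embeddedServer(Netty, port = 8080) {
        routing {
            post("/agent") {
                val userInput = call.receiveText()
                // An agent runs one session at a time, so don't share a singleton between
                // requests; creating one per request is cheap (or pool them if needed).
                val agent = AIAgent(
                    executor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY")),
                    systemPrompt = "You are a helpful assistant.",
                    llmModel = OpenAIModels.Chat.GPT4o,
                )
                val result: String? = agent.runAndGetResult(userInput)
                call.respondText(result ?: "")
            }
        }
    }.start(wait = true)
}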
Sergio Casero
07/10/2025, 4:52 PM
events = graph.stream({"messages": ("user", message)}, config, stream_mode="values")
LangGraph just "appends" the message with the role to the LLM.
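For comparison with the LangGraph line above, the same "append a role-tagged message and send the accumulated history" idea in plain Kotlin. Message, ChatSession and the injected callLlm lambda are placeholders for illustration, not Koog or LangGraph APIs:

// A conversation is just an ordered list of role-tagged messages.
data class Message(val role: String, val content: String)

class ChatSession(private val callLlm: suspend (List<Message>) -> String) {
    private val history = mutableListOf<Message>()

    // Append the user's message with its role, send the whole history,
    // then append the assistant's reply so the next turn sees it.
    suspend fun send(userMessage: String): String {
        history += Message("user", userMessage)
        val reply = callLlm(history)
        history += Message("assistant", reply)
        return reply
    }
}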
Vadim Briliantov
07/11/2025, 12:26 PM
install(Koog) {
    llm {
        openAI(apiKey = "sk-1234567890")
    }
    agent {
        mcp {
            sse("url")
        }
        prompt {
            system("You are professional joke generator based on user's request")
        }
        install(OpenTelemetry) {
            addSpanExporter(MySpanExporter)
        }
    }
}
And then you can just use call.agentRespond("Your input") from any route (the default agent strategy will be used, or you can pass a custom strategy as a second param to call.agentRespond). Or use direct LLM calls via askLLM("Some question") inside routes.
And it's a route-scoped plugin, so you can have different agents for different routes and keep, for example, the common llm { … } configuration at the top level.
Disclaimer: that's just a rough API proposal for now 🙂
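To make the proposal above concrete, here is one way routes might consume it, using only the names from the sketch (install(Koog), call.agentRespond, askLLM). The route paths, the request shape, and the assumption that agentRespond writes the HTTP response itself are all guesses, since none of this API exists yet:

import io.ktor.server.application.*
import io.ktor.server.request.receiveText
import io.ktor.server.response.respondText
import io.ktor.server.routing.*

fun Application.module() {
    routing {
        route("/jokes") {
            // Route-scoped agent; the shared llm { ... } block could stay in a
            // top-level install(Koog) { ... } as in the snippet above.
            install(Koog) {
                agent {
                    prompt {
                        system("You are professional joke generator based on user's request")
                    }
                }
            }
            post {
                // Default agent strategy; a custom one could go in as a second param.
                call.agentRespond(call.receiveText())
            }
        }
        post("/ask") {
            // Direct LLM call inside a route, without the full agent pipeline.
            val answer = askLLM("Some question")
            call.respondText("$answer")
        }
    }
}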