# koog-agentic-framework
g
Hello everyone 👋 I wanted to practice using Koog to build a real-world application, and when I stumbled upon this article it seemed like an interesting use case for Koog. So I created this project: it's a multi-agent research tool built on Koog + OpenRouter, with exposure as MCP and REST. Please feel free to submit issues, create PRs, or provide guidance, as there is so much to learn from you all.
K 4
🚀 3
m
Thanks for sharing this @Gemy! I have wondered if parallel nodes or sub agents would be better. I still have to review your implementation. I am also curious about feedback from the community.
g
Great question @Michael Wills! I think both count as subagents, whether it's parallel nodes or a tool call; what matters is what's inside. I could create a new agent instance inside a node block or inside a tool call function, and the main agent won't know the difference either way. This new topic of agent system design is new territory, and I'm sure there will be books about it in a couple of years. It's really great to witness this (unless AGI annihilates us all 💥). So I think what makes a subagent a sub..agent is not clearly defined. In this repo, for example, a subagent is an agent without any previous context. And that's the beauty of it: if the lead agent deploys a subagent for the 51st time, agent no. 51 won't know what happened before it or what its peers (the other subagents) are working on. All it knows is that it's on an ephemeral mission; from and to the ether it goes, unaware of the grand scheme of things 😌 But please let me know what you think.
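To illustrate the "no previous context" idea, here's a minimal Kotlin sketch. This is not Koog's actual API; `SubAgent` and `LeadAgent` are made-up names for illustration. The point it shows is structural: each sub-agent is constructed fresh per mission with an empty history, so agent no. 51 can't see anything its peers did.

```kotlin
// Illustrative sketch only -- not Koog's actual API. It models the idea
// that each subagent starts with an empty conversation history.
class SubAgent(private val mission: String) {
    // Fresh, empty context: no shared history with the lead agent or peers.
    private val context = mutableListOf<String>()

    fun run(): String {
        context += "mission: $mission"
        // ... in a real agent, an LLM call would go here, seeing only
        // this local context ...
        return "result for '$mission' (context size = ${context.size})"
    }
}

class LeadAgent {
    fun research(tasks: List<String>): List<String> =
        // A brand-new SubAgent per task: ephemeral, no carried-over state.
        tasks.map { SubAgent(it).run() }
}
```

Whether that fresh instance lives inside a parallel node or inside a tool call function is an implementation detail; the isolation comes from the empty context, not from where the instance is created.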
👍 1
m
@Gemy I haven't had a chance to look at the code yet, but the README is 🚀. Two thoughts I had, and again this is without looking at your implementation: one is to add a confidence score to the intermediate results that the subagents return. I'm not sure of a budget-friendly way to do this at the moment. The other is more about what I've been thinking regarding intermediate results in general: don't really attempt to keep them in the context. Just stash them in a store, but with that score. Context compression is needed in some places, I'm sure. Just thinking out loud a bit here.
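A hypothetical Kotlin sketch of that "stash, don't keep in context" idea (all names here, `ScoredResult`, `ResultStore`, `promote`, are made up for illustration): intermediate results go into a store tagged with a confidence score, and only the highest-confidence ones are pulled back into the lead agent's context.

```kotlin
// Made-up sketch: intermediate results are stashed outside the LLM
// context, each tagged with a confidence score.
data class ScoredResult(
    val subAgentId: Int,
    val content: String,
    val confidence: Double,
)

class ResultStore {
    private val results = mutableListOf<ScoredResult>()

    fun stash(result: ScoredResult) {
        results += result
    }

    // Only results above a threshold get promoted back into the context,
    // keeping the lead agent's prompt small.
    fun promote(threshold: Double): List<ScoredResult> =
        results.filter { it.confidence >= threshold }
            .sortedByDescending { it.confidence }
}
```

The open question from the thread remains how to compute `confidence` cheaply; the store itself is the easy part.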
g
@Michael Wills Nice idea! But the issue is that the whole search idea depends on actual web fetching, so I can't imagine how a confidence score would apply to raw web page content.
m
🤔 I was thinking of ways to help rank that don't depend on an LLM for ranking, so a combination of BM25 + semantic scoring. Right now, per the docs, the lead agent does analysis, and Gemini Pro does a good chunk of it. One example, in addition to BM25 and embeddings, could be doing NER on both the query and each of the returned docs, and boosting docs with matching entities. Though that gets challenging when you're negating a search term... Definitely no silver bullet, but depending on the need it could be helpful. So the idea is to see if there is some sort of scoring mechanism (modular, pluggable) that makes sense for the use case.
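A rough, self-contained Kotlin sketch of that LLM-free ranking idea: BM25 lexical scoring plus a boost for documents sharing named entities with the query. The NER step is stubbed out (capitalized tokens stand in for entities); a real pipeline would plug in an actual NER model and embedding similarity as additional pluggable scorers. All class and parameter names here are invented for illustration.

```kotlin
import kotlin.math.ln

// Sketch of a modular, LLM-free ranker: BM25 + a naive entity-match boost.
class Ranker(
    private val docs: List<String>,
    private val k1: Double = 1.5,
    private val b: Double = 0.75,
) {
    private val tokenized =
        docs.map { it.lowercase().split(Regex("\\W+")).filter(String::isNotBlank) }
    private val avgLen = tokenized.map { it.size }.average()
    // Document frequency: in how many docs each term appears.
    private val df = tokenized.flatMap { it.toSet() }.groupingBy { it }.eachCount()

    private fun idf(term: String): Double {
        val n = df[term] ?: 0
        return ln(1.0 + (docs.size - n + 0.5) / (n + 0.5))
    }

    fun bm25(query: String, docIdx: Int): Double {
        val qTerms = query.lowercase().split(Regex("\\W+")).filter(String::isNotBlank)
        val doc = tokenized[docIdx]
        return qTerms.sumOf { t ->
            val tf = doc.count { it == t }.toDouble()
            idf(t) * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc.size / avgLen))
        }
    }

    // Stand-in "NER": capitalized tokens. A real system would use a model.
    private fun entities(text: String): Set<String> =
        text.split(Regex("\\W+"))
            .filter { it.firstOrNull()?.isUpperCase() == true }
            .toSet()

    // Final score: BM25 plus a boost per entity shared with the query.
    fun score(query: String, docIdx: Int, entityBoost: Double = 0.5): Double {
        val shared = entities(query).intersect(entities(docs[docIdx])).size
        return bm25(query, docIdx) + entityBoost * shared
    }
}
```

The `score` function is where pluggability would live: each signal (BM25, embeddings, entity overlap) could be its own scorer combined with weights, which also gives a natural place to handle negated terms later.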
g
The article mentions memory usage; perhaps those two ideas could be combined (memory + results with ranking).