# koog-agentic-framework
o
Hey guys, first of all I want to say thanks for this great framework. It shows great potential and absolutely leverages all of Kotlin's best features:
• A great DSL
• First-class support for coroutines (obviously!)
• And much more Kotlin goodness that I looked for and didn't find in comparable projects such as LangChain4j/LangGraph4j.

After my initial exploration of Koog, I noticed that Structured Outputs are "injected" into the prompt rather than provided as a `json_schema` response format, as OpenAI and other providers document. Libraries like LangChain4j let you pass the `response_format` to these providers directly. As far as I can see, this isn't the current approach in Koog, which seems more error-prone to me (hence the need for the `fixingModel` technique in `PromptExecutor.executeStructured`). Is there any specific reason for not sending the `response_format` to models that support it, e.g. via Koog's `OpenAIRequest` (OpenAI response_format reference)? I can only assume it would remove the need for `fixingModel` and similar techniques, allowing a reliable, type-safe response from models that do support it.

One more question, out of pure interest and because I didn't catch it in the awesome KotlinConf talk: was this project open-sourced after being used internally at JetBrains? If not, are there any future plans to use Koog in JetBrains products?
b
Oh, that's really bad! Thanks for your finding. It's absolutely a must-have feature.

Additional info for Koog's team:
https://platform.openai.com/docs/guides/structured-outputs?api-mode=chat#introduction
https://openai.com/index/introducing-structured-outputs-in-the-api/

OpenAI uses constrained decoding. That means they constrain the LLM's vocabulary for each token during decoding to get a valid output. It also works faster. As far as I know, Anthropic doesn't support this feature, but OpenAI and Google do. And the fixing model looks like an anti-pattern.
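To unpack "constrained decoding" for anyone following along: at every decoding step the sampler masks out any token that could not continue a valid output, so the model cannot emit a malformed result in the first place. A toy Kotlin sketch of the idea, using a hypothetical digits-only grammar as a stand-in for a JSON schema (nothing OpenAI-specific here):

```kotlin
// Toy illustration of constrained decoding: the "model" proposes scored
// tokens, but the sampler only accepts tokens the grammar allows at this
// step. A trivial digits-only grammar stands in for a real JSON schema.

fun interface Grammar {
    fun allows(prefix: String, token: String): Boolean
}

val digitsOnly = Grammar { _, token -> token.isNotEmpty() && token.all { it.isDigit() } }

// Pick the highest-scoring token the grammar permits, or null if none fits.
fun constrainedStep(
    prefix: String,
    scoredVocab: List<Pair<String, Double>>,
    grammar: Grammar,
): String? =
    scoredVocab
        .sortedByDescending { it.second }
        .firstOrNull { (token, _) -> grammar.allows(prefix, token) }
        ?.first

fun main() {
    // The unconstrained argmax would be "ab", but the grammar masks it out,
    // so the best *valid* token "7" is emitted instead.
    val vocab = listOf("ab" to 0.9, "7" to 0.6, "!" to 0.5, "3" to 0.2)
    println(constrainedStep("", vocab, digitsOnly))
}
```

This also explains the speed claim in the links above: invalid continuations are never sampled, so there is no generate-then-retry loop.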
o
Hey @Boris Zubarev, thanks for the additional info, always happy to read more about this kind of stuff! Hopefully Koog's team sees this and agrees with us that this is indeed an absolute must. Here's the GitHub issue regarding this.
a
Hi, thank you for creating the issue, we definitely have to support it properly, we’ll take a look at it.
> I can only assume it would remove the need for `fixingModel` and similar techniques, allowing a reliable, type-safe response for models that indeed support it.

AFAIK (based on the experience of some of our colleagues) there's still a chance of getting a malformed response, even in "strict mode", so it would be nice to keep `fixingModel` to offer more flexibility and reliability. And there's already `PromptExecutor.executeStructuredOneShot`, which offers a simplified approach without `fixingModel`, essentially assuming that the structured response will be valid after the first try.
> One more question, out of pure interest and because I didn't catch that in the awesome KotlinConf talk - Was this project open-sourced after being used internally at JetBrains?
Yes, it was developed initially as an internal SDK to help us integrate AI features (AI agents especially) into our products.
o
Hey, appreciate the response!

> AFAIK (based on the experience of some of our colleagues) there's still a chance to get malformed response, even in the "strict mode"

That's super interesting, can't say I've encountered such problems myself (using mainly OpenAI but also Gemini), but yeah, I can totally agree that the `fixingModel` approach has its place and can keep being used in `PromptExecutor.executeStructured`, while those who "trust" these models can simply use `PromptExecutor.executeStructuredOneShot`, as you've suggested. Definitely waiting on this one 👀
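For readers skimming the thread, the distinction between the two executors boils down to a retry loop. The sketch below is NOT Koog's actual implementation, just a hypothetical illustration of the pattern being discussed: the one-shot variant parses once and fails fast, while the fixing variant feeds parse failures back through a repair step (in Koog, a second LLM call to the `fixingModel`):

```kotlin
// Hypothetical sketch of the "fixingModel" retry pattern discussed above;
// the real Koog API differs. parse returns null on malformed output, and
// fix stands in for a call to a (possibly cheaper) repairing model.
fun <T> executeOneShot(raw: String, parse: (String) -> T?): T =
    parse(raw) ?: error("Malformed structured response: $raw")

fun <T> executeWithFixing(
    raw: String,
    parse: (String) -> T?,
    fix: (String) -> String,  // stand-in for the fixing-model call
    maxAttempts: Int = 3,
): T {
    var current = raw
    repeat(maxAttempts) {
        parse(current)?.let { return it }
        current = fix(current)  // ask the "fixing model" to repair the text
    }
    error("Still malformed after $maxAttempts attempts")
}

fun main() {
    // Toy parser: accepts only quoted strings; toy fixer: adds the quotes.
    val parse = { s: String ->
        if (s.startsWith("\"") && s.endsWith("\"") && s.length >= 2) s.trim('"') else null
    }
    val fix = { s: String -> "\"$s\"" }
    println(executeWithFixing("hello", parse, fix)) // recovered after one fix
}
```

With provider-side `json_schema` support the happy path would land in the one-shot branch almost always, while the fixing loop stays available as a safety net, which matches the compromise reached in the thread.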