# koog-agentic-framework
m
Hi Team, I’m currently trying to set up Koog along with LangChain4j. While LangChain4j works fine for me, I’ve been running into some issues specifically with Koog:

- I expect the model to return JSON responses. Sometimes it does, but often the JSON is incomplete, and in many cases I get an empty response altogether.
- With my existing LangChain4j setup, I don’t face these problems, so I suspect I might be missing some crucial steps when configuring or using Koog.

Could anyone help me understand what I might be doing wrong? I’d really like to try Koog further to explore its capabilities. Thanks in advance!
p
Hi Mohammad, thank you for your feedback. Could you please describe your use case in more detail? Which provider are you using? Are you calling the executor directly, or do you have an agent strategy? If you could share a reproducible example, that would help us understand what the issue might be.
m
I have a generic implementation for using multiple agentic frameworks:
```kotlin
class KoogClientWrapper(
    private val client: ai.koog.prompt.executor.clients.LLMClient,
    private val model: LLModel
) : LLMClient {

    private val logger = LoggerFactory.getLogger(KoogClientWrapper::class.java)

    override suspend fun generateResponse(prompt: String): String {
        val basePrompt = prompt("cards") {
            system(
                """
                   Some Prompt Here
            """.trimIndent()
            )
        }
        // Wrap the raw client with Koog's retrying client so transient
        // failures are retried before the call gives up.
        val resilientClient = RetryingLLMClient(
            client,
            RetryConfig.PRODUCTION
        )
        val promptExecutor = SingleLLMPromptExecutor(resilientClient)
        val extendedPrompt = prompt(basePrompt) {
            user(prompt)
        }
        return try {
            val response = promptExecutor.execute(
                extendedPrompt,
                model
            )

            val content = response.firstOrNull()?.content ?: ""
            <http://logger.info|logger.info>("Generated response: $content")

            content.ifBlank {
                logger.warn("Received empty response from LLM. Falling back to default JSON.")
                "{}"
            }
        } catch (e: Exception) {
            logger.error("LLM operation failed for prompt: $prompt", e)

            when {
                e.message?.contains("rate limit", ignoreCase = true) == true -> {
                    logger.warn("Rate limit hit. Scheduling retry later.")
                    "{}"
                }

                e.message?.contains("invalid api key", ignoreCase = true) == true -> {
                    logger.error("Authentication failed. Notifying administrator.")
                    "{}"
                }

                else -> {
                    logger.warn("Unknown error occurred. Falling back to safe default.")
                    useDefaultResponse()
                }
            }
        }
    }

    private fun useDefaultResponse(): String {
        return """{"status":"fallback","data":[]}"""
    }
}
```
This allows me to call `generateResponse`, where I pass my prompt. For the client:
```kotlin
class GeminiKoogConfig(
    private val apiKey: String,
    private val modelName: LLModel
) : KoogConfig {
    override fun build(): KoogClientWrapper {
        val client = GoogleLLMClient(apiKey)
        return KoogClientWrapper(
            client,
            modelName
        )
    }
}
```
I am trying to use `Gemini2_5Flash`.
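Roughly how I wire this together and call it (a minimal sketch; `GoogleModels.Gemini2_5Flash` and its import path are assumptions here, and `generateResponse` is a suspend function, so it needs a coroutine):

```kotlin
import ai.koog.prompt.executor.clients.google.GoogleModels
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // Build the wrapper through the config class above.
    // GoogleModels.Gemini2_5Flash is assumed; check your Koog version
    // for the exact model constant and package.
    val wrapper = GeminiKoogConfig(
        apiKey = System.getenv("GOOGLE_API_KEY"),
        modelName = GoogleModels.Gemini2_5Flash
    ).build()

    // generateResponse is a suspend fun, so it runs inside runBlocking here.
    val json = wrapper.generateResponse("Return a card as JSON.")
    println(json)
}
```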
a
Are you trying to get structured output? Currently you are not communicating that to the LLM when building and executing your prompt. You have several options for specifying it properly.

Automatically (recommended): use the Structured Output feature. It’s available at multiple levels, from low-level to higher-level:
1. `executeStructured` on the `PromptExecutor`, as in your case
2. `requestLLMStructured` on `AIAgentLLMSession`, which you acquire when implementing a custom `node` by using `llm.writeSession` or `llm.readSession`
3. The dedicated `nodeLLMRequestStructured` node

You can check these examples for more info: https://github.com/JetBrains/koog/tree/develop/examples/src/main/kotlin/ai/koog/agents/example/structuredoutput

In your case, it might look something like this:
```kotlin
@Serializable
@LLMDescription("My structure description")
data class MyClass(
  @property:LLMDescription("Foo property")
  val foo: String,
  @property:LLMDescription("Bar property")
  val bar: Int
)

// ...

promptExecutor.executeStructured<MyClass>(prompt, model) // also optional examples and fixingParser

/*
 For more advanced usage with more manual control, you can use the version of
 the method that takes StructuredOutput<T> as an argument, allowing you to
 manually configure certain aspects of the structured output.
 */
```
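And for option 2, a rough sketch of how `requestLLMStructured` might be used from a custom `node` inside a strategy (hedged: this assumes the `strategy`/`node` DSL and the reified `requestLLMStructured` overload from recent Koog versions; adapt the names to your setup):

```kotlin
val cardStrategy = strategy<String, MyClass>("cards") {
    // Custom node: appends the user input to the prompt and requests
    // a structured MyClass response inside a write session.
    val structuredNode by node<String, MyClass>("structured-request") { input ->
        llm.writeSession {
            updatePrompt { user(input) }
            // requestLLMStructured returns a Result; unwrap it or add your
            // own fallback handling here.
            requestLLMStructured<MyClass>().getOrThrow().structure
        }
    }

    edge(nodeStart forwardTo structuredNode)
    edge(structuredNode forwardTo nodeFinish)
}
```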
Manually: if you have a custom use case, you can of course specify the schema manually via the `schema` property in `LLMParams` in your `Prompt` params:
```kotlin
prompt("my-prompt", params = LLMParams(schema = LLMParams.Schema.JSON.Standard(...))) { ... }