Mohammad Zaki
09/09/2025, 3:44 PM

Pavel Gorgulov
09/09/2025, 3:53 PM

Mohammad Zaki
09/09/2025, 3:59 PM
class KoogClientWrapper(
    private val client: ai.koog.prompt.executor.clients.LLMClient,
    private val model: LLModel
) : LLMClient {

    private val logger = LoggerFactory.getLogger(KoogClientWrapper::class.java)

    override suspend fun generateResponse(prompt: String): String {
        // Base prompt carrying the system instructions.
        val basePrompt = prompt("cards") {
            system(
                """
                Some Prompt Here
                """.trimIndent()
            )
        }
        // Wrap the raw client in Koog's retry decorator before executing.
        val resilientClient = RetryingLLMClient(client, RetryConfig.PRODUCTION)
        val promptExecutor = SingleLLMPromptExecutor(resilientClient)
        // Extend the base prompt with the caller's user message.
        val extendedPrompt = prompt(basePrompt) {
            user(prompt)
        }
        return try {
            val response = promptExecutor.execute(extendedPrompt, model)
            val content = response.firstOrNull()?.content ?: ""
            logger.info("Generated response: $content")
            content.ifBlank {
                logger.warn("Received empty response from LLM. Falling back to default JSON.")
                "{}"
            }
        } catch (e: Exception) {
            logger.error("LLM operation failed for prompt: $prompt", e)
            when {
                e.message?.contains("rate limit", ignoreCase = true) == true -> {
                    logger.warn("Rate limit hit. Scheduling retry later.")
                    "{}"
                }
                e.message?.contains("invalid api key", ignoreCase = true) == true -> {
                    logger.error("Authentication failed. Notifying administrator.")
                    "{}"
                }
                else -> {
                    logger.warn("Unknown error occurred. Falling back to safe default.")
                    useDefaultResponse()
                }
            }
        }
    }

    private fun useDefaultResponse(): String {
        return """{"status":"fallback","data":[]}"""
    }
}
This allows me to call generateResponse, where I pass my prompt.
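For example, roughly like this (the prompt text and the runBlocking wrapper here are just placeholders for illustration):

import kotlinx.coroutines.runBlocking

// Illustrative call site only; wrapper is a KoogClientWrapper built elsewhere.
fun demo(wrapper: KoogClientWrapper) = runBlocking {
    val json = wrapper.generateResponse("Generate a JSON array of study cards about Kotlin coroutines.")
    println(json) // "{}" or the fallback JSON if the call failed
}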
For the client:
class GeminiKoogConfig(
    private val apiKey: String,
    private val modelName: LLModel
) : KoogConfig {
    override fun build(): KoogClientWrapper {
        val client = GoogleLLMClient(apiKey)
        return KoogClientWrapper(client, modelName)
    }
}
I am trying to use Gemini2_5Flash.
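Concretely, I wire it up roughly like this (assuming Koog exposes the model as GoogleModels.Gemini2_5Flash; the import path may differ between Koog versions):

import ai.koog.prompt.executor.clients.google.GoogleModels // assumed path, check your Koog version

// Assumption: GoogleModels.Gemini2_5Flash is the LLModel catalog entry for Gemini 2.5 Flash.
val wrapper = GeminiKoogConfig(
    apiKey = System.getenv("GEMINI_API_KEY") ?: error("GEMINI_API_KEY not set"),
    modelName = GoogleModels.Gemini2_5Flash
).build()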

Andrey Bragin
09/09/2025, 9:39 PM
There are a few ways to do structured output:
1. executeStructured on PromptExecutor, as in your case
2. requestLLMStructured on AIAgentLLMSession, which you acquire when implementing a custom node by using llm.writeSession or llm.readSession (see the sketch after the example below)
3. The dedicated nodeLLMRequestStructured node
You can check these examples for more info:
https://github.com/JetBrains/koog/tree/develop/examples/src/main/kotlin/ai/koog/agents/example/structuredoutput
In your case, it might look something like this:
@Serializable
@LLMDescription("My structure description")
data class MyClass(
    @property:LLMDescription("Foo property")
    val foo: String,
    @property:LLMDescription("Bar property")
    val bar: Int
)

// ...

promptExecutor.executeStructured<MyClass>(prompt, model) // also takes optional examples and fixingParser

/*
Or, for more advanced usage with more manual control, you can use the version of the method that takes StructuredOutput<T> as an argument, allowing you to manually configure certain aspects of the structured output.
*/
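For option 2, inside a custom node it would look roughly like this (a sketch only; double-check the exact return type against the examples linked above for your Koog version):

// Inside a strategy/subgraph builder. Sketch: the Result-wrapped return type is an assumption,
// verify it against the structured output examples for your Koog version.
val structuredNode by node<String, MyClass>("structured") { input ->
    llm.writeSession {
        updatePrompt { user(input) }
        requestLLMStructured<MyClass>().getOrThrow().structure
    }
}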
Manually
If you have a custom use case, you can of course specify the schema manually via the schema property in LLMParams in your Prompt params:
prompt("my-prompt", params = LLMParams(schema = LLMParams.Schema.JSON.Standard(...))) { ... }