# koog-agentic-framework
n:
Hi team, it seems like custom nodes don't have access to the executor directly, which prevents me from using my own extension functions on `PromptExecutor` (example in 🧵). Should I just switch my extension function to use `requestLLM` directly within `llm.writeSession`? I was hoping it would be more composable (e.g. I can write my own `requestLLMStructuredAndValidated` that leverages my `PromptExecutor` extension function):
```kotlin
suspend fun <T> PromptExecutor.executeStructuredWithValidation(
  prompt: Prompt,
  mainModel: LLModel,
  structure: StructuredData<T>,
  fixingModel: LLModel,
  validators: List<Validator<T>>,
  retries: Int = 1
): Result<StructuredResponse<T>> {
  // Extend the incoming prompt with the structured-output instructions
  // (renamed to avoid shadowing the `prompt` parameter)
  val structuredPrompt = prompt(prompt) {
    user {
      markdown {
        StructuredOutputPrompts.output(this, structure)
      }
    }
  }

  val structureParser = StructureParser(this, fixingModel)
  val structureValidator = StructureValidator(
    executor = this,
    fixingModel = fixingModel,
    validators = validators,
  )

  repeat(retries) { attempt ->
    logger.debug { "Execute the prompt: <$structuredPrompt>" }
    val response = execute(prompt = structuredPrompt, model = mainModel)

    try {
      logger.debug { "${attempt + 1}/$retries: Trying to parse LLM response content: <${response.content}>" }

      val parsed = structureParser.parse(structure, response.content)
      val validatedOutput = structureValidator.validate(
        parsed = parsed,
        structure = structure,
        content = response.content
      )

      return Result.success(
        StructuredResponse(
          structure = validatedOutput,
          raw = response.content,
        )
      )
    } catch (t: SerializationException) {
      logger.warn(t) { "Unable to parse structure from content: <${response.content}>" }
    }
  }

  return Result.failure(
    LLMStructuredParsingError("Unable to parse structure after <$retries> retries")
  )
}
```
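For reference, `Validator<T>` above isn't a Koog type but the asker's own abstraction. A minimal sketch of the shape the snippets in this thread assume (a validator that throws when the value is invalid):

```kotlin
// Hypothetical Validator abstraction assumed by the snippets in this thread;
// not part of Koog. Implementations throw to signal an invalid structure.
fun interface Validator<T> {
    fun validate(value: T)
}
```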
s:
You could always create a custom node for a strategy graph that calls `requestLLMStructured`, like this:
```kotlin
fun AIAgentSubgraphBuilderBase<*, *>.nodeProcessLLMResponseToStructured(
    name: String? = null
): AIAgentNodeDelegate<Message.Response, ConciergeStructuredResponse> =
    node(name) { llmResponse ->
        val result = llm.writeSession {
            rewritePrompt { _ ->
                prompt("response-formatter-prompt") {
                    system(
                        """
                        You are a response formatter. Your task is to split the response into sections: main body, links, images, and carousels.
                        Ensure the main body is coherent and does not duplicate information present in other sections.
                        """.trimIndent()
                    )
                    user(llmResponse.content)
                }
            }
            requestLLMStructured(
                structure = ConciergeStructuredData.conciergeStructuredData,
                retries = 3,
                fixingModel = BedrockModels.AnthropicClaude4Sonnet
            )
        }
        // Fall back to a canned apology if structured parsing ultimately fails
        result.fold(
            onSuccess = { it.structure },
            onFailure = {
                ConciergeStructuredResponse("Sorry, something went wrong. Please try again later.", emptyList(), emptyList(), null)
            }
        )
    }
```
And then include that node in your graph. I know it's not using the `PromptExecutor` directly, but that's how I'm achieving essentially the same thing, just with a strategy graph node.
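For example, the wiring could look roughly like this (a sketch using Koog's strategy DSL; the strategy name, node names, and the assumption that the agent's input is the user message are illustrative):

```kotlin
// Sketch: wire the custom node between the LLM call and the finish node.
// Assumes String input (the user message) and the custom node's
// ConciergeStructuredResponse as the agent's output.
val formattingStrategy = strategy<String, ConciergeStructuredResponse>("concierge-formatting") {
    val callLLM by nodeLLMRequest()
    val toStructured by nodeProcessLLMResponseToStructured("to-structured")

    edge(nodeStart forwardTo callLLM)
    edge(callLLM forwardTo toStructured)
    edge(toStructured forwardTo nodeFinish)
}
```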
n:
The problem is I want to run custom validation on the structured output alongside schema validation, so it seems like I may have to split these two, alas.
s:
The nodes might not have access to the executor, but they do have access to `llm.writeSession`, which in turn has access to the executor, so you could write an extension function for that instead:
```kotlin
suspend fun <T> AIAgentLLMWriteSession.requestLLMStructuredAndValidated(
    structure: StructuredData<T>,
    retries: Int,
    fixingModel: LLModel,
    validators: List<Validator<T>> // some validator class
): Result<StructuredResponse<T>> {
    // mapCatching turns a throwing validator into a Result.failure instead of
    // letting the exception escape the function
    return requestLLMStructured(structure, retries, fixingModel).mapCatching { response ->
        // Custom validation logic on top of the schema validation
        validators.forEach { validator ->
            validator.validate(response.structure)
        }
        updatePrompt {
            assistant(response.raw)
        }
        response
    }
}
```
And then include it in a node in your graph like this:
```kotlin
fun AIAgentSubgraphBuilderBase<*, *>.nodeProcessLLMResponseToStructuredAndValidated(
    name: String? = null,
    validators: List<Validator<ConciergeStructuredResponse>>
): AIAgentNodeDelegate<Message.Response, ConciergeStructuredResponse> =
    node(name) { llmResponse ->
        val result = llm.writeSession {
            rewritePrompt { _ ->
                prompt("response-formatter-prompt") {
                    system(
                        """
                        You are a response formatter. Your task is to split the response into sections: main body, links, images, and carousels.
                        Ensure the main body is coherent and does not duplicate information present in other sections.
                        """.trimIndent()
                    )
                    user(llmResponse.content)
                }
            }
            requestLLMStructuredAndValidated(
                structure = ConciergeStructuredData.conciergeStructuredData,
                retries = 3,
                fixingModel = BedrockModel(
                    model = BedrockModels.AnthropicClaude4Sonnet,
                    inferenceProfilePrefix = BedrockInferencePrefixes.EU.prefix
                ).effectiveModel,
                validators = validators
            )
        }
        // Fall back to a canned apology if parsing or validation fails
        result.fold(
            onSuccess = { it.structure },
            onFailure = {
                ConciergeStructuredResponse("Sorry, something went wrong. Please try again later.", emptyList(), emptyList(), null)
            }
        )
    }
```
You'd have to do some re-jigging to modify the structured response based on validation failures, but you get the gist.
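One rough sketch of that re-jigging, reusing only calls already shown in this thread: surface the validation errors back to the model and retry. The helper name and the `feedbackRounds` parameter are made up for illustration:

```kotlin
// Sketch: on validation failure, push the error back into the conversation
// and ask the model to correct itself. Assumes validators throw with a
// descriptive message (see requestLLMStructuredAndValidated above).
suspend fun <T> AIAgentLLMWriteSession.requestValidatedWithFeedback(
    structure: StructuredData<T>,
    retries: Int,
    fixingModel: LLModel,
    validators: List<Validator<T>>,
    feedbackRounds: Int = 2
): Result<StructuredResponse<T>> {
    var result = requestLLMStructuredAndValidated(structure, retries, fixingModel, validators)
    repeat(feedbackRounds) {
        if (result.isSuccess) return result
        updatePrompt {
            user("The previous answer failed validation: ${result.exceptionOrNull()?.message}. Please fix it.")
        }
        result = requestLLMStructuredAndValidated(structure, retries, fixingModel, validators)
    }
    return result
}
```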
v:
Hi! Which version of Koog are you using? 0.3.0 introduced `llm.promptExecutor`, which is available with an opt-in. It allows you to use the prompt executor directly (but with a warning that such calls are detached from the agent logic and won't appear in the conversation history). Otherwise, most of the time you should actually use `llm.writeSession` for all your LLM requests.
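For reference, usage might look roughly like this; the exact opt-in annotation guarding `llm.promptExecutor` isn't named in this thread, so check the `@RequiresOptIn` marker your Koog version ships:

```kotlin
// Sketch: calling the asker's PromptExecutor extension directly from a node
// via llm.promptExecutor (Koog 0.3.0+, behind an opt-in annotation whose
// name is omitted here). Note: such calls are detached from the agent and
// won't appear in the conversation history.
fun AIAgentSubgraphBuilderBase<*, *>.nodeDirectStructuredCall(
    name: String? = null
): AIAgentNodeDelegate<Prompt, Result<StructuredResponse<ConciergeStructuredResponse>>> =
    node(name) { incomingPrompt ->
        llm.promptExecutor.executeStructuredWithValidation(
            prompt = incomingPrompt,
            mainModel = BedrockModels.AnthropicClaude4Sonnet,
            structure = ConciergeStructuredData.conciergeStructuredData,
            fixingModel = BedrockModels.AnthropicClaude4Sonnet,
            validators = emptyList()
        )
    }
```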