Nayeem Zen
08/05/2025, 8:20 PM
`executor` directly, this prevents me from using my own extension functions on `PromptExecutor` (example in 🧵). Should I just switch my extension function to use `requestLLM` directly within the `llm.writeSession`? I was hoping it would be more composable (e.g. I can write my own `requestLLMStructuredAndValidated` that leverages my `PromptExecutor` ext fn).

Nayeem Zen
08/05/2025, 8:20 PM
```kotlin
suspend fun <T> PromptExecutor.executeStructuredWithValidation(
    prompt: Prompt,
    mainModel: LLModel,
    structure: StructuredData<T>,
    fixingModel: LLModel,
    validators: List<Validator<T>>,
    retries: Int = 1,
): Result<StructuredResponse<T>> {
    // Append the structured-output instructions to the incoming prompt
    // (renamed to avoid shadowing the `prompt` parameter).
    val structuredPrompt = prompt(prompt) {
        user {
            markdown {
                StructuredOutputPrompts.output(this, structure)
            }
        }
    }
    val structureParser = StructureParser(this, fixingModel)
    val structureValidator = StructureValidator(
        executor = this,
        fixingModel = fixingModel,
        validators = validators,
    )
    repeat(retries) { attempt ->
        logger.debug { "Execute the prompt: <$structuredPrompt>" }
        val response = execute(prompt = structuredPrompt, model = mainModel)
        try {
            logger.debug { "${attempt + 1}/$retries: Trying to parse LLM response content: <${response.content}>" }
            val parsed = structureParser.parse(structure, response.content)
            val validatedOutput = structureValidator.validate(
                parsed = parsed,
                structure = structure,
                content = response.content,
            )
            return Result.success(
                StructuredResponse(
                    structure = validatedOutput,
                    raw = response.content,
                )
            )
        } catch (t: SerializationException) {
            logger.warn(t) { "Failed to parse structure from content: <${response.content}>" }
        }
    }
    return Result.failure(
        exception = LLMStructuredParsingError("Unable to parse structure after <$retries> retries")
    )
}
```
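The `Validator` type used above isn't defined anywhere in the thread; a minimal contract matching how these snippets use it might look like this (a hypothetical sketch, not a Koog API):

```kotlin
// Hypothetical: the validator contract implied by the snippets in this thread.
// Implementations throw (e.g. IllegalArgumentException) when validation fails.
fun interface Validator<T> {
    fun validate(value: T)
}
```
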
Sam
08/06/2025, 8:53 AM
`requestLLMStructured`, like this:
```kotlin
fun AIAgentSubgraphBuilderBase<*, *>.nodeProcessLLMResponseToStructured(
    name: String? = null
): AIAgentNodeDelegate<Message.Response, ConciergeStructuredResponse> =
    node(name) { llmResponse ->
        val result = llm.writeSession {
            // Replace the session prompt with a dedicated formatting prompt.
            rewritePrompt { _ ->
                prompt("response-formatter-prompt") {
                    system(
                        """
                        You are a response formatter. Your task is to split the response into sections: main body, links, images, and carousels.
                        Ensure the main body is coherent and does not duplicate information present in other sections.
                        """.trimIndent()
                    )
                    user(llmResponse.content)
                }
            }
            requestLLMStructured(
                structure = ConciergeStructuredData.conciergeStructuredData,
                retries = 3,
                fixingModel = BedrockModels.AnthropicClaude4Sonnet
            )
        }
        result.fold(
            onSuccess = { it.structure },
            onFailure = {
                ConciergeStructuredResponse("Sorry, something went wrong. Please try again later.", emptyList(), emptyList(), null)
            }
        )
    }
```
And then include that node in your graph. I know it's not using the `PromptExecutor` directly, but that's how I'm achieving essentially the same thing, just with a strategy graph node.
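A minimal sketch of that wiring, assuming Koog's strategy DSL (`strategy`, `edge`, `forwardTo`, `nodeStart`, `nodeFinish`) and using `nodeLLMRequest` to produce the `Message.Response` input; the strategy and node names are illustrative:

```kotlin
// Sketch only: wiring the structured-output node into a strategy graph.
val conciergeStrategy = strategy<String, ConciergeStructuredResponse>("concierge") {
    // Produces the raw Message.Response that the formatter node consumes.
    val callLLM by nodeLLMRequest()
    val formatResponse by nodeProcessLLMResponseToStructured("format-response")

    edge(nodeStart forwardTo callLLM)
    edge(callLLM forwardTo formatResponse)
    edge(formatResponse forwardTo nodeFinish)
}
```
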
Nayeem Zen
08/06/2025, 1:24 PM

Sam
08/06/2025, 1:52 PM
`llm.writeSession`, which in turn has access to the executor, so you could write an extension function for that instead:
```kotlin
suspend fun <T> AIAgentLLMWriteSession.requestLLMStructuredAndValidated(
    structure: StructuredData<T>,
    retries: Int,
    fixingModel: LLModel,
    validators: List<Validator<T>> // some validator class
): Result<StructuredResponse<T>> =
    requestLLMStructured(structure, retries, fixingModel).mapCatching { response ->
        // Custom validation logic: a throwing validator turns the whole
        // result into a failure instead of escaping past the Result type.
        validators.forEach { validator ->
            validator.validate(response.structure)
        }
        // Record the validated output in the conversation history.
        updatePrompt {
            assistant(response.raw)
        }
        response
    }
```
And then include it in a node in your graph like this:
```kotlin
fun AIAgentSubgraphBuilderBase<*, *>.nodeProcessLLMResponseToStructuredAndValidated(
    name: String? = null,
    validators: List<Validator<ConciergeStructuredResponse>>
): AIAgentNodeDelegate<Message.Response, ConciergeStructuredResponse> =
    node(name) { llmResponse ->
        val result = llm.writeSession {
            rewritePrompt { _ ->
                prompt("response-formatter-prompt") {
                    system(
                        """
                        You are a response formatter. Your task is to split the response into sections: main body, links, images, and carousels.
                        Ensure the main body is coherent and does not duplicate information present in other sections.
                        """.trimIndent()
                    )
                    user(llmResponse.content)
                }
            }
            requestLLMStructuredAndValidated(
                structure = ConciergeStructuredData.conciergeStructuredData,
                retries = 3,
                fixingModel = BedrockModel(
                    model = BedrockModels.AnthropicClaude4Sonnet,
                    inferenceProfilePrefix = BedrockInferencePrefixes.EU.prefix
                ).effectiveModel,
                validators = validators
            )
        }
        result.fold(
            onSuccess = { it.structure },
            onFailure = {
                ConciergeStructuredResponse("Sorry, something went wrong. Please try again later.", emptyList(), emptyList(), null)
            }
        )
    }
```
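And a sketch of what the `validators` argument could contain, building on the hypothetical `Validator` interface above; the `links` field and the HTTPS rule are illustrative assumptions, not from the thread:

```kotlin
// Illustrative only: a validator that rejects non-HTTPS links,
// assuming ConciergeStructuredResponse has a `links: List<String>` field.
val validators = listOf(
    Validator<ConciergeStructuredResponse> { response ->
        require(response.links.all { it.startsWith("https://") }) {
            "All links must be absolute HTTPS URLs"
        }
    }
)
```
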
Sam
08/06/2025, 1:57 PM

Vadim Briliantov
08/06/2025, 7:58 PM
In `0.3.0`, `llm.promptExecutor` was introduced; it is available with an opt-in. It allows you to use the prompt executor directly (but with a warning that it will be detached from the agent logic: such calls won't be present in the conversation history).

Vadim Briliantov
08/06/2025, 8:01 PM
So it's recommended to use `llm.writeSession` for all your LLM requests.
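
A minimal sketch of that recommendation, using the `requestLLM` and `updatePrompt` calls mentioned earlier in this thread; the prompt text is illustrative:

```kotlin
// Sketch: a plain LLM request made inside writeSession is recorded
// in the conversation history, unlike a detached promptExecutor call.
val response = llm.writeSession {
    updatePrompt { user("Summarize the conversation so far in one sentence.") }
    requestLLM()
}
```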