Finn Jensen
06/29/2025, 10:45 PM
I'm not getting inputTokensCount and outputTokensCount back when using the OpenAIModels.CostOptimized model family. Is there any flag I need to toggle to have the LLMClient return the tokens?

Finn Jensen
06/30/2025, 10:39 AM
pipeline.interceptAfterLLMCall(this, featureImpl) { prompt, tools, model, responses, sessionId ->
    val records = responses.map {
        // Token counts are nullable; fall back to 0 and log when the provider omits them.
        val inputTokens = it.metaInfo.inputTokensCount ?: run {
            logger.warn { "No inputTokens found for usage record." }
            0
        }
        val outputTokens = it.metaInfo.outputTokensCount ?: run {
            logger.warn { "No outputTokens found for usage record." }
            0
        }
        // Build one usage record per response; cost estimation is deferred (null for now).
        UsageRecord(
            userId = config.userId!!,
            sessionId = sessionId.toString(),
            featureUsed = config.feature!!,
            inputTokens = inputTokens,
            outputTokens = outputTokens,
            modelName = model.id,
            estimatedCost = null,
            timestamp = it.metaInfo.timestamp
        )
    }
    config.handleRecords(records)
}
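
UsageRecord and config.handleRecords aren't shown above. For anyone adapting this, here is a minimal sketch of what they might look like, plus a hypothetical estimateCost helper that could replace the null estimatedCost. The data class shape is inferred from the constructor call in the interceptor; the field types, the price table, and the helper are illustrative assumptions, not Koog API.

import java.time.Instant

// Shape inferred from the UsageRecord(...) call above; exact types are assumptions.
data class UsageRecord(
    val userId: String,
    val sessionId: String,
    val featureUsed: String,
    val inputTokens: Int,
    val outputTokens: Int,
    val modelName: String,
    val estimatedCost: Double?,
    val timestamp: Instant // assumed type; match whatever metaInfo.timestamp returns
)

// Hypothetical USD prices per million input/output tokens, keyed by model id.
// Real prices belong in configuration, not code.
private val pricePerMillionTokens: Map<String, Pair<Double, Double>> = mapOf(
    "example-model-id" to (0.15 to 0.60)
)

// Returns an estimated cost for a record, or null when the model id is unknown.
fun estimateCost(record: UsageRecord): Double? =
    pricePerMillionTokens[record.modelName]?.let { (inputPrice, outputPrice) ->
        record.inputTokens / 1_000_000.0 * inputPrice +
            record.outputTokens / 1_000_000.0 * outputPrice
    }

With something like this in place, the interceptor could pass estimatedCost = estimateCost(record) instead of null.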