Dirk
07/03/2025, 11:44 AM

Finn Jensen
07/03/2025, 12:58 PM
We use SingleLLMPrompt and runAndGetResult all the time with OpenAI with no issues, and I couldn't reproduce the bug with a minimal example based on the code you gave. Sharing the prompt would probably be most helpful.
val client = OpenAILLMClient(API_KEY)
val prompt = prompt("test") {
    user("Finish this sentence. I am a...")
}
val responses = SingleLLMPromptExecutor(client).execute(prompt, OpenAIModels.Chat.GPT4_1, emptyList())
println(responses)
Dirk
07/03/2025, 1:19 PM
val client = OpenAILLMClient(apiKey = apiKey, settings = settings)
val prompt = prompt("test_prompt", LLMParams()) {
    user("What's the capital of Germany?")
}
val model = LLModel(LLMProvider.OpenAI, "meta-llama/Llama-3.3-70B-Instruct", listOf(LLMCapability.Completion))
val response = SingleLLMPromptExecutor(client).execute(prompt, model, listOf())
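Here settings points the client at an OpenAI-compatible endpoint serving the Llama model, along the lines of the following sketch (assuming a settings type with a baseUrl parameter; the class name and URL are placeholders, not confirmed from this thread):

val settings = OpenAIClientSettings(baseUrl = "https://my-inference-host/v1") // hypothetical endpoint, assumed constructor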
The default OpenAI chat completion API (https://platform.openai.com/docs/api-reference/chat/create) uses the messages format where content is just a string.
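For a simple text prompt, that plain form is just:

"messages" : [ {
  "role" : "user",
  "content" : "..."
} ]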
But when debugging the request, it becomes:
"messages" : [ {
  "role" : "system",
  "content" : "...."
}, {
  "role" : "user",
  "content" : [ {
    "text" : "...",
    "type" : "text"
  } ]
} ]
I looked into the code and found that the system message is serialized as a Content.Text, while the user message becomes a ContentPart and therefore ends up in an array of content parts.
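Roughly, the asymmetry looks like this (illustrative types only, not Koog's actual internals; a minimal sketch using kotlinx.serialization):

import kotlinx.serialization.Serializable
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

// Illustrative message shapes whose "content" fields serialize differently.
@Serializable
data class SystemMessage(val role: String = "system", val content: String)

@Serializable
data class TextPart(val type: String = "text", val text: String)

@Serializable
data class UserMessage(val role: String = "user", val content: List<TextPart>)

fun main() {
    val json = Json { prettyPrint = true }
    // "content" renders as a plain JSON string here...
    println(json.encodeToString(SystemMessage(content = "....")))
    // ...but as a JSON array of parts here, matching the request above.
    println(json.encodeToString(UserMessage(content = listOf(TextPart(text = "...")))))
}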
I would expect it to use the plain messages format when I am just passing a text prompt.

Finn Jensen
07/03/2025, 1:48 PM

Dirk
07/03/2025, 2:25 PM
v1/chat/completions accepts both formats.
I don’t see any reason not to support the plain format for simple text prompts and use the richer format for media input etc.
The only option for now is to abuse the system message for user input, as it's rendered with a content string instead of a list.
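E.g., assuming the prompt DSL exposes system() alongside user(), the workaround looks like:

val prompt = prompt("workaround") {
    // Abuse: the user's question goes into the system message so it serializes as a plain content string.
    system("What's the capital of Germany?")
}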
Andrey Bragin
07/04/2025, 8:55 AM

Andrey Bragin
07/04/2025, 9:03 AM

Dirk
07/04/2025, 11:46 AM