You can paste the command below into your terminal to run your first API request. Make sure to replace $OPENAI_API_KEY with your secret API key. If you are using a legacy user key and you have multiple projects, you will also need to specify the project ID. For improved security, we recommend transitioning to project-based keys instead.
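A minimal sketch of such a request, assuming the standard Chat Completions endpoint and the curl CLI (adjust the model and prompt as needed):

# Send a chat completion request with a single user message
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Say this is a test"}]
      }'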
This request queries the gpt-4o-mini model (which under the hood points to a specific gpt-4o-mini model variant) to complete the text starting with a prompt of "Say this is a test". You should get back a response that resembles the following:
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1677858242,
"model": "gpt-4o-mini",
"usage": {
"prompt_tokens": 13,
"completion_tokens": 7,
"total_tokens": 20,
"completion_tokens_details": {
"reasoning_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0
}
},
"choices": [
{
"message": {
"role": "assistant",
"content": "\n\nThis is a test!"
},
"logprobs": null,
"finish_reason": "stop",
"index": 0
}
]
}
Now that you've generated your first chat completion, let's break down the response object. We can see the finish_reason is stop, which means the API returned the full chat completion generated by the model without running into any limits. In the choices list, we only generated a single message, but you can set the n parameter to generate multiple message choices, as in the example below.
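As a rough illustration, reusing the curl request from above, adding "n": 2 to the request body asks the model for two alternative completions:

# Request two chat completion choices for the same prompt
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Say this is a test"}],
        "n": 2
      }'

The choices array in the response will then contain two entries, each with its own index, message, and finish_reason.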