Context Window
The maximum number of tokens a language model can process in a single request, encompassing both the input prompt and the generated output.
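Because the window covers both the prompt and the generated output, a request's input tokens plus its maximum generation length must stay within the limit. A minimal sketch of that budget check, using a hypothetical 8,192-token window (the constant and function names are illustrative, not from any particular API):

```python
# Hypothetical limit for illustration: an 8,192-token context window.
CONTEXT_WINDOW = 8192

def fits_in_context(prompt_tokens: int, max_output_tokens: int) -> bool:
    """Return True if the prompt plus the requested output fits in the window."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

# A 7,000-token prompt leaves room for at most 1,192 generated tokens.
print(fits_in_context(7000, 1000))  # True
print(fits_in_context(7000, 2000))  # False
```

In practice, requests that exceed the window are either rejected or have their oldest input tokens truncated, so applications typically perform a check like this before sending a prompt.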