Optional maxTokens: The maximum number of tokens the model can generate in a single response. This limit ensures computational efficiency and resource management.
Optional model: The name of the model to use.
Optional modelName: The name of the model to use. Alias for model.
Optional stop: Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. Alias for stopSequences.
Optional stopSequences: Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
Optional streaming: Whether or not to stream responses.
Optional temperature: The temperature to use for sampling.
Optional apiKey: The Groq API key to use for requests.
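The sketch below shows how these options might be passed together when constructing a Groq chat model. It assumes the options belong to the ChatGroq class from the @langchain/groq package; the specific model name and values shown are illustrative, not prescribed by this reference.

```ts
import { ChatGroq } from "@langchain/groq"; // assumed package and class name

// Hypothetical configuration combining the options documented above.
const model = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY,  // the Groq API key to use for requests
  model: "llama-3.1-8b-instant",     // example model name, for illustration only
  temperature: 0.7,                  // sampling temperature
  maxTokens: 1024,                   // cap on tokens generated per response
  stopSequences: ["\n\n"],           // up to 4 sequences; output excludes them
  streaming: false,                  // whether or not to stream responses
});

// Generated text stops before any stop sequence and will not contain it.
const response = await model.invoke("Summarize what a stop sequence does.");
console.log(response.content);
```

Because stop is an alias for stopSequences and modelName is an alias for model, only one of each pair should be set; the aliased forms behave identically.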