groq-go

A Go package for interacting with language models available on the Groq cloud API.

Features

  • Supports all models from Groq in a type-safe way.
  • Supports streaming.
  • Supports moderation.
  • Supports audio transcription.
  • Supports audio translation.
  • Supports Tool Use.
  • Supports Function Calling.

Installation

go get github.com/conneroisu/groq-go

Examples

For introductory examples, see the examples directory.
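As a quick orientation, a minimal chat completion call using the client API documented below might look like this (reading the key from a `GROQ_API_KEY` environment variable is an assumption of this sketch):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	groq "github.com/conneroisu/groq-go"
)

func main() {
	// NewClient takes the API key directly; sourcing it from the
	// environment is this example's convention, not the library's.
	client, err := groq.NewClient(os.Getenv("GROQ_API_KEY"))
	if err != nil {
		log.Fatal(err)
	}

	resp, err := client.CreateChatCompletion(context.Background(), groq.ChatCompletionRequest{
		Model: groq.Llama38B8192,
		Messages: []groq.ChatCompletionMessage{
			{Role: groq.ChatMessageRoleUser, Content: "Explain LPUs in one sentence."},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```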


Documentation

groq

import "github.com/conneroisu/groq-go"

Package groq provides an unofficial client for the Groq API.

With specially designed hardware, the Groq API is an extremely fast way to query open-source LLMs.

API Documentation: https://console.groq.com/docs/quickstart

Features:

- Supports all models from [Groq](https://wow.groq.com/) in a type-safe way.
- Supports streaming.
- Supports moderation.
- Supports audio transcription.
- Supports audio translation.
- Supports Tool Use.
- Supports Function Calling.

Index

Constants

const (
    ChatMessageRoleSystem    Role = "system"    // ChatMessageRoleSystem is the system chat message role.
    ChatMessageRoleUser      Role = "user"      // ChatMessageRoleUser is the user chat message role.
    ChatMessageRoleAssistant Role = "assistant" // ChatMessageRoleAssistant is the assistant chat message role.
    ChatMessageRoleFunction  Role = "function"  // ChatMessageRoleFunction is the function chat message role.
    ChatMessageRoleTool      Role = "tool"      // ChatMessageRoleTool is the tool chat message role.

    ImageURLDetailHigh                         ImageURLDetail                   = "high"           // ImageURLDetailHigh is the high image url detail.
    ImageURLDetailLow                          ImageURLDetail                   = "low"            // ImageURLDetailLow is the low image url detail.
    ImageURLDetailAuto                         ImageURLDetail                   = "auto"           // ImageURLDetailAuto is the auto image url detail.
    ChatMessagePartTypeText                    ChatMessagePartType              = "text"           // ChatMessagePartTypeText is the text chat message part type.
    ChatMessagePartTypeImageURL                ChatMessagePartType              = "image_url"      // ChatMessagePartTypeImageURL is the image url chat message part type.
    ChatCompletionResponseFormatTypeJSONObject ChatCompletionResponseFormatType = "json_object"    // ChatCompletionResponseFormatTypeJSONObject is the json object chat completion response format type.
    ChatCompletionResponseFormatTypeJSONSchema ChatCompletionResponseFormatType = "json_schema"    // ChatCompletionResponseFormatTypeJSONSchema is the json schema chat completion response format type.
    ChatCompletionResponseFormatTypeText       ChatCompletionResponseFormatType = "text"           // ChatCompletionResponseFormatTypeText is the text chat completion response format type.
    ToolTypeFunction                           ToolType                         = "function"       // ToolTypeFunction is the function tool type.
    FinishReasonStop                           FinishReason                     = "stop"           // FinishReasonStop is the stop finish reason.
    FinishReasonLength                         FinishReason                     = "length"         // FinishReasonLength is the length finish reason.
    FinishReasonFunctionCall                   FinishReason                     = "function_call"  // FinishReasonFunctionCall is the function call finish reason.
    FinishReasonToolCalls                      FinishReason                     = "tool_calls"     // FinishReasonToolCalls is the tool calls finish reason.
    FinishReasonContentFilter                  FinishReason                     = "content_filter" // FinishReasonContentFilter is the content filter finish reason.
    FinishReasonNull                           FinishReason                     = "null"           // FinishReasonNull is the null finish reason.
)

const (
    AudioResponseFormatJSON        AudioResponseFormat = "json"         // AudioResponseFormatJSON is the JSON format of some audio.
    AudioResponseFormatText        AudioResponseFormat = "text"         // AudioResponseFormatText is the text format of some audio.
    AudioResponseFormatSRT         AudioResponseFormat = "srt"          // AudioResponseFormatSRT is the SRT format of some audio.
    AudioResponseFormatVerboseJSON AudioResponseFormat = "verbose_json" // AudioResponseFormatVerboseJSON is the verbose JSON format of some audio.
    AudioResponseFormatVTT         AudioResponseFormat = "vtt"          // AudioResponseFormatVTT is the VTT format of some audio.

    TranscriptionTimestampGranularityWord    TranscriptionTimestampGranularity = "word"                                  // TranscriptionTimestampGranularityWord is the word timestamp granularity.
    TranscriptionTimestampGranularitySegment TranscriptionTimestampGranularity = "segment"                               // TranscriptionTimestampGranularitySegment is the segment timestamp granularity.
    Llama370B8192                            Model                             = "llama3-70b-8192"                       // Llama370B8192 is an AI text model provided by Meta. It has an 8192-token context window.
    DistilWhisperLargeV3En                   Model                             = "distil-whisper-large-v3-en"            // DistilWhisperLargeV3En is an AI audio model provided by Hugging Face. It has a 448-token context window.
    Gemma7BIt                                Model                             = "gemma-7b-it"                           // Gemma7BIt is an AI text model provided by Google. It has an 8192-token context window.
    LlavaV157B4096Preview                    Model                             = "llava-v1.5-7b-4096-preview"            // LlavaV157B4096Preview is an AI text model provided by Other. It has a 4096-token context window.
    Llama3170BVersatile                      Model                             = "llama-3.1-70b-versatile"               // Llama3170BVersatile is an AI text model provided by Meta. It has a 131072-token context window.
    Llama38B8192                             Model                             = "llama3-8b-8192"                        // Llama38B8192 is an AI text model provided by Meta. It has an 8192-token context window.
    Llama318BInstant                         Model                             = "llama-3.1-8b-instant"                  // Llama318BInstant is an AI text model provided by Meta. It has a 131072-token context window.
    WhisperLargeV3                           Model                             = "whisper-large-v3"                      // WhisperLargeV3 is an AI audio model provided by OpenAI. It has a 448-token context window.
    Llama3Groq8B8192ToolUsePreview           Model                             = "llama3-groq-8b-8192-tool-use-preview"  // Llama3Groq8B8192ToolUsePreview is an AI text model provided by Groq. It has an 8192-token context window.
    Gemma29BIt                               Model                             = "gemma2-9b-it"                          // Gemma29BIt is an AI text model provided by Google. It has an 8192-token context window.
    Mixtral8X7B32768                         Model                             = "mixtral-8x7b-32768"                    // Mixtral8X7B32768 is an AI text model provided by Mistral AI. It has a 32768-token context window.
    Llama3Groq70B8192ToolUsePreview          Model                             = "llama3-groq-70b-8192-tool-use-preview" // Llama3Groq70B8192ToolUsePreview is an AI text model provided by Groq. It has an 8192-token context window.
    LlamaGuard38B                            Model                             = "llama-guard-3-8b"                      // LlamaGuard38B is an AI moderation model provided by Meta. It has an 8192-token context window.
)

Variables

var (
    // SectionMap is a map of sections to categories.
    //
    // S1: Violent Crimes.
    //
    // AI models should not create content that enables, encourages, or excuses the commission of violent crimes. Examples of violent crimes include, but are not limited to:
    //                 - Unlawful violence toward people (ex: terrorism, genocide, murder, hate crimes, child abuse, assault, battery, kidnapping)
    //                 - Unlawful violence toward animals (ex: animal abuse)
    //
    // S2: Non-Violent Crimes.
    //
    // AI models should not create content that enables, encourages, or excuses the commission of non-violent crimes. Examples of non-violent crimes include, but are not limited to:
    //                 - Financial crimes (ex: fraud, scams, money laundering)
    //                 - Property crimes (ex: burglary, robbery, arson, vandalism)
    //                 - Drug crimes (ex: creating or trafficking narcotics)
    //                 - Weapons crimes (ex: producing unlicensed firearms)
    //                 - Cyber crimes (ex: hacking, spyware, malware)
    //
    // S3: Sex Crimes.
    //
    // AI models should not create content that enables, encourages, or excuses the commission of sex crimes. Examples of sex crimes include, but are not limited to:
    //                 - Human trafficking
    //                 - Sexual assault (ex: rape)
    //                 - Sexual harassment (ex: groping)
    //                 - Lewd conduct
    //
    // S4: Child Exploitation.
    //
    // AI models should not create content that depicts child nudity or that enables, encourages, excuses, or depicts the sexual abuse of children.
    //
    // S5: Defamation.
    //
    // AI models should not create content about a real, living person that is verifiably false and likely to injure the person's reputation.
    //
    // S6: Specialized Advice.
    //
    // AI models should not create content that contains specialized financial, medical, or legal advice.
    //
    // S7: Privacy.
    //
    // AI models should not create content that contains sensitive, personal information about private individuals.
    //
    // S8: Intellectual Property.
    //
    // AI models should not create content that violates the intellectual property rights of third parties.
    //
    // S9: Indiscriminate Weapons.
    // AI models should not create content that enables the creation of weapons of mass destruction. Examples of such weapons include, but are not limited to:
    //                 - Chemical weapons (ex: nerve gas)
    //                 - Biological weapons (ex: anthrax)
    //                 - Radiological weapons (ex: salted bombs)
    //                 - Nuclear weapons (ex: atomic warheads)
    //                 - High-yield explosive weapons (ex: cluster munitions)
    //
    // S10: Hate.
    //
    // AI models should not create content that is hateful toward people on the basis of their protected characteristics (race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease). AI models should not create content that perpetuates negative stereotypes.
    //
    // S11: Self-Harm.
    //
    // AI models should not create content that enables, encourages, or excuses acts of intentional self-harm. Examples of such actions include, but are not limited to:
    //                 - Suicide
    //                 - Self-injury (ex: cutting)
    //                 - Disordered eating
    // S12: Sexual Content.
    //
    // AI models should not create depictions of nude adults or content that contains erotic descriptions or explicit depictions of sex acts.
    //
    // S13: Elections.
    //
    // AI models should not create content that contains factually incorrect information about electoral systems and processes, including in the time, place, or manner of voting in civic elections.
    //
    // S14: Code Interpreter Abuse.
    //
    // AI models should not create content that attempts to abuse code interpreters. Examples of code interpreter abuse include, but are not limited to:
    //                 - Denial of service attacks
    //                 - Container escapes or privilege escalation.
    SectionMap = map[string]HarmfulCategory{
        "S1":  CategoryViolentCrimes,
        "S2":  CategoryNonviolentCrimes,
        "S3":  CategorySexRelatedCrimes,
        "S4":  CategoryChildSexualExploitation,
        "S5":  CategoryDefamation,
        "S6":  CategorySpecializedAdvice,
        "S7":  CategoryPrivacy,
        "S8":  CategoryIntellectualProperty,
        "S9":  CategoryIndiscriminateWeapons,
        "S10": CategoryHate,
        "S11": CategorySuicideAndSelfHarm,
        "S12": CategorySexualContent,
        "S13": CategoryElections,
        "S14": CategoryCodeInterpreterAbuse,
    }
)

APIError provides error information returned by the Groq API.

type APIError struct {
    Code           any     `json:"code,omitempty"`  // Code is the code of the error.
    Message        string  `json:"message"`         // Message is the message of the error.
    Param          *string `json:"param,omitempty"` // Param is the param of the error.
    Type           string  `json:"type"`            // Type is the type of the error.
    HTTPStatusCode int     `json:"-"`               // HTTPStatusCode is the status code of the error.
}

func (*APIError) Error

func (e *APIError) Error() string

Error implements the error interface.

func (*APIError) UnmarshalJSON

func (e *APIError) UnmarshalJSON(data []byte) (err error)

UnmarshalJSON implements the json.Unmarshaler interface.

AudioRequest represents a request structure for the audio API.

type AudioRequest struct {
    Model                  Model                               // Model is the model to use for the transcription.
    FilePath               string                              // FilePath is either an existing file in your filesystem or a filename representing the contents of Reader.
    Reader                 io.Reader                           // Reader is an optional io.Reader when you do not want to use an existing file.
    Prompt                 string                              // Prompt is the prompt for the transcription.
    Temperature            float32                             // Temperature is the temperature for the transcription.
    Language               string                              // Language is the language for the transcription. Only for transcription.
    Format                 AudioResponseFormat                 // Format is the format for the response.
    TimestampGranularities []TranscriptionTimestampGranularity // Only for transcription. TimestampGranularities is the timestamp granularities for the transcription.
}
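Using the fields above together with the CreateTranscription method documented further down, a transcription request might be sketched as follows (the file path is hypothetical, and the API key is assumed to come from the environment):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	groq "github.com/conneroisu/groq-go"
)

func main() {
	client, err := groq.NewClient(os.Getenv("GROQ_API_KEY"))
	if err != nil {
		log.Fatal(err)
	}
	// Transcribe a local audio file; "meeting.m4a" is a hypothetical path.
	resp, err := client.CreateTranscription(context.Background(), groq.AudioRequest{
		Model:    groq.WhisperLargeV3,
		FilePath: "meeting.m4a",
		Format:   groq.AudioResponseFormatVerboseJSON,
		Language: "en",
		TimestampGranularities: []groq.TranscriptionTimestampGranularity{
			groq.TranscriptionTimestampGranularityWord,
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Text)
}
```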

AudioResponse represents a response structure for the audio API.

type AudioResponse struct {
    Task     string   `json:"task"`     // Task is the task of the response.
    Language string   `json:"language"` // Language is the language of the response.
    Duration float64  `json:"duration"` // Duration is the duration of the response.
    Segments Segments `json:"segments"` // Segments is the segments of the response.
    Words    Words    `json:"words"`    // Words is the words of the response.
    Text     string   `json:"text"`     // Text is the text of the response.

    Header http.Header // Header is the header of the response.
}

func (*AudioResponse) SetHeader

func (r *AudioResponse) SetHeader(header http.Header)

SetHeader sets the header of the response.

AudioResponseFormat is the response format for the audio API.

Responses are formatted as AudioResponseFormatJSON by default.

type AudioResponseFormat string

ChatCompletionChoice represents the chat completion choice.

type ChatCompletionChoice struct {
    Index   int                   `json:"index"`   // Index is the index of the choice.
    Message ChatCompletionMessage `json:"message"` // Message is the chat completion message of the choice.
    // FinishReason is the finish reason of the choice.
    //
    // stop: API returned complete message,
    // or a message terminated by one of the stop sequences provided via the stop parameter
    // length: Incomplete model output due to max_tokens parameter or token limit
    // function_call: The model decided to call a function
    // content_filter: Omitted content due to a flag from our content filters
    // null: API response still in progress or incomplete
    FinishReason FinishReason `json:"finish_reason"`      // FinishReason is the finish reason of the choice.
    LogProbs     *LogProbs    `json:"logprobs,omitempty"` // LogProbs is the log probs of the choice.
}

ChatCompletionMessage represents the chat completion message.

type ChatCompletionMessage struct {
    Role         Role              `json:"role"`    // Role is the role of the chat completion message.
    Content      string            `json:"content"` // Content is the content of the chat completion message.
    MultiContent []ChatMessagePart // MultiContent is the multi content of the chat completion message.

    // This property isn't in the official documentation, but it's in
    // the documentation for the official library for python:
    //
    //	- https://github.com/openai/openai-python/blob/main/chatml.md
    //	- https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
    Name string `json:"name,omitempty"`

    // FunctionCall: for Role=assistant prompts, this may be set to the
    // function call generated by the model.
    FunctionCall *FunctionCall `json:"function_call,omitempty"`

    // ToolCalls: for Role=assistant prompts, this may be set to the tool
    // calls generated by the model, such as function calls.
    ToolCalls []ToolCall `json:"tool_calls,omitempty"`

    // ToolCallID: for Role=tool prompts, this should be set to the ID given
    // in the assistant's prior request to call a tool.
    ToolCallID string `json:"tool_call_id,omitempty"`
}
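For example, a system prompt plus a multimodal user turn can be assembled from these fields and the part types defined above (the image URL is hypothetical):

```go
// buildMessages assembles a system prompt and a user turn whose
// MultiContent mixes plain text with an image URL part.
func buildMessages() []groq.ChatCompletionMessage {
	return []groq.ChatCompletionMessage{
		{Role: groq.ChatMessageRoleSystem, Content: "You are a terse assistant."},
		{
			Role: groq.ChatMessageRoleUser,
			MultiContent: []groq.ChatMessagePart{
				{Type: groq.ChatMessagePartTypeText, Text: "What is in this image?"},
				{
					Type: groq.ChatMessagePartTypeImageURL,
					ImageURL: &groq.ChatMessageImageURL{
						URL:    "https://example.com/cat.png", // hypothetical URL
						Detail: groq.ImageURLDetailAuto,
					},
				},
			},
		},
	}
}
```

Note that Content and MultiContent are alternatives for a single message; the package's MarshalJSON decides which is emitted.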

func (ChatCompletionMessage) MarshalJSON

func (m ChatCompletionMessage) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaler interface.

func (*ChatCompletionMessage) UnmarshalJSON

func (m *ChatCompletionMessage) UnmarshalJSON(bs []byte) (err error)

UnmarshalJSON implements the json.Unmarshaler interface.

ChatCompletionRequest represents a request structure for the chat completion API.

type ChatCompletionRequest struct {
    Model            Model                         `json:"model"`                       // Model is the model of the chat completion request.
    Messages         []ChatCompletionMessage       `json:"messages"`                    // Messages is the messages of the chat completion request.
    MaxTokens        int                           `json:"max_tokens,omitempty"`        // MaxTokens is the max tokens of the chat completion request.
    Temperature      float32                       `json:"temperature,omitempty"`       // Temperature is the temperature of the chat completion request.
    TopP             float32                       `json:"top_p,omitempty"`             // TopP is the top p of the chat completion request.
    N                int                           `json:"n,omitempty"`                 // N is the n of the chat completion request.
    Stream           bool                          `json:"stream,omitempty"`            // Stream is the stream of the chat completion request.
    Stop             []string                      `json:"stop,omitempty"`              // Stop is the stop of the chat completion request.
    PresencePenalty  float32                       `json:"presence_penalty,omitempty"`  // PresencePenalty is the presence penalty of the chat completion request.
    ResponseFormat   *ChatCompletionResponseFormat `json:"response_format,omitempty"`   // ResponseFormat is the response format of the chat completion request.
    Seed             *int                          `json:"seed,omitempty"`              // Seed is the seed of the chat completion request.
    FrequencyPenalty float32                       `json:"frequency_penalty,omitempty"` // FrequencyPenalty is the frequency penalty of the chat completion request.
    // LogitBias must use token IDs (as given by the tokenizer) as keys, not words.
    // incorrect: `"logit_bias":{"You": 6}`, correct: `"logit_bias":{"1639": 6}`
    // refs: https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias
    LogitBias map[string]int `json:"logit_bias,omitempty"`
    // LogProbs indicates whether to return log probabilities of the output tokens or not.
    // If true, returns the log probabilities of each output token returned in the content of message.
    // This option is currently not available on the gpt-4-vision-preview model.
    LogProbs bool `json:"logprobs,omitempty"`
    // TopLogProbs is an integer between 0 and 5 specifying the number of most likely tokens to return at each
    // token position, each with an associated log probability.
    // logprobs must be set to true if this parameter is used.
    TopLogProbs int    `json:"top_logprobs,omitempty"`
    User        string `json:"user,omitempty"`
    Tools       []Tool `json:"tools,omitempty"`       // Tools is the tools of the chat completion message.
    ToolChoice  any    `json:"tool_choice,omitempty"` // ToolChoice is the tool choice of the chat completion message.
    // Options for streaming response. Only set this when you set stream: true.
    StreamOptions *StreamOptions `json:"stream_options,omitempty"`
    // Disable the default behavior of parallel tool calls by setting it: false.
    ParallelToolCalls any `json:"parallel_tool_calls,omitempty"`
}
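A request that constrains the model to a JSON-object response, built only from fields shown above, might look like this (the prompt text is illustrative):

```go
// Seed is a *int, so it must be taken by address.
seed := 42
req := groq.ChatCompletionRequest{
	Model: groq.Llama3170BVersatile,
	Messages: []groq.ChatCompletionMessage{
		{Role: groq.ChatMessageRoleUser, Content: "Return a JSON object describing Go."},
	},
	MaxTokens:   256,
	Temperature: 0.2,
	Seed:        &seed,
	ResponseFormat: &groq.ChatCompletionResponseFormat{
		Type: groq.ChatCompletionResponseFormatTypeJSONObject,
	},
}
```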

ChatCompletionResponse represents a response structure for the chat completion API.

type ChatCompletionResponse struct {
    ID                string                 `json:"id"`                 // ID is the id of the response.
    Object            string                 `json:"object"`             // Object is the object of the response.
    Created           int64                  `json:"created"`            // Created is the created time of the response.
    Model             string                 `json:"model"`              // Model is the model of the response.
    Choices           []ChatCompletionChoice `json:"choices"`            // Choices is the choices of the response.
    Usage             Usage                  `json:"usage"`              // Usage is the usage of the response.
    SystemFingerprint string                 `json:"system_fingerprint"` // SystemFingerprint is the system fingerprint of the response.

    http.Header // Header is the header of the response.
}

func (*ChatCompletionResponse) SetHeader

func (r *ChatCompletionResponse) SetHeader(h http.Header)

SetHeader sets the header of the response.

ChatCompletionResponseFormat is the chat completion response format.

type ChatCompletionResponseFormat struct {
    Type       ChatCompletionResponseFormatType        `json:"type,omitempty"`        // Type is the type of the chat completion response format.
    JSONSchema *ChatCompletionResponseFormatJSONSchema `json:"json_schema,omitempty"` // JSONSchema is the json schema of the chat completion response format.
}

ChatCompletionResponseFormatJSONSchema is the chat completion response format json schema.

type ChatCompletionResponseFormatJSONSchema struct {
    Name        string         `json:"name"`                  // Name is the name of the chat completion response format json schema.
    Description string         `json:"description,omitempty"` // Description is the description of the chat completion response format json schema.
    Schema      json.Marshaler `json:"schema"`                // Schema is the schema of the chat completion response format json schema.
    Strict      bool           `json:"strict"`                // Strict is the strict of the chat completion response format json schema.
}
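Because Schema is declared as a json.Marshaler rather than a concrete type, any value with a MarshalJSON method works; json.RawMessage from the standard library satisfies the interface, so a literal schema can be passed directly. The schema content and name below are illustrative:

```go
// json.RawMessage implements json.Marshaler, so a raw schema literal
// can be supplied without defining a custom type.
schema := json.RawMessage(`{
	"type": "object",
	"properties": {"city": {"type": "string"}},
	"required": ["city"]
}`)
format := &groq.ChatCompletionResponseFormat{
	Type: groq.ChatCompletionResponseFormatTypeJSONSchema,
	JSONSchema: &groq.ChatCompletionResponseFormatJSONSchema{
		Name:   "city_answer", // hypothetical schema name
		Schema: schema,
		Strict: true,
	},
}
```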

ChatCompletionResponseFormatType is the chat completion response format type.

type ChatCompletionResponseFormatType string

ChatCompletionStream is a stream of ChatCompletionStreamResponse.

Note: Perhaps it is more elegant to abstract Stream using generics.

type ChatCompletionStream struct {
    // contains filtered or unexported fields
}

ChatCompletionStreamChoice represents a choice within a chat completion stream response.

type ChatCompletionStreamChoice struct {
    Index        int                             `json:"index"`
    Delta        ChatCompletionStreamChoiceDelta `json:"delta"`
    FinishReason FinishReason                    `json:"finish_reason"`
}

ChatCompletionStreamChoiceDelta represents the incremental delta of a chat completion stream choice.

type ChatCompletionStreamChoiceDelta struct {
    Content      string        `json:"content,omitempty"`
    Role         string        `json:"role,omitempty"`
    FunctionCall *FunctionCall `json:"function_call,omitempty"`
    ToolCalls    []ToolCall    `json:"tool_calls,omitempty"`
}

ChatCompletionStreamResponse represents a response structure for the streaming chat completion API.

type ChatCompletionStreamResponse struct {
    ID                  string                       `json:"id"`                              // ID is the identifier for the chat completion stream response.
    Object              string                       `json:"object"`                          // Object is the object type of the chat completion stream response.
    Created             int64                        `json:"created"`                         // Created is the creation time of the chat completion stream response.
    Model               Model                        `json:"model"`                           // Model is the model used for the chat completion stream response.
    Choices             []ChatCompletionStreamChoice `json:"choices"`                         // Choices is the choices for the chat completion stream response.
    SystemFingerprint   string                       `json:"system_fingerprint"`              // SystemFingerprint is the system fingerprint for the chat completion stream response.
    PromptAnnotations   []PromptAnnotation           `json:"prompt_annotations,omitempty"`    // PromptAnnotations is the prompt annotations for the chat completion stream response.
    PromptFilterResults []PromptFilterResult         `json:"prompt_filter_results,omitempty"` // PromptFilterResults is the prompt filter results for the chat completion stream response.
    // Usage is an optional field that will only be present when you set stream_options: {"include_usage": true} in your request.
    //
    // When present, it contains a null value except for the last chunk which contains the token usage statistics
    // for the entire request.
    Usage *Usage `json:"usage,omitempty"`
}

ChatMessageImageURL represents the chat message image url.

type ChatMessageImageURL struct {
    URL    string         `json:"url,omitempty"`    // URL is the url of the image.
    Detail ImageURLDetail `json:"detail,omitempty"` // Detail is the detail of the image url.
}

ChatMessagePart represents the chat message part of a chat completion message.

type ChatMessagePart struct {
    Type     ChatMessagePartType  `json:"type,omitempty"`
    Text     string               `json:"text,omitempty"`
    ImageURL *ChatMessageImageURL `json:"image_url,omitempty"`
}

ChatMessagePartType is the chat message part type.

type ChatMessagePartType string

type Client

Client is a Groq API client.

type Client struct {
    EmptyMessagesLimit uint // EmptyMessagesLimit is the limit for the empty messages.
    // contains filtered or unexported fields
}

func NewClient(groqAPIKey string, opts ...Opts) (*Client, error)

NewClient creates a new Groq client.

func (*Client) CreateChatCompletion

func (c *Client) CreateChatCompletion(ctx context.Context, request ChatCompletionRequest) (response ChatCompletionResponse, err error)

CreateChatCompletion is an API call to create a chat completion.

func (c *Client) CreateChatCompletionStream(ctx context.Context, request ChatCompletionRequest) (stream *ChatCompletionStream, err error)

CreateChatCompletionStream is an API call to create a chat completion with streaming support.

If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
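A consuming loop might be sketched as follows. The Recv and Close methods on ChatCompletionStream are assumptions here (this page only documents Recv for the completion stream, and the chat stream's fields are unexported), so treat this as a sketch rather than the definitive API:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	groq "github.com/conneroisu/groq-go"
)

func main() {
	client, err := groq.NewClient(os.Getenv("GROQ_API_KEY"))
	if err != nil {
		log.Fatal(err)
	}
	stream, err := client.CreateChatCompletionStream(context.Background(), groq.ChatCompletionRequest{
		Model: groq.Llama318BInstant,
		Messages: []groq.ChatCompletionMessage{
			{Role: groq.ChatMessageRoleUser, Content: "Write a haiku about speed."},
		},
		Stream: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer stream.Close() // assumed method
	for {
		chunk, err := stream.Recv() // assumed signature, mirroring the completion stream
		if errors.Is(err, io.EOF) {
			break // the data: [DONE] message terminates the stream
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(chunk.Choices[0].Delta.Content)
	}
}
```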

func (*Client) CreateCompletion

func (c *Client) CreateCompletion(ctx context.Context, request CompletionRequest) (response CompletionResponse, err error)

CreateCompletion is an API call to create a completion. This is the main endpoint of the API. It returns new text as well as, if requested, the probabilities over each alternative token at each position.

If using a fine-tuned model, simply provide the model's ID in the CompletionRequest object, and the server will use the model's parameters to generate the completion.

func (*Client) CreateCompletionStream

func (c *Client) CreateCompletionStream(ctx context.Context, request CompletionRequest) (*CompletionStream, error)

CreateCompletionStream is an API call to create a completion with streaming support.

Recv receives the next response from the stream.

If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.

func (*Client) CreateTranscription

func (c *Client) CreateTranscription(ctx context.Context, request AudioRequest) (response AudioResponse, err error)

CreateTranscription calls the transcriptions endpoint with the given request.

Returns transcribed text in the response_format specified in the request.

func (*Client) CreateTranslation

func (c *Client) CreateTranslation(ctx context.Context, request AudioRequest) (response AudioResponse, err error)

CreateTranslation calls the translations endpoint with the given request.

Returns the translated text in the response_format specified in the request.

func (*Client) Moderate

func (c *Client) Moderate(ctx context.Context, request ModerationRequest) (response Moderation, err error)

Moderate performs a moderation API call on the given input. The input can be an array or slice, but a plain string keeps things simple.
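A moderation call using the LlamaGuard38B model listed above might be sketched as follows. Only the Moderate signature and the model constant appear in this documentation; the ModerationRequest field names (Input, Model) are assumptions of this sketch, as is the client variable from an earlier NewClient call:

```go
// Field names on ModerationRequest are hypothetical; check the package
// source for the actual struct definition.
result, err := client.Moderate(context.Background(), groq.ModerationRequest{
	Input: "user-supplied text to screen",
	Model: groq.LlamaGuard38B,
})
if err != nil {
	log.Fatal(err)
}
fmt.Printf("%+v\n", result)
```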

CompletionChoice represents one of the possible completions.

type CompletionChoice struct {
    Text         string        `json:"text"`          // Text is the text of the completion.
    Index        int           `json:"index"`         // Index is the index of the completion.
    FinishReason string        `json:"finish_reason"` // FinishReason is the finish reason of the completion.
    LogProbs     LogprobResult `json:"logprobs"`      // LogProbs is the log probabilities of the completion.
}

CompletionRequest represents a request structure for the completion API.

type CompletionRequest struct {
    Model            Model          `json:"model"`                       // Model is the model to use for the completion.
    Prompt           any            `json:"prompt,omitempty"`            // Prompt is the prompt for the completion.
    BestOf           int            `json:"best_of,omitempty"`           // BestOf is the number of completions to generate.
    Echo             bool           `json:"echo,omitempty"`              // Echo is whether to echo back the prompt in the completion.
    FrequencyPenalty float32        `json:"frequency_penalty,omitempty"` // FrequencyPenalty is the frequency penalty for the completion.
    LogitBias        map[string]int `json:"logit_bias,omitempty"`        // LogitBias must use token IDs (as given by the tokenizer) as keys, not words. incorrect: `"logit_bias":{"You": 6}`, correct: `"logit_bias":{"1639": 6}` refs: https://platform.openai.com/docs/api-reference/completions/create#completions/create-logit_bias
    LogProbs         int            `json:"logprobs,omitempty"`          // LogProbs is whether to include the log probabilities in the response.
    MaxTokens        int            `json:"max_tokens,omitempty"`        // MaxTokens is the maximum number of tokens to generate.
    N                int            `json:"n,omitempty"`                 // N is the number of completions to generate.
    PresencePenalty  float32        `json:"presence_penalty,omitempty"`  // PresencePenalty is the presence penalty for the completion.
    Seed             *int           `json:"seed,omitempty"`              // Seed is the seed for the completion.
    Stop             []string       `json:"stop,omitempty"`              // Stop is the stop sequence for the completion.
    Stream           bool           `json:"stream,omitempty"`            // Stream is whether to stream the response.
    Suffix           string         `json:"suffix,omitempty"`            // Suffix is the suffix for the completion.
    Temperature      float32        `json:"temperature,omitempty"`       // Temperature is the temperature for the completion.
    TopP             float32        `json:"top_p,omitempty"`             // TopP is the top p for the completion.
    User             string         `json:"user,omitempty"`              // User is the user for the completion.
}

CompletionResponse represents a response structure for completion API.

type CompletionResponse struct {
    ID      string             `json:"id"`      // ID is the ID of the completion.
    Object  string             `json:"object"`  // Object is the object of the completion.
    Created int64              `json:"created"` // Created is the created time of the completion.
    Model   Model              `json:"model"`   // Model is the model of the completion.
    Choices []CompletionChoice `json:"choices"` // Choices is the choices of the completion.
    Usage   Usage              `json:"usage"`   // Usage is the usage of the completion.

    Header http.Header // Header is the header of the response.
}

func (*CompletionResponse) SetHeader

func (r *CompletionResponse) SetHeader(header http.Header)

SetHeader sets the header of the response.

CompletionStream is a stream of completions.

type CompletionStream struct {
    // contains filtered or unexported fields
}

DefaultErrorAccumulator is a default implementation of ErrorAccumulator

type DefaultErrorAccumulator struct {
    Buffer errorBuffer
}

func (*DefaultErrorAccumulator) Bytes

func (e *DefaultErrorAccumulator) Bytes() (errBytes []byte)

Bytes returns the bytes of the error accumulator.

func (*DefaultErrorAccumulator) Write

func (e *DefaultErrorAccumulator) Write(p []byte) error

Write writes bytes to the error accumulator.

Endpoint is the endpoint for the Groq API. string

type Endpoint string

ErrChatCompletionInvalidModel is an error that occurs when the model is not supported with the CreateChatCompletion method.

type ErrChatCompletionInvalidModel struct {
    Model    Model
    Endpoint Endpoint
}

func (ErrChatCompletionInvalidModel) Error

func (e ErrChatCompletionInvalidModel) Error() string

Error implements the error interface.

ErrChatCompletionStreamNotSupported is an error that occurs when streaming is not supported with the CreateChatCompletionStream method.

type ErrChatCompletionStreamNotSupported struct {
    // contains filtered or unexported fields
}

func (ErrChatCompletionStreamNotSupported) Error

func (e ErrChatCompletionStreamNotSupported) Error() string

Error implements the error interface.

ErrCompletionRequestPromptTypeNotSupported is an error that occurs when the type of CompletionRequest.Prompt only supports string and []string.

type ErrCompletionRequestPromptTypeNotSupported struct{}

func (ErrCompletionRequestPromptTypeNotSupported) Error

func (e ErrCompletionRequestPromptTypeNotSupported) Error() string

Error implements the error interface.

ErrCompletionStreamNotSupported is an error that occurs when streaming is not supported with the CreateCompletionStream method.

type ErrCompletionStreamNotSupported struct{}

func (ErrCompletionStreamNotSupported) Error

func (e ErrCompletionStreamNotSupported) Error() string

Error implements the error interface.

ErrCompletionUnsupportedModel is an error that occurs when the model is not supported with the CreateCompletion method.

type ErrCompletionUnsupportedModel struct{ Model Model }

func (ErrCompletionUnsupportedModel) Error

func (e ErrCompletionUnsupportedModel) Error() string

Error implements the error interface.

ErrContentFieldsMisused is an error that occurs when both Content and MultiContent properties are set.

type ErrContentFieldsMisused struct {
    // contains filtered or unexported fields
}

func (ErrContentFieldsMisused) Error

func (e ErrContentFieldsMisused) Error() string

Error implements the error interface.

ErrTooManyEmptyStreamMessages is returned when the stream has sent too many empty messages.

type ErrTooManyEmptyStreamMessages struct{}

func (ErrTooManyEmptyStreamMessages) Error

func (e ErrTooManyEmptyStreamMessages) Error() string

Error returns the error message.

FinishReason is the finish reason. string

type FinishReason string

func (FinishReason) MarshalJSON

func (r FinishReason) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaler interface.

type Format

Format is the format of a response. string

type Format string

const (
    // FormatText is the text format. It is the default format of a
    // response.
    FormatText Format = "text"
    // FormatJSON is the JSON format. There is no support for streaming with
    // JSON format selected.
    FormatJSON Format = "json"
)

FunctionCall represents a function call.

type FunctionCall struct {
    Name      string `json:"name,omitempty"`      // Name is the name of the function call.
    Arguments string `json:"arguments,omitempty"` // Arguments is the arguments of the function call in JSON format.
}

FunctionDefinition represents the function definition.

type FunctionDefinition struct {
    Name        string `json:"name"`                  // Name is the name of the function.
    Description string `json:"description,omitempty"` // Description is the description of the function.
    Strict      bool   `json:"strict,omitempty"`      // Strict indicates whether the function call must strictly follow the parameter schema.
    // Parameters is an object describing the function.
    // You can pass json.RawMessage to describe the schema,
    // or you can pass in a struct which serializes to the proper JSON schema.
    // The jsonschema package is provided for convenience, but you should
    // consider another specialized library if you require more complex schemas.
    Parameters any `json:"parameters"`
}

HarmfulCategory is a category of harmful content.

[Llama Guard 3](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama-guard-3/\)

Llama Guard 3 provides content safety support for the following languages: English, French, German, Hindi, Italian, Portuguese, Spanish, and Thai.

string

type HarmfulCategory string

const (
    // ModerationTextStable is the text-moderation-stable model.
    ModerationTextStable = "text-moderation-stable"
    // ModerationTextLatest is the text-moderation-latest model.
    ModerationTextLatest = "text-moderation-latest"

    // CategoryViolentCrimes (S1) is the violent crimes category. Responses that enable, encourage, or endorse the commission of violent crimes, including: (1) unlawful violence toward people (ex: terrorism, genocide, murder, hate-motivated violent crimes, child abuse, assault, battery, kidnapping) and (2) unlawful violence toward animals (ex: animal abuse)
    CategoryViolentCrimes HarmfulCategory = "violent_crimes"
    // CategoryNonviolentCrimes (S2) is the non-violent crimes category. Responses that enable, encourage, or endorse the commission of non-violent crimes (ex: fraud, theft, drug crimes, weapons crimes, cybercrime).
    CategoryNonviolentCrimes HarmfulCategory = "non_violent_crimes"
    // CategorySexRelatedCrimes (S3) is the sex-related crimes category. Responses that enable, encourage, or endorse the commission of sex-related crimes (ex: sex trafficking, sexual assault, sexual harassment).
    CategorySexRelatedCrimes HarmfulCategory = "sex_related_crimes"
    // CategoryChildSexualExploitation (S4) is the child sexual exploitation category. Responses that contain, describe, enable, encourage, or endorse the sexual abuse of children.
    CategoryChildSexualExploitation HarmfulCategory = "child_sexual_exploitation"
    // CategoryDefamation (S5) is the defamation category. Responses that are both verifiably false and likely to injure a living person's reputation.
    CategoryDefamation HarmfulCategory = "defamation"
    // CategorySpecializedAdvice (S6) is the specialized advice category. Responses that contain specialized financial, medical, or legal advice.
    CategorySpecializedAdvice HarmfulCategory = "specialized_advice"
    // CategoryPrivacy (S7) is the privacy category. Responses that contain sensitive, nonpublic personal information that could undermine someone's physical, digital, or financial security.
    CategoryPrivacy HarmfulCategory = "privacy"
    // CategoryIntellectualProperty (S8) is the intellectual property category. Responses that may violate the intellectual property rights of any third party.
    CategoryIntellectualProperty HarmfulCategory = "intellectual_property"
    // CategoryIndiscriminateWeapons (S9) is the indiscriminate weapons category. Responses that enable, encourage, or endorse the creation of indiscriminate weapons (ex: chemical, biological, radiological, nuclear, or high-yield explosive weapons).
    CategoryIndiscriminateWeapons HarmfulCategory = "indiscriminate_weapons"
    // CategoryHate (S10) is the hate category. Responses that demean or dehumanize people on the basis of their sensitive, personal characteristics.
    CategoryHate HarmfulCategory = "hate"
    // CategorySuicideAndSelfHarm (S11) is the suicide/self-harm category. Responses that enable, encourage, or endorse acts of intentional self-harm.
    CategorySuicideAndSelfHarm HarmfulCategory = "suicide_and_self_harm"
    // CategorySexualContent (S12) is the sexual content category. Responses that contain erotica.
    CategorySexualContent HarmfulCategory = "sexual_content"
    // CategoryElections (S13) is the elections category. Responses that contain factually incorrect information about electoral systems and processes, including in the time, place, or manner of voting in civic elections.
    CategoryElections HarmfulCategory = "elections"
    // CategoryCodeInterpreterAbuse (S14) is the code interpreter abuse category. Responses that seek to abuse code interpreters (ex: denial-of-service attacks, container escapes, privilege escalation exploits).
    CategoryCodeInterpreterAbuse HarmfulCategory = "code_interpreter_abuse"
)

ImageURLDetail is the image url detail.

string

type ImageURLDetail string

type LogProb

LogProb represents the probability information for a token.

type LogProb struct {
    Token       string        `json:"token"`           // Token is the token text.
    LogProb     float64       `json:"logprob"`         // LogProb is the log probability of the token.
    Bytes       []byte        `json:"bytes,omitempty"` // Bytes is the UTF-8 byte representation of the token; omitted when null.
    TopLogProbs []TopLogProbs `json:"top_logprobs"`    // TopLogProbs is a list of the most likely tokens and their log probabilities at this token position. In rare cases, fewer than the requested number of top_logprobs may be returned.
}

LogProbs is the top-level structure containing the log probability information.

type LogProbs struct {
    Content []LogProb `json:"content"` // Content is a list of message content tokens with log probability information.
}

LogprobResult represents logprob result of Choice.

type LogprobResult struct {
    Tokens        []string             `json:"tokens"`         // Tokens is the tokens of the completion.
    TokenLogprobs []float32            `json:"token_logprobs"` // TokenLogprobs is the token log probabilities of the completion.
    TopLogprobs   []map[string]float32 `json:"top_logprobs"`   // TopLogprobs is the top log probabilities of the completion.
    TextOffset    []int                `json:"text_offset"`    // TextOffset is the text offset of the completion.
}

type Model

Model is the type for models available on the Groq API. string

type Model string

Moderation represents a moderation result.

type Moderation struct {
    Categories []HarmfulCategory `json:"categories"` // Categories is the list of harmful categories detected in the input.
    Flagged    bool              `json:"flagged"`    // Flagged indicates whether the input was flagged as harmful.
}

ModerationRequest represents a request structure for moderation API.

type ModerationRequest struct {
    Input string `json:"input,omitempty"` // Input is the input text to be moderated.
    Model Model  `json:"model,omitempty"` // Model is the model to use for the moderation.
}

type Opts

Opts is a function that sets options for a Groq client.

type Opts func(*Client)

func WithBaseURL(baseURL string) Opts

WithBaseURL sets the base URL for the Groq client.

func WithClient(client *http.Client) Opts

WithClient sets the client for the Groq client.

func WithLogger(logger zerolog.Logger) Opts

WithLogger sets the logger for the Groq client.

PromptAnnotation represents the prompt annotation.

type PromptAnnotation struct {
    PromptIndex int `json:"prompt_index,omitempty"`
}

PromptFilterResult represents a response structure for chat completion API.

type PromptFilterResult struct {
    Index int `json:"index"`
}

RateLimitHeaders struct represents Groq rate limits headers.

type RateLimitHeaders struct {
    LimitRequests     int       `json:"x-ratelimit-limit-requests"`     // LimitRequests is the limit requests of the rate limit headers.
    LimitTokens       int       `json:"x-ratelimit-limit-tokens"`       // LimitTokens is the limit tokens of the rate limit headers.
    RemainingRequests int       `json:"x-ratelimit-remaining-requests"` // RemainingRequests is the remaining requests of the rate limit headers.
    RemainingTokens   int       `json:"x-ratelimit-remaining-tokens"`   // RemainingTokens is the remaining tokens of the rate limit headers.
    ResetRequests     ResetTime `json:"x-ratelimit-reset-requests"`     // ResetRequests is the reset requests of the rate limit headers.
    ResetTokens       ResetTime `json:"x-ratelimit-reset-tokens"`       // ResetTokens is the reset tokens of the rate limit headers.
}

RawResponse is a response from the raw endpoint.

type RawResponse struct {
    io.ReadCloser

    http.Header
}

ResetTime is a string representation of the rate limit reset time; use Time to convert it to a time.Time. string

type ResetTime string

func (ResetTime) String

func (r ResetTime) String() string

String returns the string representation of the ResetTime.

func (ResetTime) Time

func (r ResetTime) Time() time.Time

Time returns the time.Time representation of the ResetTime.

type Role

Role is the role of the chat completion message.

string

type Role string

Segments is the segments of the response.

type Segments []struct {
    ID               int     `json:"id"`                // ID is the ID of the segment.
    Seek             int     `json:"seek"`              // Seek is the seek of the segment.
    Start            float64 `json:"start"`             // Start is the start of the segment.
    End              float64 `json:"end"`               // End is the end of the segment.
    Text             string  `json:"text"`              // Text is the text of the segment.
    Tokens           []int   `json:"tokens"`            // Tokens is the tokens of the segment.
    Temperature      float64 `json:"temperature"`       // Temperature is the temperature of the segment.
    AvgLogprob       float64 `json:"avg_logprob"`       // AvgLogprob is the avg log prob of the segment.
    CompressionRatio float64 `json:"compression_ratio"` // CompressionRatio is the compression ratio of the segment.
    NoSpeechProb     float64 `json:"no_speech_prob"`    // NoSpeechProb is the no speech prob of the segment.
    Transient        bool    `json:"transient"`         // Transient indicates whether the segment is transient.
}

StreamOptions represents the stream options.

type StreamOptions struct {
    // If set, an additional chunk will be streamed before the data: [DONE] message.
    // The usage field on this chunk shows the token usage statistics for the entire request,
    // and the choices field will always be an empty array.
    // All other chunks will also include a usage field, but with a null value.
    IncludeUsage bool `json:"include_usage,omitempty"`
}

type Tool

Tool represents the tool.

type Tool struct {
    Type     ToolType            `json:"type"`               // Type is the type of the tool.
    Function *FunctionDefinition `json:"function,omitempty"` // Function is the function of the tool.
}

ToolCall represents a tool call.

type ToolCall struct {
    // Index is not nil only in chat completion chunk object
    Index    *int         `json:"index,omitempty"` // Index is the index of the tool call.
    ID       string       `json:"id"`              // ID is the id of the tool call.
    Type     ToolType     `json:"type"`            // Type is the type of the tool call.
    Function FunctionCall `json:"function"`        // Function is the function of the tool call.
}

ToolChoice represents the tool choice.

type ToolChoice struct {
    Type     ToolType     `json:"type"`               // Type is the type of the tool choice.
    Function ToolFunction `json:"function,omitempty"` // Function is the function of the tool choice.
}

ToolFunction represents the tool function.

type ToolFunction struct {
    Name string `json:"name"` // Name is the name of the tool function.
}

ToolType is the tool type.

string

type ToolType string

TopLogProbs represents the top log probs.

type TopLogProbs struct {
    Token   string  `json:"token"`           // Token is the token text.
    LogProb float64 `json:"logprob"`         // LogProb is the log probability of the token.
    Bytes   []byte  `json:"bytes,omitempty"` // Bytes is the UTF-8 byte representation of the token.
}

TranscriptionTimestampGranularity is the timestamp granularity for the transcription.

string

type TranscriptionTimestampGranularity string

type Usage

Usage represents the total token usage per request to Groq.

type Usage struct {
    PromptTokens     int `json:"prompt_tokens"`
    CompletionTokens int `json:"completion_tokens"`
    TotalTokens      int `json:"total_tokens"`
}

type Words

Words is the words of the response.

type Words []struct {
    Word  string  `json:"word"`  // Word is the transcribed word.
    Start float64 `json:"start"` // Start is the start time of the word.
    End   float64 `json:"end"`   // End is the end time of the word.
}

Generated by gomarkdoc

About

Groq API package for interacting with language models available on the Groq API.

License: MIT License

