There are five main endpoints:
/v1/chat/completions (chat completions)
/v1/models (model listing)
/v1/images/generations (DALL-E image generation)
/v1/imagine (SDXL/Pluto image generation)
/v1/ask (GlobalAsk with web search)
Each endpoint is also available without the /v1 prefix. The base URL for all requests is https://api.discord.rocks; OpenAI client libraries can be pointed at it directly:
openai.base_url = "https://api.discord.rocks"
The following chat models are available:
['claude-3-haiku-20240307', 'claude-3-sonnet-20240229', 'claude-3-5-sonnet-20240620', 'claude-3-opus-20240229', 'gpt-4', 'gpt-4-turbo', 'gpt-3.5-turbo', 'gpt-4o', 'llama-3-70b-chat', 'llama-3-8b-chat', 'llama-2-70b-chat', 'llama-2-13b-chat', 'llama-2-7b-chat', 'LlamaGuard-2-8b', 'Yi-34B-Chat', 'Yi-34B', 'Yi-6B', 'Mixtral-8x7B-v0.1', 'Mixtral-8x22B', 'Mixtral-8x7B-Instruct-v0.1', 'Mixtral-8x22B-Instruct-v0.1', 'Mistral-7B-Instruct-v0.1', 'Mistral-7B-Instruct-v0.2', 'Mistral-7B-Instruct-v0.3', 'openchat-3.5', 'WizardLM-13B-V1.2', 'WizardCoder-Python-34B-V1.0', 'Qwen1.5-0.5B-Chat', 'Qwen1.5-1.8B-Chat', 'Qwen1.5-4B-Chat', 'Qwen1.5-7B-Chat', 'Qwen1.5-14B-Chat', 'Qwen1.5-72B-Chat', 'Qwen1.5-110B-Chat', 'gemma-2b-it', 'gemma-7b-it', 'gemma-2b', 'gemma-7b', 'dbrx-instruct', 'vicuna-7b-v1.5', 'vicuna-13b-v1.5', 'dolphin-2.5-mixtral-8x7b', 'deepseek-coder-33b-instruct', 'deepseek-coder-67b-instruct', 'Nous-Capybara-7B-V1p9', 'Nous-Hermes-2-Mixtral-8x7B-DPO', 'Nous-Hermes-2-Mixtral-8x7B-SFT', 'Nous-Hermes-llama-2-7b', 'Nous-Hermes-Llama2-13b', 'Nous-Hermes-2-Yi-34B', 'Mistral-7B-OpenOrca', 'alpaca-7b', 'OpenHermes-2-Mistral-7B', 'OpenHermes-2.5-Mistral-7B', 'phi-2', 'WizardLM-2-8x22B', 'NexusRaven-V2-13B', 'Phind-CodeLlama-34B-v2', 'CodeLlama-7b-Python-hf', 'CodeLlama-13b-Python-hf', 'CodeLlama-34b-Python-hf', 'CodeLlama-70b-Python-hf', 'snowflake-arctic-instruct', 'SOLAR-10.7B-Instruct-v1.0', 'StripedHyena-Hessian-7B', 'StripedHyena-Nous-7B']
The /v1/chat/completions and /chat/completions endpoints send a prompt to the specified LLM model and return its response. Send a POST request to /v1/chat/completions or /chat/completions with the following JSON payload:
{
  "model": "claude-3-opus",
  "messages": [
    {"role": "system", "content": "System prompt (only the first message, once)"},
    {"role": "user", "content": "Message content"},
    {"role": "assistant", "content": "Assistant response"}
  ],
  "max_tokens": 2048,
  "stream": false,
  "temperature": 0.7,
  "top_p": 0.5,
  "top_k": 0
}
The response will be in the following format:
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "claude-3-opus",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Response content"
    },
    "logprobs": null,
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
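The request above can be sketched in Python using only the standard library. The base URL and payload fields come from this document; the helper names (`build_chat_payload`, `chat_completion`) are illustrative, not part of the API:

```python
# Sketch of the chat-completions call described above, standard library only.
# Helper names are illustrative, not part of the API.
import json
import urllib.request

BASE_URL = "https://api.discord.rocks"

def build_chat_payload(model, messages, **options):
    """Assemble the JSON payload shown above (max_tokens, stream, ...)."""
    payload = {"model": model, "messages": messages}
    payload.update(options)
    return payload

def chat_completion(payload):
    """POST the payload to /v1/chat/completions and return the parsed JSON."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires network access):
# reply = chat_completion(build_chat_payload(
#     "claude-3-opus",
#     [{"role": "user", "content": "Hello"}],
#     max_tokens=2048, temperature=0.7,
# ))
# print(reply["choices"][0]["message"]["content"])
```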
The /v1/models and /models endpoints list all available models. Send a GET request to /v1/models or /models to retrieve the list.
The response will be in the following format:
{
  "object": "list",
  "data": [
    {
      "id": "model-id",
      "object": "model",
      "created": 1686935002,
      "owned_by": "provider"
    },
    ...
  ]
}
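A minimal sketch of fetching that listing, again with only the standard library; `model_ids` is an illustrative helper for pulling the IDs out of the response shape shown above:

```python
# Sketch of listing the available models via /v1/models.
# Helper names are illustrative, not part of the API.
import json
import urllib.request

BASE_URL = "https://api.discord.rocks"

def model_ids(listing):
    """Extract the model IDs from the list response shown above."""
    return [entry["id"] for entry in listing["data"]]

def list_models():
    """GET /v1/models and return the parsed JSON listing."""
    with urllib.request.urlopen(f"{BASE_URL}/v1/models") as resp:
        return json.load(resp)

# Example (requires network access):
# print(model_ids(list_models()))
```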
The /v1/images/generations and /images/generations endpoints generate images using DALL-E models. Send a POST request to /v1/images/generations or /images/generations with the following JSON payload:
{
  "prompt": "A futuristic cityscape",
  "model": "dall-e-3",
  "n": 1,
  "quality": "hd",
  "response_format": "url",
  "size": "1024x1024"
}
The response will be in the following format:
{
  "created": 1686935002,
  "data": [
    {
      "url": "https://example.com/generated_image.png"
    }
  ]
}
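The same pattern works for image generation. A sketch, with the payload fields taken from above and illustrative helper names:

```python
# Sketch of an image-generation request to /v1/images/generations; the
# field names mirror the payload above. Helper names are illustrative.
import json
import urllib.request

BASE_URL = "https://api.discord.rocks"

def build_image_payload(prompt, model="dall-e-3", n=1, quality="hd",
                        response_format="url", size="1024x1024"):
    """Assemble the JSON payload shown above."""
    return {"prompt": prompt, "model": model, "n": n, "quality": quality,
            "response_format": response_format, "size": size}

def generate_images(payload):
    """POST the payload and return the parsed response (created, data[])."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/images/generations",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires network access):
# result = generate_images(build_image_payload("A futuristic cityscape"))
# print(result["data"][0]["url"])
```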
The /v1/imagine and /imagine endpoints also generate images. Send a GET request to /v1/imagine or /imagine with the following query parameters:
{
  "prompt": "A beautiful landscape",
  "negative_prompt": "rain",
  "width": 1024,
  "height": 1024,
  "steps": 50,
  "seed": 123456,
  "model": "sdxl"
}
The "model" parameter accepts either "sdxl" or "pluto". Note: the Pluto model only accepts the "prompt" and "negative_prompt" parameters.
The response will be an image in PNG format:
Content-Type: image/png
(binary image data)
Example GET request:
GET /v1/imagine?prompt=A+beautiful+landscape&negative_prompt=rain&width=1024&height=1024&steps=50&seed=123456&model=sdxl
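Building that query string by hand is error-prone; a sketch of encoding the parameters above and saving the PNG bytes, using only the standard library (`build_imagine_url` is an illustrative helper):

```python
# Sketch of building the /v1/imagine GET request from the query parameters
# above and saving the PNG response. Helper name is illustrative.
import urllib.parse
import urllib.request

BASE_URL = "https://api.discord.rocks"

def build_imagine_url(prompt, **params):
    """Encode the query parameters shown above into the request URL."""
    query = urllib.parse.urlencode({"prompt": prompt, **params})
    return f"{BASE_URL}/v1/imagine?{query}"

# Example (requires network access):
# url = build_imagine_url("A beautiful landscape", negative_prompt="rain",
#                         width=1024, height=1024, steps=50,
#                         seed=123456, model="sdxl")
# with urllib.request.urlopen(url) as resp, open("out.png", "wb") as f:
#     f.write(resp.read())  # response body is raw PNG data
```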
The /v1/ask and /ask endpoints use the GlobalAsk API, which sends the prompt to ChatGPT with web search and image generation enabled. Send a GET request to /v1/ask or /ask with the following query parameter:
{
  "prompt": "What is ChatGPT"
}
The response will be a text/event-stream consisting of multiple JSON chunks. Each line is sent separately and contains a JSON object in the following format:
{
  "message": STRING,
  "url": STRING
}
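Since each line of the stream is an independent JSON object, the body can be consumed line by line. A sketch of decoding it, assuming the chunk format above (`parse_ask_stream` is an illustrative helper):

```python
# Sketch of consuming the /v1/ask event stream: each non-empty line of the
# response body is parsed as one JSON chunk with "message" and "url" fields.
import json

def parse_ask_stream(lines):
    """Decode the line-delimited JSON chunks described above."""
    chunks = []
    for line in lines:
        line = line.strip()
        if line:
            chunks.append(json.loads(line))
    return chunks

# Example (requires network access):
# import urllib.parse, urllib.request
# query = urllib.parse.urlencode({"prompt": "What is ChatGPT"})
# with urllib.request.urlopen(f"https://api.discord.rocks/v1/ask?{query}") as resp:
#     for chunk in parse_ask_stream(resp.read().decode("utf-8").splitlines()):
#         print(chunk["message"], chunk.get("url"))
```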