cleesmith changed the title from "Add support for: LM Studio, OpenRouter, Google's free Gemini, streaming AI responses, and list of models for each provider" to "Add support for: streaming, model lists, LM Studio, OpenRouter, more Google Gemini's" on Nov 29, 2024
It would be great to have support for these:

- streaming AI responses - totally expected in chats
- a list of models from each selected provider - models change so frequently nowadays that it's better to ask the provider for a list than to keep my own list up to date
- LM Studio - a nice local LLM server with lots of models
- OpenRouter - 200+ models, and one API key to get to them all
- Google's free Gemini, in addition to Advanced and Vertex

Thanks for your efforts to "normalize" the LLM requests/responses to "fit" OpenAI's API. Why all of the AI providers cannot agree on a standard API is puzzling (or is it?).
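To illustrate why these requests fit together: LM Studio and OpenRouter both expose OpenAI-compatible endpoints, so one code path can cover model listing (`GET {base_url}/models`) and streaming (server-sent events where each line is `data: {json}` and the stream ends with `data: [DONE]`). Below is a minimal sketch of that shared path; the base URLs and the `parse_sse_chunks` helper name are illustrative assumptions, not part of any existing codebase.

```python
import json

# Assumed base URLs: LM Studio's default local server port and
# OpenRouter's public API endpoint (both OpenAI-compatible).
PROVIDER_BASE_URLS = {
    "lmstudio": "http://localhost:1234/v1",
    "openrouter": "https://openrouter.ai/api/v1",
}

def models_url(provider: str) -> str:
    """Build the model-list URL used by OpenAI-compatible servers."""
    return f"{PROVIDER_BASE_URLS[provider]}/models"

def parse_sse_chunks(lines):
    """Yield content deltas from OpenAI-style streaming (SSE) lines.

    Each event line looks like 'data: {...json...}', and the stream
    is terminated by the sentinel line 'data: [DONE]'.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank separator lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        delta = event["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# Example: reassembling a streamed reply from captured SSE lines.
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
reply = "".join(parse_sse_chunks(sample))  # "Hello"
```

Because all of these providers mimic the same wire format, supporting a new one mostly reduces to adding a base URL and an API key, which is presumably why normalizing onto OpenAI's API shape pays off.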