
(Feature Request) Allow setting a custom OpenAI API endpoint to experiment with locally hosted LLMs #32

Open
PicoPlanetDev opened this issue Jul 10, 2023 · 2 comments

Comments

@PicoPlanetDev

Some open-source, locally run LLMs have the capability to emulate key features of the OpenAI API to "appear" to an application as GPT. Can you add support for a custom API endpoint to experiment with this and see if the performance of something like WizardLM or Falcon may be up to the task of generating commands?
I would then envision an input field in the Settings window with the default OpenAI API endpoint URL replaceable with your own.
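A minimal sketch of what this could look like under the hood (this is not the app's actual code; the function name, the default model, and the local port are illustrative assumptions). Many local servers expose an OpenAI-compatible `/v1/chat/completions` route, so the only change needed is making the base URL configurable:

```python
# Hypothetical sketch: route chat-completion requests to a configurable
# endpoint. The default mirrors the official OpenAI base URL; a locally
# hosted OpenAI-compatible server (e.g. one serving WizardLM or Falcon)
# is reached by overriding `base_url` from the Settings field.
import json
import urllib.request

DEFAULT_BASE_URL = "https://api.openai.com/v1"

def build_chat_request(prompt, api_key, base_url=DEFAULT_BASE_URL,
                       model="gpt-3.5-turbo"):
    """Return a urllib Request targeting <base_url>/chat/completions."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Pointing at a hypothetical local server instead of the official API:
local_req = build_chat_request("list files", "sk-local",
                               base_url="http://localhost:8000/v1")
```

Because the request shape is identical either way, swapping the endpoint requires no other changes to the calling code.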

@FireCubeStudios
Owner

Would it be possible to test this ("see if the performance of something like WizardLM or Falcon may be up to the task of generating commands") in a separate prototype app first?

I have another WIP all-in-one LLM app which could support custom LLMs, but that's only for that app. I plan to stick to GPT-3/4 for Run.

@EFLKumo

EFLKumo commented Jan 11, 2025

I also need to customize the endpoint so I can call the OpenAI API through a third party instead of the official one (the official API has rate limits, etc., and I haven't used it in a while).
