Some open-source, locally run LLMs have the capability to emulate key features of the OpenAI API to "appear" to an application as GPT. Can you add support for a custom API endpoint to experiment with this and see if the performance of something like WizardLM or Falcon may be up to the task of generating commands?
I would then envision an input field in the Settings window with the default OpenAI API endpoint URL replaceable with your own.
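To illustrate the idea, here is a minimal Python sketch (not the app's actual implementation) assuming a locally hosted server that emulates the OpenAI Chat Completions API, such as one serving WizardLM or Falcon. The endpoint URL and model name are placeholders; the only change needed on the client side is overriding the base URL, which is exactly what the proposed Settings field would supply:

```python
from openai import OpenAI

# Hypothetical values: a local OpenAI-compatible server and a placeholder model name.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # would come from the Settings field
    api_key="not-needed-for-local",       # many local servers ignore the key
)

response = client.chat.completions.create(
    model="wizardlm",  # whatever model the local server exposes
    messages=[
        {"role": "system", "content": "You translate user requests into shell commands."},
        {"role": "user", "content": "list all files larger than 100 MB"},
    ],
)
print(response.choices[0].message.content)
```

Because the request and response formats are unchanged, swapping the base URL back to the official OpenAI endpoint would restore the current behavior without any other code changes.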
Would it be possible to test this ("see if the performance of something like WizardLM or Falcon may be up to the task of generating commands") in a separate prototype app first?
I have another WIP all-in-one LLM app which could support custom LLMs, but that's only for that app. I plan to stick with GPT-3/4 for Run.
I also need to customize the endpoint so I can call the OpenAI API through a third-party provider instead of the official one (the official API has rate limits, etc., and I haven't used it in a while).