Support for LMStudio #3973
Comments
Hey @EDLLT, how are you? If I understand correctly, all you have to do is update the base URL in the OpenAI component. I set it to
Oh, apologies. I had tried setting it before, but not setting the API key was causing it to fail.
Hey, so I managed to get LM Studio's LLMs working, but what about embedding models?
Edit: Apparently there is an option, but it doesn't work out of the box with LM Studio like the previous one did.
Edit 2: I managed to get it working with an NVIDIA embedding model; I simply changed the base URL and supplied the correct model name.
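For reference, LM Studio serves an OpenAI-compatible API, by default at `http://localhost:1234/v1`, which is why pointing the OpenAI components' base URL at it works for both chat and embedding models. Below is a minimal stdlib-only sketch of the request shapes involved; the model names are placeholders, and no request is actually sent:

```python
import json

# LM Studio's local server speaks the OpenAI wire format; the default base URL
# is http://localhost:1234/v1 (configurable in the app).
BASE_URL = "http://localhost:1234/v1"

# LM Studio does not check the API key, but OpenAI-style clients refuse to run
# without one, so any non-empty value works (this matches the thread above).
HEADERS = {
    "Authorization": "Bearer lm-studio",  # dummy key, never validated locally
    "Content-Type": "application/json",
}

# Chat completion request body for POST {BASE_URL}/chat/completions.
chat_request = {
    "model": "local-model",  # placeholder; use the name shown in LM Studio
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Embedding request body for POST {BASE_URL}/embeddings.
embedding_request = {
    "model": "local-embedding-model",  # placeholder
    "input": "Text to embed",
}

print(f"{BASE_URL}/chat/completions")
print(json.dumps(chat_request))
```

The same payloads work with the official `openai` Python client by passing `base_url=BASE_URL` and the dummy key when constructing the client.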
I've reopened the issue because I'm planning on adding an LM Studio component.
@dosu
To add an LM Studio component to Langflow, you can follow these steps:
By following these steps, you can successfully add an LM Studio component to Langflow and extend its functionality to support LM Studio's capabilities [1][2][3].
Why am I getting a CI error? The error doesn't make sense to me.
Line it's complaining about:
Every other component I'd seen (OpenAIModel.py, OllamaModel.py) is structured like that, but they don't get errors..?
The CI error you're encountering is likely about list concatenation. In your code, you are concatenating lists; the suggested fix is to use iterable unpacking when building `inputs`:

```python
inputs = [
    *LCModelComponent._base_inputs,
    IntInput(
        name="max_tokens",
        display_name="Max Tokens",
        advanced=True,
        info="The maximum number of tokens to generate. Set to 0 for unlimited tokens.",
        range_spec=RangeSpec(min=0, max=128000),
    ),
    # ... other inputs
]
```

This change should address the CI error.
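For context, the two styles produce identical lists in plain Python; linters such as Ruff (rule RUF005) simply prefer iterable unpacking over `+` concatenation in collection literals. A small self-contained illustration (the element names here are placeholders, not Langflow inputs):

```python
# Two equivalent ways to build a combined list.
base_inputs = ["model_name", "temperature"]

# Concatenation with "+": flagged by linters like Ruff (rule RUF005).
inputs_concat = base_inputs + ["max_tokens"]

# Iterable unpacking: the style the CI fix recommends.
inputs_unpacked = [*base_inputs, "max_tokens"]

assert inputs_concat == inputs_unpacked == ["model_name", "temperature", "max_tokens"]
```

One practical difference: the unpacking form works with any iterable (tuples, generators) without an explicit conversion, whereas `+` requires both operands to be lists.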
@dosu What did ogabrielluiz mean by that comment? What should be changed?
The comment by @ogabrielluiz suggests that the code should use iterable unpacking. To address this, you should consider updating the code to replace list concatenation with the unpacking form.
Feature Request
https://lmstudio.ai/
Could LM Studio support be added? LM Studio is much more versatile than Ollama and allows you to download and run GGUF models directly from Hugging Face.
Motivation
Ollama simply doesn't work with my NVIDIA GPU; it uses the CPU only, so it's extremely slow.
Your Contribution
Perhaps we could look at how AnythingLLM integrates with LM Studio.