
Support for LMStudio #3973

Open
EDLLT opened this issue Oct 1, 2024 · 11 comments · May be fixed by #4021
Labels
enhancement New feature or request

Comments

EDLLT commented Oct 1, 2024

Feature Request

https://lmstudio.ai/

Could LM Studio support be added? LM Studio is much more versatile than Ollama and allows you to download and run GGUF models directly from Hugging Face.

Motivation

Ollama simply doesn't work with my Nvidia GPU; it uses the CPU only, so it's extremely slow.

Your Contribution

Perhaps we could look at how AnythingLLM integrates with LM Studio.

EDLLT added the enhancement (New feature or request) label on Oct 1, 2024
EDLLT (Author) commented Oct 1, 2024

Relevant messages I found in LM Studio's Discord:

[two screenshots of messages from the LM Studio Discord]

ogabrielluiz (Contributor) commented:

Hey @EDLLT

How are you?

If I understand correctly, all you have to do is update the base URL in the OpenAI component. I set it to http://localhost:1234/v1 and it worked on the first try.
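
Outside Langflow, the same idea can be checked with the standard openai Python client; a minimal sketch, assuming LM Studio's server is running on its default port, with a placeholder model name:

from openai import OpenAI

# LM Studio exposes an OpenAI-compatible server on port 1234 by default.
# It does not validate the API key, but the client requires a non-empty one.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # placeholder; use the model currently loaded in LM Studio
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)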

EDLLT (Author) commented Oct 2, 2024

> Hey @EDLLT
>
> How are you?
>
> If I understand correctly, all you have to do is update the base URL in the OpenAI component. I set it to http://localhost:1234/v1 and it worked on the first try.

Oh, apologies. I had tried setting it before, but leaving the API key unset was causing it to fail.
Thanks for your help!

EDLLT closed this as completed on Oct 2, 2024
EDLLT (Author) commented Oct 3, 2024

> Hey @EDLLT
>
> How are you?
>
> If I understand correctly, all you have to do is update the base URL in the OpenAI component. I set it to http://localhost:1234/v1 and it worked on the first try.

Hey, so I managed to get LM Studio's LLMs working, but what about embedding models?
Unlike the OpenAI LLM component, the OpenAI Embeddings component doesn't let me change the base URL.

Edit: Apparently it does have an option, but unlike the LLM component it doesn't work with LM Studio out of the box.
I'll attempt to fix it.

Edit 2: I managed to get it working with an NVIDIA embedding model by simply changing the base URL and supplying the correct model name.
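
The embeddings path can be exercised the same way; a minimal sketch against LM Studio's OpenAI-compatible /v1/embeddings endpoint, with a placeholder embedding model name:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

result = client.embeddings.create(
    model="nomic-embed-text",  # placeholder; use the embedding model loaded in LM Studio
    input=["Langflow supports custom model endpoints."],
)
print(len(result.data[0].embedding))  # dimensionality of the returned vector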

EDLLT reopened this on Oct 4, 2024
EDLLT (Author) commented Oct 4, 2024

I've reopened the issue because I'm planning on adding an LM Studio component.

EDLLT linked a pull request on Oct 4, 2024 that will close this issue
EDLLT (Author) commented Oct 4, 2024

@dosu
Hello, are you there?

dosubot (bot) commented Oct 4, 2024

To add an LM Studio component to Langflow, you can follow these steps:

  1. Create a Custom Component: Write a new class for your LM Studio component that inherits from the CustomComponent class provided by Langflow. This class should include a build method to define the component's logic.

  2. Define Attributes: Add attributes like display_name and description to provide information about your component.

  3. Implement Methods: Implement the build_config method to specify configuration options for your component. This method should return a dictionary with field names as keys and their configurations as values. The build method should take input parameters specified in the build_config method and return the desired output.

  4. Organize Your Code: Place your component code in a folder structure that follows the conventions of the config.yaml file. Use a custom_components subdirectory for your components.

  5. Load Custom Components: Set the LANGFLOW_COMPONENTS_PATH environment variable to the path of your custom components directory, or use the --components-path argument when running the Langflow CLI to specify the path.

  6. Test and Debug: Ensure your component code is error-free before loading it into Langflow. Langflow will print error messages for components that fail to load.

By following these steps, you can successfully add an LM Studio component to Langflow and extend its functionality to support LM Studio's capabilities [1][2][3].
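
For illustration, here is a minimal sketch of steps 1 through 3, assuming the legacy CustomComponent API described above; the class name, field names, and defaults are placeholders rather than the code in the linked PR:

from langchain_openai import ChatOpenAI
from langflow.custom import CustomComponent

class LMStudioComponent(CustomComponent):
    display_name = "LM Studio"
    description = "Generate text using a model served by LM Studio."

    def build_config(self) -> dict:
        # Illustrative field configuration, not the final PR's schema.
        return {
            "base_url": {"display_name": "Base URL", "value": "http://localhost:1234/v1"},
            "model_name": {"display_name": "Model Name"},
            "api_key": {"display_name": "API Key", "password": True, "value": "lm-studio"},
        }

    def build(self, base_url: str, model_name: str, api_key: str) -> ChatOpenAI:
        # LM Studio exposes an OpenAI-compatible API, so ChatOpenAI can be
        # reused by pointing its base URL at the local server.
        return ChatOpenAI(base_url=base_url, model=model_name, api_key=api_key)

Per steps 4 and 5, this file would live under a custom_components directory referenced by LANGFLOW_COMPONENTS_PATH or the --components-path CLI argument.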


EDLLT (Author) commented Oct 4, 2024

@dosu

Why am I getting a CI error?

The error doesn't make sense to me:

Error: src/backend/base/langflow/components/models/LMStudioModel.py:57:14: RUF005 Consider iterable unpacking instead of concatenation
Error: Process completed with exit code 1.

The line it's complaining about:

57    inputs = LCModelComponent._base_inputs + [
58        IntInput(
59            name="max_tokens",
60            display_name="Max Tokens",
61            advanced=True,
62            info="The maximum number of tokens to generate. Set to 0 for unlimited tokens.",
63            range_spec=RangeSpec(min=0, max=128000),
64        ),
...

Every other component I've seen (OpenAIModel.py, OllamaModel.py) is structured like that, but they don't get errors?

dosubot (bot) commented Oct 4, 2024

The CI error you're encountering, RUF005, is a Ruff lint rule that suggests using iterable unpacking instead of concatenation.

In your code, you are concatenating lists using the + operator. To resolve this error, you can use iterable unpacking, which is a more modern and often preferred approach in Python. Here's how you can modify your code:

inputs = [
    *LCModelComponent._base_inputs,
    IntInput(
        name="max_tokens",
        display_name="Max Tokens",
        advanced=True,
        info="The maximum number of tokens to generate. Set to 0 for unlimited tokens.",
        range_spec=RangeSpec(min=0, max=128000),
    ),
    # ... other inputs
]

This change should address the RUF005 error by using iterable unpacking (*) to combine the lists. If other components like OpenAIModel.py and OllamaModel.py are not raising this error, they might not be subject to the same linting rules, or they might already be using iterable unpacking.


EDLLT (Author) commented Oct 4, 2024

@dosu

In src/backend/base/langflow/components/models/LMStudioModel.py, there is a review comment on lines +139 to +140:

if output_schema_dict:
    output = output.with_structured_output(schema=output_schema_dict, method="json_mode")  # type: ignore

ogabrielluiz (Contributor) replied:

> I'm using it today (loving it, btw) and it seems it has to be json_schema. We need to update this (and the OpenAI component) to make this work better with BaseModel

What did ogabrielluiz mean by that comment? What should be changed?

dosubot (bot) commented Oct 4, 2024

The comment by @ogabrielluiz suggests that the method argument passed to with_structured_output should be "json_schema" rather than "json_mode". With "json_mode", the model is only asked to return syntactically valid JSON; with "json_schema", the provider enforces the supplied schema, which works better when the schema comes from a Pydantic BaseModel.

To address this, update the structured-output call in the LMStudioModel component (and the OpenAI component) to use method="json_schema", and make sure the schema argument accepts a BaseModel subclass as well as a dict.
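
A sketch of what that change might look like, assuming the variable names from the snippet quoted above and langchain's with_structured_output signature:

if output_schema_dict:
    # "json_schema" asks the provider to enforce the schema, whereas
    # "json_mode" only requests valid JSON. The schema argument may also
    # be a Pydantic BaseModel subclass instead of a dict.
    output = output.with_structured_output(schema=output_schema_dict, method="json_schema")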

