ECONNREFUSED on localhost #15
Hi @rtdean93, thanks for reaching out! Have you started the BaseAI dev server in your project using `npx baseai@latest dev`? If not, please run it and try again. Here is a quick guide on how to do it: https://baseai.dev/docs/pipe/quickstart#step-6-start-baseai-server
We need to handle this paper cut better. It's on our todo list. @msaaddev @ahmadbilaldev, please prioritize this.
@rtdean93, glad to see you're already trying it out. As @msaaddev mentioned, you are not running your BaseAI dev server, which currently only supports port 9000 and can be started with `npx baseai@latest dev` in a new terminal. So you'll be running the Next.js dev server in one terminal and the BaseAI dev server in another, at the same time.
Closing the issue. Let us know if you still have a problem.
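The two-terminal setup described above can be sketched as follows (a minimal sketch; the exact app-start command depends on your project, e.g. `next dev` for a Next.js app or `npx tsx index.ts` for a plain script):

```shell
# Terminal 1: start the BaseAI dev server (serves http://localhost:9000)
npx baseai@latest dev

# Terminal 2: run your app against it, in parallel
npx tsx index.ts
```

Both processes must stay running at the same time; the SDK's `pipe.run` calls in your app are proxied through the local server on port 9000.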
Thank you Ahmad,
Yes, my local server was running; I even tested with a curl command.

I was following the docs you provided, but to be sure, I started again with a fresh install, and the same thing is occurring.
I am attaching the ZIP file of my code (with node_modules and the API key removed).

This is the error I am still getting:
"(base) ***@***.*** my2-ai-project % npx tsx index.ts
pipe.run summararizer RUN
pipe.run.baseUrl.endpoint http://localhost:9000/beta/pipes/run
pipe.run.options
{
pipe: Pipe {
request: Request {
config: { apiKey: undefined, baseUrl: 'http://localhost:9000 <http://localhost:9000/>' }
},
pipe: {
name: 'summararizer',
description: '',
status: 'private',
meta: { stream: true, json: false, store: true, moderate: true },
model: {
name: 'gpt-4o-mini',
provider: 'OpenAI',
params: {
top_p: 1,
max_tokens: 1000,
temperature: 0.7,
presence_penalty: 1,
frequency_penalty: 1,
stop: []
},
tool_choice: 'auto',
parallel_tool_calls: true
},
messages: [
{
role: 'system',
content: 'You are a content summarizer. You will summarize content without loosing context into less wordy to the point version.\n'
}
],
variables: [],
tools: [],
functions: [],
memorysets: []
},
tools: {},
maxCalls: 100,
hasTools: false
},
messages: [
{
role: 'user',
content: '\n' +
'Langbase studio is your playground to build, collaborate, and deploy AI. It allows you to experiment with your pipes in real-time, with real data, store messages, version your prompts, and truly helps you take your idea from building prototypes to deployed in production with LLMOps on usage, cost, and quality.\n' +
'A complete AI developers platform.\n' +
'- Collaborate: Invite all team members to collaborate on the pipe. Build AI together.\n' +
"- Developers & Stakeholders: All your R&D team, engineering, product, GTM (marketing and sales), literally invlove every stakeholder can collaborate on the same pipe. It's like a powerful version of GitHub x Google Docs for AI. A complete AI developers platform.\n"
}
],
stream: true
}
Request.post.options
{
endpoint: '/beta/pipes/run',
body: {
pipe: {
name: 'summararizer',
description: '',
status: 'private',
meta: { stream: true, json: false, store: true, moderate: true },
model: {
name: 'gpt-4o-mini',
provider: 'OpenAI',
params: {
top_p: 1,
max_tokens: 1000,
temperature: 0.7,
presence_penalty: 1,
frequency_penalty: 1,
stop: []
},
tool_choice: 'auto',
parallel_tool_calls: true
},
messages: [
{
role: 'system',
content: 'You are a content summarizer. You will summarize content without loosing context into less wordy to the point version.\n'
}
],
variables: [],
tools: [],
functions: [],
memorysets: []
},
messages: [
{
role: 'user',
content: '\n' +
'Langbase studio is your playground to build, collaborate, and deploy AI. It allows you to experiment with your pipes in real-time, with real data, store messages, version your prompts, and truly helps you take your idea from building prototypes to deployed in production with LLMOps on usage, cost, and quality.\n' +
'A complete AI developers platform.\n' +
'- Collaborate: Invite all team members to collaborate on the pipe. Build AI together.\n' +
"- Developers & Stakeholders: All your R&D team, engineering, product, GTM (marketing and sales), literally invlove every stakeholder can collaborate on the same pipe. It's like a powerful version of GitHub x Google Docs for AI. A complete AI developers platform.\n"
}
],
stream: true,
llmApiKey: 'sk-proj-XXXXXXXXXXXXXXeUsQh27RvM623jDdE6D6hBMFMPhooiaq2XT3BlbkFJHab8F0hRxoMoMwnWyYqKulsOMG6i5Zrbn-sdDuDC9-dvYke5oQFVNcZ_oA'
}
}
***@***.***/core/src/common/request.ts:55
throw new APIConnectionError({
^
APIConnectionError: Connection error.
at Request.send ***@***.***/core/src/common/request.ts:55:10)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at Pipe.run ***@***.***/core/src/pipes/pipes.ts:192:18)
at main (/Users/bobby/Developer/my2-ai-project/index.ts:15:21) {
status: undefined,
headers: undefined,
request_id: undefined,
error: undefined,
code: undefined,
cause: TypeError: fetch failed
at node:internal/deps/undici/undici:12618:11
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at Request.makeRequest ***@***.***/core/src/common/request.ts:103:16)
at Request.send ***@***.***/core/src/common/request.ts:49:15)
at Pipe.run ***@***.***/core/src/pipes/pipes.ts:192:18)
at main (/Users/bobby/Developer/my2-ai-project/index.ts:15:21) {
cause: Error: connect ECONNREFUSED ::1:9000
at __node_internal_captureLargerStackTrace (node:internal/errors:496:5)
at __node_internal_exceptionWithHostPort (node:internal/errors:671:12)
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1555:16) {
errno: -61,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '::1',
port: 9000
}
}
}
Node.js v18.20.4
(base) ***@***.*** my2-a”
I will use your web UI and learn more about your framework, and I know you guys have bigger things to sort out than this, but as a developer I'd want to know if someone was trying to follow my docs and still not getting the app running. I will also try this on another MacBook.
Thanks and Cheers,
Bobby Dean
… On Sep 30, 2024, at 3:49 PM, Ahmad Awais ⌘ ***@***.***> wrote:
It looks like something else is running on port 9000. Have you tried killing it?
From Claude: To find and terminate a process using port 9000 on a Mac, you can follow these steps:
1. Run `lsof -i :9000`. This will display information about any process using port 9000, including its PID.
2. Run `kill -9 <PID>`, replacing `<PID>` with the PID from the previous step. This forcefully terminates the process.
3. Run `lsof -i :9000` again. If no output is displayed, the port is now free.
Then try again. Let me know how it goes.
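For convenience, the steps above can be collapsed into a single command. This is a sketch assuming macOS/BSD `lsof`, whose `-t` flag prints only the PID, so the output can feed straight into `kill`:

```shell
# Kill whatever is listening on port 9000, then confirm the port is free
kill -9 "$(lsof -ti :9000)"
lsof -i :9000   # no output means port 9000 is now free
```

Note that if nothing is listening, `lsof -ti :9000` prints nothing and `kill` reports a usage error, which is harmless here.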
Description
I have followed your docs at https://baseai.dev/learn/run-pipe and https://github.com/LangbaseInc/baseai/tree/main/examples/nextjs
I entered my OpenAI key (and even regenerated a new project key) and also added a Langbase key.
I continue to get the following error connecting to :9000 even though the service is running.
APIConnectionError: Connection error.
at Request.send (/Users/bobby/Developer/my-ai-project/node_modules/@baseai/core/src/common/request.ts:55:10)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at Pipe.run (/Users/bobby/Developer/my-ai-project/node_modules/@baseai/core/src/pipes/pipes.ts:192:18)
at main (/Users/bobby/Developer/my-ai-project/index.ts:15:21) {
status: undefined,
headers: undefined,
request_id: undefined,
error: undefined,
code: undefined,
cause: TypeError: fetch failed
at node:internal/deps/undici/undici:12618:11
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at Request.makeRequest (/Users/bobby/Developer/my-ai-project/node_modules/@baseai/core/src/common/request.ts:103:16)
at Request.send (/Users/bobby/Developer/my-ai-project/node_modules/@baseai/core/src/common/request.ts:49:15)
at Pipe.run (/Users/bobby/Developer/my-ai-project/node_modules/@baseai/core/src/pipes/pipes.ts:192:18)
at main (/Users/bobby/Developer/my-ai-project/index.ts:15:21) {
cause: Error: connect ECONNREFUSED ::1:9000
at __node_internal_captureLargerStackTrace (node:internal/errors:496:5)
at __node_internal_exceptionWithHostPort (node:internal/errors:671:12)
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1555:16) {
errno: -61,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '::1',
port: 9000
}
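One possible culprit, judging from the underlying cause `connect ECONNREFUSED ::1:9000`: on Node 18, `localhost` can resolve to the IPv6 loopback `::1` first, so if the dev server binds only the IPv4 loopback `127.0.0.1`, the connection is refused even while the server is running. A quick way to check is to probe both loopback addresses explicitly (a diagnostic sketch, assuming `curl` is installed):

```shell
# Probe the IPv4 and IPv6 loopback addresses on port 9000 separately
curl -sS -o /dev/null http://127.0.0.1:9000/ && echo "IPv4 reachable"
curl -sS -o /dev/null "http://[::1]:9000/" && echo "IPv6 reachable"
```

If only the IPv4 probe succeeds, pointing the SDK's `baseUrl` at `http://127.0.0.1:9000` instead of `http://localhost:9000` would be worth trying.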
Code example
No response
Additional context
No response