
Potential deadlock with Vaadin #12272

Open
mperktold opened this issue Sep 13, 2024 · 18 comments
Labels
Bug For general bugs on Jetty side


Jetty version(s)
Jetty 12.0.13

Jetty Environment
core, ee10

Java version/vendor (use: java -version)
openjdk version "21.0.3" 2024-04-16 LTS
OpenJDK Runtime Environment Temurin-21.0.3+9 (build 21.0.3+9-LTS)
OpenJDK 64-Bit Server VM Temurin-21.0.3+9 (build 21.0.3+9-LTS, mixed mode, sharing)

OS type/version
Windows 11

Description
This is a repost of vaadin/flow#19938

We have several reports of our application not being able to shut down. Apparently, some VaadinSessions stay alive and cannot be destroyed. Here are the thread dumps of two such cases:
StackTraces1.txt
StackTraces2.txt

I found some common patterns in these dumps:

One of the threads blocks on a Jetty semaphore while reading the request content of a UIDL request. Note that this thread holds the lock on the VaadinSession while blocking.

[email protected]/jdk.internal.misc.Unsafe.park(Native Method)
[email protected]/java.util.concurrent.locks.LockSupport.park(Unknown Source)
[email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(Unknown Source)
[email protected]/java.util.concurrent.ForkJoinPool.unmanagedBlock(Unknown Source)
[email protected]/java.util.concurrent.ForkJoinPool.managedBlock(Unknown Source)
[email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(Unknown Source)
app//org.eclipse.jetty.ee10.servlet.AsyncContentProducer$LockedSemaphore.acquire(AsyncContentProducer.java:393)
app//org.eclipse.jetty.ee10.servlet.BlockingContentProducer.nextChunk(BlockingContentProducer.java:119)
app//org.eclipse.jetty.ee10.servlet.HttpInput.read(HttpInput.java:245)
app//org.eclipse.jetty.ee10.servlet.HttpInput.read(HttpInput.java:226)
[email protected]/sun.nio.cs.StreamDecoder.readBytes(Unknown Source)
[email protected]/sun.nio.cs.StreamDecoder.implRead(Unknown Source)
[email protected]/sun.nio.cs.StreamDecoder.lockedRead(Unknown Source)
[email protected]/sun.nio.cs.StreamDecoder.read(Unknown Source)
[email protected]/java.io.InputStreamReader.read(Unknown Source)
[email protected]/java.io.BufferedReader.read1(Unknown Source)
[email protected]/java.io.BufferedReader.implRead(Unknown Source)
[email protected]/java.io.BufferedReader.read(Unknown Source)
[email protected]/java.io.Reader.read(Unknown Source)
app//com.vaadin.flow.server.communication.ServerRpcHandler.getMessage(ServerRpcHandler.java:503)
app//com.vaadin.flow.server.communication.ServerRpcHandler.handleRpc(ServerRpcHandler.java:253)
app//com.vaadin.flow.server.communication.UidlRequestHandler.synchronizedHandleRequest(UidlRequestHandler.java:114)
app//com.vaadin.flow.server.SynchronizedRequestHandler.handleRequest(SynchronizedRequestHandler.java:40)
app//com.vaadin.flow.server.VaadinService.handleRequest(VaadinService.java:1584)
app//com.vaadin.flow.server.VaadinServlet.service(VaadinServlet.java:398)
app//jakarta.servlet.http.HttpServlet.service(HttpServlet.java:614)
app//org.eclipse.jetty.ee10.servlet.ServletHolder.handle(ServletHolder.java:736)

A second thread blocks on the VaadinSession while trying to close the websocket:

[email protected]/jdk.internal.misc.Unsafe.park(Native Method)
[email protected]/java.util.concurrent.locks.LockSupport.park(Unknown Source)
[email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
[email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
[email protected]/java.util.concurrent.locks.ReentrantLock$Sync.lock(Unknown Source)
[email protected]/java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
app//com.vaadin.flow.server.VaadinService.lockSession(VaadinService.java:792)
app//com.vaadin.flow.server.VaadinService.findOrCreateVaadinSession(VaadinService.java:839)
app//com.vaadin.flow.server.VaadinService.findVaadinSession(VaadinService.java:684)
app//com.vaadin.flow.server.communication.PushHandler.handleConnectionLost(PushHandler.java:408)
app//com.vaadin.flow.server.communication.PushHandler.connectionLost(PushHandler.java:368)
app//com.vaadin.flow.server.communication.PushAtmosphereHandler.onStateChange(PushAtmosphereHandler.java:62)
app//org.atmosphere.cpr.AsynchronousProcessor.invokeAtmosphereHandler(AsynchronousProcessor.java:538)
app//org.atmosphere.cpr.AsynchronousProcessor.completeLifecycle(AsynchronousProcessor.java:480)
app//org.atmosphere.cpr.AsynchronousProcessor.endRequest(AsynchronousProcessor.java:584)
app//org.atmosphere.websocket.DefaultWebSocketProcessor.close(DefaultWebSocketProcessor.java:639)
app//org.atmosphere.container.JSR356Endpoint.onClose(JSR356Endpoint.java:318)
[email protected]/java.lang.invoke.LambdaForm$DMH/0x000000001e1a4000.invokeVirtual(LambdaForm$DMH)
[email protected]/java.lang.invoke.LambdaForm$MH/0x000000001f292000.invoke(LambdaForm$MH)
[email protected]/java.lang.invoke.LambdaForm$MH/0x000000001ee44800.invoke_MT(LambdaForm$MH)
app//org.eclipse.jetty.ee10.websocket.jakarta.common.JakartaWebSocketFrameHandler.notifyOnClose(JakartaWebSocketFrameHandler.java:295)
app//org.eclipse.jetty.ee10.websocket.jakarta.common.JakartaWebSocketFrameHandler.onClose(JakartaWebSocketFrameHandler.java:267)
app//org.eclipse.jetty.ee10.websocket.jakarta.common.JakartaWebSocketFrameHandler.onFrame(JakartaWebSocketFrameHandler.java:255)
app//org.eclipse.jetty.websocket.core.WebSocketCoreSession$IncomingAdaptor.onFrame(WebSocketCoreSession.java:680)
app//org.eclipse.jetty.websocket.core.AbstractExtension.nextIncomingFrame(AbstractExtension.java:145)
app//org.eclipse.jetty.websocket.core.internal.PerMessageDeflateExtension.nextIncomingFrame(PerMessageDeflateExtension.java:239)
app//org.eclipse.jetty.websocket.core.internal.PerMessageDeflateExtension$IncomingFlusher$$Lambda/0x000000001e90bd08.onFrame(Unknown Source)
app//org.eclipse.jetty.websocket.core.util.DemandingFlusher.emitFrame(DemandingFlusher.java:143)
app//org.eclipse.jetty.websocket.core.internal.PerMessageDeflateExtension$IncomingFlusher.handle(PerMessageDeflateExtension.java:382)
app//org.eclipse.jetty.websocket.core.util.DemandingFlusher.process(DemandingFlusher.java:167)
app//org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:262)
app//org.eclipse.jetty.util.IteratingCallback.succeeded(IteratingCallback.java:401)
app//org.eclipse.jetty.websocket.core.util.DemandingFlusher.onFrame(DemandingFlusher.java:105)
app//org.eclipse.jetty.websocket.core.internal.PerMessageDeflateExtension.onFrame(PerMessageDeflateExtension.java:96)
app//org.eclipse.jetty.websocket.core.ExtensionStack.onFrame(ExtensionStack.java:113)
app//org.eclipse.jetty.websocket.core.WebSocketCoreSession.onFrame(WebSocketCoreSession.java:463)
app//org.eclipse.jetty.websocket.core.WebSocketConnection.onFrame(WebSocketConnection.java:254)
app//org.eclipse.jetty.websocket.core.WebSocketConnection.fillAndParse(WebSocketConnection.java:447)
app//org.eclipse.jetty.websocket.core.WebSocketConnection.onFillable(WebSocketConnection.java:332)
app//org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:322)
app//org.eclipse.jetty.http2.HTTP2StreamEndPoint.process(HTTP2StreamEndPoint.java:497)
app//org.eclipse.jetty.http2.HTTP2StreamEndPoint.processDataAvailable(HTTP2StreamEndPoint.java:484)
app//org.eclipse.jetty.http2.server.internal.ServerHTTP2StreamEndPoint.onDataAvailable(ServerHTTP2StreamEndPoint.java:40)
app//org.eclipse.jetty.http2.server.internal.HTTP2ServerConnection.onDataAvailable(HTTP2ServerConnection.java:158)
app//org.eclipse.jetty.http2.server.HTTP2ServerConnectionFactory$HTTPServerSessionListener.onDataAvailable(HTTP2ServerConnectionFactory.java:153)
app//org.eclipse.jetty.http2.HTTP2Stream.notifyDataAvailable(HTTP2Stream.java:861)
app//org.eclipse.jetty.http2.HTTP2Stream.processData(HTTP2Stream.java:543)
app//org.eclipse.jetty.http2.HTTP2Stream.onData(HTTP2Stream.java:461)
app//org.eclipse.jetty.http2.HTTP2Stream.process(HTTP2Stream.java:368)
app//org.eclipse.jetty.http2.HTTP2Session.onData(HTTP2Session.java:280)
app//org.eclipse.jetty.http2.HTTP2Connection.onData(HTTP2Connection.java:246)
app//org.eclipse.jetty.http2.parser.BodyParser.notifyData(BodyParser.java:103)
app//org.eclipse.jetty.http2.parser.DataBodyParser.onData(DataBodyParser.java:145)
app//org.eclipse.jetty.http2.parser.DataBodyParser.onData(DataBodyParser.java:140)
app//org.eclipse.jetty.http2.parser.DataBodyParser.parse(DataBodyParser.java:106)
app//org.eclipse.jetty.http2.parser.Parser.parseBody(Parser.java:229)
app//org.eclipse.jetty.http2.parser.Parser.parse(Parser.java:156)
app//org.eclipse.jetty.http2.parser.ServerParser.parse(ServerParser.java:121)
app//org.eclipse.jetty.http2.HTTP2Connection$HTTP2Producer.produce(HTTP2Connection.java:342)
app//org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.produceTask(AdaptiveExecutionStrategy.java:512)
app//org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.tryProduce(AdaptiveExecutionStrategy.java:258)
app//org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.run(AdaptiveExecutionStrategy.java:201)
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:311)
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:979)
app//org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.doRunJob(QueuedThreadPool.java:1209)
app//org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1164)
[email protected]/java.lang.Thread.runWith(Unknown Source)
[email protected]/java.lang.Thread.run(Unknown Source)

A third thread also blocks on the VaadinSession while handling a connection loss, but this one comes from the HeartbeatInterceptor:

[email protected]/jdk.internal.misc.Unsafe.park(Native Method)
[email protected]/java.util.concurrent.locks.LockSupport.park(Unknown Source)
[email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
[email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Unknown Source)
[email protected]/java.util.concurrent.locks.ReentrantLock$Sync.lock(Unknown Source)
[email protected]/java.util.concurrent.locks.ReentrantLock.lock(Unknown Source)
app//com.vaadin.flow.server.VaadinService.lockSession(VaadinService.java:798)
app//com.vaadin.flow.server.VaadinService.findOrCreateVaadinSession(VaadinService.java:845)
app//com.vaadin.flow.server.VaadinService.findVaadinSession(VaadinService.java:690)
app//com.vaadin.flow.server.communication.PushHandler.handleConnectionLost(PushHandler.java:414)
app//com.vaadin.flow.server.communication.PushHandler.connectionLost(PushHandler.java:368)
app//com.vaadin.flow.server.communication.PushAtmosphereHandler$AtmosphereResourceListener.onDisconnect(PushAtmosphereHandler.java:113)
app//org.atmosphere.cpr.AtmosphereResourceImpl.onDisconnect(AtmosphereResourceImpl.java:752)
app//org.atmosphere.cpr.AtmosphereResourceImpl.notifyListeners(AtmosphereResourceImpl.java:644)
app//org.atmosphere.cpr.AtmosphereResponseImpl.handleException(AtmosphereResponseImpl.java:732)
app//org.atmosphere.cpr.AtmosphereResponseImpl.access$1500(AtmosphereResponseImpl.java:57)
app//org.atmosphere.cpr.AtmosphereResponseImpl$Stream.write(AtmosphereResponseImpl.java:958)
app//org.atmosphere.cpr.AtmosphereResponseImpl.write(AtmosphereResponseImpl.java:805)
app//org.atmosphere.interceptor.HeartbeatInterceptor.lambda$clock$0(HeartbeatInterceptor.java:367)
app//org.atmosphere.interceptor.HeartbeatInterceptor$$Lambda/0x0000000021c1cf78.call(Unknown Source)
[email protected]/java.util.concurrent.FutureTask.run(Unknown Source)
[email protected]/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
[email protected]/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
[email protected]/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
[email protected]/java.lang.Thread.runWith(Unknown Source)
[email protected]/java.lang.Thread.run(Unknown Source)

I'm not sure where things go wrong, but this does look a bit suspicious to me.

The blocked thread is waiting in BlockingContentProducer.nextChunk until content is available.

From what I see, the semaphore should eventually be released in BlockingContentProducer.onContentAvailable when new content is available, which should be called from HttpInput.run.

I'm not actually seeing a deadlock here, but I find it suspicious that some threads wait on the VaadinSession and another holds it while blocking for some other reason.

How to reproduce?
Unfortunately, I don't have a reproducer. However, we have several reports of this happening on shutdown, so it might have something to do with that. I hope you can do something with the thread dumps.

mperktold added the Bug (For general bugs on Jetty side) label on Sep 13, 2024
@sbordet (Contributor) commented Sep 13, 2024

I don't think this is a Jetty issue.

Vaadin should not grab a lock and then perform a blocking operation like ServletInputStream.read() while holding the lock (which is what the first stack trace does).

The other 2 stack traces are now waiting on VaadinService.lockSession().

Did you report this issue to Vaadin?

@mperktold (Author) commented

Did you report this issue to Vaadin?

Yes, I did. I referenced that issue at the beginning of the description.

Vaadin should not grab a lock and then perform a blocking operation like ServletInputStream.read() while holding the lock (which is what the first stack trace does).

That was my impression as well. I cannot say for sure that this is what causes the deadlock, but it might be a good idea to avoid this regardless. Do you think the same is true for writing back the response?

@sbordet (Contributor) commented Sep 14, 2024

Do you think the same is true for writing back the response?

Yes, if a blocking API is used.

@mperktold (Author) commented

Alright, I will forward these suggestions to Vaadin, thanks!

I am still interested: do you think this is really the cause of the deadlock? Specifically, if you look at the threads waiting on the VaadinSession, are those the threads that would eventually release the BlockingContentProducer? Because to me, it looks like when content isn't immediately available, as seems to be the case here, read interest is registered in the selector, which should notify HttpInput when content becomes available. Can that really be affected by some request handler being blocked on a lock?

I can see some threads are waiting in ManagedSelector.select, while the thread holding the VaadinSession also comes from a call to FillInterest.fillable in SelectableChannelEndPoint. Who would be responsible for releasing the BlockingContentProducer in this case?

@sbordet (Contributor) commented Sep 16, 2024

A blocked read will eventually be woken up, either when data is available or when a timeout occurs.

The problem is that in both cases the wait can be really long (tens of seconds, minutes, or even more), so if something else happens in the system that requires access to the Vaadin session, then those threads will be blocked for a long time.

The proper solution is to not perform blocking API calls with locks held.
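For illustration, a minimal sketch of that approach (a hypothetical servlet-level handler, not Vaadin's actual API): perform the blocking read of the request body and the blocking write of the response outside the session lock, and hold the lock only for the in-memory work in between.

import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

import java.io.BufferedReader;
import java.io.IOException;
import java.util.concurrent.locks.ReentrantLock;
import java.util.stream.Collectors;

// Hypothetical handler illustrating "don't block while holding a lock".
public class ReadOutsideLockHandler {

    public void handle(HttpServletRequest request, HttpServletResponse response,
                       ReentrantLock sessionLock) throws IOException {

        // 1. Blocking read of the whole request body, done WITHOUT the session lock.
        String body;
        try (BufferedReader reader = request.getReader()) {
            body = reader.lines().collect(Collectors.joining("\n"));
        }

        // 2. Hold the lock only for the in-memory processing of the message.
        String result;
        sessionLock.lock();
        try {
            result = process(body);   // no I/O in here
        } finally {
            sessionLock.unlock();
        }

        // 3. Blocking write of the response, again WITHOUT the session lock.
        response.getWriter().write(result);
    }

    private String process(String body) {
        return body;                  // placeholder for the real per-session logic
    }
}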

@Legioth commented Sep 16, 2024

A blocked read will eventually be woken up, either when data is available or when a timeout occurs.

Does this mean that you don't see any reason why either of the two threads waiting for the Vaadin session would prevent the first thread from eventually proceeding? This in turn would imply that shutdown would still be blocked by that first thread regardless of whether that thread holds the Vaadin session lock while reading the request body (except if the client for some reason delays sending more bytes until one of the other threads has made progress)?

@Legioth commented Sep 16, 2024

...or is this a form of head-of-line blocking, where a TCP-level buffer is full of bytes destined for one of the threads waiting for the lock, which prevents the demultiplexer from reaching the bytes that would allow the first thread to proceed?

@sbordet (Contributor) commented Sep 16, 2024

@Legioth I am not sure I understand your comments.

A blocking operation performed with locks held is typically a mistake.

In this case, there is nothing that Jetty can do, it's a Vaadin issue.

Note that this will happen with any Servlet Container, not just Jetty, and for that matter with any InputStream or other class that exposes a blocking API.

@Legioth commented Sep 16, 2024

I understand the theoretical issues with dining philosophers and so on. Holding a lock while doing a blocking operation (e.g. acquiring another lock or blocking I/O) is generally fine when it comes to deadlocks as long as that blocking operation is independent from the lock. That's usually the case when a high-level abstraction calls down to a lower-level abstraction since the lower-level abstraction doesn't have any direct code path that leads to that lock.

It's a bit surprising that an HTTP request wouldn't be independent, but I guess that's what TCP head-of-line blocking is all about. Could you confirm that this would be the likely underlying reason also from your perspective, or is there some other factor at play here as well? If TCP head-of-line blocking is the expected cause, then this issue could be closed on my (and thus also Vaadin's) behalf.

I'm not saying that Vaadin's implementation might not be a mistake, but it's then at least a mistake that has taken more than 10 years for anyone to discover 😄. I would just prefer to have a proper understanding of the underlying mechanisms before I start digging into whether anything could be changed on Vaadin's side.

@sbordet (Contributor) commented Sep 16, 2024

@Legioth I think you have the locking model the other way around: high-level abstractions holding locks while calling low-level abstractions is a recipe for problems.

It's a bit surprising that an HTTP request wouldn't be independent

They are.
It's Vaadin that is establishing a dependency by grabbing a lock on the same Vaadin session.

that's what TCP head-of-line blocking is all about

TCP HoL blocking is just one of the multiple possible cases of blocking, but any blocking would show the problem.

I would not concentrate on TCP HoL blocking, as this is a problem at a different level.
Even if the protocol you are using does not suffer from TCP HoL blocking, for example HTTP/3, you would have problems.

The APIs called by Vaadin are blocking, so if the remote client is not sending data then the API will block, no matter whether the data is transported by a protocol that suffers from TCP HoL blocking.

I would not close the issue on the Vaadin side without a fix, no matter if it took 10 years to discover.

@Legioth commented Sep 17, 2024

I see the possibility of TCP HoL blocking as "proof" that there's nothing Jetty could do in this case regardless of whether that's actually what goes on in this specific case. This also means that this issue could be closed (which can only be done by the original reporter or project maintainers) while keeping the Vaadin issue open.

But... I'm also curious to gain a more in-depth understanding even though I also realize that it's not your duty to educate me. 😄

My statement about holding locks while calling a lower abstraction is maybe not accurate - it goes for calling any code where you are not sure about what shared resources it might try to use. A typical deadlock has two separate shared resources that are acquired in an inconsistent order. Vaadin's session lock is one of those resources but there also has to be some other shared resource for a deadlock to happen. Treating blocking I/O in general as a shared resource might be a good heuristic but I would like to have something more specific for my mental model. In the case of TCP HoL blocking, that specific other resource is the read buffer of the shared TCP connection which causes seemingly independent HTTP/2 requests to actually depend on each other. What other shared resources could there be that might cause a deadlock between concurrent HTTP requests or responses?

@sbordet (Contributor) commented Sep 17, 2024

@Legioth this is technically not a deadlock because there is no circular wait, but instead it is just a "hold and wait" situation, which would not happen if I/O reads/writes never block.

For HTTP/1.1, the wait in reads/writes is caused by TCP HoL blocking and TCP congestion, respectively.
For HTTP/2, the wait is typically caused by lack of data (for reads) or flow control stalling (for both reads and writes), but TCP may be involved too.
For HTTP/3, the wait is also caused by flow control stalling like HTTP/2, but TCP is not involved.

As for your mental model: I/O operations and blocking access to bounded resources (e.g. a connection pool towards a JDBC database, but also adding items to a bounded queue) are the "wait" part of the "hold and wait" situation, and you should avoid "holding" while performing those "wait" operations.
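As a contrived illustration of that "hold and wait" shape (plain Java, not Vaadin or Jetty code; the thread names are made up): thread A holds a lock while blocking on an operation that only resolves via a long timeout, and thread B, which needs the same lock only briefly, stalls for the whole duration even though there is no circular wait.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Contrived "hold and wait" example: no deadlock, but a long stall.
public class HoldAndWait {

    private static final ReentrantLock sessionLock = new ReentrantLock();
    private static final BlockingQueue<String> incoming = new LinkedBlockingQueue<>();

    public static void main(String[] args) {
        Thread a = new Thread(() -> {
            sessionLock.lock();                          // "hold"
            try {
                // "wait": stands in for a blocking read of request content that the
                // client never sends; it resolves only when the 30 s timeout expires.
                incoming.poll(30, TimeUnit.SECONDS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                sessionLock.unlock();
            }
        }, "request-handler");

        Thread b = new Thread(() -> {
            sessionLock.lock();                          // stalls ~30 s behind thread A
            try {
                System.out.println("finally got the session lock");
            } finally {
                sessionLock.unlock();
            }
        }, "connection-lost-handler");

        a.start();
        b.start();
    }
}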

In the case of TCP HoL blocking, that specific other resource is the read buffer of the shared TCP connection which causes seemingly independent HTTP/2 requests to actually depend on each other

While it is possible that a few requests cause HTTP/2 session flow-control blocking (for reads and writes), or TCP HoL blocking, or TCP flow-control blocking, so that other requests on the same connection depend on those few, that does not seem to be what happens here.

This is just one request failing to provide the request content, and other requests from the same client waiting on the Vaadin session lock (which I assume is per-client).

Other requests from other clients should be able to proceed without problem, so there is no dependency.

@Legioth commented Sep 17, 2024

Thanks for the clarification. I think we have partially talked across each other based on slightly different initial assumptions.

I have been assuming that the first thread was stalled because the bytes it needed were held up by something from one of the two other threads, i.e. that the whole situation could be avoided if either of the two other threads could acquire the session lock and proceed to a point where they would indirectly release the bytes needed by the first thread.

It seems like your assumption is that the first thread was stalled because the needed bytes never even reached the server, i.e. that the first stalled thread is inevitable but a better locking strategy in Vaadin could prevent the two other threads from also being stalled.

@sbordet (Contributor) commented Sep 17, 2024

I have been assuming that the first thread was stalled because the bytes it needed were held up by something from one of the two other threads, i.e. that the whole situation could be avoided if either of the two other threads could acquire the session lock and proceed to a point where they would indirectly release the bytes needed by the first thread.

I doubt this is the case, looking at the stack traces.

It seems like your assumption is that the first thread was stalled because the needed bytes never even reached the server, i.e. that the first stalled thread is inevitable but a better locking strategy in Vaadin could prevent the two other threads from also being stalled.

Correct, this seems likely to be the case from the stack traces.

@Legioth commented Sep 17, 2024

I realized the same now when looking more closely at the stack traces. Thread 2 is handling a WebSocket close frame, which shouldn't have any payload that remains to be read, while thread 3 is a timer rather than request handling.

@mperktold (Author) commented

I just realized that we also use virtual threads in some cases, and those will not show up in our handmade thread dump.

This might only be remotely relevant for this particular case, since as you said, one of the threads is probably stuck inevitably. Still, it's quite possible that those virtual threads also wait on the VaadinSession, since we use them for background threads that need to access Vaadin components.

@sbordet (Contributor) commented Sep 17, 2024

@mperktold if you use Jetty 12, you can use VirtualThreadPool, which has built-in facilities to dump the virtual threads, including those that are suspended.
See https://jetty.org/docs/jetty/12/programming-guide/arch/threads.html#thread-pool-virtual-threads-virtual.
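For illustration, a minimal embedded-Jetty sketch of that setup, following the linked guide (class and setter names as documented there; verify against your exact 12.0.x version). The tracking flag is what makes virtual threads, including suspended ones, appear in the server dump:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.VirtualThreadPool;

public class VirtualThreadDumpExample {
    public static void main(String[] args) throws Exception {
        VirtualThreadPool threadPool = new VirtualThreadPool();
        threadPool.setMaxThreads(200);   // cap on concurrent virtual threads
        threadPool.setTracking(true);    // track virtual threads so dumps include them
        Server server = new Server(threadPool);
        // ... add connectors and handlers as usual ...
        server.start();

        // The component dump now also lists tracked virtual threads,
        // including suspended ones.
        System.out.println(server.dump());
    }
}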

@joakime (Contributor) commented Sep 17, 2024

I realized the same now when looking more closely at the stack traces. Thread 2 is handling a WebSocket close frame, which shouldn't have any payload that remains to be read, while thread 3 is a timer rather than request handling.

A WebSocket CLOSE frame has a payload.
It can contain up to 125 bytes: 2 bytes of close code and up to 123 bytes of reason.
These are technically optional, but in practice every implementation we've come across provides at least the 2-byte close code, and most include the close reason.
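For reference, a close initiated through the standard jakarta.websocket API produces exactly that payload: the 2-byte close code followed by the UTF-8 reason (the reason string below is just an example).

import jakarta.websocket.CloseReason;
import jakarta.websocket.Session;
import java.io.IOException;

// Closing a session with the standard API; the resulting CLOSE frame payload is
// the 2-byte close code (1000 here) followed by the UTF-8 reason, at most 123 bytes.
public final class CloseExample {
    static void close(Session session) throws IOException {
        session.close(new CloseReason(CloseReason.CloseCodes.NORMAL_CLOSURE, "shutting down"));
    }
}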
