
On mitigating mempool-pinning with option_gc_preimage #783

Open
ariard opened this issue Jun 5, 2020 · 4 comments

ariard commented Jun 5, 2020

By leveraging mempool-pinning [0], a malicious counterparty can get its own transactions stuck and delayed in network mempools, thereby blocking an honest party from confirming a timeout transaction and breaking the HTLC relay semantics.

During the last IRC meeting, a counter-measure was suggested by @cdecker, namely implementing network-wide mempool monitoring and gossiping about channel closes and preimages. This protocol extension, option_gc_preimages, sounds at first sight like it protects a LN node against enhanced mempool-pinning attacks, where the local full-node mempool is blinded from receiving the preimage tx due to some malicious conflict. Such monitoring, widely deployed, would be effective as long as you're connected to at least one peer deploying it.
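The watching half of such an extension boils down to recognizing revealed preimages in mempool transactions. A minimal sketch, assuming the helper name and data layout are illustrative (not from any implementation): an HTLC-success spend exposes the 32-byte preimage as a witness item, so it suffices to match the sha256 of each witness item against the payment hashes of the HTLCs we care about.

```python
import hashlib

def extract_preimages(witness_stacks, watched_payment_hashes):
    """Scan a transaction's witness data for revealed HTLC preimages.

    witness_stacks: one witness stack per input, each a list of bytes items.
    watched_payment_hashes: set of 32-byte sha256 payment hashes taken from
    the HTLC outputs of channels we (or our peers) are monitoring.
    Returns the preimages found, candidates for gossiping to LN peers.
    """
    found = set()
    for stack in witness_stacks:
        for item in stack:
            # An HTLC-success witness reveals the 32-byte preimage whose
            # sha256 equals the payment hash committed in the HTLC script.
            if len(item) == 32 and hashlib.sha256(item).digest() in watched_payment_hashes:
                found.add(item)
    return found
```

Filtering by payment hash (rather than blindly forwarding every 32-byte witness item) also gives a cheap in-network dedupe/anti-DoS hook, as discussed later in the thread.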

IMO, such an attractive mitigation raises 2 concerns, a slight one and a more worrying one.

First, you introduce an assumption about the peers you're connected to: that their mempool is well-sized, their relay policy is permissive enough, they're free from blinding conflicts, etc. That sounds reasonable if you're connecting to well-known routing nodes, those relying on full-nodes. However, an attacker, by guessing your LN topology (gossip propagation?), can probe and blind their mempools from the preimage tx of interest.

Note, I think mapping full/LN nodes sounds far easier if mempool watching is implemented. You can probe by throwing a test preimage transaction and seeing if your LN peer announces this preimage to you for an offered HTLC within a timely window.

The second concern pushes the reasoning further. If you can selectively blind a few selected peers in the network, why not create conflict partitions? Such partitions would isolate miners' mempools from the rest of the network. The miners' subset would be fed the preimage tx. The leftover subset would be fed a conflict to blind it from seeing the preimage tx. This tactic means that option_gc_preimage has to be supported by miners to be trustworthy.

This introduces a new assumption on miners' behavior: you have to convince them to run/maintain LN-dedicated software alongside their mining/Bitcoin stack, not merely to be passive and not censor LN transactions. If it's run by a majority, you need a coalition of them to cooperate to let pinning happen, which sounds reasonable. If it's run by a minority, that sounds like a moral hazard: you rely on a few miners to mitigate a class of LN attacks. I don't think getting a majority of miners to run LN software is realistic, nor being sure that they keep doing so as miner markets/topologies change.

With regards to identifying miners' mempools, you don't need to identify all of them, just the most meaningful ones, i.e. the ones which are highly likely to mine a block during the timelock delay you're aiming to exploit. I've not seen any research on it, but it seems doable by "tainting" mempools with conflicts and looking at block composition. Mining pools sadly sign their blocks.

If we're at ease with this new strong assumption on miners for the security reduction, I concede option_gc_preimage would make this flavor of pinning far harder.

[0] https://bitcoinops.org/en/topics/transaction-pinning/


t-bast commented Jun 5, 2020

I don't think we can reasonably introduce the assumption that miners/mining-pools will participate in this preimage broadcast.

However, even without them, I think this mechanism could be helpful (it's still slightly better than doing nothing). But I'm concerned about the linkability issue you're raising: if broadcasting preimages in lightning makes it easier for attackers to discover your bitcoin full node, LN nodes will be disincentivised from sharing preimages with this mechanism.

There are many mitigations we could implement, but it's hard to know if they are going to be effective. For example, you could share preimages only to a random subset of your peers, and after a random delay. Intuitively it makes it harder for an attacker to figure out which LN node discovered this preimage first, but it may not be enough...
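The random-subset-plus-random-delay idea could be sketched as follows; the fraction and delay bounds are illustrative assumptions, not proposed constants:

```python
import random

def schedule_preimage_relay(peers, fraction=0.5, min_delay=30.0, max_delay=300.0):
    """Pick a random subset of peers and a random per-peer delay (seconds)
    before forwarding a discovered preimage, so an observer cannot easily
    infer which LN node discovered it first.

    `fraction`, `min_delay` and `max_delay` are hypothetical parameters.
    Returns a list of (peer, delay_seconds) pairs.
    """
    k = max(1, int(len(peers) * fraction))          # always tell someone
    chosen = random.sample(peers, k)                 # random subset
    return [(p, random.uniform(min_delay, max_delay)) for p in chosen]
```

As t-bast notes, this only raises the attacker's inference cost; it does not by itself guarantee unlinkability.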


cdecker commented Jun 8, 2020

I'm sorry to say, but I disagree with most of the points @ariard raised. Let me see if I can address them inline:

> IMO, such an attractive mitigation raises 2 concerns, a slight one and a more worrying one.

> First, you introduce an assumption about the peers you're connected to: that their mempool is well-sized, their relay policy is permissive enough, they're free from blinding conflicts, etc. That sounds reasonable if you're connecting to well-known routing nodes, those relying on full-nodes. However, an attacker, by guessing your LN topology (gossip propagation?), can probe and blind their mempools from the preimage tx of interest.

No, we are relying on the fact that at least someone in the wider network is watching the mempool for preimages and that there is a broadcast path that leads to us, in order for us to learn about the preimage. We're not pinning our hopes on our peers, just on not being completely surrounded by attacker sybils. This is very similar to the situation we have with bitcoind as the backend. Unlike with bitcoind, however, opening a new connection to learn about preimages is lighter weight than processing all transactions, and it can also help us with other blockchain-related information such as block header relay, which can defuse things like the time-dilation attack.

Regarding the inference an attacker can do based on LN gossip: besides DoS mitigation, that is one of the reasons why gossip is a staggered broadcast with rather large caching times. Each hop in the broadcast accumulates 60 seconds' worth of gossip updates before forwarding, introducing delays and leaving an observer uncertain as to which path a message has taken through the network. I'd advocate extending this staggering to the preimage and block header broadcasts as well (our timeouts for HTLCs are in the tens of blocks, not seconds, and the same is true for time-dilation).
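The staggering described above can be sketched as a toy batcher (not lightning gossip code; only the 60-second interval is taken from the text, the class and method names are hypothetical):

```python
import time

class StaggeredBroadcaster:
    """Accumulate gossip messages and flush them in fixed-interval batches
    (60s here, matching current gossip), so timing alone does not reveal
    from which peer, or via which path, a message originated."""

    def __init__(self, interval=60.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock                      # injectable for testing
        self.pending = []
        self.next_flush = clock() + interval

    def enqueue(self, msg):
        """Queue a message (e.g. a preimage or block header announcement)."""
        self.pending.append(msg)

    def poll(self):
        """Return the accumulated batch if the interval has elapsed, else None."""
        if self.clock() >= self.next_flush and self.pending:
            batch, self.pending = self.pending, []
            self.next_flush = self.clock() + self.interval
            return batch
        return None
```

Because every hop rebatches, an observer sees messages leave each node in interval-aligned bursts rather than immediately after arrival.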

In addition, opening a couple of feeler connections that are not used for channels, but to learn about the network topology, is something that we should probably do anyway.

> Note, I think mapping full/LN nodes sounds far easier if mempool watching is implemented. You can probe by throwing a test preimage transaction and seeing if your LN peer announces this preimage to you for an offered HTLC within a timely window.

Given the uncertainty we have added by staggering the broadcast, and the number of actual HTLCs you'd have to leak on-chain just to learn a part of the network topology, which might change underneath you, this is likely prohibitively expensive.

> The second concern pushes the reasoning further. If you can selectively blind a few selected peers in the network, why not create conflict partitions? Such partitions would isolate miners' mempools from the rest of the network. The miners' subset would be fed the preimage tx. The leftover subset would be fed a conflict to blind it from seeing the preimage tx. This tactic means that option_gc_preimage has to be supported by miners to be trustworthy.

I'm under no illusion: for the attack to be fully mitigated we'll eventually have to make changes to the (complex and non-linear) mempool RBF logic. What you are describing here is a perfect attacker that no-one can possibly protect against, selectively feeding nodes different views of reality, and I'd hope that this is an incredibly difficult attack to pull off.

But it must also be said that option_gc_preimage doesn't hurt in these situations. Nobody claimed it to be a complete solution by itself, but it can be part of one, until we can get the mempool logic changes to disarm this issue once and for all.

> This introduces a new assumption on miners' behavior: you have to convince them to run/maintain LN-dedicated software alongside their mining/Bitcoin stack, not merely to be passive and not censor LN transactions. If it's run by a majority, you need a coalition of them to cooperate to let pinning happen, which sounds reasonable. If it's run by a minority, that sounds like a moral hazard: you rely on a few miners to mitigate a class of LN attacks. I don't think getting a majority of miners to run LN software is realistic, nor being sure that they keep doing so as miner markets/topologies change.

Again, option_gc_preimage is a partial solution that works well unless you have the above absolute scenario with an omnipotent attacker. I wouldn't frame it as an explicit reliance on a specific set of nodes to raise an alarm, but rather as hoping that someone in the network raises the alarm, which is a far weaker assumption. Could some attacks still be successful? Sure. Will it mitigate some of the attacks? Absolutely!

> With regards to identifying miners' mempools, you don't need to identify all of them, just the most meaningful ones, i.e. the ones which are highly likely to mine a block during the timelock delay you're aiming to exploit. I've not seen any research on it, but it seems doable by "tainting" mempools with conflicts and looking at block composition. Mining pools sadly sign their blocks.

Well, in order to reliably pull this attack off you'd have to predict all miners that will mine a block; if you fail to identify one of the miners and blind them successfully (propagation races can be very unpredictable), then your attack risks not working.

> If we're at ease with this new strong assumption on miners for the security reduction, I concede option_gc_preimage would make this flavor of pinning far harder.

I disagree that we're introducing any new assumptions; we just add redundant communication to the protocol in order to defuse the situation where some nodes may not learn about relevant information in time. Adding option_gc_preimage is strictly more secure than not having it.

P.S.: What's the gc stand for in option_gc_preimage?


cdecker commented Jun 8, 2020

Reporting the IRC discussion from tonight here, so we have a complete track of the discussions so far:

13:06 < cdecker> ariard: sorry for the very direct answer to #783, hope it didn't come across too aggressive
13:07 < rusty> The original one used a DSL and had a heap of hard-coded stuff, so you really had to completely control what your node did.  This one calcs much more stuff on the fly, so it's more flexible.
13:07 < ariard> cdecker: not at all, it didn't feel aggressive :) It's hard to be right or wrong on this subject, there are a lot of factors to account for
13:08 < cdecker> Thanks, it's sometimes hard to gauge the tone in writing, and I felt that my post could come across as too direct, so just wanted to say sorry if that'd be the case ^^
13:09 < cdecker> (also being hyped on coffee doesn't help ^^)
13:10 < ariard> I was answering you, but I think people would be okay to talk about engineering cost and trade-off of solution directly during meeting
13:10 < t-bast> I think it's good, it fuels the discussion for tonight
13:10 < t-bast> we don't have a huge schedule so we can start with those
13:11 < ariard> in fact if you know anyone willing to do more research on inter-layer probing or transposing some of the transaction origin inference tricks to LN, that would be awesome
13:12 < cdecker> Well, the academics are currently still catching up with stuff we knew years ago ;-p
13:12 < BlueMatt> cdecker: fwiw, I think part of the motivation of ariard (and my) concern here is that, historically, there have been a ton of issues around being able to prevent tx relay for txn that you see first. and I'm not at all confident that such bugs dont still exist.
13:12 < cdecker> It'll take some time for them to catch up with the really interesting things at the cutting edge. while they collect the low-hanging fruits
13:12 < BlueMatt> I dunno if "core bugs" is in the domain of things we should be worried about, but "likely core bugs" maybe is.
13:13  * BlueMatt also notes that, in the long term, I hope we all agree that we cannot rely on mempool scanning for our security, and maybe the most worrying thing for me is that we don't have a currently-viable proposal to get there
13:13 < ariard> cdecker: yes but if we can help them on researching the actual stuff we need to know to shed lights that's worthy talking with them
13:13 < cdecker> BlueMatt: there are some things in Core that do have an impact on us (mempool logic first and foremost), so I think it is very much in scope, but at the same time out of reach
13:14 < BlueMatt> (I have some vague intuition that sighash_noinput may be part of the solution, but its only an intuition)
13:14 < cdecker> So we're left with trying to fill in the gaps that we discover, and this is one of those venues imho
13:14 < BlueMatt> right.
13:14 < ariard> due to base layer being p2p and not having a concept of first-seen tx, you will always have conflicts and those will be always leveragable
13:14 < cdecker> Bitcoin propagation logic could be abused to hide information from us? Let's make sure that that information gets broadcast
13:15 < BlueMatt> but also requiring mempool scanning means, to run a lightning node you *must* either run bitcoind or have a bitcoind you can query (possibly over rest or remote)
13:15 < ariard> that's right you can improve propagation logic, but how do you avoid this turning into a DoS or privacy leak
13:15 < cdecker> BlueMatt: and many nodes will use `bitcoind` as their backend, but they can help those who can't by making that information available
13:16 < ariard> cdecker: have you read the other issue on mempool-pinning at the commitment-tx level? You may have to bump something you don't know, and that may be a moving target
13:16 < cdecker> DoS: sending a preimage is not free, it requires a channel to exist and the close to have an appropriate htlc output. We can dedupe in-network like we do with gossip
13:16 < cdecker> Privacy: this is public information, grabbed from the blockchain, how can this be an issue?
13:16 < BlueMatt> given the previous concern about bandwidth usage on the network, that is maybe a bit worrying
13:16 < t-bast> Remember that the things we want to look for in the mempool are the exceptional cases (force-close should be exceptional and only happen during bugs or attacks), so we should be able to not DoS ourselves with those
13:17 < ariard> it depends on how you implement the scanning logic for the mempool, what if I craft a preimage tx which is in fact not tied to any channel output but mirrors the template?
13:17 < cdecker> BlueMatt: you're free to opt-out, but saying that noone should use assistance when they clearly need it is definitely the wrong way around
13:17 < BlueMatt> cause maybe the biggest concern ariard points out here is that what you really need to know (and all you really need to know, I think, assuming anchor), is the commitment txid which was broadcast so you can anchor-bump it.
13:17 < ariard> it's not public information if you craft a conflict to taint the preimage, like ensuring there is only this node able to announce the preimage on the gossip level
13:17 < BlueMatt> which also breaks all the various hidden-channel privacy bits.
13:17 < cdecker> ariard: it's rather trivial to connect an HTLC output to the channel scid that resulted in it
13:18 < BlueMatt> (i mean kinda, you can maybe pattern-match, but that hurts)
13:18 < cdecker> BlueMatt: we're already tracking channel closes, keep it in your view a bit longer, you're processing those HTLC txs anyway
13:18 < BlueMatt> cdecker: thats fair, but I think my point was also that precisely those who would opt-out due to bandwidth (mobile clients) are also the ones that cannot run bitcoind
13:19 < ariard> cdecker: so you need to track the state of any channel closure, starting from the funding outpoint? still you can batch the probing on 483 outputs on the same commitment tx for cost optimization
13:19 < BlueMatt> cdecker: I dont think we need htlc txes to be rumored to solve this issue. only anchor.
13:19 < cdecker> BlueMatt: agreed, but it's a tradeoff and it'd be a bad idea to dismiss a partial solution in the hopes of coming up with a perfect one
13:19 < BlueMatt> yea, thats totally fair.
13:20 < cdecker> ariard: 483 HTLCs? That's one expensive close ^^
13:20 < BlueMatt> (the concern is more around private channels, cause those are not tracked by everyone)
13:20 < cdecker> Fair enough, we can come up with good schemes for lightweight nodes to sync the set of preimages selectively
13:20 < ariard> yes that's expensive but learning a full-node/LN mapping, which is assumed to be stable, is likely worth it
13:21 < ariard> and as you're looking at the mempool, I wonder if you can RBF the preimage to cheat on the cost
13:21  * BlueMatt doesn't understand the discussion around preimages, my previous workaround assuming anchor solves that issue even here, I think - you can blindly anchor-rbf-bump htlc txn which are in the mempool as long as you think they are there.
13:21 < cdecker> ariard: you can't learn the (gossip) topology due to the intentional delays we introduce
13:21 < cdecker> Did we start the meeting btw?
13:21 < BlueMatt> dont think so.
13:21 < ariard> my point is this delay is not effective if the attacker only announces the preimage to your mempool and a conflict to the rest of the network
13:22 < cdecker> Feels like we just jumped in head first :-)
13:22 < BlueMatt> we kinda did
13:22 < BlueMatt> this may also be the kind of thing worth a call
13:22 < BlueMatt> I'd suggest another spec meeting to physically discuss it for about the three days it would take, but, you know, probably a bad idea.
13:22 < ariard> or even a meatspace whiteboard session but...
13:22 < cdecker> ariard: we have two complementary networks here: bitcoin and lightning. You can probably infer a relationship on bitcoin, but the delays work on lightning
13:23  * cdecker needs to dig up the bitcoin network topology papers from years ago
13:23 < ariard> I agree with you it works to mask the LN topology, if your delays follow a poisson distribution, which they currently don't, but that's something we should implement
13:23 < cdecker> Yes, you're probably right, an in-person meeting would help clarify a lot
13:24 < ariard> what I'm concerned about is the new, cheap way it introduces to map full nodes to LN ones
13:24 < BlueMatt> cause we're all super out of sync now, I think, cause there's five issues and 20 solutions and lots of mixing and matching to do :/
13:24 < cdecker> BlueMatt: 100%
13:25 < BlueMatt> anyway, meeting
13:25 < ariard> I think we can make a step forward on the stuff we know we want for sure, which is IMO the peer selection logic ?
13:25 < BlueMatt> hmm?
13:25 < cdecker> ariard: I'm not sure if mapping a bitcoind to LN is at all problematic, and if anything, gossiping preimages provides an alternative source of preimages, masking whether there is a bitcoind underneath the LN node at all or not
13:26 < t-bast> can you define what you mean by peer selection logic?
13:26 < ariard> because independently of pinning, having better peer selection, non-eclipsable, is something good to avoid abusive routing fee inflation
13:26 < ariard> t-bast: how I connect to LN peers to learn about gossips
13:26 < cdecker> ariard: definitely, a diverse set of peers can be helpful (though not necessary if we're just using the current gossip)
13:27 < t-bast> ariard: right, that one shouldn't be too hard, we can easily randomize feeler connections, that would be helpful
13:27 < ariard> cdecker: I was thinking, do we have a risk of a LN-native eclipse attack, like I force you to connect to my LN peers and announce to you only my channels to force you to route through them ?
13:28 < ariard> t-bast: I would be a little more careful on the "not too hard", you may have an active attacker poisoning your addr map
13:28 < ariard> like there has been a lot of work on the core side to prevent this, and there are still issues with regards to addr-relay
13:28 < t-bast> ariard: you mean the initial bootstrap is hard?
13:29 < t-bast> how could we help with those? right now this is very based on real-world decisions (I
13:29 < ariard> cdecker: I think mapping a bitcoind to LN is really problematic, it's a sine qua non to make a successful pinning, time-dilation or tx-relay obstruction more efficient
13:29 < t-bast> (I'm going to open a channel to that merchant, or that one)
13:30 < t-bast> I think that what makes the "safe-ish" bootstrap phase easier in lightning than in bitcoin is that many nodes are associated with real-life services
13:30 < cdecker> I mean there is a well-known impossibility result that without prior knowledge you can only act on whatever your first entrypoint tells you (bootstrapping problem in distributed computing)
13:31 < t-bast> And have a public identity, so you can more easily diversify
13:31 < cdecker> So we need to make sure that the entrypoint is as diverse as possible (DNS seeds, baked in addr lists, ...)
13:31 < ariard> t-bast: agreed, it's quite different here, manual peering in practice should almost mitigate this issue, unless they're doing honeypots
13:31 < t-bast> I think there's a lot more manual selection than in bitcoin, which helps a lot
13:32 < t-bast> Not a technical solution though :D
13:32 < ariard> yes the bootstrapping problem for p2p networks isn't solved, you somehow need to trust DNS seeds or hardcoded peers, or manual seeding through a web-of-trust assumption
13:32 < cdecker> We can likely fall back on our goto-anti-DoS measure and enforce sybil-resistance by only picking nodes that have a channel open (they have funds and have spent some opening a channel)
13:32 < ariard> but due to the fact you're going to pick your peers based on real-life services for LN, it's harder to exploit I think
13:33 < cdecker> Agreed, in extremis we can likely also fall back on the AS diversity library that sipa contributed to bitcoin core
13:33 < ariard> yes I think we're deviating a bit, we should first qualify whether we have a native eclipse attack on LN and what an attacker can gain from it, before thinking about mitigating them
13:33 < ariard> (beyond the case of garbage-collecting preimages)
13:34 < ariard> gc_preimages -- garbage-collecting preimages to answer your question cdecker, but you may have a better name :)
13:34 < BlueMatt> right, as t-bast points out, at least in the medium term, a ton of lightning activity is based on real identities or service providers, so eclipse issues are way less likely
13:34 < cdecker> Ok, do we have a risk of an eclipse attack? Not sure, depends on the attacker and its goals. If it is to skew you into using its channels so they can gain fees, then yes
13:34 < cdecker> Otherwise we're not exchanging information that could be dangerous or is unverified
13:35 < ariard> yes that's the main one I can think of right now, and if we introduce preimage garbage collecting you want to be sure you're connected to peers signaling such monitoring
13:35 < cdecker> (and option_gc_preimages doesn't change that, we just augment our knowledge of the blockchain)
13:36 < cdecker> Yep, agreed
13:36 < ariard> as it's security, you should assume automatic selection for those, how do you prevent someone from tricking this ?
13:36 < ariard> like manual peering doesn't guarantee you it's a monitoring node
13:37 < cdecker> Though to be fair, they are signaling that they are relaying the information, not that they are actually watching a bitcoind (disclosing that would indeed add the risk of being targeted)
13:37 < ariard> yes, but I assume that if you connect to 8 peers and at least one of them is proactive and honest that should work
13:37 < t-bast> if your manual peering almost guarantees that you have an un-eclipsed view of the whole graph, then you can randomly open feeler connections and you should regularly connect to well-behaving nodes that will share preimages
13:37 < cdecker> No, don't differentiate monitoring from non-monitoring, the signaling is just "I relay preimages" that's it
13:38 < cdecker> Ideally the entire network relays preimages (except leaf nodes that are on metered connections), then you can pick and choose as you please
13:38  * BlueMatt notes that he's still unconvinced preimage-discovery is an issue with anchor v2.
13:38 < cdecker> The nodes that initiate broadcasts do not have to signal anything beyond their willingness to relay
13:39 < ariard> right, so you may be connected to a) manual peered not signaling b) manual peered signaling c) automatic peers signaling 
13:39 < ariard> and c) may be entirely malicious if it's automatic dumb selection ?
13:39 < cdecker> Ok, BlueMatt sorry for hammering onto this issue. How does anchor v2 address that?
13:40 < BlueMatt> cdecker: the previous discussion was (if you know that the commitment transaction is either in the mempool or on chain), you can "blind-anchor-cpfp" the htlc transactions - there's a limited number of them and you know the txid without knowing the preimage, so you can cpfp them to get them confirmed (and if they arent there, your cpfp tx will just be ignored)
13:41 < BlueMatt> we *probably* would want a similar thing whereby you can tell your lightning peers about a transaction that you want relayed, but thats maybe no different than today
13:41 < t-bast> Does that require changes to the current anchor output proposal?
13:42 < BlueMatt> t-bast: last i checked the current anchor proposal was going to be cut back to only the commitment tx and leave htlcs alone so that we could redo that later, given it didnt work properly there anyway
13:42 < cdecker> Does that require all HTLCs to have two anchor outputs (one per endpoint)?
13:42 < BlueMatt> so, yes, but last time i was a part of the discussion we were planning on doing that anyway
13:43 < ariard> I think a remote anchor output on a local transaction, either HTLC or commitment, is flawed due to, again, the lack of mempool omniscience
13:43 < BlueMatt> cdecker: yes. though, again, last time i was a part of the discussion i think we said "well talk about that later", and I think we really are going to have no choice but to go that way, for high-value htlcs, likely splitting htlcs into "has anchor, cause its worth it" and "is barely worth putting on the chain, so we'll take the risk" buckets.
13:43 < cdecker> But if there is no remote-on-local-tx anchor output, then I can't blind-CPFP your success tx, or am I missing something?
13:44 < ariard> okay, not HTLCs, because you can be sure the commitment is confirmed, but you may have any version of a revoked commitment tx being pinned in the network and not be able to see it
13:44 < BlueMatt> ariard: I agree for commitment tx if you're worried about your counterparty broadcasting old commitment txn and that preventing you from broadcasting the latest one
13:44 < BlueMatt> which I think *is* an issue, though maybe a less important one, and one that may require being solved with something similar to gc_preimages, but for  commitment-tx-pattern-matching txids
13:45 < BlueMatt> cdecker: define "remote-on-local-tx"?
13:45 < ariard> cdecker: yes but to assume that your blind CPFP is successful you need to know the txid of commitment being pinned
13:45 < cdecker> Was just referring to what ariard said two lines above: "I think a remote anchor output on a local transaction, either HTLC or commitment is flawed due to again not mempool omniscience"
13:46 < ariard> and it maybe any of the previous states of your channel
13:46 < cdecker> Don't we know the commitment TX? That one has to confirm before anything else can happen.
13:46 < cdecker> Oh wait you mean we're unlucky and the commitment TX is also in limbo until the HTLC CLTV expires?
13:46 < BlueMatt> right, so it sounds like we're on the same page. the issue with that proposal is that you have to always reliably learn commitment txids when they are broadcast (so you can look up the corresponding local state)
13:47 < cdecker> That's pure evil ;-)
13:47 < ariard> yes I'm talking about pinning the commitment tx itself to block you from timing-out or claiming a HTLC
13:47 < BlueMatt> cdecker: yes, that is precisely the situation.
13:47 < ariard> and there are even more funny games: a CPFP to be successful needs a fresh utxo, someone may announce to you a lot of commitment txids, and you may try to bump all of them
13:48 < BlueMatt> the issue is that you need to learn via the mempool when a commitment txid has been broadcast by your counterparty (kinda....i mean you only care about this at the time you go to close on-chain due to htlc timeout, not before)
13:48 < ariard> therefore exhausting fresh utxos in-flight on non-pertinent commitment transactions
13:48 < BlueMatt> one proposal to address *that* is re-adding bitcoin core reject messages, but including the conflicting tx/txid when a transaction cannot be broadcast due to conflicts
13:49 < BlueMatt> but you very quickly get into whack-a-mole with these types of things
13:49 < ariard> reject messages are sybillable, and you can easily provoke bumping-utxo exhaustion
13:50 < BlueMatt> (sidenote: sighash_no_input is *really* elegant here - assuming a lot of complicated logic in core to do so, you could imagine blind-cpfp-bumping *any* commitment tx without knowing its there or which one it is all with one tx.......in theory)
13:50 < BlueMatt> ariard: right. I'm not a fan of that idea, only noting it for completeness
13:50 < cdecker> BlueMatt: interesting note on noinput there
13:51 < ariard> BlueMatt: yes I think noinput solves it because you can push the state forward, either a valid commitment or a revoked one, but confirming a remote revoked one is favorable to you
13:52 < BlueMatt> (note that actually *doing* that may be intractable in bitcoin core for mempool-complexity reasons, but in theory its possible)
13:52 < ariard> but it's likely hell to implement: how do you pair a witnessScript with a scriptPubKey without actually playing the script interpreter?
13:53 < BlueMatt> right. you could imagine specifying it in some elegant way when broadcasting, though - "try to attach this to output 2 to any tx that spends output X:Y"
13:53 < ariard> or we can tag commitment tx and cpfp-bump, likely what you're saying
13:54 < cdecker> BlueMatt: so why not just do what eltoo does in reaction to everything: broadcast the latest state. If it doesn't confirm, rebroadcast with higher fees, until it eventually confirms
13:54 < ariard> IMO, the most viable way to solve this is to push for a feerate-limited package relay
13:54 < ariard> that way, it's only a feerate game between attacker and victim which is easy to analyze
13:55 < cdecker> Yes, absolutely. Simplifying the RBF rules to be linear and only consider feerate is the best solution imho
13:55 < cdecker> For all I care it can mandate a feerate-increase of 10%+ in order to be considered, that can act as a DoS mitigation, by forcing exponential feerate increases
13:55 < BlueMatt> cdecker: right, yea, that solves it too.
13:56 < cdecker> But the non-linearity is really painful
13:56 < ariard> cdecker: how do you broadcast higher fees with a pre-committed model ? Attacker may manipulate feerate/absolute fee of latest state through update_fee/dust htlcs
13:56 < BlueMatt> cdecker: eltoo solves this whole quagmire quite nicely....as with everything, apparently.
13:56 < ariard> cdecker: to avoid solving non-linearity for now, by limited package relay I meant one limited to a one-parent-one-child topology
13:57 < cdecker> Oh, I'm thinking more towards an all-anchors / eltoo world which we want anyway, right? Requiring predicting the future feerate is a bad choice
13:57 < ariard> like introduce a package-relay with a policy limitation on package topology to avoid DoS issues for now
13:57 < cdecker> BlueMatt: careful, don't become an eltoo-hypeman like me :-)
13:58 < ariard> what do you mean here by predicting a future feerate ? But yes we want all-anchors here, and maybe eltoo if we can solve implementation complexity hurdles on the mempool-side
13:58 < cdecker> I mean committing to a feerate by including the fee in the commitment means we have to predict what a useful feerate will be at a later time, when we need to broadcast
13:59 < niftynei> hey all we're at about an hour now, i think we forgot to call startmeeting? lol
13:59 < ariard> okay so implementing feerate-based package relay would alleviate this because you adjust feerate via the malleable CPFP
14:00 < ariard> at broadcast, when you need it, and you can adjust it
14:01 < niftynei> was there any other urgent business that needs to be discussed?
14:01 < cdecker> Yep, anchor outputs are a huge improvement, both in terms of overallocation of fees, but also flexibility to react to things
14:01 < ariard> yes with regards to anchor outputs, I think we should split spec in 2, the congestion-part and the security-part
14:02 < cdecker> @all: sorry for taking up all the air in the room with the RBF pinning issue, but I think we made some progress :-)
14:02 < rusty> niftynei: yeah, I am planning on just reading the logs once I am awake.
14:02 < ariard> should we add a remote anchor output if the security efficiency is hard to gauge with regards to the whole previous discussion?
14:03 < t-bast> I think it was a very interesting discussion, don't mistake our silence for boredom, we're listening/reading closely ;)
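To illustrate the feerate-only replacement rule cdecker sketches above (a replacement is considered only if it beats the in-mempool transaction's feerate by some fixed percentage, e.g. 10%, forcing exponential feerate growth as DoS mitigation), here is a minimal sketch. The function name and threshold are illustrative assumptions, and this is deliberately *not* Bitcoin Core's actual BIP 125 policy, which also compares absolute fees:

```python
def accepts_replacement(old_feerate: float, new_feerate: float,
                        min_increase: float = 0.10) -> bool:
    """Feerate-only RBF rule from the discussion: accept a replacement
    only if its feerate exceeds the conflicting in-mempool transaction's
    feerate by at least `min_increase` (e.g. 10%). Because each accepted
    replacement raises the bar multiplicatively, an attacker spamming
    replacements must pay exponentially increasing feerates."""
    return new_feerate >= old_feerate * (1.0 + min_increase)

# Exactly +10% clears the bar; +5% does not.
assert accepts_replacement(10.0, 11.0)
assert not accepts_replacement(10.0, 10.5)
```

The linearity is the point: comparing a single scalar (feerate) avoids the multi-dimensional fee/feerate/descendant-count comparisons that make current replacement behavior hard to reason about.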

ariard commented Jun 8, 2020

IMO, points of discussion:

  1. ensuring the efficiency of option_gc_preimage means good peer selection to avoid LN-eclipse
  2. preimage monitoring may introduce a) an LN-topology-discovery vector b) a full-node-to-LN-node mapping vector
  3. a sophisticated attacker (mass p2p connections and resources to pay for multiple conflicts) may bypass option_gc_preimage
  4. fixing other pinning scenarios (see On Mempool-Pinning Commitment Transaction #784) may solve this one without introducing a new feature

I think 1) is okay; we will likely need this anyway to keep a non-inflated routing map. 2a) can be mitigated with random staggering. 3) sounds hard to qualify without further research, but that's okay if we're honest about the risk of edge-case scenarios. 4) likely depends on noinput or package relay.

  2. b) is the really concerning point; we should take care to make it hard to map LN nodes to full nodes
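The random staggering suggested for 2a) could amount to delaying each preimage gossip announcement by a random interval, so a probing peer cannot correlate an LN node with the full node whose mempool first observed the preimage. A hypothetical sketch; `schedule_preimage_gossip`, `broadcast_preimage`, and the delay bounds are illustrative assumptions, not part of any option_gc_preimage spec:

```python
import random
import threading

def schedule_preimage_gossip(preimage: bytes, broadcast_preimage,
                             min_delay: float = 1.0,
                             max_delay: float = 30.0) -> threading.Timer:
    """Instead of relaying an observed preimage immediately (which leaks
    timing information usable for full-node-to-LN-node mapping), fire the
    hypothetical broadcast_preimage(preimage) callback after a uniformly
    random delay. Returns the timer so the caller can cancel it, e.g. if
    the preimage confirms on-chain first."""
    delay = random.uniform(min_delay, max_delay)
    timer = threading.Timer(delay, broadcast_preimage, args=(preimage,))
    timer.start()
    return timer
```

Note the trade-off this design accepts: staggering buys unlinkability at the cost of propagation latency, which matters when the honest party is racing an HTLC timeout.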
