- Hello, everyone. Welcome to the July 18th stand up. You guys may have noticed a new
- person in the call today here. This is NC, he's going to be working with us as a freelance
- contributor on some enshrined PBS stuff. So everyone, welcome NC. And feel free, NC,
- if you want to make a quick intro. These are pretty much our team here. If you want to say hi.
- Yeah, thanks, Phil. Yeah, hey, guys, my name is NC. I don't know what to say. Well,
- I've been in the Ethereum space for the past, I don't know, like 10 months or 8 months or so.
- So I joined the Ethereum Protocol Fellowship before, and I did some contributions to Lighthouse
- and also Besu on the execution layer side. So right now I'm just picking up this project,
- working with the folks on Lodestar to get the EPBS stuff going.
- Great, NC. Thank you. Yeah, so these are the guys here. I could make further introductions
- for you to them a bit later. But yeah, welcome to the team. Glad to have you here. And yeah,
- we'll keep going forward with standup now. So one of the things that I wanted to make sure about is
- we did want to see a demo of the prover at some point. I don't think I got like a concrete answer
- yet about when we'd like to do that. So Nazar, if you could just give us a brief update on where
- you're at with that, and whether you think it would make sense to do a demo when people are around,
- when Lion is not at a conference and stuff, probably next week or something like that. How do you feel?
- I think I said on Friday, Wednesday, but I forgot to send the invite. Yeah. So in my mind,
- it was like Wednesday, tomorrow. Oh, okay. Um, like if everyone is here,
- if they agree and they're available, we can do tomorrow.
- I think the only person that wouldn't be available, I guess, is Lion. I'm not sure
- what his schedule is like over at EthCC or if he just wants to watch a recording of it.
- Do you have any preference, Lion?
- I can do my best, otherwise.
- Okay.
- Did you get that, Lion?
- He's gonna try his best to make it.
- Okay.
- Are there any objections to tomorrow?
- I think we had said like one hour before like this current time.
- Mm-hmm. Yeah, so any objections to that?
- Okay, great. We'll send out the invite after the call or...
- Yeah, I will send it. Oh, yeah, sure. Okay. Thanks, Nazar. Okay, we had pushed a
- hotfix release, v1.9.2, to our CIP fleet. It included fixes for a couple of issues in relation to
- reducing the race time. I think Gajinder threw that one in. There was also a change
- that was made by Tuyen, and we also threw in that DVT PR from Nico.
- Did anybody get a good chance to look at how our CIP nodes are performing
- for the last 12-ish hours, and whether or not it is suitable for release?
- I did have a quick look before the meeting.
- It's basically the same performance as 1.9.1.
- Any other observations?
- Okay, I noticed...
- Yeah, I noticed that the attestation subnet...
- The count of our peers, or mesh peers, seemed
- choppy or irregular. I don't know if anyone else saw that.
- Yeah, the mesh peers metric is quite tricky because it's just 12 hours.
- It takes time if you want it to be stable.
- Right. I do see the mesh peers slowly, you know, creeping in an upward direction,
- but it does take time for it to, I guess, stabilize at around like eight or so.
- But I'm not seeing anything that's like terribly worse. We do have like
- a couple spikes here on like the gossip scores average.
- I'm not sure if that's like anything to be concerned about.
- Otherwise, as long as it's not detrimental, I think what we especially wanted to see with
- this hotfix release as well is any changes in the block production time from that incident
- that we had a couple of days ago.
- And seeing if 2000 milliseconds would make a difference.
- Have we had any produced blocks on this node in the past 12 hours?
- I think it might take a while.
- Yeah, it might take a while with 64 validators to get a
- produced block here to know whether or not it made a huge difference.
- I would say if we are comfortable with it, we should release it. If not, maybe we do a bit more
- of a slower rollout to some of the Lido nodes where we can see block production metrics over time.
- Because that's where we're probably going to get, you know, the most metrics on block
- production here.
- I don't see anything that's, like, super dangerous in terms of releasing it as a hot
- fix. If anything, we'll collect the data from the Lido nodes as we upgrade them to 1.9.2
- and see and just keep an eye on the block production metrics. Is there any objection
- to that?
- No, I think it's fine to cut the release.
- OK. Let's go ahead with that.
- As well, we were running a specific commit,
- one of the unstable ones, on
- feature one over the weekend. That would have been a potential candidate, I guess, for a 1.10.
- Has anybody observed anything concerning that they want to bring up for that?
- I'll be honest, I did not take a look at those metrics yet, even though we have,
- I think about two or three days worth of metrics.
- On that group of servers.
- If no one's taking a look, we can take that async and then just continue on with.
- With planning.
- Yeah, I I think we want to fix some of these bugs that Nico filed over the weekend.
- Before we.
- commit to a release candidate.
- Sounds good.
- Okay.
- Next up, I had a conversation with the DevOps team, and Kamin as well, about some of
- the sort of metrics that you can get out of something like Grafana
- to be able to cater to some of the information
- that you're looking to get.
- I think Jon may have dropped off the call,
- but I'm gonna look to get the DevOps guys
- to also look into building their own Grafana dashboards
- for their own needs,
- but would like to see if anybody here could benefit also
- from some sort of a Grafana education session,
- whether that is, like, building dashboards... Yeah, sounds good.
- Matthew has his hand up.
- I think I could definitely use a refresher as well
- in terms of what the content is.
- It would definitely be something that we can compile
- as to like what sort of burning questions
- are in people's minds.
- If you have an idea as to like what you would like to see
- from a Grafana education session,
- we could also set that up separately
- similar to what we're doing with the Prover
- and then just get people up to date on how to use Grafana,
- basically.
- So if you have anything that you'd like to mention here now,
- feel free.
- If not, we'll just compile them and set a date.
- Some more PromQL as well.
- PromQL, OK.
- Anything else from the rest of you
- guys as to what you'd like to see in a bit of a Grafana education session?
- Actually, also just going through some of our dashboards, because I think that's something
- that would be nice: just to look at the metrics we have and talk about them, and kind of the importance,
- which ones are more important or less important for different types of research. That
- would be great, actually.
- Yeah.
- Even an explanation of--
- because not all of them are expressed in necessarily
- line graphs or whatever.
- Some of them are also heat maps.
- Being able to understand and read those, what looks good,
- what doesn't, that's also pretty helpful.
- OK.
- All right.
- I'm going to suggest probably sometime next week for something like this.
- Just taking a quick look at the calendar here.
- It would be probably similar to
- maybe like the same time that we're suggesting for like Nazar's presentation, but like a week later.
- Do you want to swap out the MEV talk and just do it as the protocol discussion?
- Metrics and reading dashboards? We could. I would ideally like to know that
- I think Lion would be available for this. He's probably the most knowledgeable, I feel like,
- on Grafana. What is your schedule looking like, I guess, for Thursday or for next week, Lion,
- to make this work? Next week is best.
- Next week is better. How about we reschedule the...
- Nope, just leave the Ethereum protocol discussion where it is. Maybe next week on Thursday,
- like same time, we'll set up a Grafana education session for that.
- Okay cool. Does anybody have any points for planning at all?
- Otherwise, I would love to--
- >>I think, I guess the current game plan
- is fix these bugs that were filed over the weekend
- by Nico.
- Are there any other things that we
- need to do before we consider cutting a release candidate?
- We're just going to quickly pull up the milestones here, but they were very much cleaned up.
- Did we ever fix that issue where, with the upgrade of libp2p, we consistently had more
- than 55 peers?
- Is that going to be an issue as well?
- Like I guess for the upcoming release candidate?
- So looking at feature one, we typically
- don't have more than 55 peers.
- So I think it may happen right when we start up.
- It kind of spikes above 55, but then it
- stabilizes between 50 and 55.
- That's what I've seen.
- OK.
- I don't know that it will be an issue,
- but I don't know that it is not an issue either.
- OK.
- It's okay, we can understand you.
- So I was speaking with Terence from Prysm about the aggregation deadlines, because there is a
- proposal to extend the block deadline in the slot and compress the attestation and aggregate
- sections of the slot. So he asked me, how long do you guys take to aggregate all the attestations
- when you subscribe to a subnet?
- And my answer was like, well, we don't.
- We have to drop out of them.
- Anyway, which is a funny point that I hope,
- I hope we can become a better network participant
- at some point.
- And maybe Tuyen will touch on this,
- but now that we have merged
- the subnet refactor, where we only subscribe to two subnets
- per node, if we reduce the time that we give aggregators we can reduce traffic a lot.
- So theoretically we could do much better, and I think that this can be huge and we should
- strategize that, but Tuyen is already on it.
- Thanks, Lion, for that.
- Tuyen, if you want to add to any of those points, feel free.
- Yeah, I think we can test that subnet-per-node work
- when we release 1.10.
- We can try that on any node,
- a CIP node, for example, to see if we reduce bandwidth significantly.
- Previously we subscribed to one subnet per validator,
- so for a node with 64 validators, we subscribed to up to 64 subnets.
- The change is we subscribe to two subnets only,
- so it should reduce a lot of bandwidth.
- For 64 validators, we still subscribe to short-lived subnets, I think,
- but it still reduces the bandwidth.
- It should reduce a lot for nodes
- with, like, 8 or 16
- validators.
- Oh,
- and we have to increase our peer count to 100.
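As a rough illustration of the subscription change described above, the long-lived attestation subnet count used to scale with validator count and is now a fixed two per node. The constants and function below are assumptions for illustration only, not Lodestar's actual code:

```typescript
// Illustrative sketch of the bandwidth change discussed above; names and
// constants are assumptions, not Lodestar's actual implementation.
const ATTESTATION_SUBNET_COUNT = 64;
const SUBNETS_PER_NODE = 2;

// Long-lived attestation subnet subscriptions before vs. after the refactor
function longLivedSubnetCount(validatorCount: number, subnetsPerNodeRefactor: boolean): number {
  if (subnetsPerNodeRefactor) {
    // After the refactor: a fixed two subnets per node, regardless of validators
    return SUBNETS_PER_NODE;
  }
  // Before: roughly one subnet per validator, capped at the total subnet count
  return Math.min(validatorCount, ATTESTATION_SUBNET_COUNT);
}
```

Under this sketch, a 64-validator node drops from 64 long-lived subscriptions to 2, which is where the large bandwidth reduction would come from.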
- Um. Right. Okay Um, yeah, we'll
- definitely note that, um,
- especially for trying out after
- we get a 1.10 out. Um. Any
- I really want to try a node with 100 peers.
- I just don't see that we have enough mainnet nodes.
- Last time I had to use beta mainnet, but now it's busy.
- Can we have a feat three mainnet node too?
- Because we have feat one mainnet and feat two mainnet, but we don't have feat three mainnet.
- Oh, okay.
- Yeah, I definitely did not notice that.
- I'll put up an issue today for the infrastructure guys
- to get those set up.
- It's definitely missing in our stack there.
- - Thanks, Tuyen.
- - Okay.
- One other question for you, Tuyen.
- There is something else in my milestones here
- that I was tracking 5556.
- I haven't continued to look into this,
- but if you have anything that you could update me with
- on this, that would be great.
- I see that there was a PR that was done to Gossip Sub for this.
- I think this happened when we had previously used
- call as a tool. Now we disabled that. So it's still an issue, but we can fix that later.
- It's not for 1.10, I think. Okay. Great. I'm going to move this one forward.
- Yeah, please. Okay. Thanks. I think outside of the issues that were put up over the weekend
- by Nico, let's focus on getting those fixed up and then once we got a good potential release
- candidate we'll throw it into beta to retest for data collection.
- And after the call let's release 1.9.2 and start deploying them to the Lido nodes.
- Anything else for planning?
- >> I just deployed unstable to feature two, so
- you can use that one if you need it.
- >> Thanks, but I just need the mainnet node.
- Thanks, we'll check that.
- >> Okay.
- >> Let's just go with updates.
- Lion, is there anything else you wanna add from your adventures over in Paris?
- for your updates.
- If you can hear us still.
- Okay, let's move with Gajinder.
- - Hey guys.
- So I have just hit P2P validation for EIP-7045.
- I think there are some tests which are basically breaking
- because of the types; I'll fix it.
- I just, I mean, I pushed it and then I saw it today.
- And I continued the review of BLST PR2,
- but I didn't finish it.
- So I'll target finishing it this week.
- Did some debugging on missed proposals
- and did a small PR to lower the race threshold,
- where we basically wait for both engine and builder blocks.
- And basically, you know,
- it seems from the empirical evidence
- that four seconds is the threshold at which,
- if the blocks don't arrive at the nodes,
- there's a high probability that they will be missed,
- because around four seconds,
- till four seconds,
- proposer boost is there.
- So that might be a big factor.
- So lowering the race threshold to two seconds made sense.
- Earlier it was three seconds,
- but if it continues to happen,
- maybe we can lower it to 1.5.
- But it seems like builders mostly reply
- by 1.5 seconds to a little bit more than 1.5,
- or maybe two. I mean, we'll need to see more data
- to make a tighter call around this
- if it continues to be an issue.
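The race between the engine and builder block with a cutoff can be sketched roughly as below. This is a hedged illustration, not Lodestar's actual implementation; the function name and shape are assumptions:

```typescript
// Hypothetical sketch of racing the locally built (engine) block against the
// builder block with a cutoff: if the builder has not replied within
// raceTimeoutMs (e.g. 2000ms), fall back to the engine block so the proposal
// still lands within the proposer-boost window.
async function raceBlockSources<T>(
  enginePromise: Promise<T>,
  builderPromise: Promise<T>,
  raceTimeoutMs: number
): Promise<{source: "engine" | "builder"; block: T}> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error("builder timeout")), raceTimeoutMs)
  );
  try {
    // Prefer the builder block if it arrives before the cutoff
    const block = await Promise.race([builderPromise, timeout]);
    return {source: "builder", block};
  } catch {
    // Builder was too slow (or errored): use the locally built engine block
    return {source: "engine", block: await enginePromise};
  }
}
```

Lowering `raceTimeoutMs` from 3000 to 2000 (or 1500) trades builder revenue for a better chance of proposing inside the proposer-boost window.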
- And did some reviews for EL offline PRs
- and started a PR for broadcast validations,
- which basically will help the builder guys
- to sort of say that, you know,
- before broadcasting the block do this validation.
- So the thing is that, you know, right now,
- when we propose a builder block,
- we send it to the builder, and the builder earlier
- used to transmit it without validating the block,
- which led to some of the MEV sandwich attacks.
- Basically, malicious validators
- took the block from the builder,
- but what they did was they created a block
- which was not valid
- and posted it to the builders,
- and the builder transmitted the block,
- revealing the transaction contents
- to the validators, which the validator
- then used to create a valid block
- and sort of front-run the MEV bots.
- So now builders want to sort of validate the block
- before broadcasting, and that is the flag
- that is being added in this PR.
- So the validator can now request, on the publish block calls,
- what kind of validation it wants the beacon node to run
- before it broadcasts.
- So there are a few steps over there,
- and we'll basically start from the lowest-hanging fruit
- of having no validations, plus some consensus validations
- for the case where the beacon node itself is proposing the block.
- The beacon node knows that it constructed the block,
- so it is valid.
- So it is an easy check in cache to figure this out.
- And then the next steps would be to add validation
- for the blocks which the beacon node didn't construct.
- So I will do a follow-up PR for that.
- The first PR is just to make sure that
- the lowest-hanging fruit is there
- and this flow is activated.
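The validation-before-broadcast idea described above can be sketched as follows. The level names, the `locallyProducedBlockRoots` cache, and both functions are illustrative assumptions, not Lodestar's actual identifiers:

```typescript
// Hedged sketch of broadcast validation on block publish; all names here are
// hypothetical, not Lodestar's actual API.
type BroadcastValidation = "none" | "consensus";

interface SignedBlock {
  blockRoot: string;
}

// Cache of block roots this beacon node constructed itself
const locallyProducedBlockRoots = new Set<string>();

function validateBeforeBroadcast(block: SignedBlock, level: BroadcastValidation): boolean {
  if (level === "none") return true;
  // "Lowest-hanging fruit": if the beacon node built the block itself, it
  // already knows the block is valid -- a cheap cache lookup.
  if (locallyProducedBlockRoots.has(block.blockRoot)) return true;
  // Blocks the beacon node did not construct need full consensus validation
  // (the follow-up PR mentioned above).
  return runFullConsensusValidation(block);
}

function runFullConsensusValidation(_block: SignedBlock): boolean {
  // Placeholder for full state-transition validation of externally built blocks
  return false;
}
```

The relayer would request a validation level on the publish call, and the beacon node would refuse to broadcast an invalid block, closing the unbundling attack described above.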
- And then, for DevNet 8 readiness,
- I did a specs simplification proposal.
- The parent beacon block root is
- part of the execution payload on the EL side,
- because the EL state is affected,
- so the EL basically needs this in the execution payload.
- But right now, the way it has been architected,
- it is sent via newPayload:
- the execution payload on the beacon block side
- does not have the parent beacon block root,
- but the execution payload on the EL side needs it.
- So while we are sending newPayload,
- the CL is supposed to also send the parent beacon block root.
- So basically it's not part of the beacon block's
- execution payload structure,
- but we have to massage it and send it.
- But I proposed a simplification
- where basically it's part of the
- beacon block's execution payload, so that, you know,
- the correspondence and debugging are a bit easier,
- because we have found on DevNet
- that it happens that you just have the payload from the beacon block
- and then you need to do some debugging
- based on that on the EL side.
- So I proposed a PR for that.
- And to get DevNet 8 ready,
- I'm also writing a PR for forkchoiceUpdated V3,
- in which we also need to send the parent beacon block root to the EL
- so that it can use it to construct the execution payload.
- Again, it matters because the parent beacon block root
- is being saved in the execution layer via a precompile,
- so this affects the state.
- If it did not affect the state,
- then we wouldn't need to send it.
- But since it's affecting the state
- of the final execution payload that we'll get from the EL,
- we need to send it with our forkchoice update as well.
- So this is the PR I'll be completing in a couple of hours
- and updating.
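The forkchoiceUpdated V3 change described above amounts to the CL passing the parent beacon block root alongside the other payload attributes. A hedged sketch of the shape, with field names following the Engine API spec for `engine_forkchoiceUpdatedV3` and a purely illustrative builder function:

```typescript
// Sketch of the Deneb PayloadAttributes shape for engine_forkchoiceUpdatedV3:
// V3 adds parentBeaconBlockRoot so the EL can build a payload whose post-state
// includes the beacon root. buildPayloadAttributesV3 is an assumption for
// illustration, not an actual Lodestar function.
interface PayloadAttributesV3 {
  timestamp: string; // hex-encoded quantity
  prevRandao: string; // 32-byte hex
  suggestedFeeRecipient: string; // 20-byte hex
  withdrawals: unknown[];
  parentBeaconBlockRoot: string; // 32-byte hex, new in V3
}

function buildPayloadAttributesV3(
  base: Omit<PayloadAttributesV3, "parentBeaconBlockRoot">,
  parentBeaconBlockRoot: string
): PayloadAttributesV3 {
  // The CL must pass the parent beacon block root with fcu so the EL can
  // construct an execution payload with the correct post-state.
  return {...base, parentBeaconBlockRoot};
}
```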
- Thanks, Gajinder.
- Just to add to a little bit of context
- to that second last point in regards to the validation
- before block publishing.
- That's basically a feature that one of the relayers requires
- to allow them to integrate Lodestar
- into their relayer setup.
- So that's why we're integrating that flag into Lodestar.
- Prysm and Lighthouse have already forked
- their software to allow this for the relayers, so they are currently dominant in the relayer
- landscape. Due to some issues that we had previously with some of the relayers themselves,
- it was made clear to me that they needed this feature to integrate
- Lodestar, and by diversifying their setup, they could hopefully reduce
- the relayer issues that they were having. This is specifically the HVIS relay.
- But yeah, that will allow us to really help that part of the ecosystem as well.
- All right, let's continue forward.
- Lion, can you still hear us? Do you have anything that you want to share from ECC?
- Yes, I went to a PBS session on Sunday and Monday.
- Yeah, the signs are doing solid. I think it just needs more research to make sure.
- I'm telling them that they are superior.
- But it was a good session to understand
- that we are not cost-consuming.
- We have the five-minute code from WP
- and the wallet is also relevant.
- Okay, cool. Yeah.
- Feel free as well, like, just, if you have the time
- to just do more of an async update.
- We're having some trouble hearing you, so.
- Anyways, we'll move forward.
- Let's go with Tuyen next.
- Hi, so I worked on a PR to prioritize BLS signature sets.
- For signature sets sent from the API,
- we set a priority flag as true,
- and for other sources, like gossip,
- we set priority as false.
- With a prioritized BLS signature set,
- it will add the signature set to the head of the job queue.
- By the way, the job queue was refactored to a linked list.
- The PR was merged.
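A minimal sketch of the linked-list job queue with priority insertion described above; the class and method names are hypothetical, not Lodestar's actual implementation:

```typescript
// Illustrative sketch: API-submitted signature sets jump to the head of a
// linked-list job queue, gossip signature sets go to the tail. Names are
// assumptions, not Lodestar's actual code.
interface QueueNode<T> {
  value: T;
  next: QueueNode<T> | null;
}

class LinkedListJobQueue<T> {
  private head: QueueNode<T> | null = null;
  private tail: QueueNode<T> | null = null;
  length = 0;

  enqueue(value: T, priority = false): void {
    const node: QueueNode<T> = {value, next: null};
    if (!this.head || !this.tail) {
      this.head = this.tail = node;
    } else if (priority) {
      // Prioritized jobs (e.g. from the API) go to the head of the queue
      node.next = this.head;
      this.head = node;
    } else {
      // Non-prioritized jobs (e.g. from gossip) wait at the tail
      this.tail.next = node;
      this.tail = node;
    }
    this.length++;
  }

  dequeue(): T | undefined {
    if (!this.head) return undefined;
    const {value} = this.head;
    this.head = this.head.next;
    if (!this.head) this.tail = null;
    this.length--;
    return value;
  }
}
```

Both `enqueue` variants are O(1) with a linked list, which is the point of the refactor versus an array whose `unshift` is O(n).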
- Another work item I'm following,
- which is related to BLS too,
- is to verify signature sets of the same message.
- I did a rebase, and the main thing now is to test.
- What I monitor on feat3 is that it does not make
- a huge difference versus unstable,
- but CPU is reduced by 25%,
- with less attestation job wait time
- and a little bit less GC time.
- However, I need to deploy it on a mainnet node
- to see what the difference is there, because for now we verify the signatures on the main
- thread; I would like to see if we can get a stable main thread there.
- And locally it works fine.
- Other than that, I did a small PR in gossipsub to track the publish time,
- and I fixed a bug regarding the unknown sync when we subscribe and unsubscribe.
- It was merged into 1.9.2.
- That's it for me.
- Thanks, Tuyen.
- All right, next up, we have Nico.
- - Hey, so last week was mostly looking at closing
- the remaining tickets for 1.10,
- so the issues that were still open.
- Also looked a bit into the issues we had with Node 20.
- I think those are now basically fine.
- Still one concern is this issue I opened
- where consecutive requests to Lighthouse at least,
- caused a socket hangup issue.
- I checked this out, there are some workarounds to fix this,
- but it will require an upstream fix ideally
- in the node fetch library on node core itself.
- So yeah, maybe we just need to wait there.
- Yeah, I mean, the only users in production
- that could face issues are those who run Lodestar
- with a Lighthouse beacon node, I guess,
- which might actually be the case in the DVT setup.
- So we might consider waiting with the Node 20 update
- in that case, until this is resolved.
- Yeah, besides that, just a few minor UX stuff,
- improve the validator exit command a bit
- based on previous user feedback we got.
- And yeah, just looking into a few issues
- that happened on unstable and open,
- yeah, basically documented that.
- There was another thing where a user attempted
- to update our sync API to the latest beacon spec.
- And I checked this, what needs to be done there
- and found that it's not that trivial actually,
- because we basically overwrite the custom handler
- to set the status code,
- because it's not possible in the normal API implementation.
- So yeah, maybe it's not that great of a good first issue,
- but yeah, I'm checking that maybe then I can fix it myself
- and the user can maybe take a look at that.
- Also, there's an issue that we don't report
- a correct status code right now.
- We always say 200, but the beacon spec wants 206
- if the node is syncing, which is not what we report right now.
- So might as well fix this.
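The status-code fix amounts to a one-line mapping. A hedged sketch, assuming the beacon node API convention of 200 when ready and 206 (Partial Content) while syncing; the function name is illustrative:

```typescript
// Sketch of the fix discussed above: return 206 instead of a blanket 200
// while the node is still syncing. Function name is an assumption for
// illustration, not Lodestar's actual handler.
function healthStatusCode(isSyncing: boolean): number {
  // 206 Partial Content signals "started but not yet synced"
  return isSyncing ? 206 : 200;
}
```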
- Yeah.
- And besides that,
- yeah, just trying to document more on the issues
- that I opened.
- Still trying to find out what is keeping the process
- from shutting down.
- Yeah.
- So yeah, mainly investigating that.
- Thank you, Nico.
- - All right, next up we have Kamin.
- - Hey, yeah, I guess I would recommend
- that we hold off on node 20 for our Docker image
- until we get a fix on that consecutive HTTP requests
- causing socket hangup.
- But yeah, so last week.
- We got the, I got the
- IPv6 support added.
- I think that was Friday.
- That may have caused that
- problem with the metrics server,
- but I somehow doubt it.
- I can also help take a look at that, Nico,
- but much appreciated, Nico,
- for helping with the Node 20 PR and debugging issues around that.
- Other than that, I was looking at Tuyen's PR for js-multiaddr, where
- he refactored how we get the multiaddr path from the multiaddr, and I have another PR open
- that kind of takes his approach.
- It takes it to the logical extreme
- of pre-calculating everything
- in the construction of a multiaddr.
- So I think the libp2p guys
- are getting back from a vacation this week.
- I don't know if they're all back yet.
- So hopefully that'll get reviewed in the next week or so.
- Yeah, I've been a little disheveled the past week, and not at full capacity.
- But I'm going to be getting back to full capacity next week.
- Sounds good.
- Thanks, Kamin.
- And yeah, thanks, guys, for all the help with the Node 20 stuff.
- Has anybody upstreamed stuff to Node before?
- Do we have any sort of expectation of when that could potentially be fixed?
- I'm wondering if we should just completely not upgrade Node for v1.10.
- Oh yeah, I recommend we revert that PR for 1.10.
- Okay, because otherwise we're going to break compatibility if people are using a Lighthouse
- validator.
- Yeah, so we should hold off, and it's not critical that we move to 20 right now anyway.
- I didn't see any spectacular performance gains.
- So I don't think that we're missing out a lot.
- Because there's things that are faster
- and there's things that are slower.
- So it ended up just being about the same.
- - Is there already a PR in to fix the issue
- or is it something that is not identified even?
- - Yeah, maybe Nico can speak to it.
Yeah, so there is a PR in node-fetch that addresses it.
- So I think it can be addressed there, but then there's also, I saw a discussion on the Node core repo
- where they discuss a fix potentially there.
- Because the weird thing is, if you yield to the macrotask queue, basically the issue does not happen.
- So it seems that Node is reusing a socket, the same connection,
- which is already closed but not cleaned up properly, or something like
- this. Oh, and it's thrown in the loop, okay. You know what, if you don't mind, if
- you can point me at that, we can take a look together; I'd love to do that.
- I'm starting to do a couple commits here and there with them, and it's something I
- can even maybe message Mike Dawson about and ask him if there's a potential fix on
- that.
- Yeah, good.
- Yeah.
- So I referenced the node issue.
- Even Ben was also part of that.
- And yeah, Matteo and, I think, the core contributors are there.
- Okay.
- Pretty involved already there.
- So yeah.
- We could at least, I mean, we could bump it. I'm sure they've got exactly a billion
- things on their plate,
- but this is something that probably fell through the cracks; it seems like it was, you know, March
- or April when that issue was being discussed.
- Yeah, it's all in here. The upstream issue I linked as well in the chat here.
- If you want to take a look after the meeting. Thanks. Yeah.
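The macrotask-yield workaround Nico mentions can be sketched roughly as below. This is a hedged client-side mitigation sketch, not the upstream fix; `fetchFn` stands in for the real HTTP call:

```typescript
// Sketch of the workaround: yielding a macrotask tick between consecutive
// requests gives Node a chance to tear down the half-closed keep-alive socket
// before it is reused, avoiding the "socket hang up" error. Names here are
// assumptions for illustration.
const yieldToMacroQueue = (): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, 0));

async function fetchSequentially<T>(
  urls: string[],
  fetchFn: (url: string) => Promise<T>
): Promise<T[]> {
  const results: T[] = [];
  for (const url of urls) {
    results.push(await fetchFn(url));
    // Yield to the macrotask queue so socket cleanup callbacks can run
    await yieldToMacroQueue();
  }
  return results;
}
```

Awaiting a resolved promise only yields to the microtask queue; `setTimeout(…, 0)` is needed to reach the macrotask queue where socket teardown runs, which matches the observation that the issue disappears when you yield there.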
- Okay. Um, yeah, Matt, feel free to go ahead with any updates from your end as well.
- I have been pretty down for the count this week. I was feeling pretty rugged. I'm starting to feel more like myself, which is good.
- And I did a bunch more unit testing on the blinded and non-blinded block work.
- I got the CLI flag to turn it on and off.
- I did some PR work on the BLST repo as well in response to Gajinder's comments.
- But I'm just moving a little slow.
- I'm having a hard time.
- But so we'll...
- - No problem, Matt.
- Feel better soon.
- Take care of yourself.
- And yeah, we'll keep in touch and just take it easy.
- Okay, let's move on with Nazar then.
- - Thank you.
- Last week there were a couple of PRs open for some API fixes, and then I was
- working on integrating the Lodestar prover into the light client demo that we
- have. I observed that the light client demo is not working; it's broken, and it
- was using the 1.2 version of Lodestar. I thought I should first upgrade it to
- the latest Lodestar versions with the same codebase.
- Once it's working, then I will move it to the prover.
- But apparently there are some type changes,
- because of which the current Lodestar
- light client demo that we have is not working with 1.9.x.
- So I'm trying to fix that.
- Once that is fixed, I'll open the PR for it.
- And then, on top of it, I will open a PR
- to migrate the light client demo to the prover.
- And along with this migration,
- I would like to start an open discussion:
- what kind of stuff do we want to show
- on the light client demo?
- Because earlier we were showing
- a full proof tree on the light client demo,
- but with the Lodestar prover
- that is hidden in the implementation.
- The prover is doing everything in the backend for us.
- So if we want to migrate the light client demo to the prover,
- then unfortunately we will not be able to show
- the full proof in the demo.
- We can show whether the request was verified,
- proved or not, but not show the full proof itself.
- - I think it might be helpful to have our meeting tomorrow
- where we can all see how the prover works
- and what it's capable of,
- and then we'll be able to make a more informed decision.
- We don't need to show the proof at all.
- The proof can be logged in the logs,
- and then you just open the console, that's it.
- - Okay.
- Sure, then we can discuss this particular topic tomorrow.
- But I was asking if there is some document
- or issue in past which describes this demo.
- - The original website is dead,
- we don't need to respect it.
- - Okay.
- Okay, then you will see a PR from my side
- fixing the light client demo
- with the latest version of the packages,
- and then another PR tomorrow migrating it to the prover.
- And we will be seeing each other tomorrow as well
- for the demo.
- Thank you so much.
- That's all from me.
- Thanks, Nazar.
- Okay, I'm just gonna read out for the recording
the async updates from Lion.
- Last week, he progressed on the single secret leader election
- and max effective balance research fronts;
- multiple pending things
- are linked in consensus-specs from Mike Neuder.
- He wrote a beacon node resource doc under big state sizes.
- There's a link to that in the private chat.
- Good chats with Ansgar, Mike,
- and others about max effective balance and PBS.
- There's a clear need to reduce the rewards curve and ship MEV burn.
- So that's from Lion's side.
- And NC, if you have anything that you would like to add here, feel free.
- If not, we'll just open it up for anybody who wants to add any last-minute points for
- stand-up.
- I've got questions about reducing the rewards curve and MEV burn.
- Are there any documents, anything that we can read about that?
- Lion, do you have any, um,
- links for us in regards to the
- discussions happening around the
- reward curve and MEV burn?
- I know.
- OK, no problem. Yeah, we'll
- just follow up with it as we get them.
- Anything else for stand up today?
- All right, guys, thanks for coming out and have yourselves a great week.
- See you tomorrow for the prover call.
- Bye-bye, y'all.
- Bye.
- See you tomorrow.
- See you.
- Bye-bye.
- Have a nice week.
- Bye-bye.