- Okay. Welcome everyone to the July 25th stand up.
- Let's see what I have here on my list here today. Okay, one of the things that I wanted to cover was
- block production times. We've been seeing some issues with some missed blocks recently
- on mainnet on some of our Lido nodes.
- Tuyen was looking into some of this stuff.
- He has a proposal about when to start polling
- for proposer duties.
- I think that's 5409.
- I don't know if that had been reviewed yet though,
- but I'll let maybe Tuyen summarize some of the PRs
- that he has in regards to tackling some of this,
- and then we can go from there.
- - So I analyzed the issue of missed proposals
- at the first slot of the epoch.
- There are two issues.
- First is the delay at the validator client side,
- because it's a zero epoch look ahead.
- So usually we have to poll the proposer duties at the start of
- the epoch, and due to the IO lag,
- it's maybe a two-second delay.
- So I have one PR for that,
- to poll the proposer duties before the next epoch starts,
- because we have the prepare-slot scheduler,
- which runs the epoch transition for us already.
- Usually it takes less than three seconds.
- So I did the PR to poll the proposer duties
- one second in advance.
- If the epoch transition did not finish,
- it can wait for some seconds
- and then return, so that we can save some time
- on the validator client side.
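The scheduling Tuyen describes can be sketched roughly as follows. This is an illustrative TypeScript sketch, not Lodestar's actual code; the constants and helper names (`msUntilEpoch`, `dutyPollDelayMs`, `DUTY_POLL_ADVANCE_MS`) are assumptions for the example.

```typescript
// Sketch: poll proposer duties one second before the next epoch starts,
// so the validator client is not blocked by IO lag at the epoch boundary.
const SECONDS_PER_SLOT = 12;
const SLOTS_PER_EPOCH = 32;

/** Milliseconds before the next epoch start at which to poll duties (illustrative). */
const DUTY_POLL_ADVANCE_MS = 1000;

/** Hypothetical helper: ms from `nowMs` until `epoch` begins. */
function msUntilEpoch(genesisTimeMs: number, epoch: number, nowMs: number): number {
  const epochStartMs = genesisTimeMs + epoch * SLOTS_PER_EPOCH * SECONDS_PER_SLOT * 1000;
  return epochStartMs - nowMs;
}

/** When (in ms from now) should the validator client poll duties for `epoch`? */
function dutyPollDelayMs(genesisTimeMs: number, epoch: number, nowMs: number): number {
  // Clamp at zero: if we are already inside the advance window, poll immediately.
  return Math.max(0, msUntilEpoch(genesisTimeMs, epoch, nowMs) - DUTY_POLL_ADVANCE_MS);
}
```

In the real flow, the beacon node side would additionally wait a short time if the epoch transition run by the prepare-slot scheduler has not finished yet.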
- Other than that, at the beacon node side,
- I see we have some delay in producing the phase 0
- beacon block body.
- I did not look into that yet.
- - Great, thanks, Tuyen.
- Sorry, it's my mistake.
- He was describing 5794,
- which I posted on there, but yeah.
- - Did we figure out why the execution layer client took about 14 seconds
- to produce the block?
- I don't have an answer to that
- investigation at all yet.
- I think that is still in our --
- yeah, Gajinder did question
- that in our missed block
- proposal thread.
- For the missed slot 6940832, there was a huge delay in getting an execution block.
- I don't know if anybody has looked into that though.
- Okay.
- So that's the current state of these investigations.
- There is, of course, the fact that we are
- having a long epoch transition time, which was the 5409 that I had originally quoted.
- that is a blocker. I don't know if much investigation came out of this. I know
- Tuyen reopened it because of this block production delay issue that we're having.
- But we'll need to sort of investigate this a little bit further.
- But unless there's anybody who wants to add
- to what's potentially happening here, I think we do know, but we don't really have a plan
- yet to tackle that outside of Tuyen's PRs that he's working on.
- I see that sometimes processing a block takes more than one second, which is huge.
- I mean, we should also look at why the builder response also came so late because I mean,
- Builder could have also saved us. But the builder response was very late.
- Okay.
- [AUDIO OUT]
- OK.
- And it sounds like there's a bunch of different pieces
- that are being tackled here.
- And I don't know that all of them
- are really necessary to fix before we
- think about cutting a release.
- So the epoch transition time, that
- seems like something that we would like to fix.
- But it's also something that has already existed in our--
- Yeah, yeah, I should not say it's a blocker, to be honest. Yeah, I used the wrong
- word. So it's just nice to have. In my PR, when the epoch
- transition did not finish, we can wait for a while also. So yeah, we can work on that later. But
- an epoch transition of more than three seconds is bad.
- Okay.
- Any other points that anybody wants to add to this?
- If not, we'll move forward with some other follow up issues.
- I was curious about the status of the network thread, if anyone has done
- any research on that lately.
- It seems like, just thinking
- about where we're at for our next release,
- we're not ready.
- Clearly, we're still trying
- to get a few things in.
- But do we think that would land in this release,
- or what are the blockers that are still open for that?
- Yeah, so I did message Tuyen about this yesterday.
- Maybe I'll give Matt a chance to also chime in on this.
- But one of the big main issues that we're still having
- is the event loop lag.
- So I don't know if there's an update from Matt about that,
- if there was any work done on it.
- Yes, I put up a PR last night, and it's actually just about done getting merged, to add metrics
- for the last thing that Ben wanted, which was the message queue between the worker and the main thread.
- So that is a PR now. And that's what I'm going to deploy to feature one and try to pull some metrics.
- And then I'm also going to turn back on the extra new space
- in that same PR and see if that affects things.
- But it didn't make a huge difference.
- It did actually reduce a lot of the lower level metrics,
- but it didn't seem to affect the loop much.
- So the last thing that I needed to give Ben
- was the worker queue, like the message queue between the worker and the main thread.
- And then from there, we were going to start the discussion again.
- And I'll add the two of the things I'm going to ask in my update back to him for that.
- And that's about it, actually.
- Cool. Also, we merged in 5747, which was verifying signature sets with the same
- attestation data. Tuyen said that gave us more stable peering when the network thread
- was enabled. So we do have that now. I thought that 5729 also had to be merged for that,
- to verify attestation gossip messages in batch.
- - Yeah, so the new BLS APIs have
- no consumer at this moment.
- So that's still a work in progress
- to implement the indexing, the CPU,
- and the validation logic in batch,
- which I should work on next week.
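The batching idea behind those PRs might look roughly like this. The types and the `verifyBatch` callback here are hypothetical stand-ins, not the actual BLS API: a real batch verifier would combine BLS pairings, but the control flow (try the whole batch, fall back to per-set verification on failure) is the same.

```typescript
// Sketch of batched signature-set verification with per-set fallback.
// All names are illustrative; this is not the Lodestar BLS API.
type SignatureSet = {message: Uint8Array; publicKey: Uint8Array; signature: Uint8Array};
type BatchVerifier = (sets: SignatureSet[]) => boolean;

function verifySetsWithFallback(sets: SignatureSet[], verifyBatch: BatchVerifier): boolean[] {
  // Happy path: one batched verification for the whole group.
  if (verifyBatch(sets)) return sets.map(() => true);
  // Batch failed: verify one by one to identify the invalid set(s).
  return sets.map((s) => verifyBatch([s]));
}
```

The win is that for gossip, the common case is "all valid", so most groups cost one verification instead of N.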
- [BLANK_AUDIO]
- Okay, and then there were some recent things that were also merged
- in to hopefully reduce the IO traffic related to the network thread.
- So that was the two subnets per node.
- And then I think Tuyen is also working on not subscribing to short-lived subnets
- too early. So that one is also going to be up for inclusion. And then hopefully
- that leads to a much more stable network thread.
- Okay, I guess I was just thinking like, as far as release planning and everything goes,
- what our strategy is for all of it. It seems like we don't have any major pending,
- urgent-timeline things going on where we need to cut a release right now.
- So yeah, I was actually thinking of maybe doing a hotfix.
- There were some PRs here that were merged related to helping to potentially alleviate some of
- the sync committee issue stuff that we were having.
- I don't know if that's something that you guys want to do
- either or--
- but sorry, I don't want to get off topic with that.
- >>I think it's definitely on topic.
- I guess the question is, is our current unstable stable enough
- to attempt a release candidate or just a 1.10 scaled down
- version, just whatever we have here on unstable,
- Or is it unstable enough that we need to actually just
- do the hotfix if we want to release those features?
- I don't see any blocker for 1.10.
- Otherwise, please update me.
- Do we want to update to node 20 for that?
- Right.
- We want-- we probably don't want to.
- We want to stay on 18.
- But I think there was a fix for the issue
- I was looking into with Matthew.
- So I think they merged it today, right?
- Yeah, this morning.
- But it still has to be merged through--
- was it cross-fetch?
- I'm not really able to just update the dependency resolution.
- Okay.
- So I guess we can keep 20 if we just fix the
- issue, or we can downgrade. I think ideally we would upgrade, but
- we want to test it though.
- Because in theory, it should work.
- And I mean, it should work.
- But you don't have a day or two of metrics on it,
- just to make sure it's stable.
- >>Right, right.
- Yeah, we would go through our release candidate testing
- process.
- >>It is ready for that, though.
- >>Yeah, if one of you all want to PR that resolution,
- Or we could wait a few days for CrossFetch to bump the--
- -That was going to be on my updates,
- is that I was going to message them today
- because I was going to send them that info,
- and then the stuff that Nazar had found as well.
- But if you want, Nazar, if you think it's a good idea,
- I'm down.
- We could just update the intermediate dependency,
- the lower dependency of cross-fetch, you know, just so it's not a blocker.
- Yeah, I mean, I could quickly test the issue, it's pretty easy to reproduce,
- so I can see if it's actually fixed now.
- I fixed it in my PR, like in the React application that I'm working on,
- by downgrading the lower dependencies of cross-fetch,
- like the lower dependencies of our package's cross-fetch, to 3.1.5.
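If the team goes the pinning route, one way to do it in a Yarn workspace is a selective dependency resolution in the root package.json. The version here is purely illustrative, taken from the 3.1.5 mentioned above; the actual pin would be whatever the team settles on:

```json
{
  "resolutions": {
    "cross-fetch": "3.1.5"
  }
}
```

This forces every transitive use of cross-fetch to the pinned version without waiting for upstream packages to bump their own ranges.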
- Okay, yeah, if that is, you know, a safe and logical thing to do, I would rather
- take an action where we can control things rather than depend on other people
- to bump stuff. So if we do want to do it that way, let's get that in. And then we'll
- stay on Node 20. And then we'll try for an RC release right after,
- if that works for everyone. Because we do have, I think, close to 90 commits
- between stable and unstable right now. So it's quite a few changes. And ideally, I'd like to
- push out a 1.10 release to get us all caught up, if it's stable enough
- and there's nothing blocking it right now.
- Cool. Any opposition to that at all?
- Okay.
- I had a question about 5225, that was the code coverage test.
- It's been sitting there for a while,
- so I was just wondering if there was a reason why
- we've been waiting on that one.
- There is no reason, it was just a low-priority
- thing. OK.
- I merged it because it seemed
- self-contained enough that we could just
- delete it if we wanted to,
- but it's a nice feature that we might
- try. I don't know why he
- had this drive-by PR, but he's a security researcher at the EF and
- figured it might help him with some things he's doing, so.
- Okay, cool
- Thanks. I think to make it useful we should maybe
- see if we can aggregate the coverage and get an average, and then maybe also update the badge that we have in the readme.
- I'm not sure how easy that is with that CI tool, but yeah.
- Right.
- Right now, it's not in use anywhere.
- It just adds a script in package.json,
- which is not being used unless you decide to run it.
- Cool.
- Any other points specifically for planning at this point?
- Real quick, just looking at cross-fetch and node-fetch: they updated node-fetch to 3, and 2
- is the major version in cross-fetch, so that update that they just merged today
- is not going to be available to us on a minor semver. We wouldn't be able to
- actually use it unless we update,
- and they'd have to update, because they're on 2.
- >>I think we could probably just downgrade cross-fetch.
- I updated it to, I guess it was a 4.0 in the Node 20 PR.
- But I think actually it wasn't even necessary.
- And it seems like it's created all these problems when--
- it's like, we can just downgrade it back to 3.something.
- Yeah, I think that's the right strategy, because if you look at the
- download stats, 4 is pretty immature right now. It does not have a lot of downloads
- compared to 3, so maybe we should wait for it to become more used by other people,
- and then we start using it.
- Yeah, this doesn't solve what Matthew
- mentioned, right?
- Like cross-fetch might not work with
- Node-fetch version 3
- right now.
- Yeah, we can take it offline, just so we don't do some weird stuff, but we should be
- able to figure something out. But I think the downgrade is probably the way to go, and it should work.
- Because that addition that they basically took back out
- was adding the close connection,
- like the Connection: close header that got added.
- The agent is what's auto-adding a keep-alive,
- and that's what's actually causing
- the successive closing of requests.
- I'll go into it in my update,
- but if we downgrade,
- we should be fine.
- - If that's the case, actually,
- maybe we'll just go right into updates
- and then we'll just start with you, Matthew,
- and then you can give us the whole update.
- So there was an update that was done to NodeFetch
- that added a close connection header,
- and then the agent has keep-alive that's also being applied,
- and it's basically causing an issue
- where it's closing the socket instead of keeping it alive,
- and then there's a recycle issue in Node,
- which is an existing bug.
- So what I did was I looked into both the bug in Node
- and then the bug in NodeFetch,
- and they're kind of conflicting,
- and that's really what's driving the issue
- with the upgrade to version 20.
- The fix for it is actually structural in Node.
- It's not an easy fix and it's something that...
- It's an issue that had come up in Node 8, I believe,
- and then it kind of went away in Node 12,
- and then it came back again at some point.
- It has to do with how the socket and the agent
- and the readable stream interact.
- It's really like a design issue.
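The conflict being described can be reduced to a small sketch: a keep-alive agent expects to reuse sockets, while an explicit Connection: close header forces each one shut, so every request tears down a socket the agent planned to recycle. The names below are illustrative, not node-fetch internals:

```typescript
// Sketch of the header/agent conflict. A real fix lives in node-fetch
// (dropping the default header) and in Node's socket recycling; this just
// illustrates reconciling the two settings at the call site.
type RequestOpts = {headers: Record<string, string>; keepAlive: boolean};

/** Drop a Connection: close header when the agent is configured for keep-alive. */
function reconcileConnectionHeader(opts: RequestOpts): RequestOpts {
  const headers = {...opts.headers};
  if (opts.keepAlive && headers["connection"]?.toLowerCase() === "close") {
    // The header would defeat the keep-alive agent, so remove it.
    delete headers["connection"];
  }
  return {...opts, headers};
}
```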
- And that's the reason why it hasn't gotten fixed yet. I did actually add some information to the ticket that we have up,
- the issue, and basically I pinged the
- person who was supposed to be putting up a PR in April,
- and just asked the question of: are you still going to be doing that, or is it something you'd like us to help with,
- in order to be able to resolve it? Because I've got a couple of good ideas of how it might be possible
- to fix it, looking at what was done before and just kind of how the classes interact.
- So it's possible that we might be able to fix it, but
- he says he's already got a PR; it just hasn't been put up yet.
- It's trickier than it looks, and my guess is that's why it's not done yet.
- Even Matteo
- Collina basically said that this is going to take a couple of days
- for someone on the Node team to look at.
- And so it's a tricky, sticky wicket.
- And then, because of that,
- it was surfaced by a header that got added in node-fetch,
- which we're importing through cross-fetch.
- And then that PR got merged
- that takes that header off by default,
- so it basically falls back to the Node agent,
- and it should resolve the issue of auto-closing the sockets.
- We got to test that.
- So it should, in theory, be resolving that issue.
- But we'll see.
- And then basically, his research all
- shows that it should work fine.
- But honestly, I didn't test it.
- So we'll have to double check that it actually works.
- But it all looked like it worked.
- and they tested it on the NodeFetch side.
- So it has been tested,
- I just didn't personally do it to sign my name on it.
- Also, for my update, I did a PR for the du command.
- I had to restart my computer and found a weird thing
- where basically the du command was failing in the unit tests,
- so I put that up.
- I put up a PR for the network worker message latency,
- in order to be able to get some metrics for that.
- And then I'm going to follow up with Ben
- after I get some metrics running in order to just let him know what's happened there.
- And then I'm also going to be adding a question from Tuyen about breaking up the run-microtask function
- to see if we can get that scheduled a little better
- to improve the network performance.
- And then I'm also going to be adding a question from Nico
- about setTimeout versus setImmediate, and just strategies for how to use the scheduling methods
- that we've been using, if he's got any suggestions essentially. And then the other
- thing I'd like to be doing this week is the BLS stuff, which is pretty close, and hopefully Gajinder
- will be able to get it over the hump,
- because when we were testing that a couple of weeks ago,
- it really did stabilize the network a lot
- just by freeing up the main thread
- in order to be able to process a lot of the other,
- the work that's existing.
- So that's something I'd really like to be able to push over
- the hump this week if possible.
- And in particular, just by not having to deserialize
- and serialize the keys in order to be able to convert
- from the state transition back in through all
- of the validation functions.
- I think we'll, I mean, there's just a few things there
- that I think are gonna free up a lot of resources,
- which I think is gonna stabilize the node a lot
- 'cause it really was doing a really good job.
- Assuming everybody is okay with that,
- that's really, I think, gonna be my goal
- to just see if we can get some metrics on that.
- And then following up with Ben.
- >>Awesome.
- Thanks, Matthew.
- All right.
- That sounds exciting.
- I will now hand it over to Cayman for any updates that you might have.
- >>Yeah.
- So this past week, to be honest, I didn't-- I was not very productive.
- I got a small PR merged in the P2P
- that allows us to manually dial the identify protocol, which
- should help with a very minor thing in Lodestar.
- May help us identify peers a little bit better,
- identify the client versions a little better.
- I've seen sometimes we have an unknown when we might--
- that unknown peer might actually be related to a client
- that we know about.
- Other than that, I was closing out some--
- trying to close out old PRs in our queue.
- And I'm going to keep on doing that this week,
- specifically the discv5 using vanilla events.
- That's a prime candidate.
- And I would really love to look again at the multi-fork types
- PR that I had out.
- There was a type error that was blocking it.
- But I'd like to see if I can revisit that because it's
- going to keep on--
- I think just having a better organization of our types
- is going to be helpful as we get more and more forks out there.
- Other than that, if anyone has any specific things
- they want me to review, I'm free to take a look.
- So ping me.
- But yeah, I'm back to full availability
- now that my family is no longer in my house.
- Cool, thanks, Cayman.
- All right, I'll hand it over to Nico.
- Hey, so I was mostly looking into the issues I opened last week.
- There was the one where the process was hanging; it turns out that was the network worker.
- It was not related to any of the IPv6 updates we did.
- There was the issue that our metrics server was actually not
- configured to listen on localhost. That's fixed now
- with the PR I did. Besides that, I was basically trying to
- investigate why our sim tests were hanging, for this one PR I
- closed, where I changed the order of how we shut down the
- peer manager. And then looking at those logs, I found out that
- in our sim tests, we actually have a lot of these "cannot set
- headers" errors. So this led me to look further into that issue.
- And I think this is now finally correctly resolved. So it was
- just a race condition how we close the event stream
- basically. And so in some cases, we still emitted events to the
- event stream even though it was already closed or the stream was no longer
- writable. So this should be fixed now. Yeah, besides that, I was also looking a bit
- into that Node 20 issue that we had. And what I want to finalize now is
- the other open PR regarding the node health API. There are good
- suggestions there on how we can improve that. From Nazar, I think it's a
- pretty good approach that he suggested. So I will implement
- that. And then hopefully this week, make some progress on
- looking into how we can improve our regen strategy, and
- eventually talk with Lion about it.
- Which strategy?
- Just looking into how we do regen at the moment, and state
- caching. And yeah, I was looking at strategies that other
- clients use. But yeah, there are still some points that I
- need to better understand to really make proper decisions
- about what we can improve, basically.
- I'd like to give a shout out to Nico for holding down the help
- channel and always seeming to have the answer for everybody in
- there. I just think it deserves commendation. You do an amazing job at all that stuff.
- Yeah, man, big ups. Big ups.
- It's really fun for me to have users. So
- you're really good at it.
- The thing is, it teaches you a lot. Because if you try to answer,
- I guess that's always my favorite approach to learning stuff, just helping with other people's software stuff.
- It's very impressive to me, because I learned a ton from your answers in there also. I just wanted to specifically call it out. What sparked it was the guy who started a testnet from Genesis, which is pretty cool. I hope he actually puts that repo up, because I'd love to be able to see that.
- Yeah, I think he's doing great work. Let's see what comes out of it. Because that whole
- devnet or testnet was not... I mean, I also tested it, but not as extensively as he does now.
- So I think it's good that we know it actually works.
- And he has a compose. And it's not starting from phase zero; he started halfway through,
- which I think is also a very cool thing. We should even turn it into
- a highlight or something, or maybe do a blog on it, I don't know. There's some
- opportunity there. Yeah, definitely. And I'd also like to add that, you know, the work that you do, Nico,
- which helps other community members build tooling or guides or whatever that may also help the
- community, that is actually a flywheel we've been trying to get going with a community,
- basically, specifically to Lodestar. If we're able to help other builders do some of the work
- that we otherwise wouldn't get to or will help improve the Ethereum community in some way,
- Like that's awesome work.
- It just exponentially increases the output
- of what we're trying to do here.
- So thanks a lot, Nico.
- And okay, we'll move on to Nazar.
- - Thank you so much.
- Yeah, lately I was struggling using the prover package
- in the React application.
- There is a very well-known pattern of conditional exports.
- It turned out that package conditional exports are not standardized, or most of the
- libraries are not using them properly, in different ways.
- So I made some changes to make those conditional exports work for webpack.
- For reference, conditional exports are when a build tool, like the TypeScript
- compiler or webpack or any other build tool, can detect whether it's a browser or
- a Node environment, or what kind of conditions apply, and then appropriately switch the
- import path at resolution time.
- There was a bug which I fixed for webpack; it was working, but then we have a package in
- our repo which lints the readme files, and that package stopped working
- because it has a different way of detecting conditional exports. It was only
- detecting one level; it was not resolving them deeply. On the other hand, webpack can do nested
- conditional exports as well.
- So due to this limitation, I banged my head a lot,
- but finally I went for the named export.
- So now if you want to use the prover in the browser,
- there is a named export for the browser,
- so you can rely on that.
- That is much more streamlined across build tools,
- so that will work.
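For reference, a conditional-exports map with a separate named browser entry point, as Nazar describes, might look like this in package.json. The paths are illustrative, not the prover package's actual layout:

```json
{
  "exports": {
    ".": {
      "import": "./lib/index.js",
      "require": "./lib/cjs/index.js"
    },
    "./browser": "./lib/browser/index.js"
  }
}
```

Consumers then import the explicit `"<package>/browser"` subpath instead of relying on every tool to resolve a nested `"browser"` condition, which sidesteps tools that only resolve conditions one level deep.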
- And there was one other discussion with Nico
- about a situation where, when we shut down a beacon node,
- we see an error message in the console log
- which says that execution went offline,
- which actually is not the case,
- because execution is there; we just shut down the node.
- Apparently it's an abort error which somehow got detected
- as a communication error
- between the execution layer and the beacon layer,
- and then our logic reports this error
- as the execution having gone offline.
- So there is a PR I opened for it.
- I'm writing some tests for it.
- We'll finalize those tests and then
- make the draft PR ready for review.
- And there is one other PR I'm working on right now:
- a logical error I found in one of the implementations in the prover
- when we don't have enough finalized blocks.
- So if we initialize the prover and we only have one finalized block
- at that time, then there is a logical error which limits fetching some payloads.
- So I will open one PR for it, and then if both work fine, I will make the
- React application PR ready. It's almost done.
- It's just limited because of this
- logical error, which I just found this morning.
- Yeah, so you can expect three PRs from me, maybe today or tomorrow.
- Thank you.
- - Okay, well, next up we got Tuyen,
- if you have anything to add.
- - Yeah, so I finished the new BLS API.
- Next, I will work on the indexing,
- and hopefully I'll have a PR tomorrow.
- Other than that, I submitted two PRs.
- One is to poll proposer duties before the next epoch.
- The other one is not to subscribe to too many subnets.
- The context is that when we join a sync committee,
- a lot of long-lived subnets appear,
- up to 50 on average.
- And I see that we receive like 120K message IDs
- in the gossipsub IHAVE messages, which increases the bandwidth a lot.
- And for each of the message IDs,
- we have to convert to a string,
- and we have a lot of IO lag at that moment.
- This afternoon when I looked into the rated network with Lion,
- there were times when the rated network score decreased a lot,
- and that happened when we joined the sync committee too.
- So the fix is not to subscribe to too many subnet peers.
- Right now the target is six subnet peers.
- So please review that.
- Next, the other thing I will work on
- is not subscribing to short-lived subnets too early.
- Right now, if we have a duty in the next epoch,
- we start subscribing at the beginning
- of this epoch, which increases the bandwidth a lot.
- We'll try to subscribe just some slots in advance.
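The timing change reduces to a one-line policy: subscribe a few slots before the duty slot instead of at the start of the epoch containing it. The constant here is illustrative; the real lead time is whatever the PR settles on:

```typescript
// Sketch: compute the slot at which to subscribe to a short-lived subnet
// for a duty at `dutySlot`. Subscribing only a couple of slots ahead,
// rather than a whole epoch ahead, cuts gossip bandwidth substantially.
const SUBSCRIBE_SLOTS_AHEAD = 2; // illustrative lead time

function subscriptionSlot(dutySlot: number, slotsAhead = SUBSCRIBE_SLOTS_AHEAD): number {
  // Clamp at slot 0 for duties near genesis.
  return Math.max(0, dutySlot - slotsAhead);
}
```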
- And the last thing is the noble guys
- have a new chacha-poly.
- And in the last update,
- he said that he will support the destination
- as an optional param.
- This is what we want.
- So I will do a performance test to see
- if it's actually better than our AssemblyScript version;
- if so, we can switch to that.
- That's it for me.
- - Thanks, Tuyen.
- Yeah, some of those fixes that you're putting in there
- to help with the sync committee issues,
- that was my rationale for potentially
- pushing out a hotfix release.
- But if we're gonna go ahead and do a 1.10 anyway,
- hopefully that goes well,
- we can sort of play it by ear and see how the RC does instead; that might be a better way to go.
- Okay, next up we got Lion.
- Hey, so last week was Paris. I think I gave an update on it last week.
- I guess I had an interesting conversation. We were chatting a lot with Terrence about
- multiple things and he asked a question. So because we were discussing what's going to
- happen when the state is so big, and yada yada. And basically he asked me, how long does it take
- Lodestar to process all the attestations at the aggregation moment? And basically my answer is,
- we just don't. So I was spending a bunch of time trying to understand how bad this problem
- is. So we have a new dashboard, called Lodestar good behavior, which is a
- selection of the things we do that do not affect us directly but affect others.
- I don't know why it bothers
- me this much, but I don't think it's okay that Lodestar keeps growing while being a
- negative to the network.
- Especially like the problem that we have where we essentially drop messages.
- That means that messages that are propagated through the network don't get there.
- If Lodestar had a significant share, this would be pretty catastrophic for the network.
- I guess at the rate that we are at now,
- the redundancies in the network
- mean it does not cause a significant effect,
- due to the fact that we have so many aggregators
- and everyone has a decent amount of mesh peers.
- But we are basically a sink,
- where if you send us something, it will just not get through.
- So that's why I'm coordinating with Tuyen to see,
- which is kind of the realization, right?
- If we are not processing attestations in time,
- why are we doing that at all? It's kind of stupid. We could even turn off
- our aggregator entirely. Just make Lodestar performant and then focus on that part. Because
- otherwise it doesn't make sense. We are spending all this time to aggregate the attestations to then
- not do it in time. And if it doesn't get to the aggregator within a slot, it's useless.
- So, yeah. Working on that, at least. I think doing something radical like this could give
- us a bit of time to not overload Lodestar while we get something more permanent, like
- the networking threads. I think I would be okay with that compromise, at least if
- we know that it's temporary, and we just know that otherwise it's kind of useless anyway.
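The deadline reasoning reduces to a simple check, sketched here with illustrative names: an attestation that cannot reach the aggregator within its slot is not worth queueing at all.

```typescript
// Sketch: drop work that has already missed its slot window instead of
// processing it late. Constants and names are illustrative, not Lodestar code.
const SECONDS_PER_SLOT = 12;

/** Is `nowSec` still within the slot that started at `slotStartSec`? */
function withinSlotDeadline(slotStartSec: number, nowSec: number): boolean {
  return nowSec - slotStartSec < SECONDS_PER_SLOT;
}
```

Applied at the queue boundary, this turns a node that silently sinks late messages into one that at least stops wasting CPU on them.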
- And yeah, besides that, all the different research paths that I mentioned last week
- Just continue, no big news there.
- And that's it for me.
- - Thanks, Lion.
- Great update, something for us to think about
- how we might wanna approach this.
- Okay, just in the interest of time,
- we have Gajinder and NC left,
- if NC has anything to say.
- Do you have any sort of updates
- in regards to some of the work that you've been doing?
- - Right, yes.
- Yeah, so on the ePBS,
- I finally had some capacity
- to start this project last Friday.
- And since then, a couple of things have happened.
- So first off,
- Terence invited Lion and me to join the ePBS discussion
- over on the Prysm Discord.
- So I think, from now on,
- all the ePBS discussion is going to happen
- over on that side.
- And also, he set up an initial touch base
- with, you know, Lion and me,
- and also a couple of Prysm folks, next Wednesday.
- So I hope, you know, we can have some
- productive outcome or discussion.
- And then he also posted his first draft
- of the P2P spec for ePBS.
- It's something I still need to review.
- Right, so for this coming week,
- I need to get myself up to speed on the P2P,
- especially the network layer stuff,
- mostly the libp2p and also the gossip stuff,
- just enough that I can understand
- what Cayman is doing with the ePBS P2P side.
- And also, over the next two weeks or so,
- I want to write up a project document on ePBS,
- just to formalize the project a little bit,
- maybe set some objectives and goals,
- and split up the project into a couple of phases,
- so that it's easier to organize and to track progress.
- So it seems like right now
- we are focusing on the P2P side of things for ePBS,
- and for the rest, we still don't have
- any meaningful discussion yet.
- Yeah, that's all from me.
- - Awesome, thanks for the update.
- All right, and we have Gajinder.
- - Hey guys, so I worked on PRs for forkchoiceUpdated v3
- for devnet 8, then I worked on the broadcast validation PR,
- and I hope I've addressed the concerns
- raised by Cayman and Lion.
- And then I tried to sync Constantine, the Verkle testnet,
- but I was facing issues regarding loading the genesis.
- I spent quite a lot of time debugging them,
- and finally figured out that Constantine had a change
- over the current local branch in the payload header.
- It now has an execution witness header
- rather than the execution witness.
- So I'll try to make that change
- and then try to run the network again.
- And then I basically had discussions, as well as raised PRs
- on the consensus specs, regarding publishBlockV3.
- And it seems that we will need to move our builder-versus-execution race
- to the beacon node rather than the validator,
- which is what we are doing right now,
- because all the APIs are now in a format
- that assumes this race and selection
- is happening in the beacon node.
- So that is something that I'll pick up.
- So that is something that I'll pick up.
- And then there is a PR on consensus specs
- regarding parent beacon block header.
- Again, it seems that the EL guys were in favor of the PR but the CL guys are not, and on tomorrow's
- call it will most probably be decided whether this PR will be included or not.
- Yep.
- That's all.
- Thanks, Gajinder.
- Okay.
- So thanks guys for coming out and we'll see you later.
- - Bye guys.
- - Bye. - Bye.
- - Have a great week. - Bye-bye.
- - Have a great week, everybody.
- - Have a good week, bye-bye.