Hello, everyone. Welcome to the July 18th standup. You guys may have noticed a new person in the call today. This is NC; he's going to be working with us as a freelance contributor on some enshrined PBS stuff. So everyone, welcome NC. And feel free, NC, if you want to make a quick intro. These are pretty much our team here, if you want to say hi.

Yeah, thanks, Phil. Hey guys, my name is NC. I've been in the Ethereum space for the past ten months or so. I joined the Ethereum Protocol Fellowship and made some contributions to Lighthouse, and also to Besu on the execution layer side. So right now I'm picking up this project, working with the folks on Lodestar to get the ePBS stuff going.
Great, NC. Thank you. Yeah, so these are the guys here; I can make further introductions for you a bit later. But yeah, welcome to the team, glad to have you here. And we'll keep going forward with standup now. One of the things that I wanted to make sure about is that we did want to see a demo of the prover at some point. I don't think I got a concrete answer yet about when we'd like to do that. So Nazar, if you could just give us a brief update on where you're at with that, and whether you think it would make sense to do a demo when, I guess, Lion is not at a conference, probably next week or something like that. How do you feel?
I think I said on Friday, Wednesday, but I forgot to send the invite. Yeah. So in my mind it was Wednesday, tomorrow. Oh, okay. If everyone is here, if they agree and they're available, we can do it tomorrow.
I think the only person that wouldn't be available, I guess, is Lion. I'm not sure what his schedule is like over at EthCC, or if he just wants to watch a recording of it. Do you have any preference, Lion?

I can do my best, otherwise.

Okay. Does that work for Lion's time? He's gonna try his best to make it. Okay. Are there any objections to tomorrow? I think we had said one hour before this current time. Mm-hmm. Yeah, so any objections to that? Okay, great. We'll send out the invite after the call or...
Yeah, I will send it. Oh, yeah, sure. Okay. Thanks, Nazar. Okay, we had pushed a hotfix release, v1.9.2, to our CIP fleet. It included a couple of fixes in relation to reducing the race time; I think Gajinder threw that one in. There was also a change that was made by Tuyen, and we also threw in that DVT PR from Nico. Did anybody get a good chance to look at how our CIP nodes are performing for the last 12-ish hours, and whether or not it is suitable for release?
I did have a quick look before the meeting. It's basically the same performance as 1.9.1.

Any other observations?
Okay, I noticed... Yeah, I noticed that the attestation subnet... the count of our peers, or mesh peers, seemed choppy or irregular. I don't know if anyone else saw that.

Yeah, judging from mesh peers is quite tricky, because it's just 12 hours. It takes time if you want it to be stable.

Right. I do see the mesh peers slowly creeping in an upward direction, but it does take time for it to stabilize at around eight or so.
But I'm not seeing anything that's terribly worse. We do have a couple of spikes here on the gossip scores average; I'm not sure if that's anything to be concerned about. Otherwise, as long as it's not detrimental... I think what we especially wanted to see with this hotfix release was any changes in the block production time from that incident we had a couple of days ago, and whether 2000 milliseconds would make a difference. Have we had any produced blocks on this node in the past 12 hours?

I think it might take a while.

Yeah, it might take a while with 64 validators to get a produced block here, to know whether or not it made a huge difference. I would say if we are comfortable with it, we should release it. If not, maybe we do a bit more of a slower rollout to some of the Lido nodes, where we can see block production metrics over time, because that's where we're probably going to get the most metrics on block production. I don't see anything that's super dangerous in terms of releasing it as a hotfix. If anything, we'll collect the data from the Lido nodes as we upgrade them to 1.9.2 and just keep an eye on the block production metrics. Is there any objection to that?
No, I think it's fine to cut the release.
OK. Let's go ahead with that. As well, we were running a specific commit, one of the unstable ones, on feature one over the weekend. That would have been a potential candidate, I guess, for a 1.10. Has anybody observed anything concerning that they want to bring up for that? I'll be honest, I did not take a look at those metrics yet, even though we have, I think, about two or three days' worth of metrics on that group of servers. If no one's taken a look, we can take that async and then just continue on with planning.
Yeah, I think we want to fix some of these bugs that Nico filed over the weekend before we commit to a release candidate.

Sounds good. Okay.
Next up: I had a conversation with the DevOps team, and also Cayman, about some of the sort of metrics that you can get out of something like Grafana, to be able to cater to some of the information that you're looking to get. I think Jon may have dropped off the call, but I'm going to get the DevOps guys to look into building their own Grafana dashboards for their own needs. I would also like to see if anybody here could benefit from some sort of a Grafana education session, whether that is like building... yeah, sounds good, Matthew has his hand up.

I think I could definitely use a refresher as well, in terms of what the content is.

It would definitely be something that we can compile, as to what sort of burning questions are on people's minds. If you have an idea as to what you would like to see from a Grafana education session, we could also set that up separately, similar to what we're doing with the prover, and then just get people up to date on how to use Grafana, basically. So if you have anything that you'd like to mention here now, feel free. If not, we'll just compile them and set a date.
Some more PromQL as well.

PromQL, OK. Anything else from the rest of you guys as to what you'd like to see in a bit of a Grafana education session?

Actually, also just going through some of our dashboards, because I think that's something that would be nice: to look at the metrics we have and talk about them, their importance, and which ones are more or less important for different types of research. That would be great, actually.
Yeah. Even an explanation of... because not all of them are necessarily expressed in line graphs or whatever; some of them are also heat maps. Being able to understand and read those, what looks good, what doesn't, that's also pretty helpful. OK.
All right. I'm going to suggest probably sometime next week for something like this. Just taking a quick look at the calendar here: it would probably be similar to the same time that we're suggesting for Nazar's presentation, but a week later. Do you want to swap out the MEV talk and just do it as the protocol discussion?

Metrics and reading dashboards? We could. I would ideally like to know that Lion would be available for this; he's probably the most knowledgeable, I feel like, on Grafana. What is your schedule looking like, I guess, for Thursday or for next week, Lion, to make this work?

Next week is best.

Next week is better. How about we reschedule the... Nope, just leave the Ethereum protocol discussion where it is. Maybe next week on Thursday, same time, we'll set up a Grafana education session for that. Okay, cool. Does anybody have any points for planning at all?
Otherwise, I would love to...

>> I think, I guess, the current game plan is to fix these bugs that were filed over the weekend by Nico. Are there any other things that we need to do before we consider cutting a release candidate? We're just going to quickly pull up the milestones here, but they were very much cleaned up. Did we ever fix that issue where, with the upgrade of libp2p, we consistently had more than 55 peers? Is that going to be an issue as well, I guess, for the upcoming release candidate?
So looking at feature one, we typically don't have more than 55 peers. I think it may happen right when we start up; it kind of spikes above 55, but then it stabilizes between 50 and 55. That's what I've seen.

OK.

I don't know that it will be an issue, but I don't know that it is not an issue either.

OK.
It's okay, we can understand you.

So I was talking with Terence from Prysm about the aggregation deadlines, because there is a proposal to extend the block deadline in the slot and compress the attestation and aggregate sections of the slot. So he asked me: how long do you guys take to aggregate all the attestations when you subscribe to a subnet? And my answer was, well, we don't; we have to drop a lot of them. Which is a funny point. I hope we can become a better network participant at some point. And maybe Tuyen will touch on this, but now that we have merged the subnet refactor, where we only subscribe to two per node, note that if we reduce the time that we give aggregators, we can reduce traffic a lot. So theoretically we could do much better, and I think this can be huge and we should strategize it, but Tuyen is already on it.
Thanks, Lion, for that. Tuyen, if you want to add to any of those points, feel free.
Yeah, I think we can test that subnet-per-node work when we release 1.10. We can try it on any permanent node, a CIP node for example, to see if we reduce bandwidth significantly. Previously we subscribed to one subnet per validator, so for a node with 64 validators we subscribed to 64 subnets. The change is that we subscribe to two subnets only, so it should reduce a lot of bandwidth. With 64 validators we still subscribe to the short-lived subnets, I think, but it still reduces the bandwidth; it should reduce it a lot for a node with 8 or 16 validators. Oh, and we have to increase our peer count to 100.
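(A back-of-envelope sketch of the subscription change Tuyen describes, assuming ATTESTATION_SUBNET_COUNT = 64 and the new SUBNETS_PER_NODE = 2 constant; this is illustrative, not Lodestar's actual code.)

```ts
const ATTESTATION_SUBNET_COUNT = 64;
const SUBNETS_PER_NODE = 2;

// Long-lived attestation subnet subscriptions before vs. after the refactor.
function longLivedSubscriptions(validatorCount: number, subnetsPerNode: boolean): number {
  return subnetsPerNode
    ? SUBNETS_PER_NODE // fixed per node, independent of validator count
    : Math.min(validatorCount, ATTESTATION_SUBNET_COUNT); // one per validator
}

// A node with 64 validators drops from 64 long-lived subscriptions to 2;
// short-lived aggregation-duty subnets are unchanged.
console.log(longLivedSubscriptions(64, false), longLivedSubscriptions(64, true)); // 64 2
```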
Right. Okay. Yeah, we'll definitely note that, especially for trying out after we get 1.10 out.
I really want to try a node with 100 peers. I just don't see that we have enough mainnet nodes. Last time I had to use beta mainnet, but now it's busy. Can we have a beta mainnet node too? Because we have feat1 mainnet and we have feat2 mainnet, but we don't have feat3 mainnet.

Oh, okay. Yeah, I definitely did not notice that. I'll put up an issue today for the infrastructure guys to get those set up. It's definitely missing in our stack there.
- Thanks, Tuyen.
- Okay. One other question for you, Tuyen. There is something else in my milestones here that I was tracking, 5556. I haven't continued to look into this, but if you have anything that you could update me with on this, that would be great. I see that there was a PR that was done to gossipsub for this.

I think this happened when we were using that tool before; now we disabled that. So it's still an issue, but we can fix it later. It's not for 1.10, I think.

Okay. Great. I'm going to move this one forward.
Yeah, please. Okay. Thanks. I think outside of the issues that were put up over the weekend by Nico, let's focus on getting those fixed up, and then once we've got a good potential release candidate, we'll throw it into beta to retest for data collection. And after the call, let's release 1.9.2 and start deploying it to the Lido nodes. Anything else for planning?
>> I just deployed unstable to feature two, so you can use that one if you need it.

>> Thanks, I just need the mainnet node. Thanks, we'll check that.
>> Okay.
>> Let's just go with updates. Lion, is there anything else you wanna add from your adventures over in Paris, for your updates? If you can hear us still... Okay, let's move on with Gajinder.
- Hey guys. So I have just added P2P validation for EIP-7045. I think there is some test which is basically breaking because of the types; I'll fix it. I pushed it and then I saw it today.
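(For context on EIP-7045: it widens how long attestations remain valid on gossip, from a fixed 32-slot window to a current-or-previous-epoch rule. A minimal sketch of the check per the Deneb p2p spec; the actual Lodestar code will differ, and clock-disparity tolerance is omitted.)

```ts
const SLOTS_PER_EPOCH = 32;
const ATTESTATION_PROPAGATION_SLOT_RANGE = 32;

function computeEpochAtSlot(slot: number): number {
  return Math.floor(slot / SLOTS_PER_EPOCH);
}

// Pre-Deneb: the attestation must be within the propagation slot range.
function isTimelyPreDeneb(attSlot: number, currentSlot: number): boolean {
  return attSlot <= currentSlot && currentSlot <= attSlot + ATTESTATION_PROPAGATION_SLOT_RANGE;
}

// Deneb (EIP-7045): the attestation epoch must be the current or previous epoch.
function isTimelyDeneb(attSlot: number, currentSlot: number): boolean {
  const attEpoch = computeEpochAtSlot(attSlot);
  const currentEpoch = computeEpochAtSlot(currentSlot);
  return attEpoch === currentEpoch || attEpoch === currentEpoch - 1;
}
```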
And I continued the review of the BLST PR too, but I didn't finish it, so I'll target finishing it this week.
I did some debugging on missed proposals, and did a small PR to lower the race threshold, where we basically wait for both engine and builder blocks. It seems from the empirical evidence that four seconds is the threshold at which, if the blocks don't arrive at the nodes, there's a high probability that they will be missed, because proposer boost is there until around four seconds. So that might be a big factor, and lowering the race threshold to two seconds made sense.
Earlier it was three seconds. If it continues to happen, maybe we can lower it to 1.5, but it seems like builders mostly reply by 1.5, or a little bit more than 1.5, maybe two. We'll need to see more data to make a tighter call around this if it continues to be an issue.
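(A simplified sketch of the cutoff race described here; the names and structure are made up for illustration, not Lodestar's actual block production code.)

```ts
// Race the locally built (engine) block against the builder's block, but stop
// waiting for the builder after a cutoff so the proposal still lands while
// proposer boost (roughly the first 4s of the slot) is in effect.
const BUILDER_RACE_CUTOFF_MS = 2_000; // lowered from 3_000 in the PR discussed

async function pickBlock<T>(engineBlock: Promise<T>, builderBlock: Promise<T>): Promise<T> {
  const cutoff = new Promise<"cutoff">((resolve) =>
    setTimeout(() => resolve("cutoff"), BUILDER_RACE_CUTOFF_MS)
  );
  const builderResult = await Promise.race([builderBlock, cutoff]);
  // Builder didn't respond in time: fall back to the engine block.
  if (builderResult === "cutoff") return engineBlock;
  return builderResult as T;
}
```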
And I did some reviews for the EL offline PRs, and started a PR for broadcast validations, which will basically let the builder guys say: before broadcasting the block, do this validation. The thing is, right now when we propose a builder block, we send it to the builder, and the builder earlier used to transmit it without validating the block. That led to some of the MEV sandwich attacks: malicious validators took the block the builder gave them, created a block which was not valid, and posted it to the builders; the builder then transmitted that invalid block, revealing the transaction contents, which the validators used to create a valid block and front-run the MEV bots. So now builders want to validate the block before broadcasting, and that is the flag being added in this PR.
So the validator can now request, on the publish block calls, what kind of validation it wants the beacon node to run before it broadcasts. There are a few steps there, and we'll start from the lowest-hanging fruit of having no validations, plus some consensus validations for the case where the beacon node itself is proposing the block. The beacon node knows it constructed the block, so it is valid; it's an easy cache check to figure this out. The next step would be to add validation for blocks which the beacon node didn't construct; I will do a follow-up PR for that. The first PR is just to make sure that the lowest-hanging fruit is there and this flow is activated.
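(For reference, this maps to the `broadcast_validation` query parameter on the beacon API's publish block endpoint, with values `gossip`, `consensus`, and `consensus_and_equivocation`. A hedged sketch of such a call; the port and payload here are placeholders.)

```ts
// Ask the beacon node to run consensus validation before gossiping the block.
async function publishWithValidation(signedBlockJson: unknown): Promise<void> {
  const res = await fetch(
    "http://localhost:9596/eth/v2/beacon/blocks?broadcast_validation=consensus",
    {
      method: "POST",
      headers: {"Content-Type": "application/json"},
      body: JSON.stringify(signedBlockJson),
    }
  );
  // e.g. a 400 would indicate the block failed the requested validation
  if (!res.ok) throw new Error(`publish failed: ${res.status}`);
}
```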
Then, for DevNet 8 readiness, I did a specs simplification proposal around the parent beacon block root. It is part of the execution payload on the EL side, because EL state is affected, so the EL needs it in the execution payload; but right now the way it has been architected is to send it via newPayload. So the execution payload on the beacon block side does not have the parent beacon block root, but the execution payload on the EL side needs it. While we are sending newPayload, the CL is supposed to also send the parent beacon block root: it's not part of the beacon block's execution payload structure, but we have to massage it in and send it. I proposed a simplification where it is part of the beacon block's execution payload, so that the correspondence and debugging is a bit easier, because we have found on the devnets that you sometimes just have the payload from the beacon block and then you need to do some debugging based on that on the EL side. So I proposed a PR for that.
And to get DevNet 8 ready, I'm writing a PR for forkchoiceUpdatedV3, in which we also need to send the parent beacon block root to the EL so that it can use it to construct the execution payload. Again, it matters because the parent beacon block root is being saved in the execution layer via a precompile, and this affects the state. If it did not affect the state, then we wouldn't need to send it; but since it affects the state of the final execution payload that we'll get from the EL, we need to send it with our forkchoice update as well. This is the PR I'll be completing and updating in a couple of hours.
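(A sketch of what that looks like on the wire, assuming the Cancun engine API shape where PayloadAttributesV3 gains parentBeaconBlockRoot; the field values are elided placeholders.)

```ts
// Illustrative engine_forkchoiceUpdatedV3 request body: the CL passes
// parentBeaconBlockRoot in the payload attributes so the EL can write it
// into state (EIP-4788) while building the payload. This would be POSTed
// to the EL's authenticated engine port (typically 8551).
const fcuV3 = {
  jsonrpc: "2.0",
  id: 1,
  method: "engine_forkchoiceUpdatedV3",
  params: [
    {
      headBlockHash: "0x...",
      safeBlockHash: "0x...",
      finalizedBlockHash: "0x...",
    },
    {
      timestamp: "0x...",
      prevRandao: "0x...",
      suggestedFeeRecipient: "0x...",
      withdrawals: [],
      parentBeaconBlockRoot: "0x...", // new in V3
    },
  ],
};
```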
- Thanks, Gajinder. Just to add a little bit of context to that second-to-last point, regarding the validation before block publishing: it's basically a feature that one of the relayers requires to allow them to integrate Lodestar into their relayer setup. So that's why we're integrating that flag into Lodestar. Prysm and Lighthouse already have forks of their software to allow this for the relayers, so they are currently dominant in the relayer landscape. Due to some issues that we had previously with some of the relayers themselves, it was made clear to me that they needed this feature to integrate Lodestar, and by diversifying their setup they could hopefully reduce the relayer issues they were having. This is specifically the HVIS relay. But yeah, that will allow us to really help that part of the ecosystem as well. All right, let's continue forward.
Lion, can you still hear us? Do you have anything that you want to share from EthCC?

Yes, I went to a PBS session on Sunday and Monday. Yeah, the designs are solid; I think it just needs more research to make sure. I'm telling them that they are superior. But it was a good session to understand that we are not cost-consuming. We have the five-minute code from WP and the wallet is also relevant.

Okay, cool. Yeah. Feel free as well, if you have the time, to just do more of an async update; we're having some trouble hearing you. Anyways, we'll move forward. Let's go with Tuyen next.
- Hi, so I worked on a PR to prioritize BLS signature verification. For signature sets sent from the API we set the priority flag to true, and for other cases like gossip we set prioritized to false. A prioritized BLS signature set is appended to the head of the job queue. By the way, the job queue was refactored to a linked list. The PR was merged.
  400. The PR was merged.
  401. Another work I follow is to,
  402. which is related to BLS too,
  403. is to verify signature sets of the same message.
  404. I did a rebase and the main thing now is to test.
  405. What I monitor on Fit3 is that it does not make
  406. a huge difference to unstable,
  407. but CPUs is reduced by 25%
  408. and less attestation job wait time,
  409. a little bit less GC time.
  410. However, I need to deploy on a minute
  411. to see what's the different threat there because for now we will iterate the signature on main
  412. thread I would like to see if we can get stable main sphere there.
  413. And golly it works fine.
  414. Other than that I did a small PI in the zipsub to track the published time on the zipsub
  415. and I fixed a bug regarding the unknown sync when we subscribe and unsubscribe.
  416. it was merged to 1.9.2.
  417. That's it for me.
- Thanks, Tuyen. All right, next up we have Nico.
- Hey, so last week was mostly looking at closing the remaining tickets for 1.10, the issues that were still open. I also looked a bit into the issues we had with Node 20; I think those are now basically fine. Still, one concern is this issue I opened where consecutive requests, to Lighthouse at least, cause a socket hang-up error. I checked it out; there are some workarounds to fix this, but ideally it will require an upstream fix in the node-fetch library or in Node core itself. So yeah, maybe we just need to wait there. I mean, the only users in production that could face issues are those who run Lodestar with a Lighthouse beacon node, which might actually be the case in the DVT setup. So we might consider waiting with the Node 20 update in that case, until this is resolved.
Besides that, just a few minor UX things: improved the validator exit command a bit based on previous user feedback. And I'm looking into a few issues that happened on unstable and basically documenting them. There was another thing: a user attempted to update our syncing status API to the latest beacon spec. I checked what needs to be done there and found that it's actually not that trivial, because we basically have to overwrite the custom handler to set the status code, since it's not possible in the normal API implementation. So maybe it's not that great of a good first issue; I'm checking whether I can fix it myself, and the user can maybe take a look at that. Also, there's an issue that we don't report a correct status code right now: we always say 200, but the beacon spec wants 206 if the node is syncing, which is not the case right now. So we might as well fix this.
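(For reference, a tiny sketch of the status-code behavior the beacon-APIs spec expects from the node health endpoint; the handler shape here is hypothetical, not Lodestar's router code.)

```ts
// /eth/v1/node/health: 200 when ready, 206 while syncing, 503 when not
// initialized, instead of always returning 200.
function healthStatusCode(isReady: boolean, isSyncing: boolean): number {
  if (!isReady) return 503;
  return isSyncing ? 206 : 200;
}
```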
Yeah. And besides that, just trying to document more on the issues that I opened. Still trying to find out what is causing the process to not shut down. So yeah, mainly investigating that.

Thank you, Nico.
- All right, next up we have Cayman.

- Hey, yeah, I guess I would recommend that we hold off on Node 20 for our Docker image until we get a fix for those consecutive HTTP requests causing the socket hang-up. But yeah, last week we got the IPv6 support added; I think that was Friday. That may have caused that problem with the metrics server, but I somehow doubt it. I can also help take a look at that, Nico, but I'm much appreciative of Nico helping with the Node 20 PR and debugging issues around that. Other than that, I was looking at Tuyen's PR for js-multiaddr, where he refactored how we get the multiaddr path from the multiaddr, and I have another PR open that kind of takes his approach to the logical extreme of pre-calculating everything in the construction of a multiaddr. I think the libp2p guys are getting back from vacation this week; I don't know if they're all back yet, so hopefully that'll get reviewed in the next week or so. Yeah, I've been a little disheveled the past week and not at full capacity, but I'm going to be getting back to full capacity next week.

Sounds good. Thanks, Cayman.
And yeah, thanks guys for all the help with the Node 20 stuff. Has anybody upstreamed stuff to Node before? Do we have any sort of expectation of when that could potentially be fixed? I'm wondering if we should just completely not upgrade Node for v1.10.

Oh yeah, I recommend we revert that PR for 1.10.

Okay, because otherwise we're going to break compatibility if people are using a Lighthouse validator setup.

Yeah, so we should hold off, and it's not critical that we move to 20 right now anyway. I didn't see any spectacular performance gains, so I don't think we're missing out on a lot. There are things that are faster and things that are slower, so it ended up being about the same.
- Is there already a PR in to fix the issue, or is it something that is not even identified?

- Yeah, maybe Nico can speak to it.

Yeah, so there is a PR in node-fetch that addresses it, so I think it can be fixed there. But I also saw a discussion on the Node core repo where they discuss potentially fixing it there. The weird thing is, if you yield to the macrotask queue, the issue does not happen. So it seems that Node is reusing a socket, the same connection, which is already closed but not cleaned up properly, or something like this.

Oh, and it's throwing in the loop, okay. You know what, if you don't mind pointing me at that, we can take a look together; I'd love to do that. I'm starting to do a couple of commits here and there with them, and I can even maybe message Michael Dawson and ask him if there's a potential fix for that.

Yeah, good. So I referenced the Node issue; Ben was also part of that, and Matteo, so I think the core contributors are pretty involved already there. Okay.
  531. So yeah.
  532. We could at least I mean, we could bump it to like, I'm sure they've got exactly a billion
  533. things on their plate.
  534. But this is something that probably fell into the seem like it's, you know, it was March
  535. or April when that issue was being discussed and probably fell through the cracks.
  536. Yeah, it's all in here. The upstream issue I linked as well in the chat here.
  537. If you want to take a look after the meeting. Thanks. Yeah.
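(A sketch of the macrotask-yield workaround Nico describes, for illustration only; the proper fix belongs upstream in node-fetch or Node core.)

```ts
// Yielding to the macrotask queue between consecutive requests avoids
// reusing the half-closed socket that triggers the hang-up error.
async function fetchSequentially(urls: string[]): Promise<Response[]> {
  const responses: Response[] = [];
  for (const url of urls) {
    responses.push(await fetch(url));
    // setTimeout(..., 0) pushes the continuation to the macrotask queue,
    // giving Node a chance to clean up the closed connection.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return responses;
}
```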
Okay. Yeah, Matt, feel free to go ahead with any updates from your end as well.

I have been pretty down for the count this week; I was feeling pretty rugged. I'm starting to feel more like myself, which is good. I did a bunch more unit testing on the blinded, non-blinded block work, and I got the CLI flag to turn it on and off. I did some PR work on the BLST repo as well, in response to Gajinder's comments. But I'm just moving a little slow and having a hard time.

- No problem, Matt. Feel better soon, take care of yourself. We'll keep in touch; just take it easy. Okay, let's move on with Nazar then.
- Thank you. Last week there were a couple of PRs opened for some API fixes, and then I was working on integrating the Lodestar prover into the light client demo that we have. I observed that the light client demo is not working; it's broken, and it was using the 1.2 version of Lodestar. I thought I should first upgrade it to the latest Lodestar versions with the same codebase, and once it's working, then I will move it to the prover. But apparently there are some type changes because of which the current Lodestar light client demo is not working with 1.9.x. So I'm trying to fix that, and once that is fixed, I'll open the PR for it. Then, on top of it, I will open a PR to migrate the light client demo to the prover. Along with this migration, I would like to start an open discussion: what kind of stuff do we want to show in the light client demo? Earlier we were showing a full proof tree in the demo, but with the Lodestar prover that is hidden in the implementation; the prover is doing everything in the background for us. So if we migrate the light client demo to the prover, then unfortunately we will not be able to show the full proof in the demo. We can show whether the request was verified or not, but not the full proof.
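(For context, a hedged sketch of how the demo might wire in the prover, loosely based on the @lodestar/prover README of the time; the exact export names, options, and URLs here are assumptions to verify against the package.)

```ts
import Web3 from "web3";
// Assumed exports; check the @lodestar/prover package README.
import {createVerifiedExecutionProvider, LCTransport} from "@lodestar/prover";

const {provider} = createVerifiedExecutionProvider(
  new Web3.providers.HttpProvider("https://lodestar-mainnetrpc.chainsafe.io"), // assumed EL RPC URL
  {
    transport: LCTransport.Rest,
    urls: ["https://lodestar-mainnet.chainsafe.io"], // assumed beacon API URL
    network: "mainnet",
  }
);

const web3 = new Web3(provider);
// Responses are verified against light-client proofs behind the scenes, so a
// demo UI only sees "verified or not" unless the proofs are surfaced in logs.
const balance = await web3.eth.getBalance("0x0000000000000000000000000000000000000000");
console.log(balance);
```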
- I think it might be helpful to have our meeting tomorrow, where we can all see how the prover works and what it's capable of, and then we'll be able to make a more informed decision.

- We don't need to show the proof at all. The proof can be logged in the logs, and then you just open the console, that's it.

- Okay.
Sure, then we can discuss this particular topic tomorrow. But I was asking if there is some document or issue in the past which describes this demo.

- The original website is dead, we don't need to respect it.

- Okay. Okay, then you will see a PR from my side fixing the light client demo with the latest version of the packages, and then another PR migrating it to the prover. And we will be seeing each other tomorrow as well for the demo. Thank you so much. That's all from me.
- Thanks, Nazar. Okay, I'm just gonna read out the async updates from Lion for the recording. Last week, he progressed on the single secret leader election and max effective balance research fronts; multiple pending things are linked in the consensus specs from Mike Neuder. He wrote a beacon node resource doc on big state sizes; there's a link to that in the private chat. Good chats with Ansgar, Mike, and others about max effective balance and PBS. There's a clear need to reduce the rewards curve and ship MEV burn. So that's from Lion's side. NC, if you have anything that you would like to add here, feel free. If not, we'll just open it up for anybody who wants to add any last-minute points for standup.
  614. And see if you have anything that you would like to add here, feel free.
  615. If not, we'll just open it up for anybody who wants to add any last minute points for
  616. a stand up.
  617. I've got questions about reducing the rewards curve and MEV burn.
  618. Are there any documents, anything that we can read about that?
  619. Blind. Do you have any, um.
  620. Any links for us in regards to
  621. discussions happening with the
  622. reward curve and maybe burn.
  623. I know.
  624. OK, no problem. Yeah, well,
  625. you just follow up with it as as we get him.
  626. Anything else for stand up today?
All right, guys, thanks for coming out, and have yourselves a great week. See you tomorrow for the prover call.

Bye-bye, y'all.
Bye.
See you tomorrow.
See you.
Bye-bye.
Have a nice week.
Bye-bye.