mining instability issue tracker #11251
Excerpts from follow-up comments on this issue:

- "TL;DR: My current theory is that it's related to the splitstore pruning logic, which seems to correlate exactly with the high latency of …"
- "Here are updates after the past 2 days of debugging this issue: …"
- "Also collecting notes in: https://docs.google.com/document/d/1ZxTJ5mQi6tezGlF6FFWAZRTCOf2PsinmjPZ9INnaUe0/edit"
- "This is where I am now. My theory is that calling …"
- "I think this is just one situation; there are several types: …"
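One rough way to sanity-check the pruning theory above, purely as an illustration (the hooks here are hypothetical; lotus's splitstore does not expose such a flag), is to tag slow calls with whether a prune/compaction was in flight, so latency spikes can be lined up against pruning windows:

```go
// Illustration only (hypothetical hooks, not lotus's actual splitstore API):
// tag slow calls with whether a splitstore prune/compaction was in flight,
// so latency spikes can be lined up against pruning windows.
package main

import (
	"log"
	"sync/atomic"
	"time"
)

// pruneInProgress would be flipped by whatever starts/finishes splitstore
// pruning or compaction; this flag is assumed for the sketch.
var pruneInProgress atomic.Bool

// timed wraps a call and logs its duration together with the pruning state
// whenever it exceeds an (arbitrary) slowness threshold.
func timed(name string, threshold time.Duration, fn func() error) error {
	start := time.Now()
	err := fn()
	if elapsed := time.Since(start); elapsed > threshold {
		log.Printf("%s took %s (prune in progress: %v)", name, elapsed, pruneInProgress.Load())
	}
	return err
}

func main() {
	pruneInProgress.Store(true) // pretend a prune is running
	_ = timed("message selection", time.Second, func() error {
		time.Sleep(1500 * time.Millisecond) // stand-in for the slow call
		return nil
	})
}
```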
On Aug 28th, @beck-8 reported that the network is less stable with small forks, and storage providers are observing delays in block generation and an increase in missed winning blocks.
Unfortunately, we haven't yet been able to get to the #10888 work (other than message execution profiling, #10892), which aims to develop better monitoring, tooling, and metrics to help mining node operators collect more detail and gain more visibility into mining execution when an issue occurs. So, while we have some guesses about what could be the cause, we are not sure yet.
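As a rough illustration of the kind of visibility #10888 is after, here is a minimal sketch that records message-selection latency in a histogram using the Prometheus Go client. All names are made up for the example, and this is not lotus's actual metrics plumbing:

```go
// Illustrative sketch only: record how long message selection takes per call
// and expose it as a Prometheus histogram. Names are hypothetical.
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var mpoolSelectDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name:    "miner_mpool_select_duration_seconds",
	Help:    "Time spent selecting messages for a block candidate.",
	Buckets: prometheus.ExponentialBuckets(0.05, 2, 12), // 50ms up to ~100s
})

func init() {
	prometheus.MustRegister(mpoolSelectDuration)
}

// selectMessages stands in for the real selection call; the timing wrapper
// around it is the point of the sketch.
func selectMessages() {
	defer func(start time.Time) {
		mpoolSelectDuration.Observe(time.Since(start).Seconds())
	}(time.Now())

	time.Sleep(120 * time.Millisecond) // placeholder for actual work
}

func main() {
	selectMessages()
	http.Handle("/metrics", promhttp.Handler())
	_ = http.ListenAndServe(":2112", nil)
}
```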
We are creating this issue to track the investigation and development effort. @fridrik01 will add more detailed plans/updates later. For now, the information we have is:
- `MpoolSelect` is taking longer when the mpool is packed, which impacts winningPoSt. @rjan90 suspected that we might have some high contention/waiting locks in the mpool logic (see the profiling sketch after this list).
- `lotus-shed mpool miner-select-messages`, see findings here.
- `tPendings` could take >30s.
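Regarding the lock-contention suspicion above, a minimal, standalone sketch of using Go's built-in mutex/block profiling to check for contention is below; this is only illustrative and is not how lotus wires up its own profiling endpoints:

```go
// Rough sketch: enable Go's mutex and block profiling and expose the pprof
// HTTP handlers, so contention on mpool-style locks can be inspected with
// `go tool pprof`.
package main

import (
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
	"runtime"
)

func main() {
	// Sample roughly 1 out of every 5 mutex contention events.
	runtime.SetMutexProfileFraction(5)
	// Sample about one blocking event per 1e6 ns (1ms) spent blocked.
	runtime.SetBlockProfileRate(1_000_000)

	// Then, for example:
	//   go tool pprof http://localhost:6060/debug/pprof/mutex
	//   go tool pprof http://localhost:6060/debug/pprof/block
	_ = http.ListenAndServe("localhost:6060", nil)
}
```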
Potential Todos: