Prevent ModelObserver update backlogs #541
@kuychaco Do you mind looking over the above and making sure I didn't miss anything?

The tests have been brought up to date for the new behavior. @kuychaco, I'd like to remove the … If you're in favor, I'll make the change; it breaks a lot of other tests (presumably where we're returning …).
test/models/model-observer.test.js (Outdated)

```js
  }

  fetchA() {
    this.fetchACallCount++;
```
🔥
test/models/model-observer.test.js (Outdated)

```js
  }

  fetchB() {
    this.fetchBCallCount++;
```
🔥
Looks good!
Also, @BinaryMuse, your ASCII-art game is 🤘
The `ModelObserver` is used as a pseudo-declarative way to watch for models that implement `onDidUpdate` to change, and then take some action (usually fetching new data from the underlying model). This pattern is made more fully declarative via the `ObserveModel` decorator.
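As a rough usage sketch (the option and method names here are illustrative, not necessarily the package's actual API):

```js
// Hypothetical sketch: observe a model, re-fetching data whenever it emits
// an update. `ModelObserver` stands in for the class this PR modifies;
// `repository` is assumed to be a model implementing onDidUpdate.
const observer = new ModelObserver({
  fetchData: model => model.getSomeData(),   // async read from the model
  didUpdate: data => renderView(data),       // hypothetical render callback
});

// Subscribe to the model's onDidUpdate event and perform an initial fetch.
observer.setActiveModel(repository);
```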
As written currently, the following scenario could take place:

1. The model changes, triggering the `onDidUpdate` callback in the observer.
2. The observer calls `getSomeData` on the model. This takes 3 seconds to complete.
3. Before that fetch finishes, the model changes again, and the `onDidUpdate` callback fires again.
4. The observer calls `getSomeData` again. This time it takes 2 seconds to complete.
5. The second `getSomeData` call completes, and since that promise is associated with the latest fetch, its result is set as the "current" data.
6. The first `getSomeData` promise resolves a second later, but since it's associated with an older fetch, the data is considered stale and is thrown away.

Normally, this is perfect — we're able to run multiple fetches concurrently, and only the latest one is actually used, rendering updates as quickly as possible.
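Sketched as a timeline (an illustrative reconstruction, using the example timings above):

```
time (s):  0         1         2         3
fetch 1:   [===========================]      resolves at 3s; stale, discarded
fetch 2:    [=================]               resolves at ~2s; set as "current" data
```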
However, this causes a problem when fetching data from the `Repository` model, since reads are queued (up to a certain point, controlled by a parallelism limit). In that case, every read ends up piling onto the queue: the second `getSomeData()` is forced to wait out the rest of the 3 seconds from the initial fetch before it can start its own 2 seconds' worth of fetching, making the time profile look more like this:
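(An illustrative reconstruction of the queued timeline, using the same example timings:)

```
time (s):  0         1         2         3         4         5
fetch 1:   [===========================]                        resolves at 3s
fetch 2:    ........ queued ...........[===================]    resolves at ~5s; set as "current" data
```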
So not only does the update feel way slower (because we don't set any data until the latest fetch has all of its reads finally executed), but multiple triggers of `onDidUpdate()` could place many reads into the queue, hugely delaying the operations whose results will ultimately be used. This is a real problem for us, as nsfw triggers updates in batches; in large repos, we've observed as many as 12 discrete events triggering model updates in just over a second, saturating the Git operation queue for a good 10 or 15 seconds before any UI updates occur.
This PR changes the `ModelObserver` semantics such that:

- only one fetch is in flight at a time;
- if the model updates while a fetch is in progress, a single follow-up fetch is scheduled to run once the current one completes, no matter how many updates arrive in the meantime;
- the result of each completed fetch is set as the current data as soon as it resolves.

That means we lose concurrent fetches, but we gain the ability to have "partial" updates when a model has only partially updated. This would change the graph to be more like this:
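(An illustrative reconstruction of the timeline under the new semantics:)

```
time (s):  0         1         2         3         4         5
fetch 1:   [===========================]                        resolves at 3s; rendered immediately
updates:    ^  ^  ^   (coalesced into one follow-up fetch)
fetch 2:                                [===================]   resolves at ~5s; rendered
```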
The additional benefit is that no matter how many `onDidUpdate` callbacks are triggered during the original 3-second fetch, there will only be one additional fetch run after it completes; for us, this results in a Git operation queue with many fewer operations in it.

We also considered throttling nsfw events at the watcher level, but we feel this is ultimately a better experience, since partial updates are rendered to the UI as soon as possible (much like Unix commands that stream text to a terminal as things update). We've tested this with a large merge conflict in a fairly large repo (github/github) and it feels much better, especially with other changes to the read queue that are still in progress (to better speed up and parallelize the queue).
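For illustration, the coalescing behavior could be sketched like this (a hypothetical implementation under the assumptions above, not necessarily the code in this PR):

```js
// A minimal sketch of the coalesced-fetch semantics described above. Class,
// method, and option names are illustrative, not the package's actual API.
class ModelObserver {
  constructor({fetchData, didUpdate}) {
    this.fetchData = fetchData;   // async: model => data
    this.didUpdate = didUpdate;   // called with each completed fetch's data
    this.activeModel = null;
    this.fetchInProgress = false;
    this.needsRefetch = false;
  }

  // Called from the model's onDidUpdate subscription.
  async refreshModelData() {
    if (this.fetchInProgress) {
      // A fetch is already running. However many updates arrive while it
      // runs, they collapse into a single follow-up fetch.
      this.needsRefetch = true;
      return;
    }

    this.fetchInProgress = true;
    try {
      do {
        this.needsRefetch = false;
        const data = await this.fetchData(this.activeModel);
        // Render each completed fetch immediately, giving the UI a
        // "partial" update while any follow-up fetch is still pending.
        this.didUpdate(data);
      } while (this.needsRefetch);
    } finally {
      this.fetchInProgress = false;
    }
  }
}
```

The `do`/`while` loop is what guarantees at most one additional fetch per batch of updates: any number of `onDidUpdate` events during a fetch merely set `needsRefetch`, and the loop drains that flag with a single pass.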