Speed up SCC dependency inference by JukkaL · Pull Request #18299 · python/mypy · GitHub

Conversation

@JukkaL (Collaborator) commented Dec 16, 2024

Avoid redundant computation of `frozenset(scc)`.

This helps with incremental type checking of torch, since it has a big SCC. In my measurements this speeds up incremental checking of `-c "import torch"` by about 11%.

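For context, a minimal sketch of the pattern being removed, assuming the illustrative function name and signature below (not the literal mypy source): each vertex of an SCC maps to that SCC as a `frozenset`, and the dict comprehension rebuilds the `frozenset` once per vertex.

```python
from __future__ import annotations

from typing import AbstractSet, TypeVar

T = TypeVar("T")


def build_sccsmap_redundant(sccs: list[set[T]]) -> dict[T, AbstractSet[T]]:
    # frozenset(scc) is re-evaluated for every vertex v, so a single SCC
    # with n vertices constructs n identical frozensets.
    return {v: frozenset(scc) for scc in sccs for v in scc}
```

For a very large SCC such as torch's import graph, those duplicate constructions are the cost the change avoids.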
@github-actions (Contributor)

According to mypy_primer, this change doesn't affect type check results on a corpus of open source code. ✅

sccs: list[set[T]], edges: dict[T, list[T]]
) -> dict[AbstractSet[T], set[AbstractSet[T]]]:
    """Use original edges to organize SCCs in a graph by dependencies between them."""
    sccsmap = {v: frozenset(scc) for scc in sccs for v in scc}
A Collaborator commented:
If you wanted to keep the dict comprehension, you could use an assignment expression:

sccsmap = {v: s for scc in sccs if (s := frozenset(scc)) is not None for v in scc}  # type: ignore[redundant-expr]

It adds an `is not None` check for each SCC that is always true, but it is needed to introduce the `:=` binding.

Reply from a Collaborator:
Good point, but I think this version is clearer.
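The merged version is not reproduced in this thread; as a hedged sketch, a loop-based form that builds each `frozenset` only once per SCC could look like this (assumed shape and names, not necessarily the exact merged code):

```python
from __future__ import annotations

from typing import AbstractSet, TypeVar

T = TypeVar("T")


def build_sccsmap_hoisted(sccs: list[set[T]]) -> dict[T, AbstractSet[T]]:
    # Build each frozenset exactly once per SCC and share the same object
    # across all of that SCC's vertices, instead of once per vertex.
    sccsmap: dict[T, AbstractSet[T]] = {}
    for scc in sccs:
        frozen = frozenset(scc)
        for v in scc:
            sccsmap[v] = frozen
    return sccsmap
```

Compared with the walrus-operator comprehension above, this avoids the always-true `is not None` filter and the `type: ignore`, at the cost of a few extra lines.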

@hauntsaninja merged commit d3be43d into master on Dec 16, 2024
19 checks passed
@hauntsaninja deleted the faster-scc branch on December 16, 2024 at 23:20
@cdce8p (Collaborator) commented Dec 17, 2024

> This helps with incremental type checking of torch, since it has a big SCC. In my measurements this speeds up incremental checking of `-c "import torch"` by about 11%.

To provide an additional data point: I'm seeing a ~10% improvement 🚀 on full runs (without cache) for Home Assistant, with just this PR and #18298.

Before: 3:21
After: 3:00
