Description
`FigureCanvasAgg.get_renderer` is defined as

```python
def get_renderer(self, cleared=False):
    l, b, w, h = self.figure.bbox.bounds
    key = w, h, self.figure.dpi
    reuse_renderer = (hasattr(self, "renderer")
                      and getattr(self, "_lastKey", None) == key)
    if not reuse_renderer:
        self.renderer = RendererAgg(w, h, self.figure.dpi)
        self._lastKey = key
    elif cleared:
        self.renderer.clear()
    return self.renderer
```
i.e. (ignoring the `cleared` kwarg) if the figure bbox size or dpi has changed, a new renderer is generated.
Although this won't matter in most cases, it does matter in the figure-saving routines, which temporarily manipulate the dpi, and (in GUI backends) when the figure is being resized.
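To make the stale-renderer hazard concrete, here is a minimal sketch that reproduces only the caching logic above with stand-in classes (`FakeCanvas`/`FakeRenderer` are hypothetical names, not matplotlib API): after a dpi change, `canvas.renderer` and `canvas.get_renderer()` refer to different objects.

```python
class FakeRenderer:
    """Stand-in for RendererAgg; records the dpi it was created with."""
    def __init__(self, w, h, dpi):
        self.w, self.h, self.dpi = w, h, dpi

class FakeCanvas:
    """Stand-in for FigureCanvasAgg, reproducing only the caching logic."""
    def __init__(self, w, h, dpi):
        self.w, self.h, self.dpi = w, h, dpi

    def get_renderer(self):
        key = (self.w, self.h, self.dpi)
        reuse = (hasattr(self, "renderer")
                 and getattr(self, "_lastKey", None) == key)
        if not reuse:
            # Size/dpi changed (or first call): regenerate and re-key.
            self.renderer = FakeRenderer(self.w, self.h, self.dpi)
            self._lastKey = key
        return self.renderer

canvas = FakeCanvas(640, 480, 100)
r1 = canvas.get_renderer()
canvas.dpi = 200               # roughly what savefig(dpi=200) does internally
stale = canvas.renderer        # still the dpi=100 renderer
fresh = canvas.get_renderer()  # regenerated with dpi=200
assert stale is r1 and stale.dpi == 100
assert fresh is not r1 and fresh.dpi == 200
```

Any code holding on to the attribute rather than calling the getter keeps drawing with the old-dpi renderer.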
In the codebase, it looks like `canvas.renderer` and `canvas.get_renderer()` are both used more or less indiscriminately, and I doubt anyone took the difference between them into account.
It also makes it harder to write third-party backends (cough cough), which have to provide exactly the same semantics if they want to be a drop-in replacement for the Agg backend.
Given that a renderer whose size/dpi does not match the figure's doesn't really make sense, I think we should deprecate `get_renderer()` and make `.renderer` a property that self-replaces when the figure size/dpi changes (or, vice versa, deprecate `.renderer`, but that seems worse from a backcompat/code-churn PoV); or, if we really worry about performance, move the renderer-regenerating behavior to e.g. `check_renderer_invalidation()`. (The `cleared=True` behavior should also be split out to e.g. `get_clean_renderer()`.)
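A minimal sketch of the property variant of the proposal, using hypothetical stand-in classes rather than the real matplotlib ones: the property re-keys on every access, so a stale renderer can never be observed and the `get_renderer()`/`renderer` distinction disappears.

```python
class Renderer:
    """Stand-in renderer; records its creation parameters."""
    def __init__(self, w, h, dpi):
        self.w, self.h, self.dpi = w, h, dpi

class Canvas:
    """Hypothetical canvas where .renderer is a self-invalidating property."""
    def __init__(self, figure_size, dpi):
        self.figure_size = figure_size
        self.dpi = dpi
        self._renderer = None
        self._renderer_key = None

    @property
    def renderer(self):
        key = (*self.figure_size, self.dpi)
        if self._renderer_key != key:  # size/dpi changed: regenerate
            self._renderer = Renderer(*key)
            self._renderer_key = key
        return self._renderer

c = Canvas((640, 480), 100)
a = c.renderer
c.dpi = 200
b = c.renderer  # transparently regenerated for the new dpi
assert a.dpi == 100 and b.dpi == 200 and a is not b
```

The same key check could instead live in a `check_renderer_invalidation()`-style helper if the property-access cost ever mattered.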
See also #1852.