Welcome to Flutter Internals, a community-maintained open source book providing a guided tour through Flutter's implementation.
This book is very much a "work in progress"; in fact, we'd love your help with grammatical fixes, technical edits, and new content.
Click here to become a contributor.
The goal of this book is to provide intuitive descriptions of Flutter's internals in an easily digestible outline format.
Descriptions are intended to be comprehensive without becoming bogged down in implementation details or sacrificing clarity.
This book strives to provide a "hawk's eye view" of Flutter's implementation.
These notes are most useful once you have a solid understanding of how to use Flutter (since they describe how the interfaces actually work).
This outline was written for developers looking to build intuition about Flutter internals and Flutter hackers looking to ramp up on the codebase.
We also hope that this book can inspire more thorough learning materials (such as deep dive videos or long form articles).
Anyone (thank you)! If there's a corner of the framework that you find confusing, please consider becoming a contributor and updating the book with the relevant details.
Please read the "Get Involved" section.
ServicesBinding.initInstances assigns the global platform message handler (Window.onPlatformMessage) to ServicesBinding.defaultBinaryMessenger. This instance processes messages from the platform (via BinaryMessenger.handlePlatformMessage) and allows other framework code to register message handlers (via BinaryMessenger.setMessageHandler). Handlers subscribe to a channel (an identifier used to multiplex the single engine callback) using an identifier shared by the framework and the platform.
BinaryMessenger
multiplexes the global message handler via channel names, supporting handler registration and bidirectional binary messaging. Sending a message produces a future that resolves to the raw response.
MessageCodec
defines an interface to encode and decode byte data (MessageCodec.encodeMessage
, MessageCodec.decodeMessage
). A cross-platform binary codec is available (StandardMessageCodec
) as well as a JSON-based codec (JSONMessageCodec
). The platform must implement a corresponding codec natively.
MethodCodec
is analogous to MessageCodec
(but otherwise independent) encoding and decoding MethodCall
instances that wrap a method name and a dynamic list of arguments. Method-based codecs pack and unpack results into envelopes to distinguish success and error outcomes.
BasicMessageChannel
provides a thin wrapper around BinaryMessenger
that uses the provided codec to encode and decode messages to and from raw byte data.
MethodChannel
provides a thin wrapper around BinaryMessenger
that uses the provided method codec to encode and decode method invocations. Responses to incoming invocations are packed into envelopes indicating outcome; similarly, results from outgoing invocations are unpacked from their encoded envelope. These are returned as futures.
Success envelopes are unpacked and the result returned.
Error envelopes throw a PlatformException
.
Unrecognized methods throw a MissingPluginException
(except when using an OptionalMethodChannel
).
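To make the envelope handling concrete, here is a hedged sketch of invoking a platform method from Dart; the channel and method names are hypothetical and must match a handler registered on the platform side.

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/services.dart';

// Hypothetical channel name; the platform must register a matching handler.
const MethodChannel _channel = MethodChannel('example.app/battery');

Future<int?> getBatteryLevel() async {
  try {
    // The call is encoded by the channel's MethodCodec and sent via the
    // default BinaryMessenger; the decoded success envelope is returned.
    return await _channel.invokeMethod<int>('getBatteryLevel');
  } on PlatformException catch (e) {
    // The platform responded with an error envelope.
    debugPrint('platform error: ${e.code} ${e.message}');
  } on MissingPluginException {
    // No handler was registered for this method on the platform side.
    debugPrint('no implementation for getBatteryLevel');
  }
  return null;
}
```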
EventChannel
is a helper that exposes a remote stream as a local stream. The initial subscription is handled by invoking a remote method called listen
(via MethodChannel.invokeMethod
) which causes the platform to begin emitting a stream of envelope-encoded items. A top-level handler is installed (via ServicesBinding.defaultBinaryMessenger.setMessageHandler
) to unpack and forward items to an output stream in the framework. If the stream ends (for any reason), a remote method called cancel
is invoked and the global handler cleared.
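As a hedged sketch (the channel name is hypothetical and the platform must implement the listen/cancel protocol), an EventChannel is typically consumed like this:

```dart
import 'package:flutter/services.dart';

// Hypothetical channel name; the platform registers a stream handler for it.
const EventChannel _chargingChannel = EventChannel('example.app/charging');

void listenForChargingEvents() {
  // receiveBroadcastStream invokes the remote "listen" method and installs a
  // message handler that unpacks each envelope into a stream event;
  // cancelling the subscription invokes the remote "cancel" method.
  _chargingChannel.receiveBroadcastStream().listen(
    (dynamic event) => print('charging state: $event'),
    onError: (Object error) => print('stream error: $error'),
  );
}
```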
SystemChannels
is a singleton instance that provides references to messaging channels that are essential to the framework (SystemChannels.system
, SystemChannels.keyEvent
, etc.).
This book is a community effort that aims to explain how Flutter actually works in an intuitive, but brief, way.
Given the breadth and scope of the framework, there's a lot of material to cover and a lot of material to review. Suffice to say: we need your help!
Any and all contributions are much appreciated. Keep on reading for ways to get involved.
Copy editing and structural improvements.
Fact checking, corrections, and technical revisions.
Expanding sections that are incomplete or outdated.
Adding new sections or topics.
... or however you think would be helpful!
Use our invite link to join as a contributor.
Once you've joined, you'll be able to comment, edit, and review.
Start editing! (If adding a new section, please be mindful to mark it as "Work in Progress").
See the "Project status" section for pointers to areas needing attention.
Add your name to the "Authors" section so you get the credit you deserve!
Needs copy editing
(Section) Core
(Section) Data Model
(Section) Rendering
(Section) Interaction
(Section) Scrolling
(Section) Slivers
(Section) Animation
(Section) Assets
(Section) Text
(Section) User Interface
(Section) Business Logic
Needs expansion
Gestures
Needs writing
Semantics
Themes
Navigation
Material
State Management
Async Programming
Testing
Brandon Diamond
Ian Hickson
runApp initializes the bindings by calling the static method WidgetsFlutterBinding.ensureInitialized (or RenderingFlutterBinding.ensureInitialized when only the rendering layer is used). This invokes each binding's initInstances method, allowing every binding to initialize in turn (this is Dart's mixin-based form of multiple inheritance via "with"; each binding's initialization hook is triggered).
This flow is built using mixin chaining: every concrete binding (e.g., WidgetsFlutterBinding) extends BindingBase and mixes in the various "Binding" mixins (e.g., GestureBinding), all of which share the same superclass constraint. Common methods (such as BindingBase.initInstances) are therefore chained together via super calls. These calls proceed linearly from left to right, starting with the superclass and then running each mixin in turn. This strict ordering allows later bindings to depend on earlier ones.
Note: with Dart's mixin/with syntax, a class gains a form of multiple inheritance; each mixin contributes its own logic while being constrained to the same base class.
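The following sketch (with invented binding names, not the framework's own) illustrates how this chaining works: each mixin's initInstances calls super.initInstances first, so initialization runs from the base class through the mixins, left to right.

```dart
// Invented names for illustration only; the real framework uses BindingBase,
// GestureBinding, ServicesBinding, etc.
class BindingBaseLike {
  void initInstances() {
    // Base initialization runs first.
  }
}

mixin FooBinding on BindingBaseLike {
  @override
  void initInstances() {
    super.initInstances(); // everything to the "left" has already initialized
    // ...initialize the Foo subsystem...
  }
}

mixin BarBinding on BindingBaseLike, FooBinding {
  @override
  void initInstances() {
    super.initInstances(); // FooBinding.initInstances has already run
    // ...BarBinding may safely depend on FooBinding's state here...
  }
}

// Mixin application order determines the super-call chain:
// BindingBaseLike -> FooBinding -> BarBinding.
class ConcreteBinding extends BindingBaseLike with FooBinding, BarBinding {}
```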
RendererBinding.initInstances
creates the RenderView
, passing an initial ViewConfiguration
(describing the size and density of the render surface). It then prepares the first frame (via RenderView.prepareInitialFrame
); this schedules the initial layout and initial paint (via RenderView.scheduleInitialLayout
and RenderView.scheduleInitialPaint
; the latter creates the root layer, a TransformLayer
). This marks the RenderView
as dirty for layout and painting but does not actually schedule a frame.
This is important since users may wish to begin interacting with the framework (by initializing bindings via WidgetsFlutterBinding.ensureInitialized
) before starting up the app (via runApp
). For instance, a plugin may need to block on a backend service before it can be used.
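A common, hedged sketch of this pattern follows; loadConfiguration and MyApp are hypothetical placeholders for app-specific setup:

```dart
import 'package:flutter/widgets.dart';

Future<void> main() async {
  // Initialize the bindings so that binding-backed services (e.g., platform
  // channels) can be used before runApp schedules the first frame.
  WidgetsFlutterBinding.ensureInitialized();
  await loadConfiguration(); // stand-in for app-specific async setup
  runApp(const MyApp());
}

// Hypothetical placeholders for app-specific pieces.
Future<void> loadConfiguration() async {}

class MyApp extends StatelessWidget {
  const MyApp({super.key});
  @override
  Widget build(BuildContext context) => const SizedBox();
}
```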
Finally, the RendererBinding
installs a persistent frame callback to actually draw the frame (WidgetsBinding
overrides the method invoked by this callback to add the build phase). Note that nothing will invoke this callback until the Window.onDrawFrame
handler is installed. This will only happen once a frame has actually been scheduled.
Returning to runApp
, WidgetsBinding.scheduleAttachRootWidget
asynchronously creates a RenderObjectToWidgetAdapter
, a RenderObjectWidget
that inserts its child (i.e., the app's root widget) into the provided container (i.e., the RenderView
).
This asynchronicity is necessary to avoid scheduling two builds back-to-back; while this isn't strictly invalid, it is inefficient and may trigger asserts in the framework.
If the initial build weren't asynchronous, it would be possible for intervening events to re-dirty the tree before the warm-up frame is scheduled. This would result in a second build (without an intervening layout pass, etc.) when rendering the warm-up frame. By ensuring that the initial build is scheduled asynchronously, there will be no render tree to dirty until the platform is initialized.
For example, the engine may report user settings changes during initialization (via the _updateUserSettingsData
hook). This invokes callbacks on the window (e.g., Window.onTextScaleFactorChanged
), which are forwarded to all WidgetsBindingObservers
(e.g., via RendererBinding.handleTextScaleFactorChanged
). As an observer, WidgetsApp
reacts to the settings data by requesting a rebuild.
It then invokes RenderObjectToWidgetAdapter.attachToRenderTree
to bootstrap and mount an element to serve as the root of the element hierarchy (RenderObjectToWidgetElement
, i.e., the element corresponding to the adapter). If the element already exists, which will only happen if runApp
is called again, its associated widget is updated (RenderObjectToWidgetElement._newWidget
) and marked as needing to be built.
RenderObjectToWidgetElement.updateChild
is invoked when this element is mounted or rebuilt, inflating or updating the child widget (i.e., the app's root widget) accordingly. Once a descendant RenderObjectWidget
is inflated, the corresponding render object (which must be a RenderBox
) will be inserted into the RenderView
(via RenderObjectToWidgetElement.insertChildRenderObject
). The resulting render tree is managed in the usual way going forward.
A reference to this element is stored in WidgetsBinding.renderViewElement
, serving as the root of the element tree. As a RootRenderObjectElement
, this element establishes the BuildOwner
for its descendants.
Finally, after scheduling the first frame (via SchedulerBinding.instance.ensureVisualUpdate
, which will lazily install the frame callbacks), runApp
invokes SchedulerBinding.scheduleWarmUpFrame
, manually pumping the rendering pipeline. This gives the initial frame extra time to render as it's likely the most expensive.
SchedulerBinding.ensureFrameCallbacksRegistered
lazily installs frame callbacks as part of SchedulerBinding.scheduleFrame
. Frames are typically scheduled in response to PipelineOwner.requestVisualUpdate
(due to UI needing painting, layout, or a rebuild). Once configured, these callbacks (Window.onBeginFrame
, Window.onDrawFrame
), are invoked once per frame by the engine, running transient and persistent processes, respectively. The former is generally responsible for ticking animations whereas the latter runs the actual building and rendering pipeline.
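As a small, hedged sketch of the two phases from application code (assuming the bindings are already initialized):

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/scheduler.dart';

void registerFrameCallbacks() {
  // Transient: runs during handleBeginFrame (Window.onBeginFrame); this is
  // the phase that normally ticks animations. One-shot unless rescheduled.
  SchedulerBinding.instance.scheduleFrameCallback((Duration timeStamp) {
    debugPrint('transient callback at $timeStamp');
  });

  // Persistent: runs during handleDrawFrame (Window.onDrawFrame), alongside
  // the framework's own build/layout/paint callback. Cannot be removed.
  SchedulerBinding.instance.addPersistentFrameCallback((Duration timeStamp) {
    debugPrint('persistent callback at $timeStamp');
  });
}
```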
Once the first frame has been scheduled and the frame callbacks have been registered (via SchedulerBinding.ensureFrameCallbacksRegistered), the engine begins driving frame processing. The frame callbacks dispatch these requests to handlers; in particular, the rendering pipeline is implemented by SchedulerBinding.drawFrame, a persistent frame callback. WidgetsBinding.drawFrame overrides RendererBinding.drawFrame to add the build phase to this pipeline.
The rendering pipeline builds dirty widgets (and only dirty widgets), performs layout, updates compositing bits, paints layers, and finally composites everything into a single scene, which is uploaded to the engine via RenderView.compositeFrame. Semantics are also updated as part of this process.
RenderView.compositeFrame
retains a reference to the root layer (a TransformLayer
) which it recursively composites using Layer.buildScene
. This iterates through the layer tree, checking each layer's needsAddToScene flag
. If true, the layer is freshly composited into the scene. If false, previous invocations of addToScene
will have stored an EngineLayer
in Layer.engineLayer
, which refers to a retained rendering of the layer subtree. A reference to this retained layer is added to the scene via SceneBuilder.addRetained
. Once the Scene
is built, it is uploaded to the engine via Window.render
.
The framework interacts with the engine primarily through the Window class, which exposes hooks into and out of the Dart engine.
The framework's main flow is driven by the engine invoking framework callbacks (that is, the framework does not drive itself). Other entry points into the framework include gesture handling, platform messaging, and device messaging.
Each binding serves as the singleton root of a subsystem within the framework; in several cases, bindings are layered to augment more fundamental bindings (i.e., WidgetsBinding
adds support for building to RendererBinding
). All direct framework/engine interaction is managed via the bindings, with the sole exception of the RenderView
, which uploads frames to the engine.
GestureBinding
facilitates gesture handling across the framework, maintaining the gesture arena and pointer routing table.
Handles Window.onPointerDataPacket
.
ServicesBinding facilitates message passing between the framework and the platform (the host system).
Handles Window.onPlatformMessage
.
SchedulerBinding manages the various callbacks (transient, persistent, and post-frame callbacks, as well as non-rendering tasks), tracks the lifecycle state, and tracks the current scheduler phase. It is also responsible for explicitly scheduling frames when the UI needs to be updated.
Handles Window.onDrawFrame
, Window.onBeginFrame
.
Invokes Window.scheduleFrame
.
PaintingBinding
owns the image cache which manages memory allocated to graphical assets used by the application. It also performs shader warm up to avoid stuttering during drawing (via ShaderWarmUp.execute
in PaintingBinding.initInstances
). This ensures that the corresponding shaders are compiled at a predictable time.
SemanticsBinding
which is intended to manage the semantics and accessibility subsystems (at the moment, this binding mainly tracks accessibility changes emitted by the engine via Window.onAccessibilityFeaturesChanged
).
RendererBinding
implements the rendering pipeline. Additionally, it retains the root of the render tree (i.e., the RenderView
) as well as the PipelineOwner
, an instance that tracks when layout, painting, and compositing need to be re-processed (i.e., have become dirty). The RendererBinding
also responds to events that may affect the application's rendering (including semantic state, though this will eventually be moved to the SemanticsBinding
).
Handles Window.onSemanticsAction
, Window.onTextScaleFactorChanged
, Window.onMetricsChanged
, Window.onSemanticsEnabledChanged
.
Invokes Window.render
via RenderView
.
WidgetsBinding
augments the renderer binding with support for widget building (i.e., configuring the render tree based on immutable UI descriptions). It also retains the BuildOwner
, an instance that facilitates rebuilding the render tree when configuration changes (e.g., a new widget is substituted). The WidgetsBinding
also responds to events that might require rebuilding related to accessibility and locale changes (though these may be moved to the SemanticsBinding
in the future).
Handles Window.onAccessibilityFeaturesChanged
, Window.onLocaleChanged
.
TestWidgetsFlutterBinding
supports the widget testing framework.
Element.inflateWidget checks for a global key before inflating a new widget. If an element with a matching global key is found, that element (and its subtree) is reused rather than inflated from scratch.
Global key registrations are removed when the corresponding element is unmounted (via Element.unmount).
The element tree is anchored in the WidgetsBinding
and established via runApp
/ RenderObjectToWidgetAdapter
.
Widget
instances are immutable representations of UI configuration data that are āinflatedā into Element
instances (via Element.inflateWidget
). Elements therefore serve as widgets' mutable counterparts and are responsible for modeling the relationship between widgets (e.g., the widget tree), storing state and inherited relationships, and participating in the build process, etc.
All elements are associated with a BuildOwner
singleton. This instance is responsible for tracking dirty elements and, during WidgetsBinding.drawFrame
, re-building the element tree as needed. This process triggers several lifecycle events (e.g., initState
, didChangeDependencies
, didUpdateWidget
).
Elements are assembled into a tree (via Element.mount
and Element.unmount
). Whereas these operations are permanent, elements may also be temporarily removed and restored (via Element.deactivate
and Element.activate
, respectively).
Elements transition through several lifecycle states (_ElementLifecycle
) in response to the following methods: Element.mount
(initial
to active
, Element.activate
is only called when reactivating), Element.deactivate
(active
to inactive
, can be reactivated via Element.activate)
, then finally Element.unmount
(inactive
to defunct
).
Note that deactivating or unmounting an element is a recursive process, generally facilitated by the build owner's inactive elements list (BuildOwner._inactiveElements
). All descendant elements are affected (via _InactiveElements._deactivateRecursively
and _InactiveElements._unmount
).
Elements are attached (i.e., mounted) to the element tree when they're first created. They may then be updated (via Element.update
) multiple times as they become dirty (e.g., due to widget changes or notifications). An element may also be deactivated; this removes any associated render objects from the render tree and adds the element to the build owner's list of inactive nodes. This list (_InactiveElements
) automatically deactivates all nodes in the affected subtree and clears all dependencies (e.g., from InheritedElement
).
Parents are generally responsible for deactivating their children (via Element.deactivateChild
). Deactivation temporarily removes the element (and any associated render objects) from the element tree; unmounting makes this change permanent.
An element may be reactivated within the same frame (e.g., due to tree grafting), otherwise the element will be permanently unmounted by the build owner (via BuildOwner.finalizeTree
which calls Element.unmount
).
If the element is reactivated, the subtree will be restored and marked dirty, causing it to be rebuilt (re-adopting any render objects which were previously dropped).
Element.updateChild
is used to update a child element when its configuration (i.e., widget) changes. If the new widget isn't compatible with the old one (e.g., doesn't exist, has a different type, or has a different key), a fresh element is inflated (via Element.inflateWidget
). Once an element is retrieved or inflated, the new configuration is applied via Element.update
; this might alter an associated render object, notify dependents of a state change, or mutate the element itself.
When an element is re-inflated, it has no access to any existing children; that is, children associated with the old element aren't passed to the new element. Thus, all descendants need to be re-inflated, too (there are no old elements to synchronize).
Global keys are one exception: any children associated with a global key can be restored without being re-inflated.
Elements are mainly broken down into RenderObjectElement
and ComponentElement
. RenderObjectElements
are responsible for configuring render objects and keeping the render object tree and widget tree in sync. ComponentElements
don't directly manage render objects but instead produce intermediate nodes via mechanisms like Widget.build
. Both processes are driven by Element.performRebuild
which is itself triggered by BuildOwner.buildScope
. The latter is run as part of the build process every time the engine requests a frame.
ProxyElement
forms a third category of elements that wrap a subtree of elements (and are configured by ProxyWidget
). These generally augment the subtree in some way (e.g., InheritedElement
injects heritable state). Proxy elements use notifications to inform subscribers when their configuration changes (ProxyElement.update
invokes ProxyElement.updated
which, by default, calls ProxyElement.notifyClients
). Subclasses manage subscribers in an implementation-specific way.
ParentDataElement
updates the parent data of all closest descendant render objects (via ParentDataElement._applyParentData
, which is called by ParentDataElement.notifyClients
).
InheritedElement
notifies a set of dependents whenever its configuration is changed (i.e., when InheritedElement.update
is invoked). InheritedElement._dependents
is implemented as a mapping since each dependent can provide an arbitrary object to use when determining whether an update is applicable. Dependents are notified by invoking Element.didChangeDependencies
.
What is a RenderObjectElement?
Render object elements are responsible for managing an associated render object. RenderObjectElement.update
applies updates to this render object to match a new configuration (i.e., widget).
The render object is created (via RenderObjectWidget.createRenderObject
) when its element is first mounted. The render object is retained throughout the life of the element, even when the element is deactivated (and the render object is detached).
A new render object is created if an element is inflated and mounted (e.g., because a new widget couldn't update the old one); at this point, the old render object is destroyed. A slot token is used during this process so the render object can attach and detach itself from the render tree (which can vary from the element tree).
The render object is attached to the render tree when its element is first mounted (via RenderObjectElement.attachRenderObject
). If the element is later deactivated (due to tree grafting), it will be re-attached when the graft is completed (via RenderObjectElement.inflateWidget
, which includes special logic for handling grafting by global key).
The render object is updated (via RenderObjectWidget.updateRenderObject
) when its element is updated (via Element.update
) or rebuilt (via Element.rebuild
).
The render object is detached from its parent (via RenderObjectElement.detachRenderObject
) when the element is deactivated. This is generally managed by the parent (via Element.deactivateChild
) and occurs when children are explicitly removed or reparented due to tree grafting. Deactivating a child calls Element.detachRenderObject
which recursively processes descendants until reaching the nearest render object element boundary. RenderObjectElement
overrides this method to detach its render object, cutting off the recursive walk.
Render objects may have children. However, there may be several intermediate nodes (i.e., component elements) between its RenderObjectElement
and the elements associated with its children. That is, the element tree typically has many more nodes than the render tree.
Slot tokens are passed down the element tree so that these RenderObjectElement
nodes can interact with their render object's parent (via RenderObjectElement.insertChildRenderObject
, RenderObjectElement.moveChildRenderObject
, RenderObjectElement.removeChildRenderObject
). Tokens are interpreted in an implementation-specific manner by the ancestor RenderObjectElement
to distinguish render object children.
Elements generally use their widget's children as the source of truth (e.g., MultiChildRenderObjectWidget.children
). When the element is first mounted, each child is inflated and stored in an internal list (e.g., MultiChildRenderObjectElement._children
); this list is later used when updating the element.
Elements can be grafted from one part of the tree to another within a single frame. Such elements are "forgotten" by their parents (via RenderObjectElement.forgetChild
) so that they are excluded from iteration and updating. The old parent removes the child when the element is added to its new parent (this happens during inflation, since grafting requires that the widget tree be updated, too).
Elements are responsible for updating their children. To avoid unnecessary inflation (and potential loss of state), the new and old child lists are synchronized using a linear reconciliation scheme optimized for empty lists, matched lists, and lists with one mismatched region:
The leading elements and widgets are matched by key and updated.
The trailing elements and widgets are matched by key with updates queued (update order is significant).
A mismatched region is identified in the old and new lists.
Old elements are indexed by key.
Old elements without a key are updated with null (deleted).
The index is consulted for each new, mismatched widget.
New widgets whose keys appear in the index are updated with the matching old element (reuse).
New widgets without matches are updated with null (inflated).
Remaining elements in the index are updated with null (deleted).
LeafRenderObjectElement
, SingleChildRenderObjectElement
, and MultiChildRenderObjectElement
provide support for common use cases and correspond to the similarly named widget helpers (LeafRenderObjectWidget
, SingleChildRenderObjectWidget
, MultiChildRenderObjectWidget
)
The multi-child and single-child variants pair with ContainerRenderObjectMixin
and RenderObjectWithChildMixin
in the render tree.
These use the previous child (or null) as the slot identifier; this is convenient since ContainerRenderObjectMixin
manages children using a linked list.
What is a ComponentElement?
ComponentElement
composes other elements. Rather than managing a render object itself, it produces descendant elements that manage their own render objects through building.
Building is an alternative to storing a static list of children. Components build a single child dynamically whenever they become dirty.
This process is driven by Element.rebuild
which is invoked by the build owner when an element is marked dirty (via BuildOwner.scheduleBuildFor
). Component elements also rebuild when they're first mounted (via ComponentElement._firstBuild
) and when their widget changes (via ComponentElement.update
). For StatefulElement
, a rebuild may be scheduled spontaneously via State.setState
. In all cases, lifecycle methods are invoked in response to changes to the element tree (for example, StatefulElement.update
will invoke State.didUpdateWidget
).
The actual implementation is supplied by Element.performRebuild
. Component elements override Element.performRebuild
to invoke ComponentElement.build
whereas render object elements update their render object via RenderObjectWidget.updateRenderObject
.
ComponentElement.build
provides a hook for producing intermediate nodes in the element tree. StatelessElement.build
invokes the widget's build method, whereas StatefulElement.build
invokes the state's build method. ProxyElement
simply returns its widget's child.
Note that if a component element rebuilds, the child element and the newly built widget will still be synchronized (via Element.updateChild
). If the widget is compatible with the existing element, it'll be updated instead of re-inflated. This allows existing render objects to be mutated instead of being recreated. Depending on the mutation, this might involve any combination of layout, painting, and compositing.
Reassembly (e.g., Element.reassemble
) marks the element as being dirty; most subclasses do not override this behavior. This causes the element tree to be rebuilt during the next frame. Render object elements update their render objects in response to Element.performRebuild
and therefore also benefit from hot reload.
Only widgets associated with ComponentElement
(e.g., StatelessWidget
, StatefulWidget
, ProxyWidget
) participate in the build process; RenderObjectWidget
subclasses, generally associated with RenderObjectElements
, do not; these simply update their render object when building. ComponentElement
instances only have a single child, typically that returned by their widget's build method (ProxyElement
returns the child attached to its widget).
When the element tree is first anchored to the render tree (via RenderObjectToWidgetAdapter.attachToRenderTree
), the RenderObjectToWidgetElement
(a RootRenderObjectElement
) assigns a BuildOwner
for the element tree. The BuildOwner
is responsible for tracking dirty elements (BuildOwner.scheduleBuildFor
), establishing build scopes wherein elements can be rebuilt / descendant elements can be marked dirty (BuildOwner.buildScope
/ BuildOwner.scheduleBuildFor
), and unmounting inactive elements at the end of a frame (BuildOwner.finalizeTree
). It also maintains a reference to the root FocusManager
and triggers reassembly after a hot reload.
When a ComponentElement
is mounted (e.g., after being inflated), an initial build is performed immediately (via ComponentElement._firstBuild
, which calls ComponentElement.rebuild
).
Later, elements can be marked dirty using Element.markNeedsBuild
. This is invoked any time the UI might need to be updated implicitly (or explicitly, in response to State.setState
). This method adds the element to the dirty list and, via BuildOwner.onBuildScheduled
, schedules a frame via SchedulerBinding.ensureVisualUpdate
. The actual build will take place when the next frame is processed.
Some operations trigger a rebuild directly (i.e., without marking the tree dirty first). These include ProxyElement.update
, StatelessElement.update
, StatefulElement.update
, and ComponentElement.mount
. In these cases, the intention is to update the element tree immediately.
Other operations schedule a build to occur during the next frame. These include State.setState
, Element.reassemble
, Element.didChangeDependencies
, StatefulElement.activate
, etc.
Proxy elements use notifications to indicate when underlying data has changed. In the case of InheritedElement
, each dependent's Element.didChangeDependencies
is invoked which, by default, marks that element as being dirty. This causes the descendant to rebuild when any of its dependencies change.
Once per frame, BuildOwner.buildScope
will walk the element tree in depth-first order, only considering those nodes that have been marked dirty. By locking the tree and iterating in depth first order, any nodes that become dirty while rebuilding must necessarily be lower in the tree; this is because building is a unidirectional process -- a child cannot mark its parent as being dirty. Thus, it is not possible for build cycles to be introduced and it is not possible for elements that have been marked clean to become dirty again.
As the build progresses, ComponentElement.performRebuild
delegates to the ComponentElement.build
method to produce a new child widget for each dirty element. Next, Element.updateChild
is invoked to efficiently reuse or recreate an element for the child. Crucially, if the child's widget hasn't changed, the build is immediately cut off. Note that if the child widget did change and Element.update
is needed, that child will itself be marked dirty, and the build will continue down the tree.
Each Element
maintains a map of all InheritedElement
ancestors at its location. Thus, accessing dependencies from the build process is a constant time operation.
If Element.updateChild
invokes Element.deactivateChild
because a child is removed or moved to another part of the tree, BuildOwner.finalizeTree
will unmount the element if it isn't reintegrated by the end of the frame.
InheritedElement
provides an efficient mechanism for publishing heritable state to a subset of the element tree. This mechanism depends on support provided by Element
itself.
All elements maintain a set of dependencies (Element._dependencies
, e.g., elements higher in the tree that fill a dependency) and a mapping of all InheritedElement
instances between this element and the root (Element._inheritedWidgets
). The dependencies set is mainly tracked for debugging purposes.
The map of inherited elements serves as an optimization to avoid repeatedly walking the tree. Each dependency is uniquely identified by its instantiated type; multiple dependencies sharing a type shadow one another (in this case, shadowed dependencies may still be retrieved by walking the tree).
This mapping is maintained by Element._updateInheritance
. By default, elements copy the mapping from their parents. However, InheritedElement
instances override this method to insert themselves into the mapping (the mapping is always copied so that different branches of the tree are independent).
This mapping is built on the fly (via Element._updateInheritance
) when elements are first mounted (via Element.mount
) or are reactivated (via Element.activate
). The mapping is cleared when elements are deactivated (via Element.deactivate
); the element is removed from each of its dependency's dependent lists (InheritedElement._dependents
). As a result, it's usually not necessary to manually walk an element's ancestors.
Inherited relationships are established via Element.dependOnInheritedElement
(Element.inheritFromElement
is a simple wrapper). In general, the inherited ancestor should be available in Element._inheritedWidgets
. This process causes the inherited element to add the dependent element to its list of dependencies (via InheritedElement.updateDependencies
).
When an element is reactivating (e.g., after grafting), it is notified of dependency changes if it had existing or unsatisfied dependencies (e.g., a dependency was added but a corresponding InheritedElement
wasn't found in Element._inheritedWidgets
).
Elements are notified when their dependencies change via Element.didChangeDependencies
. By default, this method marks the element as being dirty.
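A minimal sketch of this mechanism follows (the Counter widget is hypothetical): calling dependOnInheritedWidgetOfExactType looks up the ancestor in Element._inheritedWidgets and registers the caller in that InheritedElement's dependents map.

```dart
import 'package:flutter/widgets.dart';

class Counter extends InheritedWidget {
  const Counter({super.key, required this.value, required super.child});

  final int value;

  // Registers the calling element as a dependent of the nearest Counter.
  static Counter? maybeOf(BuildContext context) =>
      context.dependOnInheritedWidgetOfExactType<Counter>();

  // When this returns true after an update, each dependent's
  // didChangeDependencies is invoked, marking it dirty.
  @override
  bool updateShouldNotify(Counter oldWidget) => value != oldWidget.value;
}

class CounterLabel extends StatelessWidget {
  const CounterLabel({super.key});

  @override
  Widget build(BuildContext context) {
    final int value = Counter.maybeOf(context)?.value ?? 0;
    return Text('count: $value');
  }
}
```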
Widgets provide an immutable description of the user interface. Though widgets themselves are immutable, they may be freely replaced, removed, or rearranged (note that updating a widget's child typically requires the parent widget to be replaced, too). Creating and destroying widgets is efficient since widgets are lightweight, immutable instances that are, ideally, compile-time constants.
The immutable widget tree is used to create and configure (i.e., inflate) a mutable element tree which manages a separate render tree; this final tree is responsible for layout, painting, gestures, and compositing. The element tree is efficiently synchronized with widget changes, reusing and mutating elements where possible (that is, though a widget may be replaced with a different instance, provided the two instances have the same runtime type and key, the original element will be updated and not recreated). Modifying the element tree typically updates the render tree which, in turn, changes what appears on the screen.
The main widget types are RenderObjectWidget
, StatefulWidget
, and StatelessWidget
. Widgets that export data to one or more descendant widgets (via notifications or another mechanism) utilize ProxyWidget
or one of its subclasses (e.g., InheritedWidget
or ParentDataWidget
).
In general, widgets either directly or indirectly configure render objects by modifying the element tree. Most widgets created by application developers (via StatefulWidget
and StatelessWidget
) delegate to a constellation of descendant widgets, typically via a build method (e.g., StatelessWidget.build
). Others (e.g., RenderObjectWidget
) manage a render object directly (creating it and updating it via RenderObjectWidget.createRenderObject
and RenderObjectWidget.updateRenderObject
, respectively).
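A hedged sketch of this pairing (the class names are invented): the widget creates the render object once and thereafter copies its fields onto the existing render object whenever the widget is rebuilt.

```dart
import 'package:flutter/rendering.dart';
import 'package:flutter/widgets.dart';

class SimpleColoredBox extends SingleChildRenderObjectWidget {
  const SimpleColoredBox({super.key, required this.color, super.child});

  final Color color;

  @override
  RenderSimpleColoredBox createRenderObject(BuildContext context) =>
      RenderSimpleColoredBox(color: color);

  @override
  void updateRenderObject(
      BuildContext context, RenderSimpleColoredBox renderObject) {
    renderObject.color = color; // mutate the existing render object
  }
}

class RenderSimpleColoredBox extends RenderProxyBox {
  RenderSimpleColoredBox({required Color color}) : _color = color;

  Color _color;
  Color get color => _color;
  set color(Color value) {
    if (value == _color) return;
    _color = value;
    markNeedsPaint(); // a color change only affects painting
  }

  @override
  void paint(PaintingContext context, Offset offset) {
    context.canvas.drawRect(offset & size, Paint()..color = color);
    super.paint(context, offset); // then paint the child, if any
  }
}
```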
Certain widgets wrap an explicit child widget via ProxyWidget, introducing heritable state (e.g., InheritedWidget, InheritedModel) or configuring auxiliary data (e.g., ParentDataWidget).
ProxyWidget
notifies clients (via ProxyElement.notifyClients
) in response to widget changes (via ProxyElement.updated
, called by ProxyElement.update
).
ParentDataWidget
updates the nearest descendant render objects' parent data (via ParentDataElement._applyParentData
, which calls RenderObjectElement._updateParentData
); this process is triggered any time the corresponding widget is updated.
There are also bespoke widget subclasses that support less common types of configuration. For instance, PreferredSizeWidget
extends Widget
to capture a preferred size allowing subclasses (e.g., AppBar
, TabBar
, PreferredSize
) to express sizing information to their containers (e.g., Scaffold
).
LeafRenderObjectWidget
, SingleChildRenderObjectWidget
, and MultiChildRenderObjectWidget
provide storage for render object widgets with zero or more children without constraining how the underlying render object is created or updated. These widgets correspond to LeafRenderObjectElement
, SingleChildRenderObjectElement
, and MultiChildRenderObjectElement
, respectively, which manage the underlying child model in the element and render trees.
Anonymous widgets can be created using Builder
and StatefulBuilder
.
StatelessWidget
is a trivial subclass of Widget
that defines a StatelessWidget.build
method and configures a StatelessElement
.
StatelessElement
is a ComponentElement
subclass that invokes StatelessWidget.build
in response to StatelessElement.build
(e.g., delegates building to its widget).
StatefulWidget
is associated with StatefulElement
, a ComponentElement
that is almost identical to StatelessElement
. The key difference is that the StatefulElement
retains a reference to the State
of the corresponding StatefulWidget
, invoking methods on that instance rather than the widget itself. For instance, when StatefulElement.update
is invoked, the State
instance is notified via State.didUpdateWidget
.
StatefulElement
creates the associated State
instance when it is constructed (i.e., in StatefulWidget.createElement
). Then, when the StatefulElement
is built for the first time (via StatefulElement._firstBuild
, called by StatefulElement.mount
), State.initState
is invoked. Crucially, the State
instance and the StatefulWidget
reference the same element.
Since State
is associated with the underlying StatefulElement
, if the widget changes, provided that StatefulElement.updateChild
is able to reuse the same element (because the widget's runtime type and key both match), State
will be preserved. Otherwise, the State
will be recreated.
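A minimal sketch showing the pieces named above: setState marks the element dirty, and didUpdateWidget is invoked by StatefulElement.update when a compatible widget replaces the old one (so _count survives because the element and its State are reused).

```dart
import 'package:flutter/widgets.dart';

class CounterWidget extends StatefulWidget {
  const CounterWidget({super.key, this.step = 1});

  final int step;

  @override
  State<CounterWidget> createState() => _CounterWidgetState();
}

class _CounterWidgetState extends State<CounterWidget> {
  int _count = 0; // survives as long as the element (and State) is reused

  @override
  void didUpdateWidget(CounterWidget oldWidget) {
    super.didUpdateWidget(oldWidget);
    // Called when a new CounterWidget with the same runtime type and key is
    // swapped in; compare widget.step with oldWidget.step here if needed.
  }

  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      // setState marks the element dirty; the build owner rebuilds it during
      // the next frame.
      onTap: () => setState(() => _count += widget.step),
      child: Text('count: $_count'),
    );
  }
}
```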
Flutter doesn't have the ability to compare trees. That is, only an element's immediate children are considered when matching widgets and elements (via RenderObjectElement.updateChildren
).
When increasing the tree depth (i.e., inserting an intermediate node), the existing parent will be configured with a child corresponding to the intermediate widget. In most cases, this widget will not correspond to a previous child (i.e., Widget.canUpdate
will return false). Thus, the new element will be freshly inflated. Since the intermediate node is the new owner of its parent's children, each of those children will also be inflated (the intermediate node doesn't have access to the existing elements). This will proceed down the entire subtree.
When decreasing the tree depth, the parent will once again be assigned new children which likely won't sync with old children. Thus, the new children will need to be inflated, cascading down the entire subtree.
Adding a GlobalKey
to the previous child can mitigate this issue since Element.updateChild
is able to reuse elements that are stored in the GlobalKey
registry (allowing that subtree to simply be reinserted instead of rebuilt).
Notification support is not built directly into the widget abstraction, but layered on top of it.
Notification
is an abstract class that searches up the element tree, visiting each widget subclass of NotificationListener
(Notification.dispatch
calls Notification.visitAncestor
, which performs this walk).
The notification invokes NotificationListener._dispatch
on each suitable widget, comparing the notification's static type with the callback's type parameter. If there's a match (i.e., the notification is a subtype of the callback's type parameter), the listener is invoked.
If the listener returns true, the walk terminates. Otherwise, the notification continues to bubble up the tree.
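A minimal sketch of the listening side: the callback's type parameter determines which notifications it receives, and its return value controls whether the notification keeps bubbling.

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/widgets.dart';

Widget buildScrollLogger() {
  return NotificationListener<ScrollNotification>(
    onNotification: (ScrollNotification notification) {
      debugPrint('scrolled to ${notification.metrics.pixels}');
      return false; // false: keep bubbling; true: stop the walk here
    },
    child: ListView.builder(
      itemCount: 100,
      itemBuilder: (BuildContext context, int index) => Text('item $index'),
    ),
  );
}
```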
RenderBox
models a two dimensional box with a width, height, and position (RenderBox.size.width
, RenderBox.size.height
, RenderBox.parentData.offset
). The box's top-left corner defines its origin, with the bottom-right corner corresponding to (width, height)
.
BoxParentData
stores the child's offset in the parentās coordinate space (BoxParentData.offset
). By convention, this data may not be accessed by the child.
BoxConstraints
describes immutable constraints expressed as a maximum and minimum width and height ranging from zero to infinity, inclusive. Constraints are satisfied by concrete sizes that fall within this range.
Box constraints are classified in several ways:
BoxConstraints.isNormalized
: minimum is greater than or equal to zero and less than or equal to the maximum in both dimensions.
BoxConstraints.tight, BoxConstraints.isTight
: minimum and maximum values are equal in both dimensions.
BoxConstraints.loose
: minimum value is zero, even if maximum is also zero (i.e., loose and tight).
BoxConstraints.hasBoundedWidth
, BoxConstraints.hasBoundedHeight
: the corresponding maximum is not infinite.
Constraints are unbounded when the maximum is infinite.
BoxConstraints.expanding
: both maximum and minimum values are infinite in the same dimension (i.e., tightly infinite).
Expanding constraints imply that the corresponding dimension will be determined by other incoming constraints (e.g., established by containing UI). Dimensions must ultimately be finite.
BoxConstraints.hasInfiniteWidth
, BoxConstraints.hasInfiniteHeight
: the corresponding minimum is infinite (thus, the maximum must also be infinite; this is the same as expanding).
Box constraints can be evaluated (BoxConstraints.isSatisfiedBy
), applied to a Size
(BoxConstraints.constrain
), tightened relative to constraints (BoxConstraints.tighten
), loosened by setting minimums to zero (BoxConstraints.loosen
), and intersected (BoxConstraints.enforce
). Constraints can also be scaled using the standard algebraic operators.
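A small sketch exercising these classifications and operations (printed values assume the constructor defaults of zero minimums and infinite maximums):

```dart
import 'package:flutter/rendering.dart';

void main() {
  const BoxConstraints constraints = BoxConstraints(maxWidth: 300);
  print(constraints.hasBoundedWidth);  // true
  print(constraints.hasBoundedHeight); // false (maxHeight defaults to infinity)
  print(constraints.constrain(const Size(500, 200))); // Size(300.0, 200.0)

  final BoxConstraints tight = BoxConstraints.tight(const Size(100, 50));
  print(tight.isTight);           // true
  print(tight.loosen().minWidth); // 0.0

  // enforce intersects with another set of constraints.
  print(constraints.enforce(const BoxConstraints(maxWidth: 200)).maxWidth); // 200.0
}
```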
BoxHitTestResult
is a HitTestResult
subclass that captures each RenderBox
(a HitTestTarget
) that reported being hit in order of decreasing precedence.
Instances include box-specific helpers intended to help transform global coordinates to local coordinates (e.g., BoxHitTestResult.addWithPaintOffset
, BoxHitTestResult.addWithPaintTransform
).
BoxHitTestEntry
represents a box that was hit during hit testing. It captures the position of the collision in local coordinates (BoxHitTestEntry.localPosition
).
ContainerBoxParentData
extends BoxParentData
with ContainerParentDataMixin
. This combines a child offset (BoxParentData.offset
) with next and previous pointers (ContainerParentData.previousSibling
, ContainerParentData.nextSibling
) to describe a doubly linked list of children.
RenderObjectWithChildMixin
and ContainerRenderObjectMixin
can both be used with a type argument of RenderBox
. The ContainerRenderObjectMixin
accepts a parent data type argument; ContainerBoxParentData
is compatible and adds support for box children.
RenderBoxContainerDefaultsMixin
adds useful defaults to ContainerRenderObjectMixin
for render boxes with children. Type constraints require that children extend RenderBox
and parent data extends ContainerBoxParentData
.
This mixin provides support for hit testing (RenderBoxContainerDefaultsMixin.defaultHitTestChildren
), painting (RenderBoxContainerDefaultsMixin.defaultPaint
), and listing (RenderBoxContainerDefaultsMixin.getChildrenAsList
) children.
RenderProxyBox
delegates all methods to a single RenderBox
child, adopting the child's size as its own size and positioning the child at its origin. Proxies are convenient for writing subclasses that selectively override some, but not all, of a box's behavior. This box's implementation is provided by RenderProxyBoxMixin
, which provides an alternative to direct inheritance. Proxy boxes use ParentData
rather than BoxParentData
since the child's offset is never used.
There are numerous examples throughout the framework:
RenderAbsorbPointer
overrides RenderProxyBox.hitTest
to disable hit testing for a subtree (i.e., by not testing its child) when RenderAbsorbPointer.absorbing
is enabled.
RenderAnimatedOpacity
listens to an animation (RenderAnimatedOpacity.opacity
) to render its child with varying opacity (via RenderAnimatedOpacity.paint
). This box manages its listener by overriding RenderProxyBox.attach
and RenderProxyBox.detach
). It selectively inserts an OpacityLayer
(i.e., for translucent values only), and manages RenderBox.alwaysNeedsCompositing
to reflect whether a layer will be added (via RenderAnimatedOpacityMixin._updateOpacity
, called in response to the animation).
RenderIntrinsicWidth
overrides RenderProxyBox.performLayout
to adopt a size that corresponds to its child's intrinsic width, subject to the incoming constraints. It also overrides the intrinsic sizing methods since it provides size snapping (RenderIntrinsicWidth.stepWidth
).
RenderShiftedBox
delegates all methods to a single RenderBox
child, but leaves layout undefined. Subclasses override RenderShiftedBox.performLayout
to assign a position to the child (via BoxParentData.offset
). Otherwise, this subclass is analogous to RenderProxyBox
.
RenderPadding
overrides RenderShiftedBox.performLayout
to deflate the incoming constraints by the resolved padding amount. The child is laid out using the new, padded constraints and positioned within the padded region.
RenderBaseline
overrides RenderShiftedBox.performLayout
to align its child's baseline (via RenderBox.getDistanceToBaseline
) with an offset from its top edge (RenderBaseline.baseline
). It then sizes itself such that its bottom is coincident with the child's baseline (this can potentially truncate the top of the child).
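A hedged sketch of a RenderShiftedBox subclass (the class name is invented, and it assumes bounded incoming constraints) showing how performLayout positions its child by writing BoxParentData.offset:

```dart
import 'package:flutter/rendering.dart';

class RenderSimpleCenter extends RenderShiftedBox {
  RenderSimpleCenter({RenderBox? child}) : super(child);

  @override
  void performLayout() {
    if (child == null) {
      size = constraints.smallest;
      return;
    }
    // Lay the child out with loosened constraints, then take the biggest size
    // the incoming constraints allow (assumes they are bounded).
    child!.layout(constraints.loosen(), parentUsesSize: true);
    size = constraints.biggest;
    // The parent positions the child via its BoxParentData.offset.
    final BoxParentData childParentData = child!.parentData! as BoxParentData;
    childParentData.offset = Offset(
      (size.width - child!.size.width) / 2,
      (size.height - child!.size.height) / 2,
    );
  }
}
```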
Boxes use the standard RenderObject
layout protocol to map from incoming BoxConstraints
to a concrete Size
(stored in RenderBox.size
). Layout also determines the child's offset relative to the parent (RenderBox.parentData.offset
). This information should not be read by the child during layout.
Boxes add support for intrinsic dimensions (an ideal size, computed outside of the standard layout protocol) as well as baselines (a line to use for vertical alignment, typically used when laying out text). RenderBox
instances track changes to these values and whether the parent has queried them; if so, when that box is marked as needing layout, the parent is marked as well.
Intrinsic dimensions are cached (RenderBox._cachedIntrinsicDimensions
) whenever they're computed (via RenderBox._computeIntrinsicDimension
). The cache is disabled during debugging.
Baselines are cached (RenderBox._cachedBaselines
) whenever they're computed (via RenderBox.getDistanceToActualBaseline
). Different baselines are computed for alphabetic and ideographic text.
RenderBox.markNeedsLayout
marks the parent as needing layout if the box's intrinsic dimension or baseline caches have been modified (i.e., this implies that the parent has accessed the box's "out-of-band" geometry). If so, both are cleared so that new values are computed after the next layout pass.
By default, boxes that are sized by their parents adopt the smallest size permitted by incoming constraints (via RenderBox.performResize
).
Boxes can have non-box children. In this case, the constraints provided to children will need to be adapted from BoxConstraints
to the appropriate type.
Boxes use the standard RenderObject
painting protocol to paint themselves to a provided canvas. The canvas's origin isn't necessarily coincident with the box's origin; the offset provided to RenderBox.paint
describes where the box's origin falls on the canvas. The canvas and the box will always be axis-aligned.
RenderBox.paintBounds
describes the region that will be painted by a box. This determines the size of the buffer used for painting and is expressed in local coordinates. It need not match RenderBox.size
.
If the render box applies a transform when painting (e.g., painting at a different offset than the one provided), RenderBox.applyPaintTransform
must apply the same transformation to the provided matrix.
RenderBox.globalToLocal
and RenderBox.localToGlobal
rely on this transformation to map from global coordinates to box coordinates and vice versa.
By default, RenderBox.applyPaintTransform
applies the childās offset (via child.parentData.offset
) as a translation.
Boxes support hit testing, a subscription-like mechanism for delegating and processing events. The box protocol implements a custom flow rather than extending the HitTestable
interface (though RendererBinding
, the hit testing entry point, does implement this interface).
RendererBinding.hitTest
is invoked using mixin chaining (via GestureBinding._handlePointerEvent
). The binding delegates to RenderView.hitTest
which tests its child (via RenderBox.hitTest
).
RenderBox.hitTest
determines whether an offset (in local coordinates) falls within its bounds. If so, each child is tested in sequence (via RenderBox.hitTestChildren
) before the box tests itself (via RenderBox.hitTestSelf
). By default, both methods return false
(i.e., boxes do not handle events or forward events to their children).
Boxes subscribe to the related event stream (i.e., receive RenderBox.handleEvent
calls for events related to the interaction) by adding themselves to BoxHitTestResult
. Boxes added earlier take precedence.
All boxes in the BoxHitTestResult
are notified of events (via RenderBox.handleEvent
) in the order that they were added.
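A minimal sketch of a box that opts into this flow (the class name is invented): returning true from hitTestSelf adds the box to the BoxHitTestResult, after which it receives handleEvent calls for the rest of the interaction.

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/gestures.dart';
import 'package:flutter/rendering.dart';

class RenderTapLogger extends RenderProxyBox {
  @override
  bool hitTestSelf(Offset position) => true; // subscribe this box to events

  @override
  void handleEvent(PointerEvent event, BoxHitTestEntry entry) {
    if (event is PointerDownEvent) {
      debugPrint('pointer down at ${entry.localPosition}');
    }
  }
}
```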
Conceptually, the intrinsic dimensions of a box are its natural dimensions (i.e., the size it "wants" to be). The precise definition depends on the box's implementation and semantics (i.e., what the box represents).
Intrinsic dimensions are often defined in terms of the intrinsic dimensions of children and are therefore expensive to calculate (typically traversing an entire subtree).
Intrinsic dimensions often differ from the dimensions produced by layout (except when using the IntrinsicHeight
and IntrinsicWidth
widgets, which attempt to lay out a child using its intrinsic dimensions).
Intrinsic dimensions are generally ignored unless one of these widgets is used (or another widget that explicitly incorporates intrinsic dimensions into its own layout, e.g., RenderTable
).
The box model describes intrinsic dimensions in terms of minimum and maximum values for width and height (via RenderBox.computeMinIntrinsicHeight
, RenderBox.computeMaxIntrinsicHeight
, etc.). Both receive a value for the opposite dimension (if infinite, the other dimension is unconstrained); this is useful for boxes that define one intrinsic dimension in terms of the other (e.g., text).
Minimum intrinsic width is the smallest width before the box cannot paint correctly without clipping.
Intuition: making the box thinner would clip its contents.
If width is determined by height according to the box's semantics, the incoming height (which may be infinite, i.e., unconstrained) should be used. Otherwise, ignore the height.
Minimum intrinsic height is the same concept for height.
Maximum intrinsic width is the smallest width such that further expansion would not reduce minimum intrinsic height (for that width).
Intuition: making the box wider won't help fit more content.
If width is determined by height according to the box's semantics, the incoming height (which may be infinite, i.e., unconstrained) should be used. Otherwise, ignore the height.
Maximum intrinsic height is the same concept for height.
The specific meaning of intrinsic dimensions depends on the box's semantics.
Text is width-in-height-out.
Maximum intrinsic width: the width of the string without line breaks (increasing the width would not shrink the preferred height).
Minimum intrinsic width: the width of the widest word (decreasing the width would clip the word or cause an invalid break).
Intrinsic heights are computed by laying out text with the provided width.
Viewports ignore incoming constraints and aggregate child dimensions without clipping (i.e., ideally, a viewport can render all of its children without clipping).
Aspect ratio boxes use the incoming dimension to compute the queried dimension (i.e., width to determine height and vice versa). If the incoming dimension is unbounded, the child's intrinsic dimensions are used instead.
When intrinsic dimensions cannot be computed or are too expensive, return zero.
Baselines are a concept borrowed from text rendering to describe the line upon which all glyphs are placed. Portions of the glyph typically extend below this line (e.g., descenders) as it serves primarily to vertically align a sequence of glyphs in a visually pleasing way. Characters from different fonts can be visually aligned by positioning each span of text such that baselines are collinear.
Boxes that define a visual baseline can also be aligned in this way.
Boxes may specify a baseline by implementing RenderBox.computeDistanceToActualBaseline
. The returned value represents a vertical offset from the top of the box. Values are cached until the box is marked as needing layout.
Boxes typically return null (i.e., they don't define a logical baseline), a value intrinsic to what they represent (i.e., the baseline of a span of text), delegate to a single child, or use RenderBoxContainerDefaultsMixin
to produce a baseline from a set of children.
RenderBoxContainerDefaultsMixin.defaultComputeDistanceToFirstActualBaseline
returns the first valid baseline reported by the set of children, adjusted to account for the child's offset.
RenderBoxContainerDefaultsMixin.defaultComputeDistanceToHighestActualBaseline returns the minimum baseline (i.e., vertical offset) amongst all children, adjusted to account for the child's offset.
RenderBox.getDistanceToBaseline
returns the offset to the box's bottom edge (RenderBox.size.height
) if an actual baseline isn't available (i.e., RenderBox.computeDistanceToActualBaseline
returns null
).
The baseline may only be queried by a box's parent and only after the box has been laid out (typically during parent layout or painting).
TODO
How does Flutter interact with the system clipboard? Clipboard and ClipboardData.
How does Flutter interact with the host system? SystemChannels.
This book is written in outline format and is fairly terse. In order to get the most out of this book, we recommend the following learning path:
Read the official Dart language documentation.
A basic understanding of Dart is, of course, essential.
Skim the official Flutter documentation.
The official docs provide a great starting point for using Flutter as a developer.
Skim the official article.
This article will give you a general idea of what's happening behind the scenes.
Read Didier Boelens' article on Flutter internals.
Didier's article has outstanding illustrations and descriptions that will help you develop intuition about how Flutter works.
Read through this book, referencing the framework source code and API documentation.
This book will help you integrate your conceptual understanding with how the framework actually works.
Unfortunately, though methods and identifiers are frequently referenced, we haven't been able to provide deep linking to the relevant code just yet.
If you're able, clone a copy of the Flutter repository and use a tool like grep
to find the relevant sections. A source code browser with identifier linking is even better.
Good luck!
TODO: Expand.
The RendererBinding
is HitTestable
, which implies the presence of a hitTest
method. The default implementation defers to RenderView
, which itself implements hitTest
to visit the entire render tree. Each render object is given an opportunity to add itself to a shared HitTestResult
.
GestureBinding.dispatchEvent
(via HitTestDispatcher
) uses the PointerRouter
to pass the original event to all render objects that were hit (RenderObject
implements HitTestTarget
, and therefore provides handleEvent
). If a GestureRecognizer
is utilized, the event's pointer is passed via GestureRecognizer.addPointer
which registers with the PointerRouter
to receive future events.
Related events are sent to all original HitTestTarget
as well as any routes registered with the PointerRouter
. PointerDownEvent
will close the gesture arena, barring additional entries, whereas PointerUpEvent
will sweep it, resolving competing gestures and preventing indecisive gestures from locking up input.
Window.onPointerDataPacket
captures pointer updates from the engine and generates PointerEvents
from the raw data. PointerEventConverter
is utilized to map physical coordinates from the engine to logical coordinates, taking into account the device's pixel ratio (via PointerEventConverter.expand
).
A pointer is added to the gesture recognizer by client code on PointerDownEvent
. Gesture recognizers determine whether a pointer is allowed by overriding GestureRecognizer.isPointerAllowed
. If so, the recognizer subscribes to future pointer events via the PointerRouter
and adds the pointer to the gesture arena via GestureArenaManager.add
.
The recognizer will process incoming events, outcomes from the gesture arena (GestureArenaMember.accept
/rejectGesture
), spontaneous decisions about the gesture (GestureArenaEntry.resolve
), and other externalities. Typically, recognizers watch the stream of PointerEvents
via HitTestTarget.handleEvent
, looking for terminating events like PointerUpEvent
, or criteria that will cause acceptance / rejection. If the gesture is accepted, the recognizer will continue to process events to characterize the gesture, invoking user-provided callbacks at key moments.
The gesture recognizer must unsubscribe from the pointer when rejecting or done processing, removing itself from the PointerRouter
(OneSequenceGestureRecognizer.stopTrackingPointer
does this).
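A hedged sketch of the client-side handshake described above, done by hand (widgets such as GestureDetector / RawGestureDetector do the equivalent internally, and a real implementation should also dispose the recognizer when it is no longer needed):

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/gestures.dart';
import 'package:flutter/widgets.dart';

Widget buildTapTarget() {
  final TapGestureRecognizer recognizer = TapGestureRecognizer()
    ..onTap = () => debugPrint('tap accepted by the arena');

  return Listener(
    // Handing the PointerDownEvent to the recognizer registers it with the
    // PointerRouter and enters it into the gesture arena for that pointer.
    onPointerDown: recognizer.addPointer,
    behavior: HitTestBehavior.opaque,
    child: const SizedBox.expand(),
  );
}
```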
The arena disambiguates multiple gestures in a way that allows single gestures to resolve immediately if there is no competition. A recognizer "wins" if it declares itself the winner or if it's the last/sole survivor.
A GestureArenaTeam
combines multiple recognizers together into a group.
Captained teams cause the captain recognizer to win when all unaffiliated recognizers reject or a constituent accepts.
A non-captained team causes the first added recognizer to win when all unaffiliated recognizers reject. However, if a constituent accepts, that recognizer still takes the win.
There are two major categories of gesture recognizers: multi-touch recognizers (e.g., MultiTapGestureRecognizer) that simultaneously process multiple pointers (i.e., tapping with two fingers will register twice), and single-touch recognizers (e.g., OneSequenceGestureRecognizer) that will only consider events from a single pointer (i.e., tapping with two fingers will register once).
There is a helper "Drag" object that is used to communicate drag-related updates to other parts of the framework (like DragScrollActivity
).
There's a VelocityTracker
that generates fairly accurate estimates about drag velocity using curve fitting.
There are local and global PointerRoutes
in PointerRouter
. Local routes are used as described above; global routes are used to react to any interaction (i.e., to dismiss a tooltip).
OffsetBase
represents a two-dimensional (2D), axis-aligned vector. Subclasses are immutable and can be compared using the standard (overloaded) operators (+, -, and so on).
Offset is an OffsetBase subclass that can be interpreted as a point in Cartesian space or as a vector. Offsets can be manipulated algebraically using the standard operators.
An operator allows an offset to be combined with a size to construct a Rect (the offset identifies the rectangle's top-left corner). Offsets can be interpolated.
The Point class represents a 2D point in the Cartesian plane.
Size is an OffsetBase subclass representing a width and a height. Geometrically, it describes a rectangle whose top-left corner coincides with the origin. Sizes can be manipulated algebraically using the standard operators; the + operator grows the size by the provided delta (an offset). Sizes can be interpolated.
Radius describes a circular or elliptical radius. An elliptical radius has independent x and y components, whereas a circular radius uses the same value for both. As with the other geometric types, radii can be manipulated using the standard operators and interpolated.
Rect is an immutable, two-dimensional, axis-aligned, floating-point rectangle whose coordinates are relative to a given origin. A rectangle can be described in various ways (e.g., by its center, by an enclosing circle, by the positions of its left, top, right, and bottom edges) or constructed by combining an Offset and a Size. Rectangles can be inflated, deflated, combined, intersected, translated, queried, and so on. Rectangles can be compared for equality and interpolated.
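As a quick illustration of these types (a minimal sketch using dart:ui only):

```dart
import 'dart:ui';

void main() {
  const offset = Offset(10, 20);                      // top-left corner
  const size = Size(100, 50);                         // width and height
  final Rect rect = offset & size;                    // the & operator builds a Rect
  final Rect bigger = rect.inflate(8);                // grow by 8 units on each side
  final Rect? halfway = Rect.lerp(rect, bigger, 0.5); // interpolation
  print('$rect\n$bigger\n$halfway');
}
```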
RRect
augments a Rect
with four independent radii (via Radius
) corresponding to its corners. Rounded rectangles can be described in various ways (e.g., by offsets to each of its sides and one or more radii, by a bounding box fully enclosing the rounded rectangle with one or more radii, etc.). Rounded rectangles define a number of sub-rectangles: a bounding rectangle (RRect.outerRect
), an inner rectangle with left and right edges matching the base rectangle and top and bottom edges inset to coincide with the rounded corners' centers (RRect.wideMiddleRect
), a similar rectangle but with the insets applied to the left and right edges instead (RRect.tallMiddleRect
), and a rectangle that is the intersection of these two (RRect.middleRect
). A rounded rectangle is said to describe a "stadium" if it possesses a side with no straight segment (e.g., entirely drawn by the two rounded corners). Rounded rectangles can be interpolated.
Axis
represents the X- or Y-axis (horizontal or vertical, respectively) relative to a coordinate space. The coordinate space can be arbitrarily transformed and therefore need not be parallel to the screen's edges.
AxisDirection
applies directionality to an axis. The value represents the direction in which values increase along an associated axis, with the origin rooted at the opposite end (e.g., AxisDirection.down
positions the origin at the top with positive values growing downward).
GrowthDirection
is the direction of growth relative to the current axis direction (e.g., how items are ordered along the axis). GrowthDirection.forward
implies an ordering consistent with the axis direction (the first item is at the origin with subsequent items following). GrowthDirection.reverse
is exactly the opposite (the last item is at the origin with preceding items following).
Growth direction does not flip the meaning of "leading" and "trailing"; it merely determines how children are ordered along a specified axis.
For a viewport, the origin is positioned according to an axis direction (e.g., AxisDirection.down
positions the origin at the top of the screen, AxisDirection.up
positions the origin at the bottom of the screen), with the growth direction determining how children are ordered starting from the origin. As a result, both pieces of information are necessary to determine where a set of slivers should actually appear.
ScrollDirection
represents the user's scroll direction relative to the positive scroll offset direction (i.e., the direction in which positive scroll offsets increase as determined by axis direction and growth direction). Includes an idle state (ScrollDirection.idle
).
This is typically subject to the growth direction (e.g., the scroll direction is flipped when growth is reversed).
Confusingly, this refers to the direction the content is moving on screen rather than where the user is scrolling (e.g., scrolling down a web page causes the page's contents to move upward; this would be classified as ScrollDirection.reverse
since this motion is opposite the axis direction).
Color
is an immutable 32-bit value describing alpha, red, green, and blue channels. Alpha can also be specified as an opacity value from zero to one. Colors can be interpolated and converted to a luminance value.
Shadow
describes a single drop shadow via a color, an offset from the element casting the shadow, and a blur radius describing the Gaussian blur applied to the shadow.
Gradient
describes one or more smooth color transitions. Gradients can be interpolated and scaled; they can also be used to obtain a reference to a corresponding shader. Linear, radial, and sweep gradients are supported (via LinearGradient, RadialGradient, and SweepGradient, respectively). TileMode determines how a gradient is painted beyond its defined bounds. Gradients may be clamped (e.g., hold their initial and final values), repeated (e.g., restarted at their bounds), or mirrored (e.g., restarted but with initial and final values alternating).
AbstractNode
represents a node in a tree without specifying a particular child model (i.e., the tree's actual structure is left as an implementation detail). Concrete implementations must call AbstractNode.adoptChild
and AbstractNode.dropChild
whenever the child model changes.
AbstractNode.owner
references an arbitrary object shared by all nodes in a subtree.
AbstractNode.attach
assigns an owner to the node. Adopting children will attach them automatically. Used by the owner to attach the tree via its root node.
Subclasses should attach all children since the parent can change its attachment state at any time (i.e., after the child is adopted) and must keep its children in sync.
AbstractNode.detach
clears a node's owner. Dropping children will detach them automatically. Used by the owner to detach the tree via its root node.
Subclasses should detach all children since the parent can change its attachment state at any time (i.e., after the child is adopted) and must keep its children in sync.
AbstractNode.attached
indicates whether the node is attached (i.e., has an owner).
AbstractNode.parent
references the parent abstract node.
AbstractNode.adoptChild
updates a child's parent and depth. The child is attached if the parent has an owner.
AbstractNode.dropChild
clears the child's parent. The child is detached if the parent has an owner.
AbstractNode.depth
is an integer that increases with depth. All depths below a given node will be greater than that node's depth. Note that values need not match the actual depth of the node.
AbstractNode.redepthChild
updates a child's depth to be greater than its parent.
AbstractNode.redepthChildren
uses the concrete child model to call AbstractNode.redepthChild
on each child.
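A minimal sketch of a concrete child model built on this contract (the SingleChildNode name is hypothetical); note how the subclass attaches, detaches, and redepths its child as described above:

```dart
import 'package:flutter/foundation.dart';

/// Hypothetical node with a single child, illustrating the AbstractNode contract.
class SingleChildNode extends AbstractNode {
  SingleChildNode? _child;
  SingleChildNode? get child => _child;

  set child(SingleChildNode? value) {
    if (_child != null) dropChild(_child!);  // clears parent; detaches if owned
    _child = value;
    if (_child != null) adoptChild(_child!); // sets parent/depth; attaches if owned
  }

  @override
  void attach(Object owner) {
    super.attach(owner);
    _child?.attach(owner); // keep the child's attachment state in sync
  }

  @override
  void detach() {
    super.detach();
    _child?.detach();
  }

  @override
  void redepthChildren() {
    if (_child != null) redepthChild(_child!);
  }
}
```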
TODO
performAction is internal; action is external.
computeValue is internal; getValue is external.
Recursive calls should use the external variant.
adoptChild and dropChild, when called from AbstractNode, must be invoked after the child instance is able to respond to the framework.
Many widgets are composed from lower-level widgets (typically prefixed with Raw), for example GestureDetector and RawGestureDetector, Chip and RawChip, and MaterialButton and RawMaterialButton. Text and RichText are an exception to this naming.
AssetBundle
is a container that provides asynchronous access to application resources (e.g., images, strings, fonts). Resources are associated with a string-based key and can be retrieved as bytes (via AssetBundle.load
), a string (via AssetBundle.loadString
), or structured data (via AssetBundle.loadStructuredData
). A variety of subclasses support different methods for obtaining assets (e.g., PlatformAssetBundle
, NetworkAssetBundle
). Some bundles also support caching; if so, keys can be evicted from the bundleās cache (via AssetBundle.evict
).
CachingAssetBundle
caches strings and structured data throughout the applicationās lifetime (unless explicitly evicted). Binary data is not cached since the higher level methods are built atop AssetBundle.load
, and the final representation is more efficient to store.
Every application is associated with a rootBundle
. This AssetBundle
contains the resources that were packaged when the application was built (i.e., as specified by pubspec.yaml
). Though this bundle can be queried directly, DefaultAssetBundle
provides a layer of indirection so that different bundles can be substituted (e.g., for testing or localization).
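For example (the asset key below is hypothetical and would need to be declared in pubspec.yaml):

```dart
import 'package:flutter/services.dart' show rootBundle;
import 'package:flutter/widgets.dart';

// Querying the root bundle directly.
Future<String> loadConfig() => rootBundle.loadString('assets/config.json');

// Preferring DefaultAssetBundle so a different bundle can be substituted
// (e.g., in tests or for localization).
Widget buildConfig(BuildContext context) {
  return FutureBuilder<String>(
    future: DefaultAssetBundle.of(context).loadString('assets/config.json'),
    builder: (context, snapshot) => Text(snapshot.data ?? 'loading'),
  );
}
```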
NetworkAssetBundle
loads resources over the network. It does not implement caching; presumably, this is provided by the network layer. It provides a thin wrapper around Dart's HttpClient
.
PlatformAssetBundle
is a CachingAssetBundle
subclass that fetches resources from a platform-specific application directory via platform messaging (specifically, Engine
::HandleAssetPlatformMessage
).
A relayout boundary is a logical divide between nodes in the render tree. Nodes below the boundary can never invalidate the layout of nodes above the boundary.
The boundary is represented as a render object (RenderObject._relayoutBoundary
) that meets a set of criteria allowing it to layout independently of its parent. That is, the subtree rooted at a relayout boundary node can never invalidate nodes outside of that subtree.
A render object defines a relayout boundary (i.e., cannot influence its parent's layout) if its parent explicitly ignores its size (!parentUsesSize
), if it fully determines its own size from the incoming constraints (sizedByParent
), if it receives tight constraints, or if its parent isn't a render object (as is the case for RenderView
).
The last two cases imply that the child cannot change size regardless of what happens in its subtree.
The parentUsesSize
parameter, when true, prevents a render object from becoming a relayout boundary. As a consequence, when the render object is marked dirty, all ancestors up to the nearest relayout boundary will be marked dirty, including the parent.
Relayout boundaries limit how many ancestor nodes must be marked dirty when a render object needs layout. The walk can be cut off at the nearest ancestor relayout boundary since nodes beyond that point cannot be affected by descendant layout.
Relayout boundaries are ignored when actually performing layout. Render objects always layout their children. However, children are expected to cut off the walk if they haven't been marked dirty and receive identical constraints (i.e., nothing has changed so layout isn't needed).
RenderObject.layout
is responsible for applying this logic. RenderObject._layoutWithoutResize
performs layout unconditionally since PipelineOwner.flushLayout
already skips clean nodes.
PipelineOwner
maintains a list of dirty nodes (PipelineOwner._nodesNeedingLayout
) that are cleaned once per frame (via PipelineOwner.flushLayout
).
Marking render objects as needing layout allows operations to be batched (i.e., layout can be processed in a single walk rather than potentially laying out nodes multiple times).
When a render object is marked dirty (via RenderObject.markNeedsLayout
, which sets RenderObject._needsLayout
) , all ancestors up to and including the nearest relayout boundary are marked dirty (via RenderObject.markParentNeedsLayout
).
Only the nearest enclosing relayout boundary is added to PipelineOwner._nodesNeedingLayout
. Performing layout on this node will recursively layout all descendants. This also schedules the next frame (via PipelineOwner.requestVisualUpdate
) so that layout changes will be applied (via PipelineOwner.flushLayout
).
It's possible that nodes later in the dirty list will be cleaned while laying out earlier nodes; PipelineOwner.flushLayout
ignores any clean nodes.
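As an illustration of this marking flow, a hypothetical render box might invalidate its own layout whenever one of its inputs changes (the class and field names are illustrative):

```dart
import 'package:flutter/rendering.dart';

class RenderGapBox extends RenderBox {
  RenderGapBox(this._gap);

  double _gap;
  double get gap => _gap;
  set gap(double value) {
    if (value == _gap) return;
    _gap = value;
    // Dirties this node and its ancestors up to the nearest relayout
    // boundary, and requests a visual update via the pipeline owner.
    markNeedsLayout();
  }

  @override
  void performLayout() {
    size = constraints.constrain(Size(_gap, _gap));
  }
}
```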
A render object and its parent must be notified if the sizedByParent
bit changes (via RenderObject.markNeedsLayoutForSizedByParentChange
). Updating this bit can add or remove a relayout boundary from the render tree (the change is committed subsequently by RenderObject.layout
). Most render objects do not update this bit dynamically.
The parent must be marked dirty because nodes in the dirty list are laid out using RenderObject._layoutWithoutResize
, which doesn't update relayout boundaries. Thus, since the child may change status, its parent must be laid out, too.
Changing this bit also implies that the child will adopt new dimensions, potentially invalidating the parent's layout.
RenderView
provides the topmost constraints (via RenderView.performLayout
) as read from the view configuration (ViewConfiguration.size
). This represents the overall dimensions of the viewing surface in logical units.
Render objects that are marked dirty perform layout using the previously cached constraints (RenderObject.constraints
). These constraints cannot be invalid; otherwise, a node further up the tree would have been marked dirty.
The dimensions of the render object (computed by RenderObject.performResize
or RenderObject.performLayout
) may affect the constraints provided to children; constraints may also be entirely arbitrary (e.g., OverflowBox
). These constraints are forwarded to any children, which repeat the process recursively.
Constraints are cached as part of RenderObject.layout
.
PipelineOwner.flushLayout
triggers layout for all nodes in the dirty list (PipelineOwner._nodesNeedingLayout
) in order of increasing depth. The nodes in this list correspond to relayout boundaries and are laid out using RenderObject._layoutWithoutResize
instead of RenderObject.layout
.
RenderObject.performResize
is never necessary for a node in the dirty list. Consider a node that is sized by its parent:
If the parent isn't dirty, the incoming constraints won't change. Thus, resizing is unnecessary.
If the parent is dirty, since nodes are cleaned from top to bottom, it will be laid out first. This will eventually layout the original node (now potentially with resizing), effectively removing it from the dirty list.
RenderObject._layoutWithoutResize
never updates cached constraints (RenderObject.constraints
) or the relayout boundary (RenderObject._relayoutBoundary
).
Incoming constraints won't change since ancestor nodes haven't been marked dirty.
Relayout boundary status won't change since the relationship between this node and its parent hasn't changed. If it had, the parent would have been marked dirty, too (via RenderObject.markNeedsLayoutForSizedByParentChange
).
A child will perform layout (via RenderObject.layout
) if it's been marked dirty or has received new constraints, or has just become a relayout boundary. Otherwise, it will mark itself clean and cut off the walk.
Note that layout optimizations are applied in the marking phase rather than the layout phase. Relayout boundaries provide a limit on the nodes that are marked dirty when a descendant is marked dirty; they have no other direct effect.
RenderObject.layout
caches new constraints (RenderObject.constraints
) and propagates any changes to the relayout boundary field (RenderObject._relayoutBoundary
) to affected descendants (via RenderObject._cleanChildRelayoutBoundary
).
Render objects that are sized by their parent (RenderObject.sizedByParent
) adopt a new size using the incoming constraints alone (via RenderObject.performResize
). All render objects ā including those that were resized ā proceed with layout (via RenderObject.performLayout
).
This method is responsible for laying out all children (via RenderObject.layout
) and updating each child's parent data.
Children may be laid out before or after the parent depending on how the parent performs layout (e.g., a parent must layout its children first if their sizes affect its own size).
Render objects that are not sized by their parent must adopt a new size as part of RenderObject.performLayout
.
Once complete, RenderObject.layout
marks the object as needing semantics and painting, but no longer needing layout.
Certain criteria allow the layout pass to avoid laying out children:
If a child is not dirty and the constraints are unchanged, it needn't be laid out, cutting off the walk.
If a parent does not use a child's size, it needn't be laid out with the child; the child will necessarily conform to the original input constraints, and thus nothing changes for the parent.
If a parent passes tight constraints to its child, it needn't be laid out with the child; the child cannot change size.
If a child is "sizedByParent" (the provided constraints solely determine the size), and the same constraints are passed again, the parent needn't be laid out with the child; the child cannot change size.
The nearest upstream relayout boundary (RenderObject._relayoutBoundary
) is maintained as layout progresses down the tree. When layout is performed (via RenderObject.performLayout
), the render object determines whether it meets any of the optimization criteria described above. If so, it sets itself as the nearest relayout boundary. Otherwise, it adopts its parent's relayout boundary.
Updating the relayout boundary causes layout to proceed even if the object is clean and incoming constraints are unchanged.
All descendant nodes must update their understanding of the nearest relayout boundary. This is accomplished by recursively visiting to clear the relayout boundary and mark the render object as needing layout (via RenderObject._cleanRelayoutBoundary
). The walk stops at descendants that themselves demarcate a relayout boundary (i.e., RenderObject._relayoutBoundary == this
).
These nodes must be marked dirty because their layout may now be stale (i.e., assumptions that may have led to layout being skipped are no longer true). Layout is also responsible for propagating the relayout boundary down the tree of affected render objects.
RenderObject.invokeLayoutCallback
allows a builder callback to be invoked during the layout process. Only subtrees that are still dirty may be manipulated, which ensures that nodes are laid out exactly once. This allows children to be built on-demand during layout (needed for viewports) and nodes to be moved (needed for global key reparenting).
Several invariants ensure that this is valid:
Building is unidirectional -- information only flows down the tree, visiting each node once.
Layout visits each dirty node once.
Building below the current node cannot invalidate earlier layout; build information only flows down the tree.
Building after a node is laid out cannot invalidate that node; nodes are never revisited.
Therefore, it is safe to build within the unvisited subtree rooted at a given render object.
Container
provides a consistent interface to a number of more fundamental layout building blocks. Unless otherwise specified, each box is sized to match its child.
ConstrainedBox
applies an extra set of constraints in addition to the incoming constraints. The extra constraints are clamped to the incoming constraints (via BoxConstraints.enforce
) and thus may not violate them. The box becomes as small as possible without a child.
SizedBox
delegates to ConstrainedBox
, providing tight constraints for the specified dimensions. The result is that the child will be sized to these dimensions subject to the incoming constraints. The box is still sized even without a child.
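For example, because the extra constraints are clamped to the incoming constraints, a wider request inside a narrower parent has no effect (a minimal sketch; the colors and sizes are arbitrary):

```dart
import 'package:flutter/widgets.dart';

Widget constrainedExample() {
  return SizedBox(
    width: 200, // provides a tight width of 200 to its child
    child: ConstrainedBox(
      // The requested width of 300 is clamped to the incoming 200.
      constraints: const BoxConstraints.tightFor(width: 300, height: 50),
      child: const ColoredBox(color: Color(0xFF2196F3)),
    ),
  );
}
```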
UnconstrainedBox
lays out its child with unbounded constraints (in one or more dimensions). It then sizes itself to match the child subject to incoming constraints (i.e., to prevent any actual overflow). An alignment is applied if the child is a different size than its parent. If there would have been overflow, the child's painting is clipped accordingly. The box becomes as small as possible without a child.
OverflowBox
allows one or more components of the incoming constraints to be completely overwritten. The child is laid out using the altered constraints and therefore may overflow. An alignment is applied if the child is a different size than its parent. In all other ways, this box delegates to the child.
SizedOverflowBox
assumes a specific size, subject to the incoming constraints. Its child is laid out using the original constraints, however, and therefore may overflow the parent (e.g., if the incoming constraints permit a size of up to 100x100, and the parent specifies a size of 50x50, the child can select a larger size, overflowing its parent). Since this box has an explicit size, it is given intrinsic dimensions matching this size.
FractionallySizedBox
sizes its child in proportion to the incoming constraints; it sizes itself to the child, but without violating the original constraints. In particular, the maximum constraints are multiplied by a corresponding factor to obtain a tight bound (if a factor isn't specified, that dimension's constraints are left unchanged). The child is laid out using the new constraints and may therefore overflow the parent. The parent is sized to the child subject to the incoming constraints, and therefore cannot overflow; note that painting and hit testing will not be automatically clipped. Intrinsic dimensions are defined as the child's intrinsic dimension scaled by the corresponding factor.
LimitedBox
applies a maximum width or height only when the corresponding constraint is unbounded. That is, when the incoming constraints are forwarded to the child, the maximum constraint in each dimension is set to a specific value if it is otherwise unbounded. This is typically used to place expanding children into containers with infinite dimensions (e.g., viewports).
Padding
applies spacing around its child (via EdgeInsetsGeometry
). This is achieved by providing the child with constraints deflated by the padding. The parent assumes the size of the child (which is zero if there is no child) inflated by the padding.
EdgeInsetsGeometry
describes offsets in the four cardinal directions. EdgeInsets
specifies the offsets with absolute positioning (e.g., left, top, right, bottom) whereas EdgeInsetsDirectional
is text-orientation sensitive (e.g., start, top, end, bottom). Both allow offsets to be combined, subtracted, inflated, and deflated, as well as queried along an axis.
Padding defines intrinsic dimensions by reducing the incoming dimension by the total applicable padding before querying the child; the total padding is then added to the resulting dimension.
Transform
applies a transformation matrix during painting. It does not alter layout, delegating entirely to RenderProxyBox
for these purposes. If the transform can solely be described as a translation, it delegates to its parent for painting, as well. Otherwise, it composites the corresponding transformation before painting its child. Optionally, this transform may be applied before hit testing (via Transform.transformHitTests
).
FittedBox
applies a BoxFit
to its child. BoxFit
specifies how a child box is to be inscribed within a parent box (e.g., BoxFit.fitWidth
scales the width to fit the parent and the height to maintain the aspect ratio, possibly overflowing the parent; BoxFit.scaleDown
aligns the child within the parent then scales to ensure that the child does not overflow). The child is laid out without constraint, then scaled to be as large as possible while maintaining the original aspect ratio and respecting the incoming constraints. The box fit determines a transformation matrix that is applied during painting.
The box fit is calculated by applyBoxFit
, which considers each boxās size without respect to positioning. The result is a FittedSizes
instance, which describes a region within the child (the source) and a region within the parent (the destination). In particular, the source measures the portion of the child visible within the parent (e.g., if this is smaller than the child, the child will be clipped). The destination measures the portion of the parent occupied by the child (e.g., if this is smaller than the parent, a portion of the parent will remain visible).
The sizes are transformed into rectangles using the current alignment (via Alignment.inscribe
). Finally, a transformation is composed to translate and scale the child relative to the parent as per the calculated rectangles.
Baseline
aligns a child such that its baseline is coincident with its own (as specified). The child's baseline is obtained via RenderBox.getDistanceToBaseline
and an offset is calculated to ensure the child's baseline is coincident with the target line (itself expressed as an offset from the box's top). As a result, though the parent is sized to contain the child, the offset may cause the parent to be larger (e.g., if the child is shifted below the parent's top).
Align
allows a child to be aligned within a parent according to a relative position specified via AlignmentGeometry
. During layout, the child is positioned according to the alignment (via RenderAligningShiftedBox.alignChild
, which invokes AlignmentGeometry.alongOffset
; this maps the alignment to logical coordinates using simple geometry). Note that the child must undergo layout beforehand since its size is needed to compute its center; the incoming constraints are loosened before being forwarded to the child. Next, the parent is sized according to whether a width or height factor was specified; if so, the corresponding dimension will be set to a multiple of that of the child (subject to the incoming constraints). Otherwise, it will expand to fill the constraints. If the incoming constraints are unbounded, the parent will instead shrinkwrap the child.
Alignment
is a subclass of AlignmentGeometry
that describes a point within a container using units relative to the container's size and center. The child's center is positioned such that it is coincident with this point. Each unit is measured as half the container's corresponding dimension (e.g., the container is effectively 2 units wide and 2 units tall). The center of the container corresponds to (0, 0), the top left to (-1, -1), and the bottom right to (1, 1). Units may overflow this range to specify a different multiple of the parent's size.
AlignmentDirectional
is identical but flips the x-axis for right-to-left languages.
FractionalOffset
represents a relative position as a fraction of the container's size, measured from its top left corner. The child's top-left corner is positioned such that it is coincident with this point. The center of the container corresponds to (0.5
, 0.5
), the top left to (0, 0), and the bottom right to (1, 1).
IntrinsicHeight
and IntrinsicWidth
integrate the intrinsic dimension and layout protocols. Intrinsic dimensions reflect the natural size of an object; this definition varies based on what the object actually represents. Prior to laying out, the child's intrinsic dimension is queried (e.g., via RenderBox.getMaxIntrinsicWidth
). Next, the incoming constraints are tightened to reflect the resulting value. Finally, the child is laid out using the tight constraints. In this way, the child's intrinsic size is used in determining its actual size (the two will only match if permitted by the incoming constraints).
Boxes often define their intrinsic dimensions recursively. Consequently, an entire subtree may need to be traversed to produce one set of dimensions. This can quickly lead to polynomial complexity, particularly if intrinsic dimensions are computed several times during layout (which itself traverses the render tree).
Stack
LayoutBuilder
makes it easy to take advantage of interleaving the build / layout phase via callbacks. The callback has access to the incoming constraints (i.e., because the framework has completed building and is currently performing layout) but may still build additional UI (via RenderObject.invokeLayoutCallback
). Once building is complete, layout continues through the subtree and beyond.
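A typical usage sketch: the builder runs during layout with the incoming constraints, so the subtree can adapt to the space actually available:

```dart
import 'package:flutter/widgets.dart';

Widget responsiveBody() {
  return LayoutBuilder(
    builder: (BuildContext context, BoxConstraints constraints) {
      return constraints.maxWidth > 600
          ? const Text('wide layout')
          : const Text('narrow layout');
    },
  );
}
```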
CustomSingleChildLayout
allows its layout and its child's layout to be customized without implementing a new render box. Deferring to a delegate, the associated RenderCustomSingleChildLayout
invokes SingleChildLayoutDelegate.getSize
to size the parent, then SingleChildLayoutDelegate.getConstraintsForChild
to constrain the child, and finally SingleChildLayoutDelegate.getPositionForChild
to position the child.
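A hypothetical delegate sketch (the class name and positioning policy are illustrative): it fills the available space, loosens the child's constraints, and centers the child in the lower half of the parent:

```dart
import 'package:flutter/widgets.dart';

class BottomHalfDelegate extends SingleChildLayoutDelegate {
  @override
  Size getSize(BoxConstraints constraints) => constraints.biggest;

  @override
  BoxConstraints getConstraintsForChild(BoxConstraints constraints) =>
      constraints.loosen();

  @override
  Offset getPositionForChild(Size size, Size childSize) => Offset(
        (size.width - childSize.width) / 2,
        size.height * 0.75 - childSize.height / 2,
      );

  @override
  bool shouldRelayout(BottomHalfDelegate oldDelegate) => false;
}

// Usage: CustomSingleChildLayout(delegate: BottomHalfDelegate(), child: ...)
```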
CustomMultiChildLayout
is similar, but supports multiple children. To identify children, each must be tagged with a LayoutId
, a ParentDataWidget
that stores an identifier in each child's MultiChildLayoutParentData
(recall that render objects establish their children's parent data type). MultiChildLayoutDelegate
behaves similarly to its single-child variant; override MultiChildLayoutDelegate.performLayout
to size and position each child via MultiChildLayoutDelegate.layoutChild
and MultiChildLayoutDelegate.positionChild
, respectively (the latter must be called once per child in any order).
ConstrainedLayoutBuilder
LayoutBuilder
SliverLayoutBuilder
Window.onBeginFrame
invokes SchedulerBinding.handleBeginFrame
every frame, which runs all transient callbacks scheduled during the prior frame. Ticker instances utilize transient callbacks (via SchedulerBinding.scheduleFrameCallback
), and are therefore evaluated at this point. All tickers update their measure of elapsed time using the same frame timestamp, ensuring that tickers tick in unison.
AnimationController
utilizes an associated Ticker
to track the passage of time. When the ticker ticks, the elapsed time is provided to an internal simulation which transforms real time into a decimal value. The simulation (typically _InterpolationSimulation
) interpolates between AnimationController.lowerBound
and AnimationController.upperBound
(if spanning the animation's full range), or AnimationController.value
and AnimationController.target
(if traversing from the current value to a new one), applying a curve if available. Listeners are notified once the simulation produces a new value.
The animation's behavior (playing forward, playing backward, animating toward a target, etc.) is a consequence of how this internal simulation is configured (i.e., by reversing the bounds, by altering the duration, by using a _RepeatingSimulation
or SpringSimulation
). Typically, the simulation is responsible for mapping from real time to a value representing animation progress.
In the general case, an _InterpolationSimulation
is configured in AnimationController._animateToInternal
.
Next, the controller's ticker is started, which advances the simulation once per frame. The simulation is advanced using the elapsed time reported by the ticker.
Listeners are notified whenever the simulation is queried or reaches an endpoint, potentially changing the animation's status (AnimationStatus
).
Composing animations (e.g., via _AnimatedEvaluation
or _ChainedEvaluation
) works by proxying the underlying listenable (i.e., by delegating listener operations to the parent animation, which advances as described above).
An animation, as represented by Animation<double>
, traverses from zero to one (and vice versa) over a user-defined interval (this is typically facilitated by an AnimationController
, a special Animation<double>
that advances in real time). The resulting value represents the animation's progress (i.e., a timing value) and is often fed into a chain of animatables or descendant animations. These are re-evaluated every time the animation advances and therefore notifies its listeners. Some descendants (e.g., curves) transform the animation's timing value into a new timing value; these affect the animation's rate of change (e.g., easing) but not its duration. Others produce derived values that can be used to update the UI (e.g., colors, shapes, and sizes). Repeatedly updating the UI using these values is the basis of animation.
Animation<T>
couples Listenable
with AnimationStatus
and produces a sequence of values with a beginning and an end. The animation's status is derived from the sequence's directionality and whether values are currently being produced. In particular, the animation can be stopped at the sequence's beginning or end (AnimationStatus.dismissed
, AnimationStatus.completed
), or actively producing values in a particular order (AnimationStatus.forward
, AnimationStatus.reverse
). Animation<T>
extends ValueListenable
which produces a sequence of values but does not track status.
Animation<double>
is the most common specialization of Animation<T>
(and the only specialization that can be used with Animatable<T>
); as a convention, Animation<double>
produces values from zero to one, though it may overshoot this range before completing. These values are typically interpreted as the animation's progress (referred to as timing values, below). How this interval is traversed over time determines the behavior of any downstream animatables.
Animation<double>
may represent other sequences, as well. For instance, an Animation<double>
might describe a sequence of border radii or line thicknesses.
More broadly, Animation<T>
, where T
is not a timing value, is devoid of conventional significance. Such animations progress through their values as described earlier, and are typically driven by a preceding Animation<double>
(that does represent a timing value).
Animatable<T>
describes an animatable value, mapping an Animation<double>
(which ranges from zero to one) to a sequence of derived values (via Animatable<T>
.evaluate, which forwards the animation's value to Animatable<T>.transform
to produce a new value of type T
). The animatable may be driven by an animation (i.e., repeatedly evaluated as the animation generates notifications, via Animatable<T>.animate
). It may also be associated with a parent animatable to create an evaluation chain (i.e., the parent evaluates the animation value, then the child evaluates the parent's value, via Animatable<T>.chain
). Unless the parent animatable is driven by an animation, however, chaining will not cause the animatable to animate; it only describes a sequence of transformations.
Animatable<T>.evaluate
always maps from a double to a value of type T
. Conventionally, the provided double represents a timing value (ranging from zero to one), but this is not a requirement.
Tween is a subclass of Animatable<T>
that linearly interpolates between beginning and end values of type T
(via Tween.lerp
). By default, algebraic linear interpolation is used, though many types implement custom methods (via T.lerp
) or provide overloaded operators.
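A sketch of the composition described above: the controller produces timing values, a CurveTween reshapes them, and a Tween maps them onto a derived range (the controller is assumed to be created elsewhere with a vsync, e.g., from a TickerProviderStateMixin):

```dart
import 'package:flutter/animation.dart';

Animation<double> buildOpacity(AnimationController controller) {
  return controller
      .drive(CurveTween(curve: Curves.easeIn))     // timing -> eased timing
      .drive(Tween<double>(begin: 0.0, end: 1.0)); // timing -> opacity
}
```

Equivalently, the two animatables could be combined first via Animatable.chain and then driven via Animatable.animate.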
TweenSequence
is an animatable that allows an animation to drive a sequence of tweens, associating each with a portion of the animation's duration (via TweenSequenceItem.weight
).
Simulation models an object in one-dimensional space with a position (Simulation.x
), velocity (Simulation.dx
), and completion status (isDone
) using logical units. Simulations are queried using a time value, also in logical units; as some simulations may be stateful, queries should generally use increasing values. A Tolerance
instance specifies epsilon values for time, velocity, and position to determine when the simulation has settled.
AnimationController
is an Animation<double>
subclass introducing explicit control and frame-synchronized timing. When active, the animation controller is driven at approximately 60 Hz. This is facilitated by a corresponding Ticker
instance, a synchronized timer that triggers at the beginning of each frame; this instance may change over the course of the controller's lifespan (via AnimationController.resync
). The animation can be run backwards and forwards (potentially without bounds), toward and away from target values (via AnimationController.animateTo
and AnimationController.animateBack
), or cyclically (via AnimationController.repeat
). Animations can also be driven by a custom Simulation
(via AnimationController.animateWith
) or a built-in spring simulation (via AnimationController.fling
). The controller's value (AnimationController.value
) is advanced in real time, using a duration (AnimationController.duration
) to interpolate between starting and ending values (AnimationController.upperBound
, AnimationController.lowerBound
), both doubles. An immutable view of the animation is also exposed (AnimationController.view
).
Ticker invokes a callback once per frame (via a transient callback scheduled using SchedulerBinding.scheduleFrameCallback
), passing a duration corresponding to how long the ticker has been ticking. This duration is measured using a timestamp set at the beginning of the frame (SchedulerBinding.handleBeginFrame
). All tickers advance using the same timestamp and are therefore synchronized. When a ticker is enabled, a transient frame callback is registered via SchedulerBinding.addFrameCallback
; this schedules a frame via Window.scheduleFrame
, ensuring that the ticker will begin ticking.
Tickers measure a duration from when they first tick. If a ticker is stopped, the duration is reset and progress is lost. Muting a ticker allows time (the duration) to continue advancing while suppressing ticker callbacks. The animation will not progress while muted and will appear to jump ahead when unmuted. A ticker can absorb another ticker so that animation progress is not lost; that is, the new ticker will retain the old ticker's elapsed time.
TickerFuture
exposes the ticker status as a Future
. When stopped, this future resolves; in all other cases, the future is unresolved. A derivative future, TickerFuture.orCancel
, extends this interface to throw an exception if the ticker is cancelled.
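A small sketch of using a raw Ticker directly (outside of an AnimationController); starting it registers a transient frame callback and schedules a frame, and the returned TickerFuture resolves when the ticker is stopped:

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/scheduler.dart';

Ticker startLogging() {
  final ticker = Ticker((Duration elapsed) {
    // Called once per frame with the time since the ticker started ticking.
    debugPrint('ticking for ${elapsed.inMilliseconds} ms');
  });
  ticker.start(); // returns a TickerFuture that resolves when the ticker stops
  return ticker;  // callers should eventually stop() and dispose() the ticker
}
```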
TickerProvider
vends Ticker
instances. TickerProviderStateMixin
and SingleTickerProviderStateMixin
fulfill the TickerProvider
interface within the context of a State
object (the latter has less overhead since it only tracks a single ticker). These mixins query an inherited TickerMode
that can enable and disable all descendent tickers en masse; this allows tickers to be muted and unmuted within a subset of the widget tree efficiently.
AnimationLocalListenersMixin
and AnimationLocalStatusListenersMixin
provide implementations for the two listenable interfaces supported by animations: value listeners (Animation.addListener
, Animation.removeListener
), and status listeners (Animation.addStatusListener
, Animation.removeStatusListener
). Both store listeners in a local ObserverList
and support hooks indicating when a listener is registered and unregistered (didRegisterListener
and didUnregisterListener
, respectively). A number of framework subclasses depend on these mixins (e.g., AnimationController
) since Animation<T>
doesn't provide a concrete implementation.
AnimationLazyListenerMixin
uses the aforementioned hooks to notify the client when there are no more listeners. This allows resources to be released until a listener is once again added (via AnimationLazyListenerMixin.didStartListening
and AnimationLazyListenerMixin.didStopListening
).
AnimationEagerListenerMixin
ignores these hooks, instead introducing a dispose protocol; resources will be retained through the animation's lifespan and therefore must be disposed before the instance is released.
Curve determines an animation's rate of change by specifying a mapping from input to output timing values (i.e., from [0, 1] to [0, 1], though some curves stretch this interval, e.g., ElasticInCurve
). Animation<double>
produces suitable input values that may then be transformed (via Curve.transform
) into new timing values. Later, these values may be used to drive downstream animatables (or further transformed), effectively altering the animation's perceived rate of change.
Geometrically, a curve may be visualized as mapping an input timing value (along the X-axis) to an output timing value (along the Y-axis), with zero corresponding to AnimationStatus.dismissed
and one corresponding to AnimationStatus.completed
.
Curves cannot alter the overall duration of an animation, but will affect the rate that an animation is advanced during that interval. Additionally, even if they overshoot the unit interval, curves must map zero and one to values that round to zero or one, respectively.
There are a number of built-in curve instances:
Cubic defines a curve as a cubic function.
ElasticInCurve
, ElasticOutCurve
, ElasticInOutCurve
define a spring-like curve that overshoots as it grows, shrinks, or settles, respectively.
Interval maps a curve to a subinterval, clamping to 0 or 1 at either end.
Threshold is 0 until a threshold is reached, then 1 thereafter.
SawTooth
produces N linear intervals, with no interpolation at edges.
FlippedCurve
transforms an input curve, mirroring it both horizontally and vertically.
Curves exposes a large number of pre-defined curves.
CurvedAnimation
is an Animation<double>
subclass that applies a curve to a parent animation (via AnimationWithParentMixin
). As such, CurvedAnimation
proxies the parent animation, transforming each value before any consumers may read it (via Curve.transform
). CurvedAnimation
also allows different curves to be used for forward and reverse directions.
CurveTween
is an Animatable<double>
subclass that is analogous to CurvedAnimation
. As an animatable, CurveTween
delegates its transform (via Animatable<double>.transform
) to the provided curve transform (via Curve.transform
). Since CurveTween
doesn't perform interpolation, but instead represents an arbitrary mapping, it isn't actually a tween.
AnimationController
includes built-in curve support (via _InterpolationSimulation
). When the simulation is advanced to transform elapsed wall time into a timing value (by querying _InterpolationSimulation.x
), if available, a curve is used when computing the new value. As the resulting value is generally interpreted as a timing value, this influences the perceived rate of change of the animation.
AnimationWithParentMixin
provides support for building animations that delegate to a parent animation. The various listener methods (AnimationWithParentMixin.addListener
, AnimationWithParentMixin.addStatusListener
) are forwarded to the parent; all relevant state is also read from the parent. Clients provide a value accessor that constructs a derivative value based on the parent's value.
Composition is managed via Animatable.chain
or Animatable.animate
; Animation<T>.drive
delegates to the provided animatable.
_AnimatedEvaluation
is an Animation<T>
that applies an animatable to a parent animation. All listenable methods delegate to the parent animation (via AnimationWithParentMixin
); thus, the resulting animation is driven by the parent. The value accessor is overwritten so that the parent's value may be transformed by the animatable (via Animatable.evaluate
).
_ChainedEvaluation
is an Animatable<T>
that combines a parent animatable with a child animatable. In particular, _ChainedEvaluation.transform
first evaluates the parent animatable (via Animatable<T>.evaluate
), then passes this value to the child animatable.
CompoundAnimation
is an Animation<T>
subclass that combines two animations. CompoundAnimation.value
is overwritten to produce a final value using the first and second animations' values (via CompoundAnimation.first
, CompoundAnimation.next
). Note that CompoundAnimation
is driven by two animations (i.e., it ticks when either animation ticks), unlike earlier composition examples that drive an animatable using a single parent animation.
ProxyAnimation
provides a read-only view of a parent animation that will reflect any changes to the original animation. It does this by proxying the animation listener methods as well as the status and value accessors. Additionally, ProxyAnimation
supports replacing the parent animation inline; the transition is seamless from the perspective of any listeners.
TrainHoppingAnimation
monitors two animations, switching from the first to the second when the second emits the same value as the first (e.g., because it is reversed or moving toward the value more quickly). TrainHoppingAnimation
utilizes AnimationEagerListenerMixin
because it relies on the parent animationsā notifications to determine when to switch tracks, regardless of whether there are any external listeners.
CompoundAnimation
combines two animations, ticking when either animation ticks (this differs from, e.g., Animatable.animate
, which drives an animatable via a single parent animation). The status is that of the second animation (if it's running), else the first. The values are combined by overriding the Animation.value
accessor; the constituent animations are referenced as CompoundAnimation.first
and CompoundAnimation.next
, respectively. This animation is lazy -- it will only listen to the sub-animations when it has listeners, and will avoid generating useless notifications.
CompoundAnimation
is the basis of MaxAnimation
, MinAnimation
, and MeanAnimation
.
AlwaysStoppedAnimation
exposes a constant value and never changes status or notifies listeners.
ReverseAnimation
plays an animation in reverse, using the appropriate status and direction. That is, if the parent animation is played forward (e.g., via AnimationController.forward
), the ReverseAnimation
's status will be reversed. Moreover, the value reported by ReverseAnimation
will be the inverse of the parent's value assuming a [0, 1] range (thus, one minus the parent's value). Note that this differs from simply reversing a tween (e.g., tweening from one to zero); though the values would be reversed, the animation status would be unchanged.
AnimatedWidget
is an abstract stateful widget that rebuilds whenever the provided listenable notifies its clients. When this happens, the associated state instance is marked dirty (via _AnimatedState.setState
) and rebuilt. _AnimatedState.build
delegates to the widget's build method, which subclasses must implement; these utilize the listenable (typically an animation) to update the UI.
AnimatedBuilder
extends AnimatedWidget
to accept a build function (TransitionBuilder
); this builder is invoked whenever the widget rebuilds (via AnimatedBuilder.build
). This allows clients to utilize the AnimatedWidget
flow without creating an explicit subclass.
ImplicitlyAnimatedWidget
provides support for widgets that animate in response to changes to selected properties; the initial value is not animated. Though descendant widgets are only able to customize the animation's duration and curve, ImplicitlyAnimatedWidget
subclasses are often convenient in that they fully manage the underlying AnimationController
.
Subclasses must use a State
instance that extends ImplicitlyAnimatedWidgetState
. Those that should be rebuilt (i.e., marked dirty) whenever the animation ticks extend AnimatedWidgetBaseState
, instead.
ImplicitlyAnimatedWidgetState.forEachTween
is the engine that drives implicit animation. Subclasses implement this method such that the provided visitor (TweenVisitor
) is invoked once per implicitly animatable property.
The visitor function requires three arguments: the current tween instance (constructed by the superclass but cached locally, e.g., ExampleState._opacityTween
), the target value (typically read from the widget, e.g., ExampleState.widget.opacityValue
), and a constructor (TweenConstructor
) that returns a new tween instance starting at the provided value. The visitor returns an updated tween; this value is typically assigned to the same field associated with the first argument.
Tweens are constructed during state initialization (via ImplicitlyAnimatedWidgetState._constructTweens
) for all implicitly animatable properties with non-null target values (via ImplicitlyAnimatedWidgetState.forEachTween
). Tweens may also be constructed outside of this context as they transition from null to non-null target values.
When the widget is updated (via ImplicitlyAnimatedWidgetState.didUpdateWidget
), ImplicitlyAnimatedWidgetState.forEachTween
steps through the subclass's animatable properties to update the tweens' bounds (via ImplicitlyAnimatedWidgetState._updateTween
). The tween's start is set using the current animation value (to avoid jumping), with the tween's end set to the target value.
Last, the animation is played forward if the tween wasn't already animating toward the target value (i.e., the tween's previous endpoint didn't match the target value, via ImplicitlyAnimatedWidgetState._shouldAnimateTweens
).
The subclass is responsible for using the animation (ImplicitlyAnimatedWidgetState.animation
) and tween directly (i.e., by evaluating the tween using the animation's current value).
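A hypothetical subclass sketch following this pattern (FadeBox and _opacityTween are illustrative names): the state animates toward the widget's opacity whenever the configuration changes.

```dart
import 'package:flutter/widgets.dart';

class FadeBox extends ImplicitlyAnimatedWidget {
  const FadeBox({super.key, required this.opacity, required this.child})
      : super(duration: const Duration(milliseconds: 200));

  final double opacity;
  final Widget child;

  @override
  AnimatedWidgetBaseState<FadeBox> createState() => _FadeBoxState();
}

class _FadeBoxState extends AnimatedWidgetBaseState<FadeBox> {
  Tween<double>? _opacityTween;

  @override
  void forEachTween(TweenVisitor<dynamic> visitor) {
    // The visitor updates (or constructs) the tween for each animatable property.
    _opacityTween = visitor(_opacityTween, widget.opacity,
            (dynamic value) => Tween<double>(begin: value as double))
        as Tween<double>?;
  }

  @override
  Widget build(BuildContext context) {
    // AnimatedWidgetBaseState rebuilds on every tick; evaluate the tween here.
    return Opacity(
      opacity: _opacityTween?.evaluate(animation) ?? widget.opacity,
      child: widget.child,
    );
  }
}
```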
SystemChannels.keyEvent
exposes a messaging channel that receives raw key data whenever the platform produces keyboard events.
RawKeyboard
subscribes to this channel and forwards incoming messages as RawKeyEvent
instances (which encapsulate RawKeyEventData
). Physical and logical interpretations of the event are exposed via RawKeyEvent.physicalKey
and RawKeyEvent.logicalKey
, respectively. The character produced is available as RawKeyEvent.character
but only for RawKeyDownEvent
events. This field accounts for modifier keys and past keystrokes, producing null for invalid combinations and a Dart string otherwise.
The physical key identifies the actual position of the key that was struck, expressed as the equivalent key on a standard QWERTY
keyboard. The logical key ignores position, taking into account any mappings or layout changes to produce the actual key the user intended.
Subclasses of RawKeyEventData
interpret platform-specific data to categorize the keystroke in a portable way (RawKeyEventDataAndroid
, RawKeyEventDataMacOs
)
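A small usage sketch (note that newer framework versions route key events through higher-level abstractions, but the raw interface described here can be observed directly):

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/services.dart';

void listenForKeys() {
  RawKeyboard.instance.addListener((RawKeyEvent event) {
    if (event is RawKeyDownEvent) {
      debugPrint('physical: ${event.physicalKey.debugName}, '
          'logical: ${event.logicalKey.debugName}, '
          'character: ${event.character}');
    }
  });
}
```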
What is an IME?
IME stands for "input method editor," which corresponds to any sort of on-screen text editing interface, such as the software keyboard. There can only be one active IME
at a time.
How does Flutter interact with IMEs?
SystemChannels.textInput
exposes a method channel that implements a transactional interface for interacting with an IME
. Operations are scoped to a given transaction (client), which is implicit once created. Outbound methods support configuring the IME
, showing/hiding UI, and updating editing state (including selections); inbound methods handle IME
actions and editing changes. Convenient wrappers for this protocol make much of this seamless.
How do clients interact with the IME?
TextInput.attach
federates access to the IME
, setting the current client (transaction) that can interact with the keyboard.
TextInputClient
is an interface to receive information from the IME
. Once attached, clients are notified via method invocation when actions are invoked, the editing value is updated, or the cursor is moved.
TextInputConnection
is returned by TextInput.attach
and allows the IME
to be altered. In particular, the editing state can be changed, the IME
shown, and the connection closed. Once closed, if no other client attaches within the current animation frame, the IME
will also be hidden.
TextInputConfiguration
encapsulates configuration data sent to the IME
when a client attaches. This includes the desired input type (e.g., "datetime", "emailAddress", "phone") for which to optimize the IME
, whether to enable autocorrect, whether to obscure input, the default action, capitalization mode (TextCapitalization
), and more.
TextInputAction
enumerates the set of special actions supported across platforms (e.g., "emergencyCall", "done", "next"). Actions may only be used on platforms that support them. Actions have no intrinsic meaning; developers determine how to respond to actions themselves.
TextEditingValue
represents the current text, selection, and composing state (range being edited) for a run of text.
RawFloatingCursorPoint
represents the position of the "floating cursor" on iOS
, a special cursor that appears when the user force presses the keyboard. Its position is reported via the client, including state changes (RawFloatingCursorDragState
).
RenderObject
provides the basic infrastructure for managing a tree of visual elements. Render objects define a general protocol for performing layout, painting, and compositing. This protocol is largely abstract, deferring to subclasses to determine the inputs and outputs of layout, how hit testing is performed (though all render objects are HitTestTargets
), and how the render object hierarchy is modeled (RenderObject
extends AbstractNode
, which doesn't specify a concrete child model).
Constraints
represent the immutable inputs to layout. The definition is flexible provided that constraints can indicate whether they represent a single configuration (Constraints.isTight
), are expressed in the canonical form (Constraints.isNormalized
), and can be compared for equality (via Constraints.==
and Constraints.hashCode
).
ParentData
represents opaque data stored in a child by its parent. This data is typically considered an implementation detail of the parent and should therefore not be accessed by the child. Parent data is flexibly defined and only includes a single method (ParentData.detach
, which allows instances to react to their render object being removed from the tree).
RendererBinding
establishes a PipelineOwner
as part of RendererBinding.initInstances
(i.e., the binding's "constructor"). The pipeline owner serves as the render tree's AbstractNode.owner
(set by AbstractNode.attach
and propagated by AbstractNode.adoptChild
).
PipelineOwner
is analogous to the BuildOwner
, tracking which render objects need compositing bits, layout, painting, or semantics updates. Objects mark themselves as being dirty by adding themselves to specific lists (e.g.,PipelineOwner._nodesNeedingLayout
, PipeplineOwner._nodesNeedingPaint
). All dirty objects are later cleaned by a corresponding āflushā method (e.g, PipelineOwner.flushLayout
, PipelineOwner.flushPaint
). These methods initiate the corresponding rendering process and are invoked every frame as needed (via RendererBinding.drawFrame
and WidgetsBinding.drawFrame
).
Rendering takes place by flushing dirty render objects in a specific sequence:
PipelineOwner.flushLayout
lays out all dirty render objects (laying out dirty children recursively, based on each RenderObject.performLayout
implementation).
PipelineOwner.flushCompositingBits
updates a cache used to indicate whether descendant render objects create new layers. If so, because painting is often interlaced (i.e., a render object paints before and after its children), certain operations may need to be implemented differently. If all descendants paint into the same layer, an operation (like clipping) can be performed using a single painting context. If not, some effects can only be achieved by inserting new layers. These bits determine which option will be used when painting.
PipelineOwner.flushPaint
paints all dirty render objects (painting dirty children recursively, based on each RenderObject.paint
implementation).
PipelineOwner.flushSemantics
compiles semantic data for all dirty render objects.
Whenever an object marks itself dirty, it will generally also invoke PipelineOwner.requestVisualUpdate
to schedule a frame and eventually update the user interface. Requesting a visual update invokes the visual update handler that was bound when the PipelineOwner
was constructed by RendererBinding
(PipelineOwner.onNeedVisualUpdate
)
The default handler invokes RendererBinding.ensureVisualUpdate
, which schedules a frame (via SchedulerBinding.scheduleFrame
) if it's not possible to apply the visual update during the current frame (e.g., during post frame callbacks).
When a render object is attached to the PipelineOwner
(via RenderObject.attach
), it adds itself to all relevant dirty lists (e.g., RenderObject.markNeedsLayout
, RenderObject.markNeedsPaint
).
Typically, a newly created render object will be dirty at this point since most of the dirty bits are initialized to true (e.g., RenderObject._needsLayout
, RenderObject._needsPaint
). Several other bits depend on the configuration of the node (e.g., RenderObject._needsCompositing
) but will be true if they're applicable.
The render object may also be dirty if it had been previously detached, mutated, and is now being re-attached.
As an exception, the very first painting, layout, and semantics passes are driven by a different flow (via RenderObject.scheduleInitialLayout
, RenderObject.scheduleInitialPaint
, and RenderObject.scheduleInitialSemantics
). This flow occurs when the RenderView
is first constructed (via RendererBinding.initRenderView
). This process calls RenderView.prepareInitialFrame
, which schedules initial layout and painting for the RenderView
(semantics follows a slightly different flow). Marking the root node as being dirty causes the entire tree to perform layout, painting, and semantics as needed.
During a hot reload, the renderer invokes RenderView.reassemble
(via RendererBinding.performReassemble
) which cascades down the render tree. By default, render objects mark themselves as being dirty for layout, compositing bits, painting, and semantics (via RenderObject.reassemble
).
The render tree is composed of RenderObject
instances, a subclass of AbstractNode
.
RenderObject.adoptChild
or RenderObject.dropChild
must be called whenever a child is added or removed, respectively. Modifying the child model marks the subtree dirty for layout, painting, semantics, and compositing bits.
RenderObject.redepthChild
must be called whenever a child's depth changes.
Render objects must be attached to the PipelineOwner
(via RenderObject.attach
) and detached when they should no longer be rendered (via RenderObject.detach
).
PipelineOwner
maintains a reference to a root node (PipelineOwner.rootNode
). This field is set to the application's RenderView
when initializing the RendererBinding
(via RendererBinding.initRenderView
which sets RendererBinding.renderView
and, via a setter, PipelineOwner.rootNode
). This automatically detaches the old render object (if any) and attaches the new one.
Children are automatically attached and detached when they are adopted or dropped.
However, since parents may be attached and detached arbitrarily, render objects are responsible for calling the corresponding method on their children.
Subclasses that introduce a child model (e.g., ContainerRenderObjectMixin
) must override RenderObject.redepthChildren
to call RenderObject.redepthChild
on each child.
Render objects have a parent (RenderObject.parent
) and an attached state (RenderObject.attached
), indicating whether they'll contribute to rendering.
Parent data may be associated with a child using a ParentData
subclass; this data may only be assigned using RenderObject.setupParentData
. By convention, this data is considered opaque to the child, though a protocol is free to alter this rule (but shouldnāt). The parent data may then be mutated (e.g., via RenderObject.performLayout
).
ContainerRenderObjectMixin
uses this field to implement a linked list of children.
All RenderObjects
implement the visitor pattern via RenderObject.visitChildren
which invokes a RenderObjectVisitor
once per child.
When adding or removing a child, call RenderObject.adoptChild
and RenderObject.dropChild
respectively.
The parentās RenderObject.attach
and RenderObject.detach
methods must call their counterpart on each child.
The parentās RenderObject.redepthChildren
and RenderObject.visitChildren
methods must recursively call RenderObject.redepthChild
and the visitor argument on each child, respectively.
RenderObjectWithChildMixin
stores a single child reference within a render object.
A child setter (RenderObjectWithChildMixin.child
) adopts and drops the child as appropriate (i.e., when adding and removing a child).
Updating the child marks the render object as needing layout and therefore painting.
Attaching, detaching, recomputing depths, and iterating are overridden to incorporate the child.
ContainerRenderObjectMixin
uses each childās parent data (which must implement ContainerParentDataMixin
) to maintain a doubly linked list of children (via ContainerParentDataMixin.nextSibling
and ContainerParentDataMixin.previousSibling
).
The underlying linked list supports insertions and removals (via ContainerParentDataMixin._insertIntoChildList
and ContainerParentDataMixin._removeFromChildList
). Insertions can be positional using an optional preceding child argument.
A variety of container-type methods are exposed (e.g., ContainerRenderObjectMixin.insert
, ContainerRenderObjectMixin.add
, ContainerRenderObjectMixin.move
, ContainerRenderObjectMixin.remove
). These adopt and drop children as needed. A move operation is provided to avoid unnecessarily dropping and re-adopting a child.
Updating (or reordering) the child list marks the render object as needing layout and therefore painting.
These adopt, drop, attach, and detach children appropriately. Additionally, when the child list changes, RenderObject.markNeedsLayout
is invoked (as this may alter layout).
Attaching, detaching, recomputing depths, and iterating are overridden to incorporate all children.
The first child (ContainerRenderObjectMixin.firstChild
), last child (ContainerRenderObjectMixin.lastChild
), and total number of children (ContainerRenderObjectMixin.childCount
) are accessible via the render object.
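The sketch below shows how these pieces are typically wired together in a hypothetical multi-child render object (the names RenderSimpleStack and SimpleStackParentData are invented for this example; the box protocol and RenderBoxContainerDefaultsMixin are used for concreteness):

```dart
import 'package:flutter/rendering.dart';

// Hypothetical parent data carrying the sibling links required by the mixin.
class SimpleStackParentData extends ContainerBoxParentData<RenderBox> {}

// Hypothetical render object that stacks all children at its top-left corner.
class RenderSimpleStack extends RenderBox
    with
        ContainerRenderObjectMixin<RenderBox, SimpleStackParentData>,
        RenderBoxContainerDefaultsMixin<RenderBox, SimpleStackParentData> {
  @override
  void setupParentData(RenderBox child) {
    // Each child must carry the parent data type the linked list expects.
    if (child.parentData is! SimpleStackParentData) {
      child.parentData = SimpleStackParentData();
    }
  }

  @override
  void performLayout() {
    size = constraints.biggest; // assumes bounded incoming constraints
    RenderBox? child = firstChild;
    while (child != null) {
      child.layout(constraints.loosen());
      final SimpleStackParentData childParentData =
          child.parentData! as SimpleStackParentData;
      childParentData.offset = Offset.zero; // position stored in parent data
      child = childParentData.nextSibling;  // walk the sibling linked list
    }
  }

  @override
  void paint(PaintingContext context, Offset offset) =>
      defaultPaint(context, offset);

  @override
  bool hitTestChildren(BoxHitTestResult result, {required Offset position}) =>
      defaultHitTestChildren(result, position: position);
}
```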
Render objects mark themselves as needing layout (via RenderObject.markNeedsLayout
); this schedules a layout pass during the next frame.
Layout is performed by PipelineOwner.flushLayout
when the corresponding dirty list is non-empty (PipelineOwner._nodesNeedingLayout
).
RenderObject.layout
is invoked with a Constraints
subclass as input; some subclasses incorporate other out-of-band input, too. The layout method applies input to produce geometry (generally a size, though the type is unconstrained).
If a protocol uses out-of-band input, and this input changes, the affected render objects must be marked dirty (e.g., a parent that uses its child baseline to perform layout must repeat layout whenever the child's baseline changes).
If a parent depends on the childās layout, it must pass the parentUsesSize
argument to layout.
RenderObjects
that solely determine their sizing using the input constraints set RenderObject.sizedByParent
to true and perform all sizing in RenderObject.performResize
.
Layout cannot depend on anything other than the incoming constraints; this includes the render object's position (which is generally determined by the parent render object and stored in ParentData
).
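For concreteness, here is a hedged sketch of the box variant of this protocol: a hypothetical RenderPadAll that pads a single child, passing parentUsesSize because its own geometry depends on the child's size and storing the child's position in its parent data:

```dart
import 'package:flutter/rendering.dart';

// Hypothetical render object ("RenderPadAll" is invented for this sketch).
class RenderPadAll extends RenderShiftedBox {
  RenderPadAll({required double padding, RenderBox? child})
      : _padding = padding,
        super(child);

  final double _padding;

  @override
  void performLayout() {
    if (child == null) {
      size = constraints.constrain(Size(_padding * 2, _padding * 2));
      return;
    }
    // parentUsesSize: our own size depends on the child's size, so we must be
    // laid out again whenever the child's layout changes.
    child!.layout(
      constraints.deflate(EdgeInsets.all(_padding)),
      parentUsesSize: true,
    );
    // The child's position lives in its parent data, not on the child itself.
    final BoxParentData childParentData = child!.parentData! as BoxParentData;
    childParentData.offset = Offset(_padding, _padding);
    size = constraints.constrain(
      Size(child!.size.width + 2 * _padding, child!.size.height + 2 * _padding),
    );
  }
}
```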
Compositing is the process by which painting (via RenderObject.paint
) is divided into layers (Layer
) before being rasterized and uploaded to the engine. Some render objects share the current layer whereas others introduce one or more new layers.
A repaint boundary corresponds to a render object that always paints into a new OffsetLayer
(stored in RenderObject.layer
). This allows the subtree rooted at this node to paint separately from its parent. Moreover, the offset layer can be re-used (and translated) to avoid unnecessary repainting.
Other render objects that produce new layers can store the root-most layer in RenderObject.layer
to allow future painting operations to update the existing layer rather than creating a new one.
This is possible because Layer
preserves a reference to any underlying EngineLayer
, and SceneBuilder
accepts an oldLayer
argument when building a new scene.
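A hedged sketch of this pattern, using a hypothetical RenderProxyBox subclass that pushes an opacity layer and hands its previous layer back to the painting context for reuse:

```dart
import 'package:flutter/rendering.dart';

// Hypothetical render object ("RenderHalfOpacity" is invented) that reuses the
// OpacityLayer it created on the previous frame instead of allocating anew.
class RenderHalfOpacity extends RenderProxyBox {
  @override
  bool get alwaysNeedsCompositing => true; // we always insert a layer

  @override
  void paint(PaintingContext context, Offset offset) {
    layer = context.pushOpacity(
      offset,
      128,                              // ~50% opacity
      super.paint,                      // paints the child into the layer
      oldLayer: layer as OpacityLayer?, // reuse last frame's layer, if any
    );
  }
}
```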
Certain graphical operations are implemented differently depending on whether they're being applied within a single layer or across multiple layers (e.g., clipping).
Render objects often interlace their own painting with their children's painting.
Therefore, to apply these graphical operations correctly, the framework must track which render objects introduce new layers into the scene.
needsCompositing
indicates whether the subtree rooted at a render object (i.e., the object and its descendants) may introduce new layers. This indicates to the PaintingContext
that certain operations (e.g., clipping) will need to be implemented differently.
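For example, a clipping render object typically forwards needsCompositing to the painting context, which then decides between clipping the current canvas and inserting a ClipRectLayer (a hedged sketch; the class name is invented):

```dart
import 'package:flutter/rendering.dart';

// Hypothetical clipping render object ("RenderClipToBounds" is invented).
class RenderClipToBounds extends RenderProxyBox {
  ClipRectLayer? _clipLayer;

  @override
  void paint(PaintingContext context, Offset offset) {
    _clipLayer = context.pushClipRect(
      needsCompositing,   // true if descendants may introduce their own layers
      offset,
      Offset.zero & size, // clip to this render box's bounds
      super.paint,
      oldLayer: _clipLayer, // reuse the previously created layer, if any
    );
  }
}
```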
Render objects that always insert layers (e.g., a video) should override alwaysNeedsCompositing
.
The compositing bits should be marked dirty (via RenderObject.markNeedsCompositingBitsUpdate
) whenever the render tree is mutated; children that are added may introduce layers whereas children that are removed may allow a single layer to be shared. RenderObject.adoptChild
and RenderObject.dropChild
always call this method.
All ancestors are marked dirty until reaching a repaint boundary or a node with a repaint boundary parent. In this case, all upstream objects will already have the correct compositing bits (i.e., repaint boundary status cannot change; thus, all upstream compositing bits will have been configured when the render object was added).
Updates are performed by PipelineOwner.flushCompositingBits
when the corresponding dirty list is non-empty (PipelineOwner._nodesNeedingCompositingBitsUpdate
). The update must be applied before painting.
Dirty render objects are sorted from back to front (using RenderObject.depth
) then updated (via RenderObject._updateCompositingBits
). This method performs a depth-first traversal of the render tree, marking each render object as needing compositing if any of its children need compositing (or it's a repaint boundary or always needs compositing).
If the compositing bits are updated, the node is marked as needing repainting, too.
Render objects mark themselves as needing paint (via RenderObject.markNeedsPaint
); this schedules a painting pass during the next frame.
Painting is performed by PipelineOwner.flushPaint
when the corresponding dirty list is non-empty (PipelineOwner._nodesNeedingPaint
).
Dirty render objects are sorted from back to front (using RenderObject.depth
) then painted (via PaintingContext.repaintCompositedChild
). This requires the render object to have an associated layer (only render objects that are repaint boundaries are added to this list).
If a render object's layer isn't attached (i.e., isn't being composited), painting must wait until it's reattached. The framework iterates through all ancestor render objects to find the nearest repaint boundary with an attached layer (via RenderObject._skippedPaintingOnLayer
). Each is marked dirty to ensure that it's eventually painted.
If necessary, a new PaintingContext
is created for the render object and forwarded to RenderObject._paintWithContext
.
This method ensures that layout has been performed and, if so, invokes RenderObject.paint
with the provided context.
RenderObject.paintBounds
provides an estimate of how large a region this render object will paint (as a Rect
in local coordinates). This doesn't need to match the render object's layout geometry.
RenderObject.applyPaintTransform
captures how a render object transforms its children by applying the same transformation to a provided matrix (e.g., if the child is painted at (x, y)
, the matrix is translated by (x, y)
). This allows the framework to assemble a sequence of transformations to map between any two render object's coordinate systems (via RenderObject.getTransformTo
). This process allows local coordinates to be mapped to global coordinates and vice versa.
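A small sketch of this mapping (the helper is hypothetical; it mirrors what RenderBox.localToGlobal does internally):

```dart
import 'package:flutter/rendering.dart';

// Maps a point from `box`'s local coordinates to global coordinates by
// chaining the paint transforms of every ancestor (per applyPaintTransform).
Offset mapToGlobal(RenderBox box, Offset localPoint) {
  final transform = box.getTransformTo(null); // null means "up to the root"
  return MatrixUtils.transformPoint(transform, localPoint);
}
```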
Render objects implement HitTestTarget
and can therefore process incoming pointer events (via RenderObject.handleEvent
). However, render objects do not support hit testing: there is no builtin mechanism for a render object to subscribe to a pointer. This functionality is introduced by RenderBox
.
The GestureBinding
receives pointer events from the engine (via Window.onPointerDataPacket
). These are queued (via GestureBinding._handlePointerDataPacket
), which is flushed immediate (via GestureBinding._flushPointerEventQueue
) or when events are unlocked (via GestureBinding.unlocked
, which is called by the BaseBinding.lockEvents
).
Events are locked during reassembly and the initial warmup frame.
GestureBinding._handlePointerEvent
processes each queued event in sequence using a table of hit test results (GestureBinding._hitTests
) and a pointer router (GestureBinding._pointerRouter
).
The pointer router is a PointerRouter
instance that maps an integral pointer ID to an event handler callback (PointerRoute
, a function that accepts a PointerEvent
). The router also captures a transform matrix to map from screen coordinates to local coordinates.
The hit testing table maps from an integral pointer ID to a record of all successful hits (HitTestResult
). Each entry (HitTestEntry
) records the recipient object (HitTestEntry.target
, a HitTestTarget
) and the transform needed to map from global coordinates to local coordinates (HitTestEntry.transform
).
A HitTestTarget
is an object that can handle pointer events (via HitTestTarget.handleEvent
). Render objects are hit test targets.
Initiating events (PointerDownEvent
, PointerSignalEvent
) trigger a hit test (via GestureBinding.hitTest
). Ongoing events (PointerMoveEvent
) are forwarded to the path associated with the initiating event. Terminating events (PointerUpEvent
, PointerCancelEvent
) remove the entry.
Hit test results are stored in the hit test table so that they can be retrieved for subsequent events associated with the same pointer. Hit tests are performed on HitTestable
objects (via HitTestable.hitTest
).
GestureBinding
defines a default override that adds itself to the HitTestResult
; invoking this also invokes any overrides mixed into the bindings. The binding processes events to resolve gesture recognizers using the gesture arena (via GestureBinding.handleEvent
).
RendererBinding
provides an override that tests the render view first (via RenderView.hitTest
, called by RendererBinding
). This is the entry point for hit testing in the render tree.
The render view tests its render box child (via RenderBox.hitTest
) before adding itself to the result.
RenderView
and RenderBox
aren't HitTestable
but do provide a compatible interface (used by RendererBinding
). RenderObject
does not participate in hit testing by default.
HitTestable
is only used at the binding level. All other hit testing is implementation dependent.
If the hit succeeds, the event is unconditionally forwarded to each HitTestEntry
in the HitTestResult
(via GestureBinding.dispatchEvent
) in depth-first order. GestureBinding
is a HitTestDispatcher
.
Each entry along the path is mapped to a target which handles the event (via HitTestTarget.handleEvent
). As a HitTestTarget
, render objects are eligible to process pointers. However, there is no mechanism for them to participate in hit testing.
Events are also forwarded via the pointer router to support gesture recognition, a more selective event handling mechanism.
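As a hedged illustration using the box protocol (which layers hit testing on top of RenderObject), a hypothetical render object can claim hits within its bounds and react to the events the binding later dispatches to it:

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/gestures.dart';
import 'package:flutter/rendering.dart';

// Hypothetical render object that claims hits within its bounds and logs the
// pointer-down events subsequently dispatched to it by the GestureBinding.
class RenderTapLogger extends RenderProxyBox {
  @override
  bool hitTestSelf(Offset position) => true; // accept hits within our bounds

  @override
  void handleEvent(PointerEvent event, covariant BoxHitTestEntry entry) {
    if (event is PointerDownEvent) {
      debugPrint('pointer down at local ${entry.localPosition}');
    }
  }
}
```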
Visual transforms are implemented in RenderObject.paint
. The same transform is applied logically (i.e., when hit testing, computing semantics, mapping coordinates, etc.) via RenderObject.applyPaintTransform
. This method applies the child transformation to the provided matrix (providedMatrix * childTransform
).
RenderObject.getTransformTo
chains paint transformation matrices mapping from the current coordinate space to an ancestorās coordinate space.
If an ancestor isn't provided, the transformation will map to global coordinates (i.e., the coordinates of the render object just below the RenderView
; to obtain physical screen coordinates, these coordinates must be further transformed by RenderView.applyPaintTransform
).
RenderView
receives a ViewConfiguration
from the platform that specifies the screen's dimensions (ViewConfiguration.size
) and the pixel density (ViewConfiguration.devicePixelRatio
).
RenderView.performLayout
adopts this size (RenderView.size
) and, if present, lays out the render box child with tight constraints corresponding to the screen's dimensions (ViewConfiguration.size
).
RenderView.paint
and RenderView.hitTest
delegate to the child, if present; the render view always adds itself to the hit test result.
When the configuration is set (via RenderView.configuration
), the layer tree's root is replaced with a TransformLayer
(via RenderView._updateMatricesAndCreateNewRootLayer
) that converts physical pixels to logical pixels. This allows all descendant render objects to be isolated from changes to screen resolution.
RenderView.applyPaintTransform
applies the resolution transform (RenderView._rootTransform
) to map from logical global coordinates to physical screen coordinates.
RenderView.compositeFrame
implements the final stage of the rendering pipeline, compositing all painted layers and uploading the resulting scene to the engine for rasterization (via Window.render
).
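A rough sketch of the core of this step, ignoring system chrome and diagnostics (Window.render was the engine entry point at the time of writing; the standalone function is illustrative):

```dart
import 'dart:ui' as ui;

import 'package:flutter/rendering.dart';

// Sketch: build a Scene from the root layer and hand it to the engine.
void compositeFrame(TransformLayer rootLayer) {
  final ui.SceneBuilder builder = ui.SceneBuilder();
  final ui.Scene scene = rootLayer.buildScene(builder);
  ui.window.render(scene); // upload to the engine for rasterization
  scene.dispose();
}
```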
If system UI customization is enabled (RenderView.automaticSystemUiAdjustment
), RenderView._updateSystemChrome
will query the layer tree to find SystemUiOverlayStyle
instances associated with the very top and very bottom of the screen (this is the mechanism used to determine how to tint system UI like the status bar or Android's bottom navigation bar). If found, the appropriate settings are communicated to the platform (using SystemChrome
platform methods).
RenderBox
, RenderSliver
.
Each glyph (i.e., character) is laid out on a baseline, with the portion above forming its ascent, and the portion below forming its descent. The glyphās origin precedes the glyph and is anchored to the baseline. A cap line marks the upper boundary of capital letters; the mean line, or median, serves the same purpose for lowercase letters. Cap height and x-height are measured from the baseline to each of these lines, respectively. Ascenders extend above the cap line or the median, depending on capitalization. Descenders extend below the baseline. Tracking denotes letter spacing throughout a unit of text; kerning is similar, but only measured between adjacent glyphs. Height is given as the sum of ascent and descent. Leading denotes additional spacing split evenly above and below the line. Collectively, height and leading comprise the line height.
Font metrics are measured in logical units with respect to a box called the āem-square.ā Internally
, the em-square has fixed dimensions (e.g., 1,000 units per side). Externally, the em-square is scaled to the font size (e.g., 12 pixels per side). This establishes a correspondence between internal/logical units and external/physical units (e.g., 0.012
pixels per unit).
Line height is determined by leading and the fontās height (ascent plus descent). If an explicit font height is provided, ascent and descent are constrained such that (1) their sum equals the desired height and (2) their ratio matches the original ratio. This alters the fontās spacing but not the size of its glyphs.
Note that explicitly setting the height to the font size is not the same as the default behavior.
When height isnāt fixed, the internal ascent and descent are used directly (after scaling). These values are font-specific and need not sum to the font size.
When height is fixed, ascent and descent have no relation to their internal counterparts (other than sharing a ratio). These values are chosen to sum to the target height (e.g., the font size).
Text size is further adjusted according to a text scale factor (an accessibility feature). This factor represents the number of logical pixels per font size pixel (e.g., a value of 1.5
would cause fonts to appear 50% larger). This effect is achieved by multiplying the font size by the scale factor.
A given paragraph of text may contain multiple runs. Each run (or text box) is associated with a specific font and may be broken up due to wrapping. Adjacent runs can be aligned in different ways, but are generally positioned to achieve a common baseline. Boxes with different line height can combine to create a larger line height depending on positioning. Overall height is generally measured from the top of the highest box to the bottom of the lowest.
FontStyle
and FontWeight
characterize the glyphs used during rendering (specifying slant and glyph thickness, respectively).
FontFeature
encodes a feature tag, a four-character tag associated with an integer value that customizes font-specific features (i.e. these can be anything from enabling slashed zeros to selecting random glyph variants).
TextAlign
describes the horizontal alignment of text. āLeftā, ārightā, and ācenterā describe the textās alignment with respect to its container. āStartā and āendā do the same, but in a directionality-aware manner. āJustifyā stretches wrapped text such that it fills the available width.
TextBaseline
identifies the horizontal lines used to vertically align glyphs (alphabetic or ideographic).
TextDecoration
is a linear decoration (underline, overline, or a strikethrough) that can be applied to text; overlapping decorations merge intelligently. TextDecorationStyle
alters how decorations are rendered: using a solid, double, dotted, dashed, or wavy line.
TextStyle
describes the size, position, and rendering of text in a way that can be transformed for use by the engine. This description includes colors, spacing, the desired font family, the text decoration, and so on.
Specifying a fixed height generally alters the fontās default spacing. Ordinarily, the fontās ascent and descent (space above and below the baseline) is calculated without constraint. When a height is specified, both metrics must sum to the desired height while maintaining their original ratio; this is simply different than the default behavior.
TextPosition
represents an insertion point in rendered and unrendered text (i.e., a position between letters). This is encoded as an offset indicating the index of the letter immediately after the position, even if that index exceeds string bounds. An affinity is used to disambiguate the following cases where an offset becomes ambiguous after rendering:
Positions adjacent to automatic line breaks are ambiguous: the insertion point might be the end of the first line or the start of the second (with explicit line breaks, the newline is just another character, so there is no ambiguity).
Positions at the interface of right-to-left and left-to-right strings are ambiguous: the renderer will flip half the string, so it's unclear whether the offset corresponds to the pre-render or post-render version (e.g
., offset 3 can be āabc|ABC
ā or āabcCBA
|ā).
TextAffinity
resolves ambiguity when an offset can correspond to multiple positions after rendering (e.g., because a renderer might insert line breaks or flip portions due to directionality). TextAffinity.upstream
selects the option closer to the start of the string (e.g., the end of the line before a break, āabc|ABC
ā) whereas TextAffinity.downstream
selects the option closer to the end (e.g., the start of the line after a break, āabcCBA
|ā).
TextBox
identifies a rectangular region containing text relative to the parentās top left corner. Provides direction-aware accessors (i.e., TextBox.start
, TextBox.end
).
TextWidthBasis
enumerates approaches for measuring the width of a paragraph. Longest line selects the minimum space needed to contain the longest line (e.g., a chat bubble). Parent selects the width of the container for multi-line text or the actual width for a single line of text (e.g., a paragraph of text).
RenderComparison
describes the renderable difference between two inline spans. Spans may have differences that will affect their layout (and painting), their painting, their metadata, etc.
ParagraphStyle
describes how lines are laid out by ParagraphBuilder
. Among other things, this class allows a maximum number of lines to be set as well as the textās directionality and ellipses behavior; crucially, it allows the paragraphās strut to be configured.
ParagraphConstraints
describe the input to Paragraph
layout. Its only value is āwidth,ā which specifies a maximum width for the paragraph. This maximum is enforced in two ways: (1) if possible, a soft line break (i.e., located between words) is inserted before the maximum is reached. Otherwise, (2) a hard line break (i.e., located within words) is inserted, instead.
If this would result in an empty line (i.e., due to inadequate width), the next glyph is inserted irrespective of the constraint, followed by a hard line break.
This width is used when aligning text (via TextAlign
); any ellipsis is ignored for the purposes of alignment. Ellipsis length is considered when determining line breaks [?].
StrutStyle
defines a "strut" which dictates a line of text's minimum height. Glyphs assume the larger of their own dimensions and the strut's dimensions (with ascent and descent considered separately). Conceptually, paragraphs prepend a zero-width strut character to each line. The strut can be forced, causing line height to be determined solely by the strut.
Placeholders reserve rectangular spaces within paragraph layout. These spaces are subsequently painted using arbitrary content (e.g., a widget).
PlaceholderAlignment
expresses the vertical alignment of a placeholder relative to the font. Placeholders can have their top, bottom, or baseline aligned to the parent baseline. Placeholders may also be positioned relative to the fontās ascenders, descenders, or median.
PlaceholderDimensions
describe the size and alignment of a placeholder. If a baseline-relative alignment is used, the type of baseline must be specified (e.g., alphabetic or ideographic). An optional baseline offset indicates the distance from the top of the box to its logical baseline (e.g., this is used to align inline widgets via RenderBox.getDistanceToBaseline
).
InlineSpan
represents an immutable span of content within a paragraph. A span is associated with a text style which is inherited by child spans. Spans are added to a ParagraphBuilder
via InlineSpan.build
(this is handled automatically by TextPainter
); any accessibility text scaling is specified at this point. An ancestor span may be queried to locate the descendent span containing a TextPosition
.
TextSpan
extends InlineSpan
to represent an immutable span of styled text. The provided TextStyle
is inherited by all children, which may override all or some styles. Text spans contain text as well as any number of inline span children. Children form a text span tree that is traversed in order. Each node is also associated with a string of text (styled using the spanās TextStyle
) which effectively precedes any children. TextSpans
are interactive and have an associated gesture recognizer that is managed externally (e.g., by RenderParagraph
).
PlaceholderSpan
extends InlineSpan
to represent a reserved region within a paragraph.
WidgetSpan
extends PlaceholderSpan
to embed a Widget
within a paragraph. Widgets are constrained to the maximum width of the paragraph. The widget is laid out and painted by RenderParagraph
; TextPainter
simply leaves space within the paragraph.
ParagraphBuilder
tracks the number of placeholders added so far; this number is used when building to identify the PlaceholderDimensions
corresponding to this span. These dimensions are typically computed by RenderParagraph
, which lays out all widget spans before building the paragraph so that the necessary amount of space is reserved.
Paragraph is a piece of text wherein each glyph is sized and positioned appropriately; Paragraph.layout
must be invoked to compute these metrics. Paragraph supports efficient resizing and painting (via Canvas.drawParagraph
) and must be built by ParagraphBuilder
. Each glyph is assigned an integer offset computed before rendering (thus there is no affinity issue). Note that Paragraph
is a thin wrapper around engine code.
After layout, paragraphs can report height, width, longest line width, and intrinsic width. Maximum intrinsic width maximally reduces the paragraphās height. Minimum intrinsic width is the smallest width allowing correct rendering.
Paragraphs can report all placeholder locations and dimensions (reported as TextBox
instances), the TextPosition
associated with a 2D offset, and word boundaries given an integer offset.
Once laid out, Paragraphs
can provide bounding boxes for a range within the pre-rendered text. Boxes are derived from runs of text, each of which may have a distinct style and therefore height. Thus, boxes can be sized in a variety of ways (via BoxHeightStyle
, BoxWidthStyle
). Boxes can tightly enclose only the rendered glyphs (the default), expand to include different portions of the line height, be sized to the maximum height in a given line, etc.
ParagraphBuilder
assembles a single Paragraph
from a sequence of text and placeholders using the provided ParagraphStyle
. TextStyles
are pushed and popped, allowing styles to be inherited and overridden, and placeholder boxes (e.g., for embedded widgets) are reserved and tracked. The Paragraph
itself is built by the engine.
TextOverflow
describes how visual overflow is handled when rendering text via RenderParagraph
. Options include clipping, fading, adding ellipses, or tolerating overflow.
TextPainter
performs the actual work of building a Paragraph
from an InlineSpan
tree and painting it to the canvas; the caller must explicitly request layout and painting. The painter incorporates the various text rendering APIs
to provide a convenient interface for rendering text. If the containerās width changes, layout and painting must be repeated.
Text layout selects a width that is within the provided minimum and maximum values, but that is as close to the maximum intrinsic width as possible (i.e., consumes the least height). Redundant calls are ignored.
TextPainter
supports a variety of queries; it can provide bounding boxes for a TextSelection
, the height and offset of the glyph at a given TextPosition
, neighboring editable offsets, and can convert a 2D offset into a TextPosition
.
A preferred line height can be computed by rendering a test glyph and measuring the resulting height (via TextPainter.preferredLineHeight
).
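A minimal sketch tying these pieces together: an InlineSpan tree is laid out and painted by a TextPainter (the function and its arguments are illustrative):

```dart
import 'package:flutter/painting.dart';

// Sketch: building, laying out, and painting a small InlineSpan tree.
void paintGreeting(Canvas canvas, double maxWidth) {
  final TextPainter painter = TextPainter(
    text: const TextSpan(
      text: 'Hello, ',
      style: TextStyle(fontSize: 16.0, color: Color(0xFF000000)),
      children: <InlineSpan>[
        TextSpan(text: 'world', style: TextStyle(fontWeight: FontWeight.bold)),
      ],
    ),
    textDirection: TextDirection.ltr, // directionality must be explicit here
  );
  painter.layout(maxWidth: maxWidth); // builds and lays out the Paragraph
  painter.paint(canvas, Offset.zero); // draws via Canvas.drawParagraph
}
```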
Text is a wrapper widget for RichText
which obtains the ambient text style via DefaultTextStyle
.
RichText
is a MultiChildRenderObjectWidget
that configures a single RenderParagraph
with a provided InlineSpan
. Its children are obtained by traversing this span to locate WidgetSpan
instances. Any render objects produced by the associated widgets will be attached to the RichText
ās RenderParagraph
; this is how the RenderParagraph
is able to lay out and paint widgets into the paragraph's placeholders.
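A small hedged example of this structure; the WidgetSpan's Icon becomes a child render box of the underlying RenderParagraph (the build function is illustrative):

```dart
import 'package:flutter/material.dart';

// Sketch: an inline widget embedded in a paragraph via WidgetSpan.
Widget buildLabel() {
  return RichText(
    textDirection: TextDirection.ltr,
    text: TextSpan(
      text: 'Tap the ',
      style: const TextStyle(color: Colors.black),
      children: const <InlineSpan>[
        WidgetSpan(child: Icon(Icons.settings, size: 16)),
        TextSpan(text: ' icon to continue.'),
      ],
    ),
  );
}
```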
TextParentData
is the parent data associated with the inline widgets contained in a paragraph of text. It extends ContainerBoxParentData
to track the text scale applied to the child.
RenderParagraph
displays a paragraph of text, optionally containing inline widgets. It delegates to (and augments) an underlying TextPainter
which builds, lays out, and paints the actual paragraph. Any operations that depend on text layout generally recompute layout blindly; this is acceptable since TextPainter
ignores redundant calls.
RenderParagraph
adapts a TextPainter
to the render tree, adding the ability to render widgets (i.e., WidgetSpan
instances) within the text. Any such widgets are parented to the RenderParagraph
; a list of all descendant placeholders (RenderParagraph._placeholderSpans
) is also cached. The RenderParagraph
proxies the various inputs to the text rendering system, invalidating painting and layout as values change (RenderComparison
allows this to be done efficiently).
Layout is performed in three stages: inline children are laid out to determine placeholder dimensions, text is laid out with placeholder dimensions defined, and children are positioned using the final placeholder positions computed by the engine. Finally, the desired TextOverflow
effect is applied (by clipping, configuring an overflow shader, etc).
Children (i.e., inline widgets) are laid out in sequence with infinite height and a maximum width matching that of the paragraph. A list of PlaceholderDimensions
is assembled using the resulting layout. If the associated InlineSpan
is aligned to the baseline, the offset to the childās baseline is computed (via RenderBox.getDistanceToBaseline
); this ensures that the childās baseline is coincident with the textās baseline. Finally, the dimensions are bound to the TextPainter
(via TextPainter.setPlaceholderDimensions
).
Next, the box constraints are converted to ParagraphConstraints
(i.e., height is ignored). If wrapping or truncation is enabled, the maximum width is retained; otherwise, the width is considered infinite. These constraints are provided to TextPainter.layout
, which builds and lays out the underlying Paragraph
(via the engine).
Finally, children are positioned based on the final text layout. Positions are read from TextPainter.inlinePlaceholderBoxes
, which exposes an ordered list of TextBox
instances.
Since TextPainter
is stateful, and certain operations (e.g., computing intrinsic dimensions) destroy this state, the text must be relaid out before painting.
Any applicable clip is applied to the canvas; if the TextOverflow
setting requires a shader, the canvas is configured accordingly.
The TextPainter
paints the text (via Canvas.drawParagraph
). Empty spaces will appear in the text wherever placeholders were included.
Finally, each child is rendered at the offset corresponding to the associated placeholder in the text. If applicable, the text scale factor is applied to the child during painting (e.g., causing it to appear larger).
All InlineSpan
subclasses can be associated with a gesture recognizer (InlineSpan.recognizer
). This instanceās lifecycle, however, must be maintained by client code (i.e., the RenderParagraph
). Additionally, events must be propagated to each InlineSpan
since spans do not support bubbling themselves.
The RenderParagraph
overrides RenderObject.handleEvent
to attach new pointers (i.e., PointerDownEvent
) directly to the span that was tapped.
The affected span is identified by first laying out the text, mapping the eventās offset to a TextPosition
, then finally locating the span that contains this position (via TextPainter.getPositionForOffset
and InlineSpan.getSpanForPosition
, respectively).
The RenderParagraph
is also responsible for hit testing any render objects included via inline widgets. These are hit tested using an approach similar to that of RenderBox
, but one that additionally takes into account any text scaling.
The RenderParagraph
is always included in the hit test result.
Intrinsic dimensions cannot be calculated if placeholders are to be aligned relative to the textās baseline; this would require a full layout pass according to the RenderBox
layout protocol.
Intrinsics are dependent on two things: inline widgets (children) and text layout. Children must be measured to establish the size of any placeholders in the text. Next, the text is laid out using the resulting placeholder dimensions so that its intrinsic dimensions can be retrieved (Paragraph.layout
is required to obtain the intrinsic dimensions from the engine).
The minimum and maximum intrinsic width of text is computed without constraint on width. The minimum value corresponds to the width of the first word and the maximum value corresponds to the full string laid out on a single line.
The minimum and maximum intrinsic height of text is computed using the provided width (text layout is width-in-height-out). Moreover, the maximum and minimum values are equivalent: the textās height is solely determined by its width. Thus, both values match the height of the text when rendered within the given width.
Intrinsic dimensions for all children are queried via the RenderBox
protocol. The input dimension (e.g., height) is provided to obtain the opposite intrinsic dimension (e.g., minimum intrinsic width); this value is used to obtain the remaining intrinsic dimension (e.g., minimum intrinsic height). These values are then wrapped in a PlaceholderDimensions
instance which is aligned appropriately (via RenderParagraph._placeholderSpans
).
Variants obtaining minimum and maximum intrinsic dimensions are equivalent other than the render box methods they invoke.
Both the intrinsic width and height of children must be computed since the placeholder boxes affect both dimensions of text layout. However, when measuring maximum intrinsic width, height can be ignored since text is laid out in a single line.
The intrinsic sizes of all children are provided to the text painter prior to layout (via TextPainter.setPlaceholderDimensions
). This ensures that the resulting intrinsics accurately reflect any inline widgets. After layout, intrinsics can be read from the Paragraph
directly. Note that this is destructive in that it wipes out earlier dimensions associated with the TextPainter
.
Unlike other frameworks, Flutter
does not have a default text direction (TextDirection
). Throughout the lower levels of the framework, directionality must be specified. At the widget level, an ambient directionality is introduced by the Directionality
widget. Other widgets use the ambient directionality when interacting with lower level aspects of the framework.
Often, a visual and a directional variant of a widget is provided (EdgeInsets
vs. EdgeInsetsDirectional
). The former exposes top, left, right, and bottom properties; the latter exposes top, start, end, and bottom properties.
Painting is a notable exception; the canvas (Canvas) is always visually oriented, with the X and Y axes running from left-to-right and top-to-bottom, respectively.
Minikin measures and lays out the text.
Minikin uses ICU
to split text into lines.
Minikin uses HarfBuzz
to retrieve glyphs from a font.
Skia paints the text and text decorations on a canvas.
Path
describes a sequence of potentially disjoint movements on a plane. A path tracks a current point as well as one or more subpaths (created via Path.moveTo
). Subpaths may be closed (i.e., the first and last points are coincident), open (i.e., the first and last points are distinct), or self intersecting (i.e., movements within the path intersect). Paths incorporate lines, arcs, beziers, and more; each operation begins at the current point and, once complete, defines the new current point. The current point begins at the origin. Paths can be queried (via Path.contains
), transformed (via Path.transform
), and merged (via Path.combine
, which accepts a PathOperation
).
PathFillType
defines the criteria determining whether a point is contained by the path. PathFillType.evenOdd
casts a ray from the point outward, summing the number of edge crossings; an odd count indicates that the point is internal. PathFillType.nonZero
considers the pathās directionality. Again casting a ray from the point outward, this method sums the number of clockwise and counterclockwise crossings. If the counts arenāt equal, the point is considered to be internal.
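A brief sketch of building and querying a closed subpath:

```dart
import 'dart:ui';

// Sketch: a triangular subpath queried for containment.
Path buildTriangle() {
  final Path path = Path()
    ..moveTo(0, 0)    // start a new subpath at the origin
    ..lineTo(100, 0)
    ..lineTo(50, 80)
    ..close();        // join the current point back to the subpath's start
  assert(path.contains(const Offset(50, 20))); // point inside the triangle
  return path;
}
```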
Canvas
represents a graphical context supporting a number of drawing operations. These operations are captured by an associated PictureRecorder
and, once finalized, transformed into a Picture
. The Canvas
is associated with a clip region (i.e., an area within which painting will be visible), and a current transform (i.e., a matrix to be applied to any drawing), both managed using a stack (i.e., clip regions and transforms can be pushed and popped as drawing proceeds). Any drawing outside of the canvasās culling box (ācullRect
ā) may be discarded; by default, however, affected pixels are retained. Many operations accept a Paint
parameter which describes how the drawing will be composited (e.g., the fill, stroke, blending, etc).
Canvas
exposes a rich API for drawing. The majority of these operations are implemented within the engine.
Canvas.clipPath
, Canvas.clipRect
, etc., refine (i.e., reduce) the clip region. These operations compute the intersection of the current clip region and the provided geometry to define a new clip region. The clip region can be anti-aliased to provide a gradual blending.
Canvas.translate
, Canvas.scale
, Canvas.transform
, etc., alter the current transformation matrix (i.e., by multiplying it by an additional transform). The former methods apply standard transformations, whereas the latter applies an arbitrary 4x4 matrix (specified in column-major order).
Canvas.drawRect
, Canvas.drawLine
, Canvas.drawPath
, etc., perform fundamental drawing operations.
Canvas.drawImage
, Canvas.drawAtlas
, Canvas.drawPicture
, etc., copy pixels from a rendered image or recorded picture into the current canvas.
Canvas.drawParagraph
paints text into the canvas (via Paragraph._paint
).
Canvas.drawVertices
, Canvas.drawPoints
, etc., describe solids using a collection of points. The former constructs triangles from a set of vertices (Vertices) and a vertex mode (VertexMode
); this mode describes how vertices are composed into triangles (e.g., VertexMode.triangles
specifies that each sequence of three points defines a new triangle). The resulting triangles are filled and composited using the provided Paint
and BlendMode
. The latter paints a set of points using a PointMode
describing how the collection of points is to be interpreted (e.g., as defining line segments or disconnected points).
The save stack tracks the current transformation and clip region. New entries can be pushed (via Canvas.save
or Canvas.saveLayer
) and popped (via Canvas.restore
). The number of items in this stack can also be queried (via Canvas.getSaveCount
); there is always at least one item on the stack. All drawing operations are subject to the transform and clip at the top of the stack.
All drawing operations are performed sequentially (by default or when using Canvas.save
/Canvas.restore
). If the operation utilizes blending, it will be blended immediately after completing.
Canvas.saveLayer
allows drawing operations to be grouped together and composited as a whole. Each individual operation will still be blended within the saved layer; however, once the layer is completed, the composite drawing will be blended as a whole using the provided Paint
and bounds.
For example, an arbitrary drawing can be made consistently translucent by first painting it using an opaque fill, and then blending the resulting layer with the canvas. If instead each component of the drawing were individually blended, overlapping regions would appear darker.
This is particularly useful for antialiased clip regions (i.e., regions that aren't pixel aligned). Without layers, any operations intersecting the clip would need to be antialiased (i.e., blended with the background). If a subsequent operation intersects the clip at this same point, it would be blended with both the background and the previous operation; this produces visual artifacts (e.g., color bleed). If both operations were combined into a layer and composited as a whole, only the final operation would be blended.
Note that though this doesnāt introduce a new framework layer, it does cause the engine to switch to a new rendering target. This is fairly expensive as it flushes the GPU
ās command buffer and requires data to be shuffled.
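The translucency example above might look like the following sketch: two overlapping opaque rectangles are grouped into a saved layer and composited with 50% alpha, so the overlap does not appear darker (the function is illustrative):

```dart
import 'dart:ui';

// Sketch: grouping two overlapping rectangles so they are blended as a whole.
void paintGroup(Canvas canvas) {
  final Paint layerPaint = Paint()..color = const Color(0x80FFFFFF); // 50% alpha
  canvas.saveLayer(const Rect.fromLTWH(0, 0, 200, 200), layerPaint);
  final Paint fill = Paint()..color = const Color(0xFF2196F3); // opaque fill
  canvas.drawRect(const Rect.fromLTWH(0, 0, 120, 120), fill);
  canvas.drawRect(const Rect.fromLTWH(60, 60, 120, 120), fill);
  canvas.restore(); // composite the whole layer using layerPaint's alpha
}
```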
Paint
describes how a drawing operation is to be applied to the canvas. In particular, it specifies a number of graphical parameters including the color to use when filling or stroking lines (Paint.color
, Paint.colorFilter
, Paint.shader
), how new painting is to be blended with old painting (Paint.blendMode
, Paint.isAntiAlias
, Paint.maskFilter
), and how edges are to be drawn (Paint.strokeWidth
, Paint.strokeJoin
, Paint.strokeCap
). Fundamental to most drawing is whether the result is to be stroked (e.g., drawn as an outline) or filled; Paint.style
exposes a PaintingStyle
instance specifying the mode to be used.
If stroking, Paint.strokeWidth
is measured in logical pixels orthogonal to the path being painted. A value of 0.0
will cause the line to be rendered as thin as possible (āhairline renderingā).
Any lines that are drawn will be capped at their endpoints according to a StrokeCap
value (via Paint.strokeCap
; StrokeCap.butt
is the default and does not paint a cap). Caps extend the overall length of lines in proportion to the stroke width.
Discrete segments are joined according to a StrokeJoin
value (via Paint.strokeJoin
; StrokeJoin.miter
is the default and extends the original line such that the next can be drawn directly from it). A limit may be specified to prevent the original line from extending too far (via Paint.strokeMiterLimit
; once exceeded, the join reverts to StrokeJoin.bevel
).
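A small sketch of a stroked Paint configuration using these properties:

```dart
import 'dart:ui';

// Sketch: a Paint configured for stroking with rounded caps and joins.
final Paint strokePaint = Paint()
  ..style = PaintingStyle.stroke
  ..strokeWidth = 4.0
  ..strokeCap = StrokeCap.round   // round off the ends of open lines
  ..strokeJoin = StrokeJoin.round // round off corners between segments
  ..color = const Color(0xFF000000);
```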
ColorFilter
describes a function mapping from two input colors (e.g., the paintās color and the destinationās color) to a final output color (e.g., the final composited color). If a ColorFilter
is provided, it overrides both the paint color and shader; otherwise, the shader overrides the color.
MaskFilter
applies a filter (e.g., a blur) to the drawing once it is complete but before it is composited. Currently, this is limited to a Gaussian blur.
Shader
is produced by the engine using Skia's shaders. The framework includes several subclasses, including Gradient
and ImageShader
. The two are similar: the former generates pixels by smoothly blending colors, whereas the latter reads them directly from an image. Both support tiling so that the original pixels can be extended beyond their bounds (a different TileMode
may be specified for each direction); ImageShader
also supports applying an arbitrary matrix to the source image.
TextRange
represents a range using [start, end) character indices: start is the index of the first character and end is the index after the last character. If both are -1, the range is collapsed (empty) and outside of the text (invalid). If both are equal, the range is collapsed but potentially within the text (i.e., an insertion point). If start <= end, the range is said to be normal.
TextSelection
expands on range to represent a selection of text. A range is specified as a [baseOffset
, extentOffset
) using character indices. The lesser of the two offsets becomes the underlying range's start, with the greater becoming its end (i.e., the inherited range is normalized). If both offsets are the same, the selection is collapsed, representing an insertion point; selections have a concept of directionality (TextSelection.isDirectional
) which may be left ambiguous until the selection is uncollapsed. Both offsets are resolved to positions using a provided affinity. This ensures that the selection is unambiguous before and after rendering (e.g., due to automatic line breaks).
TextSelectionPoint
pairs an offset with the text direction at that offset; this is helpful for determining how to render a selection handle at a given position.
TextEditingValue
captures the current editing state. It exposes the full text in the editor (TextEditingValue.text
), a range in that text that is still being composed (TextEditingValue.composing
), and any selection present in the UI (TextEditingValue.selection
). Note that an affinity isnāt necessary for the composing range since it indexes unrendered text.
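A brief sketch of an editing state, assuming the text "hello world" with the word "world" selected and no active composing region:

```dart
import 'package:flutter/services.dart';

// Sketch: a fully specified editing state.
const TextEditingValue value = TextEditingValue(
  text: 'hello world',
  selection: TextSelection(baseOffset: 6, extentOffset: 11), // selects "world"
  composing: TextRange.empty, // nothing is being composed by the IME
);
```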
TextSelectionDelegate
supports reading and writing the selection (via TextSelectionDelegate.textEditingValue
), configures any associated selection actions (e.g., TextSelectionDelegate.canCopy
), and provides helpers to manage selection UI (e.g., TextSelectionDelegate.bringIntoView
, TextSelectionDelegate.hideToolbar
). This delegate is utilized primarily by TextSelectionControls
to implement the toolbar and selection handles.
ToolbarOptions
is a helper bundling all options that determine toolbar behavior within an EditableText
-- that is, how the overridden TextSelectionDelegate
methods behave.
TextSelectionControls
is an abstract class that builds and manages selection-related UI including the toolbar and selection handles. This class also implements toolbar behaviors (e.g., TextSelectionControls.handleCopy
) and eligibility checks (e.g., TextSelectionControls.canCopy
), deferring to the delegate where appropriate (e.g., TextSelectionDelegate.bringIntoView
to scroll the selection into view). These checks are mainly used by TextSelectionControls
ā build methods (e.g., TextSelectionControls.buildHandle
, TextSelectionControls.buildToolbar
), which construct the actual UI. Concrete implementations are provided for Cupertino
and Material
(_CupertinoTextSelectionControls
and _MaterialTextSelectionControls
, respectively), producing idiomatic UI for the corresponding platform. The build process is initiated by TextSelectionOverlay
.
TextSelectionOverlay
is the visual engine underpinning selection UI. It integrates TextSelectionControls
and TextSelectionDelegate
to build and configure the text selection handles and toolbar, and TextEditingValue
to track the current editing state; the editing state may be updated at any point (via TextSelectionOverlay.update
). Updates are made build-safe by scheduling a post-frame callback if in the midst of a persistent frame callback (building, layout, etc; this avoids infinite recursion in the build method).
The UI is inserted into the enclosing Overlay
and hidden and shown as needed (via TextSelectionOverlay.hide
, TextSelectionOverlay.showToolbar
, etc).
The toolbar and selection handles are positioned using leader/follower layers (via CompositedTransformLeader
and CompositedTransformFollower
). A LayerLink
instance for each type of UI is anchored to a region within the editable text so that the two layers are identically transformed (e.g., to efficiently scroll together). When this happens, TextSelectionOverlay.updateForScroll
marks the overlay as needing to be rebuilt so that the UI can adjust to its new position.
The toolbar is built directly (via TextSelectionControls.buildToolbar
), whereas each selection handle corresponds to a _TextSelectionHandleOverlay
widget. These widgets invoke a handler when the selection range changes to update the TextEditingValue
(via TextSelectionOverlay._handleSelectionHandleChanged
).
TextSelectionGestureDetector
is a stateful widget that recognizes a sequence of selection-related gestures (e.g., a tap followed by a double tap), unlike a typical detector which recognizes just one. The text field (e.g., TextField
) incorporates the gesture detector when building the corresponding UI.
_TextSelectionGestureDetectorState
coordinates the text editing gesture detectors, multiplexing them as described above. A map of recognizer factories is assembled and assigned callbacks (via GestureRecognizerFactoryWithHandlers
) given the widgetās configuration. These are passed to a RawGestureDetector
widget which constructs the recognizers as needed.
_TransparentTapGestureRecognizer
is a TapGestureRecognizer
capable of recognizing while ceding to other recognizers in the arena. Thus, the same tap may be handled by multiple recognizers. This is particularly useful since selection handles tend to overlap editable text; a single tap in the overlap region is generally processed by the selection handle, whereas a double tap is processed by the editable text.
TextSelectionGestureDetectorBuilderDelegate
provides a hook for customizing the interaction model (typically implemented by the text field, e.g., _CupertinoTextFieldState
, _TextFieldState
). The delegate also exposes the GlobalKey
associated with the underlying EditableTextState
.
TextSelectionGestureDetectorBuilder
configures a TextSelectionGestureDetector
with sensible defaults for text editing. The delegate is used to obtain a reference to the editable text and to customize portions of the interaction model.
Platform-specific text fields extend TextSelectionGestureDetectorBuilder
to provide idiomatic interaction models (e.g., _TextFieldSelectionGestureDetectorBuilder
).
TextInputFormatter
provides a hook to transform text just before EditableText.onChanged
is invoked (i.e., when a change is committed -- not as the user types). Blocklisting, allowlisting, and length-limiting formatters are available (BlacklistingTextInputFormatter
, WhitelistingTextInputFormatter
, and LengthLimitingTextInputFormatter
, respectively).
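A hedged sketch of a custom formatter alongside the built-in length limiter (the UpperCaseFormatter class is invented for this example):

```dart
import 'package:flutter/services.dart';

// Hypothetical formatter that upper-cases committed input; the text length is
// preserved, so the selection does not need to be adjusted.
class UpperCaseFormatter extends TextInputFormatter {
  @override
  TextEditingValue formatEditUpdate(
      TextEditingValue oldValue, TextEditingValue newValue) {
    return newValue.copyWith(text: newValue.text.toUpperCase());
  }
}

final List<TextInputFormatter> formatters = <TextInputFormatter>[
  UpperCaseFormatter(),
  LengthLimitingTextInputFormatter(20), // built-in length limit
];
```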
TextEditingController
provides a bidirectional interface for interacting with an EditableText
or subclass thereof; as a ValueNotifier
, the controller will notify whenever state changes, including as the user types. The text (TextEditingController.text
), selection (TextEditingController.selection
), and underlying TextEditingValue
(TextEditingController.value
) can be read and written, even in response to notifications. The controller may also be used to produce a TextSpan
, an immutable span of styled text that can be painted to a layer.
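A brief sketch of typical controller usage (the demo function is illustrative):

```dart
import 'package:flutter/widgets.dart';

// Sketch: observing edits and repositioning the caret programmatically.
void demo() {
  final TextEditingController controller = TextEditingController(text: 'hello');
  controller.addListener(() {
    debugPrint('text is now: ${controller.text}');
  });
  // Collapse the selection to an insertion point at the end of the text.
  controller.selection =
      TextSelection.collapsed(offset: controller.text.length);
}
```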
EditableText
is the fundamental text input widget, integrating the other editable building blocks (e.g., TextSelectionControls
, TextSelectionOverlay
, etc.) with keyboard interaction (via TextInput
), scrolling (via Scrollable
), and text rendering to implement a basic input field. EditableText
also supports basic gestures (tapping, long pressing, force pressing) for cursor and selection management and IME
interaction. A variety of properties allow editing behavior and text appearance to be customized, though the actual work is performed by EditableTextState
. When EditableText
receives focus but is not fully visible, it will be scrolled into view (via RenderObject.showOnScreen
).
The resulting text is styled and structured (via TextStyle
and StrutStyle
), aligned (via TextAlign
), and localized (via TextDirection
and Locale
). EditableText
also supports a text scale factor.
EditableText
layout behavior is dependent on the maximum and minimum number of lines (EditableText.maxLines
, EditableText.minLines
) and whether expansion is enabled (EditableText.expands
).
If maximum lines is one (the default), the field will scroll horizontally on one line.
If maximum lines is null, the field will be laid out for the minimum number of lines, and grow vertically.
If maximum lines is greater than one, the field will be laid out for the minimum number of lines, and grow vertically until the maximum number of lines is reached.
If a multiline field reaches its maximum height, it will scroll vertically.
If a field is expanding, it is sized to the incoming constraints.
EditableText
follows a simple editing flow to allow the application to react to text changes and handle keyboard actions (via EditableTextState._finalizeEditing
).
EditableText.onChanged
is invoked as the fieldās contents are changed (i.e., as characters are explicitly typed).
EditableText.onEditingComplete
(by default) submits changes, clearing the controllerās composing bit, and relinquishes focus. If a non-completion action was selected (e.g., ānextā), focus is retained to allow the submit handler to manage focus itself. A custom handler can be provided to alter the latter behavior.
EditableText.onSubmitted
is invoked last, when the user has indicated that editing is complete (e.g., by hitting ādoneā).
EditableTextState
applies the configuration described by EditableText
to implement a text field; it also manages the flow of information with the platform TextInput
service. Additionally, the state object exposes a simplified, top-level interface for interacting with editable text. The editing value can be updated (via EditableTextState.updateEditingValue
), the toolbar toggled (via EditableTextState.toggleToolbar
), the IME
displayed (via EditableTextState.requestKeyboard
), editing actions performed (via EditableTextState.performAction
), text scrolled into view (via EditableText.bringIntoView
) and prepared for rendering (via EditableText.buildTextSpan
). In this respect, EditableTextState
is the glue binding many of the editing components together.
Is a: TextInputClient
, TextSelectionDelegate
Breakdown how it is updated by the client / notifies the client of changes to keep things in sync
updateEditingValue
is invoked by the client when the user types on the keyboard (same for performAction
/ floatingCursor
).
EditableTextState
participates in the keep alive protocol (via AutomaticKeepAliveClientMixin
) to ensure that it isnāt prematurely destroyed, losing editing state (e.g., when scrolled out of view).
When text is specified programmatically (via EditableTextState.textEditingValue
, EditableTextState.updateEditingValue
), the underlying TextInput
service must be notified so that platform state remains in sync (applying any text formatters beforehand). EditableTextState._didChangeTextEditingValue
RenderEditable
TextField
TextFieldState
CupertinoTextField
The toolbar UI is built by TextSelectionControls.buildToolbar
using the line height, a bounding rectangle for the input (in global, logical coordinates), an anchor position and, if necessary, a tuple of TextSelectionPoints
.
EditableText
triggers on gesture, overlay does the work
Selection handles are visual handles rendered just before and just after a selection. Handles need not be symmetric; TextSelectionHandleType
characterizes which variation of the handle is to be rendered (left, right, or collapsed).
Each handle is built by TextSelectionControls.buildHandle
which requires a type and a line height.
The handleās size is computed by TextSelectionControls.getHandleSize
, typically using the associated render editableās line height (RenderEditable.preferredLineHeight
), which is derived from the text painterās line height (TextPainter.preferredLineHeight
). The painter calculates this height through direct measurement.
The handleās anchor point is computed by TextSelectionControls.getHandleAnchor
using the type of handle being rendered and the associated render editableās line height, as above.
EditableText
triggers on select / cursor position, overlay does the work
EditableText
uses the handleās size and anchor to ensure that selection handles are fully visible on screen (via RenderObject.showOnScreen
).
IME
(input method editor)?
At the lowest level, images are represented as a Uint8List
(i.e., an opaque list of unsigned bytes). These bytes can be expressed in any number of image formats, and must be decoded to a common representation by a codec.
instantiateImageCodec
accepts a list of bytes and returns an engine-provided codec already bound to that image data. This function accepts an optional width and height; if these do not match the image's intrinsic size, the image is scaled accordingly. If only one dimension is provided, the other dimension remains the intrinsic dimension. PaintingBinding.instantiateImageCodec
provides a thin wrapper around this function with the intention of eventually supporting additional processing.
Codec represents a decoder bound to a specific, pre-supplied byte array. Codecs process both single-frame and animated images. Once the Codec
is retrieved via instantiateImageCodec
, the decoded FrameInfo
(which contains the image) may be requested via Codec.getNextFrame
; this may be invoked repeatedly for animations, and will automatically wrap to the first frame. The Codec
must be disposed when no longer needed (the image data remains valid).
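A minimal sketch of this decoding flow, assuming the encoded bytes have already been read into memory (the helper name is hypothetical):

```dart
import 'dart:typed_data';
import 'dart:ui' as ui;

/// Decodes the first frame from raw, encoded image bytes.
Future<ui.Image> decodeFirstFrame(Uint8List bytes) async {
  // Bind the encoded bytes to an engine codec (optionally rescaling via
  // targetWidth/targetHeight).
  final ui.Codec codec = await ui.instantiateImageCodec(bytes);
  try {
    // For animated images, calling getNextFrame repeatedly advances (and
    // eventually wraps) through the frames.
    final ui.FrameInfo frame = await codec.getNextFrame();
    return frame.image; // Remains valid after the codec is disposed.
  } finally {
    codec.dispose();
  }
}
```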
DecoderCallback
provides a layer of indirection between image decoding (via the Codec
returned by instantiateImageCodec
) and any additional decoding necessary for an image (e.g., resizing). It is primarily used with ImageProvider
to encapsulate decoding-specific implementation details.
FrameInfo
corresponds to a single frame in an animated image (single images are considered one-frame animations). The frame's duration, if applicable, is exposed via FrameInfo.duration
. The decoded Image
may be read via FrameInfo.image
.
Image is an opaque handle to decoded image pixels managed by the engine, with a width and a height. The decoded bytes can be obtained via Image.toByteData
which accepts an ImageByteFormat
specifying the desired encoding (e.g., ImageByteFormat.rawRgba
, ImageByteFormat.png
). However, the raw bytes are often not required as the Image
handle is sufficient to paint images to the screen.
ImageInfo
associates an Image
with a pixel density (i.e., ImageInfo.scale
). Scale describes the number of image pixels per one side of a logical pixel (e.g., a scale of 2.0
implies that each 1x1 logical pixel corresponds to 2x2 image pixels; that is, a 100x100 pixel image would be painted into a 50x50 logical pixel region, providing extra detail on high-density displays).
The image framework must account for a variety of cases that complicate image handling. Some images are obtained asynchronously; others are arranged into image sets so that an optimal variant can be selected at runtime (e.g., for the current resolution). Others correspond to animations which update at regular intervals. Any of these images may be cached to avoid unnecessary loading.
ImageStream
provides a consistent handle to a potentially evolving image resource; changes may be due to loading, animation, or explicit mutation. Changes are driven by a single ImageStreamCompleter
, which notifies the ImageStream
whenever concrete image data is available or changes (via ImageInfo
). The ImageStream
forwards notifications to one or more listeners (i.e., ImageStreamListener
instances), which may be invoked multiple times as the image loads or mutates. Each ImageStream
is associated with a key that can be used to determine whether two ImageStream
instances are backed by the same completer [?].
ImageStreamListener
encapsulates a set of callbacks for responding to image events. If the image is being loaded (e.g., via the network), an ImageChunkListener
is invoked with an ImageChunkEvent
describing overall progress. If an image has become available, an ImageListener
is invoked with the final ImageInfo
(including a flag indicating whether the image was loaded synchronously). Last, if the image has failed to load, an ImageErrorListener
is invoked.
The chunk listener is only called when an image must be loaded (e.g., via NetworkImage
). It may also be called after the ImageListener
if the image is an animation (i.e., another frame is being fetched).
The ImageListener
may be invoked multiple times if the associated image is an animation (i.e., once per frame).
ImageStreamListeners
are compared on the basis of the contained callbacks.
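A sketch of attaching a listener to a stream resolved from a provider; the asset name is hypothetical, and the listener removes itself after the first frame:

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/widgets.dart';

void watchImage(BuildContext context) {
  final ImageStream stream = const AssetImage('images/logo.png')
      .resolve(createLocalImageConfiguration(context));
  late final ImageStreamListener listener;
  listener = ImageStreamListener(
    // Invoked once per decoded frame (possibly synchronously if cached).
    (ImageInfo info, bool synchronousCall) {
      debugPrint('frame ${info.image.width}x${info.image.height} '
          '(sync: $synchronousCall, scale: ${info.scale})');
      stream.removeListener(listener);
    },
    // Invoked with loading progress, e.g. for network images.
    onChunk: (ImageChunkEvent event) =>
        debugPrint('loaded ${event.cumulativeBytesLoaded} bytes'),
    // Invoked if the image fails to load or decode.
    onError: (Object error, StackTrace? stackTrace) =>
        debugPrint('failed: $error'),
  );
  stream.addListener(listener);
}
```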
ImageStreamCompleter
manages image loading for an ImageStream
from an asynchronous source (typically a Codec
). A list of ImageStreamListener
instances are notified whenever image data becomes available (i.e., the completer "completes"), either in part (via ImageStreamListener.onImageChunk
) or in whole (via ImageStreamListener.onImage
). Listeners may be invoked multiple times (e.g., as chunks are loaded or with multiple animation frames). The completer notifies listeners when an image becomes available (via ImageStreamCompleter.setImage
). Adding listeners after the image has been loaded will trigger synchronous notifications; this is how the ImageCache
avoids refetching images unnecessarily.
The corresponding Image
must be resolved to an ImageInfo
(i.e., by incorporating scale); the scale is often provided explicitly.
OneFrameImageStreamCompleter
handles one-frame (i.e., single) images. The corresponding ImageInfo
is provided as a future; when this future resolves, OneFrameImageStreamCompleter.setImage
is invoked, notifying listeners.
MultiFrameImageStreamCompleter
handles multi-frame images (e.g., animations or engine frames), completing once per animation frame as long as there are listeners. If the image is only associated with a single frame, that frame is emitted immediately. An optional stream of ImageChunkEvents
allows loading status to be conveyed to the attached listeners. Note that adding a new listener will attempt to decode the next frame; this is safe, if inefficient, as Codec.getNextFrame
automatically cycles.
The next frame is eagerly decoded by the codec (via Codec.getNextFrame
). Once available, a non-repeating callback is scheduled to emit the frame after the corresponding duration has lapsed (via FrameInfo.duration
); the first frame is emitted immediately. If there are additional frames (via Codec.frameCount
), or the animation cycles (via Codec.repetitionCount
), this process is repeated. Frames are emitted via MultiFrameImageStreamCompleter.setImage
, notifying all subscribed listeners.
In this way, the next frame is decoded eagerly but only emitted during the first application frame after the duration has lapsed. If at any point there are no listeners, the process is paused; no frames are decoded or emitted until a listener is added.
A singleton ImageCache
is created by the PaintingBinding
during initialization (via PaintingBinding.createImageCache
). The cache maps keys to ImageStreamCompleters
, retaining only the most recently used entries. Once a maximum number of entries or bytes is reached, the least recently accessed entries are evicted. Note that any images actively retained by the application (e.g., Image
, ImageInfo
, ImageStream
, etc.) cannot be invalidated by this cache; the cache is only useful when locating an ImageStreamCompleter
for a given key. If a completer is found, and the image has already been loaded, the listener is notified with the image synchronously.
ImageCache.putIfAbsent
serves as the main interface to the cache. If a key is found, the corresponding ImageStreamCompleter
is returned. Otherwise, the completer is built using the provided closure. In both cases, the timestamp is updated.
Because images are loaded asynchronously, the cache policy can only be enforced once the image loads. Thus, the cache maintains two maps: ImageCache._pendingImages
and ImageCache._cache
. On a cache miss, the newly built completer is added to the pending map and assigned an ImageStreamListener
; when the listener is notified, the final image size is calculated, the listener removed, and the cache policy applied. The completer is then moved to the cache map.
If an image fails to load, it does not contribute to cache size but it does consume an entry. If an image is too large for the cache, the cache is expanded to accommodate the image with some headroom.
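A sketch of interacting with the singleton cache (assuming a recent Flutter where the binding has been initialized, e.g., inside a running app); the limits below are illustrative only:

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/painting.dart';

void configureImageCache() {
  final ImageCache cache = PaintingBinding.instance.imageCache;
  cache.maximumSize = 500;            // maximum number of entries
  cache.maximumSizeBytes = 50 << 20;  // roughly 50 MB
  debugPrint('pending: ${cache.pendingImageCount}, '
      'cached: ${cache.currentSize} (${cache.currentSizeBytes} bytes)');
  // Drop cached completers; images actively retained elsewhere are unaffected.
  cache.clear();
}
```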
ImageConfiguration
describes the operating environment so that the best image can be selected from a set of alternatives (i.e., a double resolution image for a retina display); this is the primary input to ImageProvider
. A configuration can be extracted from the element tree via createLocalImageConfiguration
.
ImageProvider
identifies an image without committing to a specific asset. This allows the best variant to be selected according to the current ImageConfiguration
. Any images managed via ImageProvider
are passed through the global ImageCache
.
ImageProvider.obtainKey
produces a key that uniquely identifies a specific image (including scale) given an ImageConfiguration
and the providerās settings.
ImageProvider.load
builds an ImageStreamCompleter
for a given key. The completer begins fetching the image immediately and decodes the resulting bytes via the DecoderCallback
.
ImageProvider.resolve
wraps both methods to (1) obtain a key (via ImageProvider.obtainKey
), (2) query the cache using the key, and (3) if no completer is found, create an ImageStreamCompleter
(via ImageProvider.load
) and update the cache.
precacheImage
provides a convenient wrapper around ImageProvider
so that a given image can be added to the ImageCache
. So long as the same key is used for subsequent accesses, the image will be available immediately (provided that it has fully loaded).
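For example, warming the cache ahead of navigation might look like the following (the asset path is hypothetical):

```dart
import 'package:flutter/widgets.dart';

/// Resolves and caches the image so a later Image widget using the same
/// provider (and thus the same key) can paint without waiting.
Future<void> warmUpSplash(BuildContext context) {
  return precacheImage(const AssetImage('images/splash.png'), context);
}
```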
ImageProvider
federates access to images, selecting the best image given the current environment (i.e., ImageConfiguration
). The provider computes a key that uniquely identifies the asset to be loaded; this creates or retrieves an ImageStreamCompleter
from the cache. Various provider subclasses override ImageProvider.load
to customize how the completer is configured; most use SynchronousFuture
to try to provide the image without needing to wait for the next frame. The ImageStreamCompleter
is constructed with a future resolving to a bound codec (i.e., associated with raw image bytes). These bytes may be obtained in a variety of ways: from the network, from memory, from an AssetBundle
, etc. The completer accepts an optional stream of ImageChunkEvents
so that any listeners are notified as the image loads. Once the raw image has been read into memory, an appropriate codec is provided by the engine (via a DecoderCallback
, which generally delegates to PaintingBinding.instantiateImageCodec
). This codec is used to decode frames (potentially multiple times for animated images). As frames are decoded, listeners (e.g., an image widget) are notified with the finalized ImageInfo
(which includes decoded bytes and scale data). These bytes may be painted directly via paintImage
.
FileImage
provides images from the file system. As its own key, FileImage
overrides the equality operator to compare the target file name and scale. A MultiFrameImageStreamCompleter
is configured with the provided scale, and a Codec
instantiated using bytes loaded from the file (via File.readAsBytes
). The completer will only notify listeners when the image is fully loaded.
MemoryImage
provides images directly from an immutable array of bytes. As its own key, MemoryImage
overrides the equality operator to compare scale as well as the actual bytes. A MultiFrameImageStreamCompleter
is configured with the provided scale, and a Codec
instantiated using the provided bytes. The completer will only notify listeners when the image is fully loaded.
NetworkImage
defines a thin interface to support different means of providing images from the network; it relies on instances of itself for a key.
io.NetworkImage
implements this interface using Dart
's standard HttpClient
to retrieve images. As its own key, io.NetworkImage
overrides the equality operator to compare the target URL
and scale. A MultiFrameImageStreamCompleter
is configured with the provided scale, and a Codec
instantiated using the consolidated bytes produced by HttpClient.getUrl
. Unlike the other providers, io.NetworkImage
will report loading status to its listeners via a stream of ImageChunkEvents
. This relies on the "Content-Length" header being correctly reported by the remote server.
AssetBundleImageProvider
provides images from an AssetBundle
using AssetBundleImageKey
. The key is comprised of a specific asset bundle, asset key, and image scale. A MultiFrameImageStreamCompleter
is configured with the provided scale, and a Codec
instantiated using bytes loaded from the bundle (via AssetBundle.load
). The completer will only notify listeners when the image is fully loaded.
ExactAssetImage
is a subclass that allows the bundle, asset, and image scale to be set explicitly, rather than read from an ImageConfiguration
.
AssetImage
is a subclass that resolves to the most appropriate asset given a set of alternatives and the current runtime environment. Primarily, this subclass selects assets optimized for the device's pixel ratio using a simple naming convention. Assets are organized into logical directories within a given parent. Directories are named "Nx/", where N corresponds to the image's intended scale; the default asset (with 1:1 scaling) is rooted within the parent itself. The variant that most closely matches the current pixel ratio is selected.
The main difference from the superclass is the method by which keys are produced; all other functionality (e.g., AssetImage.load
, AssetImage.resolve
) is inherited.
A JSON
-encoded asset manifest is produced from the pubspec file during building. This manifest is parsed to locate variants of each asset according to the scheme described above; from this list, the variant nearest the current pixel ratio is identified. A key is produced using this assetās scale (which may not match the deviceās pixel ratio), its fully qualified name, and the bundle that was used. The completer is configured by the superclass.
The equality operator is overridden such that only the unresolved asset name and bundle are consulted; scale (and the best fitting asset name) are excluded from the comparison.
ResizeImage
wraps another ImageProvider
to support size-aware caching. Ordinarily, images are decoded using their intrinsic dimensions (via instantiateImageCodec
); consequently, the version of the image stored in the ImageCache
corresponds to the full size image. This is inefficient for images that are displayed at a different size. ResizeImage
addresses this by augmenting the underlying key with the requested dimensions; it also applies a DecoderCallback
that forwards these dimensions via instantiateImageCodec
.
The first time an image is provided, it is loaded using the underlying provider (via ImageProvider.load
, which doesnāt update the cache). The resulting ImageStreamCompleter
is cached using the ResizeImage
ās key (i.e., _SizeAwareCacheKey
).
Subsequent accesses will hit the cache, which returns an image with the corresponding dimensions. Usages with different dimensions will result in additional entries being added to the cache.
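A sketch of wrapping a provider for size-aware caching; the URL and dimensions are illustrative:

```dart
import 'package:flutter/widgets.dart';

// Decodes and caches the image at 200x200 rather than its intrinsic size.
final ImageProvider thumbnail = ResizeImage(
  NetworkImage('https://example.com/photo.jpg'),
  width: 200,
  height: 200,
);

// The static helper returns the original provider untouched when no cache
// dimensions are supplied (mirroring what Image._resizeIfNeeded does).
final ImageProvider maybeResized = ResizeImage.resizeIfNeeded(
  200,
  null,
  NetworkImage('https://example.com/photo.jpg'),
);
```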
There are several auxiliary classes allowing image rendering to be customized. BlendMode
specifies how pixels from source and destination images are combined during compositing (e.g., BlendMode.multiply
, BlendMode.overlay
, BlendMode.difference
). ColorFilter
specifies a function combining two colors into an output color; this function is applied before any blending. ImageFilter
provides a handle to an image filter applied during rendering (e.g., Gaussian blur, scaling transforms). FilterQuality
allows the quality/performance of said filter to be broadly customized.
Canvas exposes the lowest level API for painting images into layers. The principal methods include Canvas.drawImage
, which paints an image at a particular offset, Canvas.drawImageRect
, which copies pixels from a source rectangle to a destination rectangle, Canvas.drawAtlas
, which does the same for a variety of rectangles using a "sprite atlas," and Canvas.drawImageNine
, which slices an image into a non-uniform 3x3 grid, scaling the cardinal and center boxes to fill a destination rectangle (the corners are copied directly). Each of these methods accepts a Paint
instance to be used when compositing the image (e.g., allowing a BlendMode
to be specified); each also calls directly into the engine to perform any actual painting.
paintImage
wraps the canvas API to provide an imperative API for painting images in a variety of styles. It adds support for applying a box fit (e.g., BoxFit.cover
to ensure the image covers the destination) and repeated painting (e.g., ImageRepeat.repeat
to tile an image to cover the destination), managing layers as necessary.
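A sketch of the imperative path via a custom painter; the class name is hypothetical and the ui.Image is assumed to have been decoded elsewhere (e.g., as in the codec example above):

```dart
import 'dart:ui' as ui;
import 'package:flutter/widgets.dart';

class CoverPainter extends CustomPainter {
  CoverPainter(this.image);

  final ui.Image image;

  @override
  void paint(Canvas canvas, Size size) {
    // Cover the destination rectangle, cropping as needed; paintImage
    // handles the box fit, alignment, and any tiling.
    paintImage(
      canvas: canvas,
      rect: Offset.zero & size,
      image: image,
      fit: BoxFit.cover,
      repeat: ImageRepeat.noRepeat,
    );
  }

  @override
  bool shouldRepaint(CoverPainter oldDelegate) => oldDelegate.image != image;
}

// Usage: CustomPaint(painter: CoverPainter(decodedImage))
```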
Image composes a variety of lower-level widgets, providing a high-level interface to the image rendering machinery. This widget configures an ImageProvider
(selected based on the named constructor, e.g., Image.network
, Image.asset
, Image.memory
) which it resolves to obtain an ImageStream
. Whenever this stream emits an ImageInfo
instance, the widget is rebuilt and repainted. Conversely, if the widget is reconfigured, the ImageProvider
is re-resolved, and the process repeated. From this flow, Image
extracts the necessary data to fully configure a RawImage
widget, which manages the actual RenderImage
.
If a cache width or cache height is provided, the underlying ImageProvider
is wrapped in a ResizeImage
(via Image._resizeIfNeeded
). This ensures that the image is decoded and cached using the provided dimensions, potentially limiting the amount of memory used.
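A sketch of this flow from the widget's perspective; the URL, dimensions, and builders are illustrative:

```dart
import 'package:flutter/material.dart';

Widget buildAvatar() {
  return Image.network(
    'https://example.com/avatar.jpg',
    // Wraps the NetworkImage in a ResizeImage so the cached version is
    // decoded at 128 pixels wide instead of its intrinsic size.
    cacheWidth: 128,
    // Surfaces ImageChunkEvents while the image loads.
    loadingBuilder: (context, child, progress) {
      if (progress == null) return child;
      return const Center(child: CircularProgressIndicator());
    },
    errorBuilder: (context, error, stackTrace) => const Icon(Icons.error),
  );
}
```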
Image adds support for image chrome (e.g., a loading indicator) and semantic annotations.
If animations are disabled by TickerMode
, Image
pauses rendering of any new animation frames provided by the ImageStream
for consistency.
The ImageConfiguration
passed to ImageProvider
is retrieved from the widget environment via createLocalImageConfiguration
.
RawImage
is a LeafRenderObjectWidget
wrapping a RenderImage
and all necessary configuration data (e.g., the ui.Image
, scale, dimensions, blend mode).
RenderImage
is a RenderBox
leaf node that paints a single image; as such, it relies on the widget system to repaint whenever the associated ImageStream
emits a new frame. Painting is performed by paintImage
using a destination rectangle sized by layout and positioned at the current offset. Alignment, box fit, and repetition determine how the image fills the available space.
There are two types of dimensions considered during layout: the image's intrinsic dimensions (i.e., its pixel dimensions divided by its scale) and the requested dimensions (e.g., the value of width and height specified by the caller).
During layout, the incoming constraints are applied to the requested dimensions (via RenderImage._sizeForConstraints
): first, the requested dimensions are clamped to the constraints. Next, the result is adjusted to match the imageās intrinsic aspect ratio while remaining as large as possible. If there is no image associated with the render object, the smallest possible size is selected.
The intrinsic dimension methods apply the same logic. However, instead of using the incoming constraints, one dimension is fixed (i.e., corresponding to the method's parameter) whereas the other is left unconstrained.
RenderSliverPersistentHeader
provides a base class for implementing persistent headers within a viewport, adding support for varying between a minimum and maximum extent during scrolling. Some subclasses introduce pinning behavior, whereas others allow the header to scroll into and out of view. SliverPersistentHeader
encapsulates this behavior into a stateless widget, delegating configuration (and child building) to a SliverPersistentHeaderDelegate
instance.
Persistent headers can be any combination of pinnable and floating (or neither); those that float, however, can also play a snapping animation. All persistent headers expand and contract in response to scrolling; only floating headers do so in response to user scrolling anywhere in the viewport.
A floating header reappears whenever the user scrolls in its direction. The header expands to its maximum extent as the user scrolls toward it, and shrinks as the user scrolls away.
A pinned header remains at the top of the viewport. Unless floating is also enabled, the header will only expand when approaching its actual position (e.g., the top of the viewport).
Snapping causes a floating header to animate to its expanded or contracted state when the user stops scrolling, regardless of scroll extent.
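A widget-layer sketch of configuring these behaviors (here a pinned header driven by a delegate); the extents, colors, and class name are illustrative only:

```dart
import 'package:flutter/material.dart';

/// A delegate whose child is rebuilt as the header shrinks between its
/// maximum and minimum extents.
class SectionHeaderDelegate extends SliverPersistentHeaderDelegate {
  @override
  double get minExtent => 56;

  @override
  double get maxExtent => 160;

  @override
  Widget build(BuildContext context, double shrinkOffset, bool overlapsContent) {
    // shrinkOffset grows from 0 toward maxExtent as the header scrolls away.
    final double t =
        (shrinkOffset / (maxExtent - minExtent)).clamp(0.0, 1.0).toDouble();
    return Container(
      color: Color.lerp(Colors.blue, Colors.indigo, t),
      alignment: Alignment.centerLeft,
      padding: const EdgeInsets.symmetric(horizontal: 16),
      child: const Text('Section', style: TextStyle(color: Colors.white)),
    );
  }

  @override
  bool shouldRebuild(SectionHeaderDelegate oldDelegate) => false;
}

// Usage inside a CustomScrollView's sliver list:
// SliverPersistentHeader(pinned: true, delegate: SectionHeaderDelegate())
```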
Persistent headers contain a single box child and track a maximum and minimum extent. The minimum extent is typically based on the box's intrinsic dimensions (i.e., the object's natural size). As per the render box protocol, because the header reads the child's intrinsic size, it will be re-laid out if that size changes.
RenderSliverPersistentHeader
doesnāt configure parent data or determine how to position its child along the main axis. It does, however, provide support for hit testing children (which requires subclasses to override RenderSliverPersistentHeader.childMainAxisPosition
). It also paints its child without using parent data, computing the childās offset in the same way as RenderSliverSingleBoxAdapter
(i.e., offsetting its viewport-naive box on the canvas when it is partially scrolled out of view).
Two major hooks are exposed:
RenderSliverPersistentHeader.updateChild
: supports updating the contained box whenever the persistent header lays out or the box itself changes size. The method is provided two pieces of layout information.
Shrink offset is the delta between the current and maximum extents (i.e., how much more room there is to grow). Always non-negative.
Overlaps content is true if the headerās leading edge is not at its layout position in the viewport.
RenderSliverPersistentHeader.layoutChild
: invoked by subclasses to update and then lay out the child within the largest visible portion of the header (between the maximum and minimum extents). Subclasses provide a scroll offset, maximum extent, and overlap flag, all of which may differ from the incoming constraints.
Shrink offset is set to the scroll offset (how far the header is before the viewportās leading edge) and is capped at the max extent. Conceptually, this represents how much of the header is off screen.
If the child has changed size or the shrink offset has changed, the box is given an opportunity to update its appearance (via RenderSliverPersistentHeader.updateChild
). This is done within a layout callback since, in the common case, the child is built dynamically by the delegate. This is an example of interleaving build and layout.
This flow is facilitated by the persistent header widgets (e.g., _SliverPersistentHeaderRenderObjectWidget
), which utilize a custom render object element (_SliverPersistentHeaderElement
).
When the child is updated (via RenderSliverPersistentHeader.updateChild
), the specialized element updates its childās element (via Element.updateChild
) using the widget produced by the delegateās build method.
Finally, the child is laid out with its main extent loosely constrained to the portion of the header thatās visible -- with the minimum extent as a lower bound (i.e., so that the box is never forced to be smaller than the minimum extent).
RenderSliverScrollingPersistentHeader
expands to its maximum extent when scrolled into view, and shrinks to its minimum extent before being scrolled out of view.
Lays out the child in the largest visible portion of the header (up to the maximum extent), then returns its own geometry.
SliverGeometry.scrollExtent
: always the max extent since this is how much scrolling is needed to scroll past the header.
SliverGeometry.paintOrigin
: paints before itself if thereās a negative overlap (e.g., to fill empty space in the viewport from overscroll).
SliverGeometry.paintExtent
, SliverGeometry.layoutExtent
: the largest visible portion of the header, clamped to the remaining paint extent.
SliverGeometry.maxPaintExtent
: the maximum extent; itās not possible to provide more content.
Tracks the childās position across the scroll (i.e., the distance from the headerās leading edge to the childās leading edge). The child is aligned with the trailing edge of the header to ensure it scrolls into view first.
Calculated as the portion of the maximum extent in view (which is negative when scrolled before the leading edge), minus the childās extent after layout.
RenderSliverPinnedPersistentHeader
is similar to its scrolling sibling, but remains pinned at the top of the viewport regardless of offset. It also avoids overlapping earlier slivers (e.g., useful for building stacking section labels).
The pinned header will generally have a layout offset of zero when scrolled before the viewportās leading edge. As a consequence, any painting that it performs will be coincident with the viewportās leading edge. This is how pinning is achieved.
Recall that layout offset within a viewport only increases when slivers report a non-zero layout extent. Slivers generally report a zero layout extent when they precede the viewportās leading edge (i.e., when viewport scroll offset exceeds their extent); thus, when the header precedes the viewportās leading edge, it will likely have a zero layout offset. Additionally, recall that layout offset is used when the viewport paints its children. Since the header will have a zero layout offset, at least in the case of RenderViewport
, the sliver will be painted at the viewportās painting origin.
If an earlier, off-screen sliver consumes layout, this will bump out where the header paints. This might cause strange behavior.
If another pinned header precedes this one, it will not consume layout. However, by utilizing the overlap offset, the current header is able to avoid painting on top of the preceding header.
Next, it lays out the child in the largest visible portion of the header (up to the maximum extent), then returns its own geometry.
SliverGeometry.scrollExtent
: always the max extent since this is how much scrolling is needed to scroll past the header (though, practically speaking, it can never truly be scrolled out of view).
SliverGeometry.paintOrigin
: always paints on the first clean pixel to avoid overlapping earlier slivers.
SliverGeometry.paintExtent
: always paints the entire child, even when scrolled out of view. Clamped to remaining paint extent.
SliverGeometry.layoutExtent
: the pinned header will consume the portion of its maximum extent that is actually visible (i.e., it only takes up space when its true layout position falls within the viewport). Otherwise (e.g., when pinned), it occupies zero layout space and therefore may overlap any subsequent slivers. Clamped to remaining paint extent.
SliverGeometry.maxScrollObstructionExtent
: the minimum extent, since this is the most that the viewport can be obscured when pinned.
The child is always positioned at zero as this sliver always paints at the viewportās leading edge.
RenderSliverFloatingPersistentHeader
is similar to its scrolling sibling, but reattaches to the viewportās leading edge as soon as the user scrolls in its direction. It then shrinks and detaches if the user scrolls away.
The floating header tracks scroll offset, detecting when the user begins scrolling toward the header. The header maintains an effective scroll offset that matches the real scroll offset when scrolling away, but that enables floating otherwise.
It does this by jumping ahead such that the sliver's trailing edge (as measured using the effective offset and maximum extent) is coincident with the viewport's leading edge. This is the "floating threshold." All
subsequent scrolling deltas are applied to the effective offset until the user scrolls the header before the floating threshold. At this point, normal behavior is resumed.
RenderSliverFloatingPersistentHeader.performLayout
detects the userās scroll direction and manages an effective scroll offset. The effective scroll offset is used for updating geometry and painting, allowing the header to float above other slivers.
The effective scroll offset matches the actual scroll offset the first time layout is attempted or whenever the user is scrolling away from the header and the header isnāt currently floating.
Otherwise, the header is considered to be floating. This occurs when the user scrolls toward the header, or the headerās actual or effective scroll offset is less than its maximum extent (i.e., the headerās effective trailing edge is at or after the viewportās leading edge).
When floating first begins (i.e., because the user scrolled toward the sliver), the effective scroll offset jumps to the headerās maximum extent. This effectively positions its trailing edge at the viewportās leading edge.
As scrolling continues, the delta (i.e., the change in actual offset) is applied to the effective offset. As a result, geometry is updated as though the sliver were truly in this location.
The effective scroll offset is permitted to become smaller, allowing the header to reach its maximum extent as it scrolls into view.
The effective scroll offset is also permitted to become larger such that the header shrinks. Once the header is no longer visible, the effective scroll offset jumps back to the real scroll offset, and the header is no longer floating.
Once the effective scroll offset has been updated, the child is laid out using the effective scroll offset, the maximum extent, and an overlap flag.
The header may overlap content whenever it is floating (i.e., its effective scroll offset is less than its actual scroll offset).
Finally, the headerās geometry is computed and the child is positioned.
RenderSliverFloatingPersistentHeader.updateGeometry
computes the headerās geometry as well as the childās final position using the effective and actual scroll offsets.
SliverGeometry.scrollExtent
: always the max extent since this is how much scrolling is needed to scroll past the header.
SliverGeometry.paintOrigin
: always paints on the first clean pixel to avoid overlapping earlier slivers.
SliverGeometry.paintExtent
: the largest visible portion of the header (using its effective offset), clamped to the remaining paint extent.
SliverGeometry.layoutExtent
: the floating header will consume the portion of its maximum extent that is actually visible (i.e., it only takes up space when its true layout position falls within the viewport). Otherwise (e.g., when floating), it occupies zero layout space and therefore may overlap any subsequent slivers. Clamped to remaining paint extent.
SliverGeometry.maxScrollObstructionExtent
: the maximum extent, since this is the most that the viewport can be obscured when floating.
The childās position is calculated using the sliverās effective trailing edge (i.e., as measured using the sliverās effective scroll offset). When the sliverās actual position precedes the viewportās leading edge, its layout offset will typically be zero, and thus the sliver will paint at the viewportās leading edge.
RenderSliverFloatingPinnedPersistentHeader
is identical to its parent (RenderSliverFloatingPersistentHeader
) other than in how it calculates its geometry and childās position. Like its parent, the header will reappear whenever the user scrolls toward it. However, the header will remained pinned at the viewportās leading edge even when the user scrolls away.
The calculated geometry is almost identical to its parent. The key difference is that the pinned, floating header always paints at least its minimum extent (room permitting). Additionally, its child is always positioned at zero since painting always occurs at the viewportās leading edge.
SliverGeometry.scrollExtent
: always the max extent since this is how much scrolling is needed to scroll past the header (though it will continue to paint itself).
SliverGeometry.paintOrigin
: always paints on the first clean pixel to avoid overlapping earlier slivers.
SliverGeometry.paintExtent
: the largest visible portion of the header (using its effective offset). Never less than the minimum extent (i.e., the child will always be fully painted), never more than the remaining paint extent. Unlike the non-floating pinned header, this will vary so that the header visibly grows to its maximum extent.
SliverGeometry.layoutExtent
: the pinned, floating header will consume the portion of its maximum extent that is actually visible (i.e., it only takes up space when its true layout position falls within the viewport). Otherwise (e.g., when floating), it occupies zero layout space and therefore may overlap any subsequent slivers. Clamped to the paint extent, which may be less than the remaining paint extent if still growing between minimum and maximum extent.
SliverGeometry.maxScrollObstructionExtent
: the maximum extent, since this is the most that the viewport can be obscured when floating.
Scene
is an opaque, immutable representation of composited UI. Each frame, the rendering pipeline produces a scene that is uploaded to the engine for rasterization (via Window.render
). A scene can also be rasterized directly into an Image
(via Scene.toImage
). Generally, a Scene
encapsulates a stack of rendering operations (e.g., clips, transforms, effects) and graphics (e.g., pictures, textures, platform views).
SceneBuilder
is the lowest-level mechanism (within the framework) for assembling a scene. SceneBuilder
manages a stack of rendering operations (via SceneBuilder.pushClipRect
, SceneBuilder.pushOpacity
, SceneBuilder.pop
, etc) and associated graphics (via SceneBuilder.addPicture
, SceneBuilder.addTexture
, SceneBuilder.addRetained
, etc). As operations are applied, SceneBuilder
produces EngineLayer
instances that it both tracks and returns. These may be cached by the framework to optimize subsequent frames (e.g., updating an old layer rather than recreating it, or reusing a previously rasterized layer). Once complete, the final Scene
is obtained via SceneBuilder.build
.
Drawing operations are accumulated in a Picture
(via PictureRecorder
) which is added to the scene via SceneBuilder.addPicture
.
External textures (represented by a texture ID established using the platform texture repository) are added to the scene via SceneBuilder.addTexture
. This supports a "freeze" parameter so that textures may be paused when animation is paused.
Previously rasterized EngineLayers
can be reused as an efficiency operation via SceneBuilder.addRetained
. This is known as retained rendering.
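A bare-bones sketch of assembling a scene by hand (in practice the layer tree drives this via Layer.addToScene); the picture parameter is assumed to come from a PictureRecorder as described below:

```dart
import 'dart:ui' as ui;

ui.Scene composite(ui.Picture picture) {
  final ui.SceneBuilder builder = ui.SceneBuilder();
  // Each push returns an EngineLayer that could be cached and passed back
  // as an "oldLayer" (or via addRetained) on a subsequent frame.
  builder.pushOffset(0, 0);
  builder.addPicture(ui.Offset.zero, picture); // leaf graphics
  builder.pop();
  return builder.build(); // ready to hand to the engine for rasterization
}
```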
Picture
is an opaque, immutable representation of a sequence of drawing operations. These operations can be added directly to a Scene
(via SceneBuilder.addPicture
) or combined into another picture (via Canvas.drawPicture
). Pictures can also be rasterized (via Picture.toImage
) and provide an estimate of their memory footprint.
PictureRecorder
is an opaque mechanism by which drawing operations are sequenced and transformed into an immutable Picture
. Picture recorders are generally managed via a Canvas
. Once recording is completed (via PictureRecorder.endRecording
), a final Picture
is returned and the PictureRecorder
(and associated Canvas
) are invalidated.
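A sketch of recording into a Picture and rasterizing it; the shape and dimensions are illustrative:

```dart
import 'dart:ui' as ui;

Future<ui.Image> recordCircle() {
  final ui.PictureRecorder recorder = ui.PictureRecorder();
  final ui.Canvas canvas = ui.Canvas(recorder);
  canvas.drawCircle(
    const ui.Offset(50, 50),
    40,
    ui.Paint()..color = const ui.Color(0xFF2196F3),
  );
  // Ends recording; the recorder and its canvas may no longer be used.
  final ui.Picture picture = recorder.endRecording();
  return picture.toImage(100, 100);
}
```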
Layer
represents one slice of a composited scene. Layers are arranged into a hierarchy with each node potentially affecting the nodes below it (i.e., by applying rendering operations or drawing graphics). Layers are mutable and freely moved around the tree, but must only be used once. Each layer is composited into a Scene
using SceneBuilder
(via Layer.addToScene
); adding the root layer is equivalent to compositing the entire tree (e.g., layers add their children). Note that the entire layer tree must be recomposited when mutated; there is no concept of dirty states.
SceneBuilder
provides an EngineLayer
instance when a layer is added. This handle can be stored in Layer.engineLayer
and later used to optimize compositing. For instance, rather than repeating an operation, Layer.addToScene
might pass the engine layer instead (via Layer._addToSceneWithRetainedRendering
), potentially reusing the rasterized texture from the previous frame. Additionally, an engine layer (i.e., "oldLayer
") can be specified when performing certain rendering operations to allow these to be implemented as inline mutations.
Layers provide support for retained rendering (via Layer.markNeedsAddToScene
). This flag tracks whether the previous engine layer can be reused during compositing (i.e., whether the layer is unchanged from the last frame). If so, retained rendering allows the entire subtree to be replaced with a previously rasterized bitmap (via SceneBuilder.addRetained
). Otherwise, the layer must be recomposited from scratch. Once a layer has been added, this flag is cleared (unless the layer specifies that it must always be composited).
All layers must be added to the scene initially. Once added, the cached engine layer may be used for retained rendering. Other layers disable retained rendering altogether (via Layer.alwaysNeedsAddToScene
). Generally, when a layer is mutated (i.e., by modifying a property or adding a child), it must be re-added to the scene.
If a layer must be added to the scene, all ancestor layers must be added, too. Retained rendering allows a subtree to be replaced with a cached bitmap. If a descendent has changed, this bitmap becomes invalid and therefore the ancestor layer must also be added to the scene (via Layer.updateSubtreeNeedsAddToScene
).
Note that this property isnāt enforced during painting; the layer tree is only made consistent when the scene is composited (via ContainerLayer.buildScene
).
A layerās parent must be re-added when it receives a new engine layer [?]
Generally, only container layers use retained rendering?
Metadata can be embedded within a layer for later retrieval (via Layer.find
and Layer.findAll
).
ContainerLayer
manages a list of child layers, inserting them into the composited scene in order. The root layer is a subclass of ContainerLayer
(TransformLayer
); thus, ContainerLayer
is responsible for compositing the full scene (via ContainerLayer.buildScene
). This involves making retained rendering state consistent (via ContainerLayer.updateSubtreeNeedsAddToScene
), adding all the children to the scene (via ContainerLayer.addChildrenToScene
), and marking the container as no longer needing to be added (via Layer._needsAddToScene
). ContainerLayer
implements a number of child operations (e.g., ContainerLayer.append
, ContainerLayer.removeAllChildren
), and multiplexes most of its operations across these children (e.g., ContainerLayer.find
; note that later children are "above" earlier children).
ContainerLayer
will use retained rendering for its children provided that they arenāt subject to an offset (i.e., are positioned at the layerās origin). This is the entrypoint for most retained rendering in the layer tree.
ContainerLayer.applyTransform
transforms the provided matrix to reflect how a given child will be transformed during compositing. This method assumes all children are positioned at the origin; thus, any layer offsets must be removed and expressed as a transformation (via SceneBuilder.pushTransform
or SceneBuilder.pushOffset
). Otherwise, the resulting matrix will be inaccurate.
OffsetLayer
is a ContainerLayer
subclass that supports efficient repainting (i.e., repaint boundaries). Render objects defining repaint boundaries correspond to OffsetLayers
in the layer tree. If the render object doesnāt need to be repainted or is simply being translated, its existing offset layer may be reused. This allows the entire subtree to avoid repainting.
OffsetLayer
applies any offset as a top-level transform so that descendant nodes are composited at the origin and therefore eligible for retained rendering.
OffsetLayer.toImage
rasterizes the subtree rooted at this layer (via Scene.toImage
). The resulting image, consisting of raw pixel data, is rendered into a bounding rectangle using the specified pixel ratio (ignoring the deviceās screen density).
TransformLayer
is a ContainerLayer
subclass that applies an arbitrary transform; it also happens to (typically) be the root layer corresponding to the RenderView
. Any layer offset (via Layer.addToScene
) or explicit offset (via OffsetLayer.offset
) is removed and applied as a transformation to ensure that TransformLayer.applyTransform
behaves consistently.
PhysicalModelLayer
is the mechanism by which physical lighting effects (e.g., shadows) are integrated into the composited scene. This class incorporates an elevation, clip region, background color, and shadow color to cast a shadow behind its children (via SceneBuilder.pushPhysicalShape
).
AnnotatedRegionLayer
incorporates metadata into the layer tree within a given bounding rectangle. An offset (defaulting to the origin) determines the rectangleās top left corner and a size (defaulting to the entire layer) represents its dimensions. Hit testing is performed by AnnotatedRegionLayer.find
and AnnotatedRegionLayer.findAll
. Results appearing deeper in the tree or later in the child list take precedence.
FollowerLayer
and LeaderLayer
allow efficient transform linking (e.g., so that one layer appears to scroll with another layer). These layers communicate via LayerLink
, passing the leader's transform (including layerOffset
) to the follower. The follower uses this transform to paint where the leader is rendered. CompositedTransformFollower
and CompositedTransformTarget
widgets create and manage these layers.
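A sketch at the widget layer; in practice the follower usually lives in an Overlay, and the 48-pixel offset is illustrative:

```dart
import 'package:flutter/widgets.dart';

final LayerLink _link = LayerLink();

// Wherever this is painted, a LeaderLayer records its transform.
Widget buildTarget() => CompositedTransformTarget(
      link: _link,
      child: const SizedBox(width: 48, height: 48),
    );

// The follower's FollowerLayer applies the leader's transform (plus an
// offset), so it is painted relative to the target even if it lives in a
// different part of the tree (e.g., an overlay).
Widget buildFollower() => CompositedTransformFollower(
      link: _link,
      offset: const Offset(0, 48),
      showWhenUnlinked: false,
      child: const SizedBox(width: 120, height: 32),
    );
```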
There are a variety of leaf and interior classes making up the layer tree.
Leaf nodes generally extend Layer
directly. PictureLayer
describes a single Picture
to be added to the scene. TextureLayer
is analogous, describing a texture (by ID) and a bounding rectangle; PlatformViewLayer
is nearly identical, embedding a platform view instead of a texture.
Interior nodes generally extend ContainerLayer
or one of its subclasses. OffsetLayer
is key to implementing efficient repainting. ClipPathLayer
, ClipRectLayer
, ColorFilterLayer
, OpacityLayer
, etc., apply the corresponding effect by pushing a scene builder state, adding all children (via ContainerLayer.addChildrenToScene
, potentially utilizing retained rendering), then popping the state.
EngineLayer
and its various specializations (e.g., OpacityEngineLayer
, OffsetEngineLayer
) represent opaque handles to backend layers. SceneBuilder
produces the appropriate handle for each operation it supports. These may then be used to enable retained rendering and inline updating.
Canvas
provides an interface for performing drawing operations within the context of a single PictureRecorder
. That is, all drawing performed by a Canvas
is constrained to a single Picture
(and PictureLayer
). Rendering and drawing operations spanning multiple layers are supported by PaintingContext
, which manages Canvas
lifecycle based on the compositing requirements of the render tree.
PaintingContext.repaintCompositedChild
is a static method that paints a render object into its own layer. Once nodes are marked dirty for painting, they will be processed during the painting phase of the rendering pipeline (via PipelineOwner.flushPaint
). This performs a depth-first traversal of all dirty render objects. Note that only repaint boundaries are ever actually marked dirty; all other nodes traverse their ancestor chain to find the nearest enclosing repaint boundary, which is then marked dirty. The dirty node is processed (via PaintingContext._repaintCompositedChild
), retrieving or creating an OffsetLayer
to serve as the subtreeās container layer. Next, a new PaintingContext
is created (associated with this layer) to facilitate the actual painting (via PaintingContext._paintWithContext
).
Layers can be attached and detached from the rendering pipeline. If a render object with a detached layer is found to be dirty (by PipelineOwner.flushPaint
), the ancestor chain is traversed to find the nearest repaint boundary with an attached (or as yet uncreated) layer. This node is marked dirty to ensure that, when the layer is reattached, it will be repainted as expected (via RenderObject._skippedPaintingOnLayer
).
PaintingContext
provides a layer of indirection between the paint method and the underlying canvas to allow render objects to adapt to changing compositing requirements. That is, a render object may introduce a new layer in some cases and re-use an existing layer in others. When this changes, any ancestor render objects will need to adapt to the possibility of a descendent introducing a new layer (via RenderObject.needsCompositing
); in particular, this requires certain operations to be implemented by introducing their own layers (e.g., clipping). PaintingContext
manages this process and provides a number of methods that encapsulate this decision (e.g., PaintingContext.pushClipPath
). PaintingContext
also ensures that children introducing repaint boundaries are composited into new layers (via PaintingContext.paintChild
). Last, PaintingContext
manages the PictureRecorder
provided to the Canvas
, appending picture layers (i.e., PictureLayer
) whenever a recording is completed.
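A sketch of how a render object defers this decision to the painting context on recent Flutter versions; the class is hypothetical, and it simply clips its child to its own bounds:

```dart
import 'package:flutter/rendering.dart';

class RenderClippedBox extends RenderProxyBox {
  ClipRectLayer? _clipLayer;

  @override
  void paint(PaintingContext context, Offset offset) {
    // pushClipRect either clips the current canvas or pushes a ClipRectLayer,
    // depending on whether this subtree needs compositing.
    _clipLayer = context.pushClipRect(
      needsCompositing,
      offset,
      Offset.zero & size,
      super.paint,           // paints the child inside the clip
      oldLayer: _clipLayer,  // lets the engine layer be updated in place
    );
  }
}
```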
PaintingContext
is initialized with a ContainerLayer
subclass (i.e., the container layer) to serve as the root of the layer tree produced via that context; painting never takes place within this layer, but is instead captured by new PictureLayer
instances which are appended as painting proceeds.
PaintingContext
may manage multiple canvases due to compositing. Each Canvas
is associated with a PictureRecorder
and a PictureLayer
(i.e., the current layer) for the duration of its lifespan. Until a new layer must be added (e.g., to implement a compositing effect or because a repaint boundary is encountered), the same canvas (and PictureRecorder
) is used for all render objects.
Recording begins the first time the canvas is accessed (via PaintingContext.canvas
). This allocates a new picture layer, picture recorder, and canvas instance; the picture layer is appended to the container layer once created.
Recording ends when a new layer is introduced directly or indirectly; at this point, the picture is stored in the current layer (via PictureRecorder.endRecording
) and the canvas and current layer are cleared. The next time a child is painted, a new canvas, picture recorder, and picture layer will be created.
Composited children are painted using a new PaintingContext
initialized with a corresponding OffsetLayer
. If a composited child doesnāt actually need to be repainted (e.g., because it is only being translated), the OffsetLayer
is reused with a new offset.
PaintingContext
provides a compositing-naive API: callers need not know whether compositing is in effect. If compositing is necessary, new layers are automatically pushed to achieve the desired effect (via PaintingContext.pushLayer
). Otherwise, the effect is implemented directly using the Canvas
(e.g., via PictureRecorder
).
Pushing a layer ends the current recording, appending the new layer to the container layer. A PaintingContext
is created for the new layer (if present, the layerās existing children are removed since theyāre likely outdated). If provided, a painting function is applied using the new context; any painting is contained within the new layer. Subsequent operations with the original PaintingContext
will result in a new canvas, layer, and picture recorder being created.
ClipContext
is PaintingContext
's base class. Its primary utility is in providing support for clipping without introducing a new layer (e.g., methods suitable for render objects that do not need compositing).
Compositing describes the process by which layers are combined by the engine during rasterization. Within the context of the framework, compositing typically refers to the allocation of render objects to layers with respect to painting. Needing compositing does not imply that a render object will be allocated its own layer; instead, it indicates that certain effects must be implemented by introducing a new layer rather than modifying the current one (e.g., applying opacity or clipping). For instance, if a parent first establishes a clip and then paints its child, a decision must be made as to how that clip is implemented: (1) as part of the current layer (i.e., Canvas.clipPath
), or (2) by pushing a ContainerLayer
(i.e., ClipPathLayer
). The "needs compositing" bit is how this information is managed.
Conceptually, this is because this node might eventually paint a descendent that pushes a new layer (e.g., because it is a repaint boundary). Thus, any painting that might have non-local consequences must be implemented in a way that would work across layers (i.e., via compositing).
Note that this is different from marking a render object as a repaint boundary. Doing so causes a new layer to be introduced when painting the child (via PaintingContext.paintChild
). This invalidates the needs compositing bits for all the ancestors of the repaint boundary. This might cause some ancestors to introduce new layers when painting, but only if they utilize any non-local operations.
RenderObject.markNeedsCompositingBitsUpdate
marks a render object as requiring a compositing bit update (via PipelineOwner
. _nodesNeedingCompositingBitsUpdate
). If a node is marked dirty, all of its ancestors are marked dirty, too. As an optimization, this walk may be cut off if the current node or the current nodeās parent is a repaint boundary (or the parent is already marked dirty).
If a nodeās compositing bits need updating, itās possible that it will now introduce a new layer (i.e., it will need compositing). If so, all ancestors will need compositing, too, since they may paint a descendent that introduces a new layer. As described, certain non-local effects will need to be implemented via compositing.
The walk may be cut off at repaint boundaries since all ancestors must already have been marked as needing compositing.
Adding or removing children (via RenderObject.adoptChild
and RenderObject.dropChild
) might alter the compositing requirements of the render tree. Similarly, changing the RenderObject.alwaysNeedsCompositing
bits would require that the bits be updated.
During the rendering pipeline, PipelineOwner.flushCompositingBits
updates all dirty compositing bits (via RenderObject._updateCompositingBits
). A given node needs compositing if (1) itās a repaint boundary (RenderObject.isRepaintBoundary
), (2) itās marked as always needing compositing (RenderObject.alwaysNeedsCompositing
), or (3) any of its descendants need compositing.
This process walks all descendants that have been marked dirty, updating their bits according to the above policy. Note that this will typically only walk the path identified when the compositing bits were marked dirty.
If a nodeās compositing bit is changed, it will be marked dirty for painting (as painting may now require additional compositing).
If a render object always introduces a layer, it should toggle alwaysNeedsCompositing
. If this changes (other than when altering children, which calls this automatically), markNeedsCompositingBitsUpdate
should be called.
Certain render objects introduce repaint boundaries within the render tree (via RenderObject.isRepaintBoundary
). These render objects are always painted into a new layer, allowing them to paint separately from their parent. This effectively decouples the subtree rooted at the repaint boundary from previously painted nodes.
This is useful when part of the UI remains mostly static and part of the UI updates frequently. A repaint boundary helps to avoid repainting the static portion of the UI.
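For example, at the widget layer, a RepaintBoundary can isolate an animating child so that repainting it does not force the surrounding (static) UI to repaint:

```dart
import 'package:flutter/material.dart';

Widget buildBody() {
  return Column(
    children: [
      // The spinner repaints every frame; the boundary gives it its own
      // layer so those repaints stop at this subtree.
      const RepaintBoundary(child: CircularProgressIndicator()),
      // The list below is unaffected by the spinner's repaints.
      Expanded(child: ListView(children: const [Text('mostly static')])),
    ],
  );
}
```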
Render objects that are repaint boundaries are associated with an OffsetLayer
, a special type of layer that can update its position without re-rendering.
Render objects that form repaint boundaries are handled differently during painting (via PaintingContext.paintChild
).
If the child doesnāt need painting, the entire subtree is skipped. The corresponding layer is updated to reflect the new offset (via PaintingContext._compositeChild
) and appended to the current parent layer.
If the child needs painting, the current offset layer is cleared or created (via PaintingContext._repaintCompositedChild
) then used for painting (via RenderObject._paintWithContext
).
Otherwise (i.e., for children that are not repaint boundaries), painting proceeds directly through RenderObject._paintWithContext
.
Since repaint boundaries always push new layers, all ancestors must be marked as needing compositing. If any such node utilizes a non-local painting effect, that node will need to introduce a new layer, too.
PaintingContext.paintChild
manages this process by (1) ending any ongoing recording, (2) creating a new layer for the child, (3) painting the child in the new layer using a new painting context (unless it is a clean repaint boundary, which will instead be used as is), (4) appending that layer to the current layer tree, and (5) creating a new canvas and layer for any future painting with the original painting context.
Methods in PaintingContext
will consult the compositing bits to determine whether to implement a behavior by inserting a layer or altering the canvas (e.g., clipping).
All render objects may be associated with a ContainerLayer
(via RenderObject.layer
). If defined, this is the last layer that was used to paint the render object (if multiple layers are used, this is the root-most layer).
Repaint boundaries are automatically assigned an OffsetLayer
(via PaintingContext._repaintCompositedChild
). All other layers must specifically set this field when painting.
Render objects that do not directly push layers must set this to null.
ContainerLayer
tracks any associated EngineLayer
(via ContainerLayer.engineLayer
). Certain operations can use this information to allow a layer to be updated rather than rebuilt.
Methods accept an "oldLayer
" parameter to enable this optimization.
Stored engine layers can be reused directly (via SceneBuilder.addRetained
); this is retained rendering.
A layer can be rasterized to an image via OffsetLayer.toImage
. A convenient way to obtain an image of the entire UI is to use this method on the RenderView
's own layer. To obtain a subset of the UI, an appropriate repaint boundary can be inserted and its layer used similarly.
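A sketch of the repaint-boundary approach, assuming boundaryKey is attached to a RepaintBoundary somewhere in the tree:

```dart
import 'dart:ui' as ui;
import 'package:flutter/rendering.dart';
import 'package:flutter/widgets.dart';

Future<ui.Image> capture(GlobalKey boundaryKey) {
  // RenderRepaintBoundary exposes toImage, which rasterizes its OffsetLayer.
  final RenderRepaintBoundary boundary = boundaryKey.currentContext!
      .findRenderObject()! as RenderRepaintBoundary;
  return boundary.toImage(pixelRatio: 2.0); // 2x the logical resolution
}
```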
There are two code paths. Layers that do not require compositing handle transforms and clips within their own rendering context (thus, the effects are limited to that single layer), via Canvas
. Those that are composited, however, introduce these effects as special ContainerLayers
, via PaintingContext
.
A variety of useful ContainerLayers
are included: AnnotatedRegionLayer
, which is useful for storing metadata within sections of a layer, BackdropFilterLayer
, which allows the background to be blurred or filtered, OpacityLayer
, which allows opacity to be varied, and OffsetLayer
, which is the key mechanism by which the repaint boundary optimization works.
Canvas.saveLayer
/ Canvas.restore
allow drawing commands to be grouped such that effects like blending, color filtering, and anti-aliasing can be applied once for the group instead of being incrementally stacked (which could lead to invalid renderings).
Textures are entirely managed by the engine and referenced by a simple texture ID. Layers depicting textures are associated with this ID and, when composited, integrated with the backend texture. As such, these are painted out-of-band by the engine.
Texture layers are integrated with the render tree via TextureBox
. This is a render box that tracks a texture ID and paints a corresponding TextureLayer
. During layout, this box expands to fill the incoming constraints.
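A sketch at the widget layer; the texture ID is assumed to have been obtained from a plugin via the platform's texture registry:

```dart
import 'package:flutter/widgets.dart';

Widget buildPreview(int textureId) {
  // Texture creates a TextureBox, which paints a TextureLayer referencing
  // the engine-managed texture by ID.
  return SizedBox(
    width: 320,
    height: 240,
    child: Texture(textureId: textureId),
  );
}
```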
Scrollable
provides the interaction model for scrolling without specifying how the actual viewport is managed (a ViewportBuilder
must be provided). UI concerns are customized directly or via an inherited ScrollConfiguration
that exposes an immutable ScrollBehavior
instance. This instance is used to build platform-specific chrome (i.e., a scrolling indicator) and provides ambient ScrollPhysics
, a class that describes how scrolling UI will respond to user gestures.
ScrollPhysics
is consulted throughout the framework to construct physics simulations for ballistic scrolling, to validate and adjust user interaction, to manage momentum across interactions, and to identify overscroll regions.
ScrollableState
connects the Scrollable
to a ScrollPosition
via a ScrollController
. This controller is responsible for producing the ScrollPosition
from a given ScrollContext
and ScrollPhysics
; it also provides the initialScrollOffset
.
For example, PageView
injects a page-based scrolling mechanism by having its ScrollController
(PageController
) return a custom scroll position subclass.
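A minimal sketch of the same pattern with hypothetical names (SnappingScrollController / SnappingScrollPosition); the controller overrides createScrollPosition to supply its own position subclass:

```dart
import 'package:flutter/widgets.dart';

class SnappingScrollPosition extends ScrollPositionWithSingleContext {
  SnappingScrollPosition({
    required ScrollPhysics physics,
    required ScrollContext context,
    ScrollPosition? oldPosition,
  }) : super(physics: physics, context: context, oldPosition: oldPosition);

  // Custom behavior (e.g., snapping logic) would be implemented here.
}

class SnappingScrollController extends ScrollController {
  @override
  ScrollPosition createScrollPosition(ScrollPhysics physics,
      ScrollContext context, ScrollPosition? oldPosition) {
    return SnappingScrollPosition(
        physics: physics, context: context, oldPosition: oldPosition);
  }
}
```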
The ScrollContext
exposes build contexts for notifications and storage, a ticker provider for animations, and methods to interact with the scrollable; it's analogous to BuildContext
for an entire scrollable widget.
ScrollPosition
tracks scroll offset as pixels (reporting changes via Listenable
), applies physics to interactions via ScrollPhysics
, and through subclasses like ScrollPositionWithSingleContext
(which implement ScrollActivityDelegate
and make concrete much of the actual scrolling machinery), starts and stops ScrollActivity
instances to mutate the represented scroll position.
The actual pixel offset and mechanisms for reacting to changes in the associated viewport are introduced via the ViewportOffset
superclass.
Viewport metrics are mixed in via ScrollMetrics
, which redundantly defines pixel offset and defines a number of other useful metrics like the amount of content above and below the current viewport (extentBefore
, extentAfter
), the pixel offset corresponding to the top and bottom of the current viewport (minScrollExtent
, maxScrollExtent
) and the viewport size (viewportDimension
).
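These metrics are easiest to observe from a scroll notification; a minimal sketch (buildWithMetrics is an arbitrary helper name):

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/widgets.dart';

Widget buildWithMetrics(Widget scrollable) {
  return NotificationListener<ScrollNotification>(
    onNotification: (ScrollNotification notification) {
      final ScrollMetrics metrics = notification.metrics;
      debugPrint('before: ${metrics.extentBefore}, '
          'after: ${metrics.extentAfter}, '
          'range: ${metrics.minScrollExtent}..${metrics.maxScrollExtent}, '
          'viewport: ${metrics.viewportDimension}');
      return false; // Allow the notification to continue bubbling.
    },
    child: scrollable,
  );
}
```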
The scroll position may need to be corrected (via ScrollPosition.correctPixels
[replaces pixels outright] / ViewportOffset.correctBy
[applies a delta to pixels]) when the viewport is resized, as triggered by shrink wrapping or relayout. Every time a viewport (via RenderViewport
) is laid out, the new content extents are checked by ViewportOffset.applyContentDimensions
to ensure the offset wonāt change; if it does, layout must be repeated.
ViewportOffset.applyViewportDimension
and ViewportOffset.applyContentDimensions
are called to determine if this is the case; any extents provided represent viewport slack -- how far the viewport can be scrolled in either direction beyond what is already visible. Activities are notified via ScrollActivity.applyNewDimensions
().
The original pixel value corresponds to certain children being visible. If the dimensions of the viewport change, the pixel offset required to maintain that same view may change. For example, consider a viewport sized to a single letter displaying "A," "B," and "C" in a column. When "B" is visible, pixels will correspond to "A"'s height. Suppose the viewport expands to fit the full column. Now, pixels will be zero (no offset is needed). [?]
The same is true if the viewport's content changes size. Again, consider the aforementioned "A-B-C" scenario with "B" visible. Instead of the viewport changing size, suppose "A" is resized to be zero pixels tall. To keep "B" in view, the pixel offset must be updated (from non-zero to zero). [?]
ScrollController
provides a convenient interface for interacting with one or more ScrollPositions
; in effect, it calls the corresponding method in each of its positions. As a Listenable
, the controller aggregates notifications from its positions.
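A minimal sketch of a controller in use, listening to aggregated position changes and driving the offset programmatically (the helper names are arbitrary):

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/widgets.dart';

final ScrollController controller = ScrollController();

void attachListener() {
  // Fires whenever any attached ScrollPosition reports a change.
  controller.addListener(() {
    debugPrint('offset: ${controller.offset}');
  });
}

void scrollToTop() {
  // Forwarded to every attached position.
  controller.jumpTo(0);
}

// Usage: ListView(controller: controller, children: [...]).
```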
ScrollNotifications
are emitted by the scrollable (by way of the active ScrollActivity
). As a LayoutChangedNotification
subclass, these are emitted after build and layout have already occurred; thus, only painting can be performed in response without introducing jank.
Listening to a scroll position directly avoids the delay, allowing layout to be performed in response to offset changes. It's not clear why this is faster -- both paths seem to trigger at the same time [?]
The ScrollPositionWithSingleContext
starts and manages ScrollActivity
instances via drag, animateTo
, jumpTo
, and more.
ScrollActivity
instances update the scroll position via ScrollActivityDelegate
; ScrollPositionWithSingleContext
implements this interface and applies changes requested by the current activity (setPixels
, applyUserOffset
) and starts follow-on activities (goIdle
, goBalastic
).
Any changes applied by the activity are processed by the scroll position, then passed back to the activity which generates scroll notifications (e.g., dispatchScrollUpdateNotification
).
DragScrollActivity
, DrivenScrollActivity
, and BallisticScrollActivity
apply user-driven scrolling, animation-driven scrolling, and physics-driven scrolling, respectively.
ScrollPosition.beginActivity
starts activities and tracks all state changes. This is possible because the scroll position is always running an activity, even when idle (IdleScrollActivity
). These state changes generate scroll notifications via the activity.
The underlying Scrollable
uses a gesture recognizer to detect and track dragging if ScrollPhysics.shouldAcceptUserOffset
allows. When a drag begins, the Scrollable
ās scroll position is notified via ScrollPosition.drag
.
ScrollPositionWithSingleContext
implements this method to create a ScrollDragController
which serves as an integration point for the Scrollable
, which receives drag events, and the activity, which manages scroll state / notifications. The controller is returned as a Drag
instance, which provides a mechanism to update state as events arrive.
As the user drags, the drag controller forwards a derived user offset back to ScrollActivityDelegate.applyUserOffset
(ScrollPositionWithSingleContext
) which applies ScrollPhysics.applyPhysicsToUserOffset
and, if valid, invokes ScrollActivityDelegate.setPixels
. This actually updates the scroll offset and generates scroll notifications.
When the drag completes, a ballistic simulation is started via ScrollActivityDelegate.goBallistic
. This delegates to the scroll positionās ScrollPhysics
instance to determine how to react.
Interestingly, the DragScrollActivity
delegates most of its work to the drag controller and is mainly responsible for forwarding scroll notifications.
animateTo
? The DrivenScrollActivity
is much more straightforward. It starts an animation controller which, on every tick, updates the current pixel value via setPixels
. When animating, if the container over-scrolls, an idle activity is started. If the animation completes successfully, a ballistic activity is started instead.
The
ScrollPosition
writes the current scroll offset to PageStorage
if ScrollPosition.keepScrollOffset
is true.
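A minimal sketch of opting into this behavior: giving the scrollable a PageStorageKey lets the position persist and restore its offset via an enclosing PageStorage (e.g., the one provided by a route); the key's value is arbitrary:

```dart
import 'package:flutter/widgets.dart';

Widget buildRememberedList(List<String> items) {
  return ListView.builder(
    // The key identifies this list's slot in PageStorage; with
    // keepScrollOffset true (the default), the offset survives rebuilds
    // and route changes.
    key: const PageStorageKey<String>('rememberedList'),
    itemCount: items.length,
    itemBuilder: (BuildContext context, int index) => Text(items[index]),
  );
}
```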
ScrollView
is a base class that builds a scrollable and a viewport, deferring to its subclass to specify how its slivers are constructed. The subclass overrides buildSlivers
to do this (ScrollView.build
creates the Scrollable
, which uses ScrollView.buildViewport
as its viewportBuilder
, which uses ScrollView.buildSlivers
to obtain the sliver children).
ViewportOffset
is an interface that tracks the current scroll offset (ViewportOffset.pixels
) and direction (ViewportOffset.userScrollDirection
, which is relative to the positive axis direction, ignoring growth direction); it also offers a variety of helpers (e.g., ViewportOffset.animateTo
). The offset represents how much content has been scrolled off screen, or more precisely, the number of logical pixels by which all children have been shifted opposite the viewportās scrolling direction (Viewport.axisDirection
). For a web page, this would be how many pixels of content are above the browserās viewport. This interface is implemented by ScrollPosition
, tying together viewports and scrollables. Pixels can be negative when scrolling before the center sliver. [?]
Viewport
is a layout widget that is larger on the inside. The viewport is associated with a scroll offset (ViewportOffset
), an interface that is implemented by ScrollPosition
and typically fulfilled by ScrollPositionWithSingleContext
. As the user scrolls, this offset is propagated to descendant slivers via layout. Finally, slivers are repainted at their new offsets, creating the visual effect of scrolling.
ShrinkWrappingViewport
is a variant of viewport that sizes itself to match its children in the main axis (instead of expanding to fill the main axis).
NestedScrollViewViewport
is a specialized viewport used by NestedScrollView
to coordinate scrolling across two viewports (supported by auxiliary widgets like SliverOverlapAbsorberHandle
).
ScrollView
couples a viewport with a scrollable, delegating to its subclass to provide slivers; as such, the ScrollView
provides a foundation for building scrollable UI. CustomScrollView
accepts an arbitrary sliver list whereas BoxScrollView
-- and its subclasses ListView
and GridView
-- apply a single layout model (e.g., list or grid) to a collection of slivers.
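A minimal sketch of the CustomScrollView case, composing heterogeneous slivers in one scrollable (the particular slivers and counts are illustrative):

```dart
import 'package:flutter/material.dart';

Widget buildCustomScrollable() {
  return CustomScrollView(
    slivers: [
      const SliverAppBar(title: Text('Slivers'), floating: true),
      SliverList(
        delegate: SliverChildBuilderDelegate(
          (BuildContext context, int index) =>
              ListTile(title: Text('Item $index')),
          childCount: 50,
        ),
      ),
      SliverToBoxAdapter(child: Container(height: 200, color: Colors.teal)),
    ],
  );
}
```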
RenderAbstractViewport
provides a common interface for all viewport subtypes. This allows the framework to locate and interact with viewports in a generic way (via RenderAbstractViewport.of
). It also provides a generic interface for determining offsets necessary to reveal certain children.
RenderViewportBase
provides shared code for render objects that host slivers. By establishing an axis and axis direction, RenderViewportBase
maps the offset-based coordinate space used by slivers into cartesian space according to a managed offset value (ViewportOffset.pixels
). RenderViewportBase.layoutChildSequence
serves as the foundation for sliver layout (and is typically invoked by performLayout
in subclasses). RenderViewportBase
also establishes the cache extent (the area to either side of the viewport that is laid out but not visible) as well as entry points for hit testing and painting.
RenderViewport
displays a subset of its sliver children based on its current viewport offset. A center sliver (RenderViewport.center
) is anchored at offset zero. Slivers before center (āreverse childrenā) grow opposite the axis direction (GrowthDirection.reverse
) whereas the center along with subsequent slivers (āforward childrenā) grow forward (GrowthDirection.forward
); both groups are anchored according to the same axis direction (this is why both start from the same edge), though conceptually reverse slivers are laid out in the opposite axis direction (e.g., their āleadingā and ātrailingā edges are flipped).
The anchor point can be adjusted, changing the visual position of offset zero (RenderViewport.anchor
is in the range [0, 1], with zero corresponding to the axis origin [?]).
Conceptually, children are ordered along the axis as: R_N … R_1 (reverse slivers), the center sliver, then F_1 … F_N (forward slivers).
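From the widget layer, the center sliver and anchor are exposed by CustomScrollView; a minimal sketch of a bidirectional list with the centerline anchored at the viewport's midpoint (keys and item builders are illustrative):

```dart
import 'package:flutter/widgets.dart';

Widget buildBidirectionalList() {
  const Key centerKey = ValueKey<String>('center');
  return CustomScrollView(
    center: centerKey, // This sliver sits at scroll offset zero.
    anchor: 0.5, // Place the centerline at the viewport's midpoint.
    slivers: [
      // Reverse children: grow away from the centerline, opposite the axis.
      SliverList(
        delegate: SliverChildBuilderDelegate(
          (BuildContext context, int i) =>
              SizedBox(height: 48, child: Text('Before ${-i - 1}')),
        ),
      ),
      // The center sliver and everything after it grow forward.
      SliverList(
        key: centerKey,
        delegate: SliverChildBuilderDelegate(
          (BuildContext context, int i) =>
              SizedBox(height: 48, child: Text('After $i')),
        ),
      ),
    ],
  );
}
```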
RenderShrinkWrappingViewport
is similar to RenderViewport
except sized to match the total extent of visible children within the bounds of incoming constraints.
RenderNestedScrollViewViewport
Throughout this section, words like "main extent," "cross extent," "before," "after," "leading," and "trailing" are used to eliminate spatial bias from descriptions. This is because viewports can be oriented along either axis (e.g., horizontal, vertical) with varying directionality (e.g., down, right). Moreover, the ordering of children along the axis is subject to the current growth direction (e.g., forward or reverse).
Viewports have two sets of dimensions: outer and inner. The portion of the viewport that occupies space on screen has a main axis and cross axis extent (e.g., height and width); these are the viewport's "outer", "outside", or "physical" dimensions. The inside of the viewport, which matches or exceeds the outer extent, includes all the content contained within the viewport; these are described using "inner", "inside", or "logical" dimensions. The inner edges correspond to the edges of the viewport's contents; the outer edges correspond to the edges of the visible content. When otherwise unspecified, the viewport's leading / trailing edges generally refer to its outer (i.e., physical) edges.
The viewport comprises a bidirectional list of slivers. "Forward slivers" include a "center sliver," and are laid out in the default axis direction (GrowthDirection.forward
). "Reverse slivers" immediately precede the center sliver and are laid out opposite the axis direction (GrowthDirection.reverse
). The line between forward and reverse slivers, at the center sliver's leading edge, is called the "centerline," coincident with the zero scroll offset. Within this document, the region of the viewport composed of reverse slivers is called the "reverse region" and its counterpart the "forward region."
Note that the viewport's inner edges fully encompass both regions.
The viewport positions the center sliver at scroll offset zero by definition. However, the viewport also tracks a current scroll offset (RenderViewport.offset
) to determine which of its sliver children are in view and therefore should be rendered. This offset represents the distance between the center sliverās leading edge (i.e., scroll offset zero) and the viewportās outer leading edge, and increases opposite the axis direction.
Equivalently, this represents the number of pixels by which the viewportās contents have been shifted opposite its axis direction.
For example, if the center sliverās leading edge is aligned with the viewportās leading edge, the offset would be zero. If its trailing edge is aligned with the viewportās leading edge, the offset would be the sliverās extent. If its leading edge is aligned with the viewportās trailing edge, the offset would be the viewportās extent, negated.
There are two major classes of parent data used by slivers: SliverPhysicalParentData
(used by RenderViewport
) and SliverLogicalParentData
(used by RenderShrinkWrappingViewport
). These differ in how they represent the childās position. The former stores absolute coordinates from the parentās visible top left corner whereas the latter stores the distance from the parentās zero scroll offset to the childās nearest edge. Physical coordinates are more efficient for children that must repaint often but incur a cost during layout. Logical coordinates optimize layout at the expense of added cost during painting.
Viewports use two subclasses to support multiple sliver children, SliverPhysicalContainerParentData
and SliverLogicalContainerParentData
. These are identical to their superclasses (where the āparentā isnāt a sliver but the viewport itself), mixing in ContainerParentDataMixin
<RenderSliver
>.
The center sliver may be offset by RenderSliver.centerOffsetAdjustment
(added to the current ViewportOffset.pixels
value). This effectively shifts the zero scroll offset (e.g., to visually center the center sliver).
The zero scroll offset can itself be shifted by a proportion of the viewportās main extent via RenderViewport.anchor
. Zero positions the zero offset at the viewportās leading edge; one positions the offset at the trailing edge (and 0.5
would position it at the midpoint).
These adjustments are mixed into the calculation early on (see RenderViewport.performLayout
and RenderViewport._attemptLayout
). Conceptually, it is easiest to ignore them other than to know that they shift the centerlineās visual position.
The center sliver may also paint itself at an arbitrary offset via SliverGeometry.paintOrigin
, though this wonāt actually move the zero offset.
Forward and reverse slivers are laid out separately and are generally isolated from one another [?]. Reverse slivers are laid out first, then forward slivers. Reverse slivers and forward slivers share the same axis direction (i.e., the generated constraints reference the same SliverConstraints.axisDirection
), though reverse sliver calculations (e.g., for painting or layout offset) effectively flip this direction. Thus, it is most intuitive to think of reverse slivers as having their leading and trailing edges flipped, etc.
Layout offset is effectively measured from the viewportās outer leading edge to the nearest edge of the sliver (i.e., the offset is relative to the viewportās current view).
More accurately, a sliverās layout offset is measured from the zero scroll offset of its parent which, for a viewport, coincides with the centerline. However, since layout offset is iteratively computed by summing layout extents (in RenderViewportBase.layoutChildSequence
) and these extents are zero unless a sliver is visible, this formal definition boils down to the practical definition described above. [?]
This property explains why RenderViewportBase.computeAbsolutePaintOffset
is able to produce paint offsets trivially from layout offsets (this is surprising since layout offsets are ostensibly measured from the zero scroll offset whereas paint offsets are measured from the boxās top left corner).
Even though layout offsets after the trailing edge are approximate (due to an implementation detail of RenderViewportBase.layoutChildSequence
), this direct mapping remains safe as out-of-bounds painting will be clipped.
Scroll offset can be interpreted in two ways.
When considering a sliver or viewport in isolation, scroll offset refers to a one dimensional coordinate space anchored at the objectās leading edge and extending toward its trailing edge.
When considering a sliver in relation to a parent container, scroll offset represents the first offset in the sliver's coordinate space that would be visible in its parent (e.g., offset zero implies that the leading edge of the sliver is visible; offset N implies that all except the leading N pixels are visible -- if N is greater than the sliver's extent, some of those pixels are empty space). Conceptually, a scroll offset represents how far a sliver's leading edge precedes the viewport's leading edge.
When zero, the sliver lies entirely at or after the leading edge (and possibly after the trailing edge).
When less than the sliverās scroll extent, a portion of the sliver precedes the leading edge.
When greater, the sliver is entirely before the leading edge.
Scroll extent represents how much space a sliver might consume; it need only be accurate when the constraints would permit the sliver to fully paint (i.e., the desired paint extent fits within the remaining paintable space).
Slivers preceding the leading edge or appearing within the viewport must provide valid scroll extents if they might conceivably be painted.
Slivers beyond the trailing edge may approximate their scroll extents since no pixels remain for painting.
Overlap is the pixel offset (in the main axis direction) necessary to fully āescapeā any earlier sliverās painting. More formally, this is the distance from the sliverās current position to the first pixel that hasnāt been painted on by an earlier sliver. Typically, this pixel is after the sliverās offset (e.g., because a preceding sliver painted beyond its layout extent). However, in some cases, the overlap can be negative indicating that the first such pixel is before the sliverās offset (i.e., earlier than the sliverās offset).
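To make these quantities concrete, here is a minimal, illustrative sliver that occupies a fixed 200-pixel scroll extent and derives its geometry from the incoming constraints (painting, hit testing, and other protocol members are omitted; this is a sketch, not a complete implementation):

```dart
import 'package:flutter/rendering.dart';

class RenderFixedBlockSliver extends RenderSliver {
  static const double _extent = 200.0; // Total scroll extent of this sliver.

  @override
  void performLayout() {
    // How much of the block falls within the visible and cacheable regions,
    // given the current scroll offset and the remaining extents.
    final double paintExtent =
        calculatePaintOffset(constraints, from: 0.0, to: _extent);
    final double cacheExtent =
        calculateCacheOffset(constraints, from: 0.0, to: _extent);

    geometry = SliverGeometry(
      scrollExtent: _extent,
      paintExtent: paintExtent,
      cacheExtent: cacheExtent,
      maxPaintExtent: _extent,
      hasVisualOverflow: _extent > constraints.remainingPaintExtent ||
          constraints.scrollOffset > 0.0,
    );
  }
}
```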
All slivers preceding the viewportās trailing edge receive unclamped values for the remaining paintable and cacheable extents, even if those slivers are located far offscreen. Slivers implement a variety of layout effects and therefore may consume visible (or cacheable) pixels at their discretion.
Viewports support the concept of maximum scroll obstruction (RenderViewportBase.maxScrollObstructionExtentBefore
), a region of the viewport that is covered by "pinned" slivers and that effectively reduces the viewport's scrollable bounds. This is a secondary concept used only when computing the scroll offset to reveal a certain sliver (RenderAbstractViewport.getOffsetToReveal
).
Viewports provide a mechanism for querying the paint and (approximate) scroll offsets of their children. The implementation depends on the type of viewport; efficiency may also be affected by the parent model (e.g., RenderViewport
uses SliverPhysicalContainerParentData
, allowing paint offsets to be returned immediately).
A logical index is assigned to all children. The center child is assigned zero; subsequent children (forward slivers) are assigned increasing indices (e.g., 1, 2, 3) whereas preceding children (reverse slivers) are assigned decreasing indices (e.g., -1, -2, -3).
Children are stored sequentially (R reverse slivers + the center sliver + F forward slivers), starting with the "last" reverse sliver (-R), proceeding toward the "first" reverse sliver (-1), then the center sliver (0), then ending on the last forward sliver (F).
The first child (RenderViewport.firstChild
) is the ālastā reverse sliver.
Viewports define a painting order and a hit-test order. Reverse slivers are painted from last to first (-1), then forward slivers are painted from last to first (center). Hit-testing proceeds in the opposite direction: forward slivers are tested from the first (center) to the last, then reverse slivers are tested from the first (-1) to the last.
Whereas an ordinary viewport expands to fill the main axis, shrink-wrapping viewports are sized to minimally contain their children. Consequently, as the viewport is scrolled, its size will change to accommodate the visible children which may require layout to be repeated. Shrink-wrapping viewports do not support reverse children.
The shrink-wrapping viewport uses logical coordinates instead of physical coordinates since it performs layout frequently.
A pixel offset directly applied to the ViewportOffset.pixels
value, allowing descendant slivers to adjust the overall scrolling position. This is done to account for errors when estimating the overall scroll extent of slivers that build content dynamically.
Such slivers often cannot measure their actual extent without building and laying out completely first. Doing this continuously would be prohibitively slow and thus relative positions are used (i.e., position is reckoned based on where neighbors are without necessarily laying out all children).
The scroll offset correction immediately halts layout, propagating to the nearest enclosing viewport. The value is added directly to the viewport's current offset (e.g., a positive correction increases ViewportOffset.pixels
, translating content opposite the viewportās axis direction -- i.e., scrolling forward).
Conceptually, this allows slivers to address logical inconsistencies that arise due to, e.g., estimating child positions by determining a scroll offset that would have avoided the problem, then reattempting layout using this new offset.
Adjusting the scroll offset will not reset other child state (e.g., the position of children in a SliverList
); thus, when such a sliver requests a scroll offset correction, the offset selected is one that would cause any existing, unaltered state to be consistent.
For example, a SliverList
may not have enough room for newly revealed children when scrolling backwards (e.g., because the incoming scroll offset indicates an inadequate number of pixels preceding the viewportās leading edge). The SliverList
calculates the scroll offset correction that would have avoided this logical inconsistency, adjusting it (and any affected layout offsets) to ensure that children appear at the same visual location in the viewport.
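A hedged sketch of how a sliver's performLayout requests such a correction (the condition producing the correction is a placeholder; only the geometry-with-correction pattern is the point):

```dart
import 'package:flutter/rendering.dart';

class RenderCorrectingSliver extends RenderSliver {
  @override
  void performLayout() {
    // Hypothetical check: layout discovered that existing child positions are
    // inconsistent with the incoming scroll offset.
    final double correction = _computeCorrection();
    if (correction != 0.0) {
      // Returning only a correction halts this pass; the viewport adds the
      // delta to its offset and re-runs layout.
      geometry = SliverGeometry(scrollOffsetCorrection: correction);
      return;
    }
    geometry = SliverGeometry.zero; // Normal layout would continue here.
  }

  double _computeCorrection() => 0.0; // Placeholder for real bookkeeping.
}
```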
If a chain of corrections occurs, layout will eventually fail.
RenderViewport
kicks off layout via RenderViewport.performLayout
.
Out-of-band data is incorporated into layout, supplementing constraints and geometry.
minScrollExtent
: total scroll extent of reverse slivers, negated.
maxScrollExtent
: total scroll extent of forward slivers.
hasVisualOverflow
: whether clipping is needed when painting.
RenderViewport.performLayout
applies the center sliver adjustment and repeatedly attempts layout until layout succeeds or a maximum number of attempts is reached.
Main and cross axis extents are derived from width and height given the viewportās axis.
Attempt to layout all children (RenderViewport._attemptLayout
), passing the current scroll offset (ViewportOffset.pixels
) including any visual adjustment of the center sliver (RenderSliver.centerOffsetAdjustment
).
If the attempt fails due to a scroll offset correction being returned, that correction is applied (ViewportOffset.pixels
is adjusted additively, without notifying listeners) and layout re-attempted.
If the attempt succeeds, the viewport must verify that the final content dimensions would not invalidate the viewportās current pixel offset (via ViewportOffset.applyContentDimensions
); if it does, layout must be re-attempted.
The out-of-band minimum and maximum scroll extents are transformed into āslackā values, representing how much the viewport can be scrolled in either direction from the zero offset, excluding pixels visible in the viewport.
As an example, if layout was attempted assuming a large viewport offset, but layout indicates that there is no slack on either side of the viewport, this would be considered an invalid state -- the viewport shouldnāt have been scrolled so far. [?]
The typical implementation of this method (ScrollPositionWithSingleContext.applyContentDimensions
) never requests re-layout. Instead, the current scroll activity is given an opportunity to react to any changes in slack (ScrollActivity.applyNewDimensions
).
RenderViewport._attemptLayout
lays out the reverse slivers then, if successful, the forward slivers; else, it returns a scroll offset correction. Before attempting layout, this method clears stale layout information (e.g., out-of-band state) and calculates a number of intermediate values used to determine the constraints passed to the first sliver in each sequence (i.e., the arguments to RenderViewportBase.layoutChildSequence
).
All out-of-band data is cleared and the intermediate viewport metrics calculated.
centerOffset
: the offset from the viewportās outer leading edge to the centerline (i.e., from the current scroll offset to the zero scroll offset).
Negative when scrolled into the forward region, positive when scrolled into the reverse region.
If the anchor (i.e., visual adjustment of the centerline) is non-zero, applies an additional offset proportional to the main extent. This offset affects all other metrics, too.
reverseDirectionRemainingPaintExtent
: total visible pixels available for painting reverse slivers.
forwardDirectionRemainingPaintExtent
: total visible pixels available for painting forward slivers.
fullCacheExtent
: the viewportās visual extent plus the two adjacent cache extents (RenderViewportBase.cacheExtent
).
Subsequent definitions refer to the region defined by the leading cache extent, the visible viewport, and the trailing cache extent as the ācacheable viewport.ā
centerCacheOffset
: the offset from the leading edge of the cacheable viewport to the centerline.
reverseDirectionRemainingCacheExtent
: total visible and cacheable pixels available for painting reverse slivers.
forwardDirectionRemainingCacheExtent
: total visible and cacheable pixels available for painting forward slivers.
If present, reverse slivers are laid out via RenderViewportBase.layoutChildSequence
. The parameters to this method serve as initial values for the first reverse sliver; as layout progresses, they are incrementally adjusted for each subsequent sliver.
In the following descriptions, since reverse slivers are effectively laid out in the opposite axis direction, the meanings of "leading" and "trailing" are flipped. That is, what was previously the trailing edge of the viewport / sliver is now described as its leading edge.
child: the initial reverse sliver (immediately preceding the center sliver).
scrollOffset
: the amount the first reverse sliver has been scrolled before the leading edge of the viewport.
If the forward region is visible, this is always zero. This is because the initial reverse sliver must necessarily be after the leading edge.
overlap: zero, since nothing has been painted.
layoutOffset
: the offset to apply to the sliverās layout position (i.e., to push it past any pixels earmarked for other slivers), effectively measured from the viewportās leading edge (though technically measured from the centerline). Only increased by slivers that are on screen.
If the forward region is not visible, this is always zero. This is because layout may start from the leading edge unimpeded.
If the forward region is visible, this is forwardDirectionRemainingPaintExtent
; those pixels are earmarked for forward sliver layout (which, being visible, will reduce the available space in the viewport). Thus, the first reverse sliver must be ābumped upā by a corresponding amount.
If the forward region is partially visible, the above represents how much of the visible viewport has been allocated for forward layout.
If the forward region is solely visible, the above still applies (i.e., layout offset must be adjusted by forwardDirectionRemainingPaintExtent
which, in this case, is the full extent).
No adjustment beyond this is necessary because the forward region appears after the reverse region and can therefore only interact with reverse layout by consuming paint extent.
remainingPaintExtent
: total visible pixels available for reverse slivers (reverseDirectionRemainingPaintExtent
).
advance: childBefore
to iterate backwards.
remainingCacheExtent
: total visible and cacheable pixels available for painting (reverseDirectionRemainingCacheExtent
).
cacheOrigin
: how far before the leading edge is cacheable (always negative). Forward region not eligible.
Forward slivers are laid out via RenderViewportBase.layoutChildSequence
. The parameters to this method serve as initial values for the center sliver. These values are the opposite of the reverse values, with a handful of exceptions.
āLeadingā and ātrailingā once again refer to their intuitive definitions.
child: the center sliver (i.e., the first forward sliver).
scrollOffset
: the amount the centerline has been scrolled before the leading edge of the viewport.
If the reverse region is visible, this is always zero. This is because the center sliver must necessarily be after the leading edge.
overlap: number of pixels needed to jump from the starting layout position to the first pixel not painted on by an earlier sliver (before or after the sliverās leading edge).
If reverse slivers were laid out, the reverse region is treated as a self-contained block that fully consumes its portion of the viewport. Thus, there can be no overlap. [?]
If reverse slivers were not laid out, the initial overlap is how far the centerline is from the viewportās leading edge (i.e., the full extent of the reverse region, negated). Intuitively, a negative overlap indicates that the first clean pixel is before the center sliverās leading edge and not after.
RenderViewportBase.layoutChildSequence
tracks the furthest offset painted so far (maxPaintOffset
). Passing a negative value ensures that the reverse region (e.g., due to overscroll) is captured by this calculation; else it would remain empty.
For example, failing to do this might cause a pinned app bar to separate from the top of the viewport during overscroll.
layoutOffset
: the offset to apply to the sliverās layout position (i.e., to push it past any pixels earmarked for other slivers), effectively measured from the viewportās leading edge (though technically measured from the centerline). Only increased by slivers that are on screen.
If the reverse region is not visible, this is always zero. This is because layout may start from the leading edge unimpeded.
If the reverse region is visible, the layout offset must be bumped by the full extent of the reverse region -- not just the visible portion (note that this differs from the reverse case). Forward slivers are laid out after reverse slivers, and so must account for the full amount of potentially consumed layout.
If the reverse region is partially visible, the offset used is forwardDirectionRemainingPaintExtent
. This represents how much of the visible viewport has been allocated for reverse layout.
If the reverse region is solely visible, the offset used is the full distance from the viewportās outer leading edge to the centerline. That is, the entire reverse region is considered as having been used for layout.
Though this is an upper-bound approximation, this value is acceptable since forward slivers wonāt be painted (they are offscreen by assumption). Consequently, layout offsets within the forward region serve mainly for ordering, not precise positioning. [?]
remainingPaintExtent
: total visible pixels available for forward slivers (forwardDirectionRemainingPaintExtent
).
Note that slivers preceding the leading edge will receive the full value even if they are off screen; it is up to the sliverās interpretation of its scroll offset to determine whether it consumes pixels.
advance: childAfter
to iterate forwards.
remainingCacheExtent
: total visible and cacheable pixels available for painting (forwardDirectionRemainingCacheExtent
).
cacheOrigin
: how far before the leading edge is cacheable (always negative). Reverse region not eligible.
RenderViewportBase.layoutChildSequence
is the engine that drives sliver layout. It processes a sequence of children one by one, passing initial constraints and adjusting layout state as geometry is produced.
Layout state is initialized. The initial layout offset is captured and the effective scroll direction computed (this direction will be flipped to account for reverse growth).
maxPaintOffset
: the layout offset corresponding to the farthest pixel painted so far. Adjusted by incoming overlap (e.g., to allow slivers to avoid overlapping with earlier slivers; or, when negative, to allow slivers to fill empty space due to overscroll).
For every child, invoke RenderSliver.performLayout
, passing an appropriate SliverConstraint
instance.
scrollDirection
: the effective scroll direction.
scrollOffset
: the incoming (e.g., center sliverās) scroll offset, decremented by all preceding sliversā scroll extents. Negative values are clamped at zero since fully exposed slivers have a zero scroll offset.
Conceptually, this is the amount that the current sliverās leading edge precedes the viewportās leading edge as reckoned using earlier sliversā extents.
overlap: the difference between the farthest pixel painted thus far (maxPaintOffset
) and the current layout offset.
Conceptually, this is the offset needed to jump to the nearest clean pixel from the sliverās layout position. Can be negative.
precedingScrollExtent
: sum of all preceding sliversā scroll extents, including those that are off screen.
Note that a sliver may consume infinite scroll extent, so this quantity may be infinite.
remainingPaintExtent
: the original paintable extent decreased by the total layout extent consumed so far (paint extent is not used because painting can overlap whereas layout cannot).
Conceptually, this is how much the sliver may choose to paint starting from its scroll offset.
parentUsesSize
: this is always true; if a sliver must lay out, so too must the viewport.
correctedCacheOrigin
(intermediate value, negative): ensure the cache origin falls within the sliver, clamping it to the sliverās leading edge.
cacheExtentCorrection
(intermediate value, negative): how much of the cache region could not be filled by the sliver (i.e., because it precedes its leading edge).
cacheOrigin
: the corrected cache origin.
Even if the sliver significantly precedes the viewportās leading edge (i.e., has a large positive scroll offset), the cache origin is added to the sliverās scroll offset (i.e., the offset needed to reach the viewportās leading edge from the sliverās leading edge) and will therefore correctly index into the leading cache region.
remainingCacheExtent
: the original cacheable extent (the leading and trailing cache regions plus the viewportās main extent) decreased by the total cache extent consumed so far.
Conceptually, this is how much the sliver may choose to paint starting from the corrected cache origin. Always larger than remainingPaintExtent
.
This is adjusted by cacheExtentCorrection
to discount the portion of the cache that is inaccessible to the sliver. The sliver effectively starts further into the cacheable region; the region is not extended, so it still ends at the same place.
If layout returns an offset correction, control returns to the caller which, in most cases, applies the correction and reattempts layout.
Layout state is mutated to account for the geometry produced by the sliver.
effectiveLayoutOffset
: the layout offset as adjusted by the sliverās SliverGeometry.paintOrigin
(negative values shift the offset toward the leading edge; the default is zero).
Conceptually, this is where the sliver has been visually positioned in the viewport. Note that it will still consume SliverGeometry.layoutExtent
pixels at its original layout offset and not the effective layout offset.
maxPaintOffset
: update if the farthest pixel painted by this sliver (the effective layout offset increased by SliverGeometry.paintExtent
) exceeds the previous maximum.
scrollOffset
: decrease by the scroll extent of the sliver. Unlike sliverScrollOffset
, which is clamped at zero, this quantity is permitted to become negative (representing the proportion of total scroll extent after the viewportās leading edge).
layoutOffset
: increase by the layout extent of the sliver. Layout extent must not exceed paint extent, which itself must not exceed the remaining paint extent. Thus, only those slivers consuming space in the visible viewport will increase layout offset.
Out of band data is updated to reflect the total forward and reverse scroll extent encountered so far (_maxScrollExtent
and _minScrollExtent
, respectively), as well as whether clipping is needed (_hasVisualOverflow
).
Update cache state if any cache was consumed (i.e., SliverGeometry.cacheExtent
> 0).
remainingCacheExtent
: reduce the cacheable extent by the number of pixels that were either consumed or unreachable due to preceding the sliverās leading edge (cacheExtentCorrection
).
cacheOrigin
: if the leading cacheable extent has not been exhausted, update the origin to point to the next eligible pixel; this offset must be negative. Otherwise, set the cache origin to zero. Subsequent slivers may still attempt to fill the trailing cacheable extent if space permits.
Update the sliverās parent data to reflect its position.
If the sliver precedes the viewportās leading edge or is otherwise visible, the effective layout offset correctly represents this sliverās position.
Otherwise, construct an approximate position by adding together the initial layout offset and the proportion of total scroll extent after the viewportās leading edge.
The latter term is equivalent to the initial scroll offset decreased by the total scroll extent processed so far. Once negative, this value represents how far slivers have extended into the visible portion of the viewport (and beyond).
Layout offsets only increase for slivers that are painted (and have a SliverGeometry.layoutExtent
that is non-zero); thus, only the portion of scroll extent falling within the visible region can possibly increment layout offset. As a result, the above quantity represents an upper-bound offset that can be used to reliably order, if not paint, offscreen slivers.
Viewports that use physical parent data (e.g., RenderViewport
) must compute the absolute paint offset to store. Otherwise, the layout offset may be stored directly.
The out-of-band data tracked by shrink-wrapping viewports is extended.
_shrinkWrapExtent
: the total maximum paint extent (SliverGeometry.maxPaintExtent
) of all slivers laid out so far. Maximum paint extent represents how much a sliver could paint without constraint. Thus, this extent represents the amount of space needed to fully paint all children.
There is no minimum extent since there are no reverse slivers.
There are several major differences when performing layout.
RenderShrinkWrappingViewport.performLayout
is the entrypoint to viewport layout, repeating layout until there are no offset corrections and final dimensions are deemed valid.
If there are no children, the viewport expands to fill the cross axis, but is as small as possible in the main axis; it then returns. Else, the extents are initialized to be as big as the constraints permit.
Layout is attempted (RenderShrinkWrappingViewport._attemptLayout
) using the unadulterated viewport offset (ViewportOffset.pixels
). Any center offset adjustment is ignored. If layout produces a scroll offset correction, that correction is applied, and layout reattempted.
Effective extent is computed by clamping the maximum extent needed to paint all children (_shrinkWrapExtent
) to the incoming constraints.
Final layout dimensions (inner and outer) are verified. If either would invalidate the viewport offset (ViewportOffset.pixels
), layout must be reattempted.
The outer dimension is set to the effective extent and verified via ViewportOffset.applyViewportDimensions
.
The inner dimension is verified via ViewportOffset.applyContentDimensions
, which requires āslackā values representing how far before and after the zero offset the viewport can scroll. Since there are no reverse slivers, the minimum extent is zero. The maximum extent is the difference between the total scrollable extent and the total paintable extent of all children (clamped to zero). That is, how much additional scrolling is needed even after all children are painted.
Last, the viewport is sized to match its effective extent, clamped by the incoming constraints.
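From the widget layer, this machinery is requested with the shrinkWrap flag; a minimal sketch (the physics choice is incidental, used here only because shrink-wrapped lists often sit inside another scrollable):

```dart
import 'package:flutter/material.dart';

Widget buildShrinkWrappedList(List<String> items) {
  // shrinkWrap: true swaps the Viewport for a ShrinkWrappingViewport, so the
  // list sizes itself to its visible children instead of filling the axis.
  return ListView(
    shrinkWrap: true,
    physics: const NeverScrollableScrollPhysics(),
    children: [for (final String item in items) ListTile(title: Text(item))],
  );
}
```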
RenderShrinkWrappingViewport._attemptLayout
is a thin wrapper around RenderViewportBase.layoutChildSequence
that configures the layout parameters passed to the first child in the sequence.
child
: the first child, since there are no reverse slivers.
scrollOffset
: incoming offset, clamped above zero (i.e., the amount the viewport is scrolled).
overlap
: incoming offset, clamped below zero (i.e., the amount the viewport is over-scrolled).
layoutOffset
: always zero.
remainingPaintExtent
: the main axis extent (i.e., the full viewport)
advance
: childAfter
to iterate forwards.
remainingCacheExtent
: the main axis extent plus two cache extents.
cacheOrigin
: cacheExtent
pixels before the layout offset.
SliverMultiBoxAdaptorParentData
extends SliverLogicalParentData
to support multiple box children (via ContainerParentDataMixin
<RenderBox
>) as well as the keep alive protocol (via KeepAliveParentDataMixin
), allowing certain children to be cached when not visible. In addition to tracking each childās layout offset and neighbors in the child list, this parent data also contains an integer index assigned by the manager.
SliverGridParentData
is a subclass of SliverMultiBoxAdaptorParentData
that also tracks the childās cross axis offset (i.e., the distance from the childās left or top edge from the parentās left or top edge, depending on whether the axis is vertical or horizontal, respectively).
RenderSliverMultiBoxAdaptor
provides a base class for slivers that manage multiple box children to efficiently fill remaining space in the viewport. Generally, subclasses will create children dynamically based on the current scroll offset and remaining paint extent. A manager (the RenderSliverBoxChildManager
) provides an interface to create, track, and remove children in response to layout; this allows a clean separation between how children are laid out and how they are produced. Subclasses implement layout themselves, delegating to the superclass as needed. Only visible and cached children are associated with the render object at any given time; cached children are managed using the keep alive protocol (via RenderSliverWithKeepAliveMixin
).
Conceptually, this adaptor adds visibility pruning to the sliver protocol. Ordinarily, all slivers are laid out as the viewport is scrolled -- including those that are off screen. Using this adaptor, children are created and destroyed only as they become visible and invisible. Once destroyed, children are removed from the render tree, eliminating unnecessary building and layout.
Additional constraints are imposed on the children of this render object:
Children cannot be removed once theyāve been laid out in a given pass.
Children can only be added by the manager, and then only with a valid index (i.e., an unused index).
RenderSliverBoxChildManager
provides a bidirectional interface for managing, creating, and removing box children. This interface serves to decouple the RenderSliverMultiBoxAdaptor
from the source of children.
The manager uses parent data to track child indices. These are integers that provide a stable, unique identifier for children as they are created and destroyed.
SliverMultiBoxAdaptorElement
implements the RenderSliverBoxChildManager
interface to link element creation and destruction to an associated widgetās SliverChildDelegate
. This delegate typically provides (or builds) children on demand.
RenderSliverFixedExtentBoxAdaptor
is a subclass of RenderSliverMultiBoxAdaptor
that arranges a sequence of box children with equal main axis extent. This is more efficient since the adaptor may reckon each childās position directly (i.e., without layout). Children that are laid out are provided tight constraints.
SliverChildDelegate
is an abstract delegate that assists in generating children dynamically and destroying those children when they're no longer needed (it also supports destruction mitigation to avoid rebuilding expensive subtrees). Children are provided on demand (i.e., lazily) using a build method that accepts an index and produces the corresponding child.
SliverChildListDelegate
is a subclass of SliverChildDelegate
that provides children using an explicit list. This negates the benefit of only building children when they are in view. This delegate may also wrap produced children in RepaintBoundary
and AutomaticKeepAlive
widgets.
SliverChildBuilderDelegate
is a subclass of SliverChildDelegate
that provides children by building them on demand. This improves performance by only building those children that are currently visible. This delegate may also wrap produced children in RepaintBoundary
and AutomaticKeepAlive
widgets.
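A minimal sketch contrasting the two delegates; note the flags that control the RepaintBoundary / AutomaticKeepAlive wrapping described above (the item counts and widgets are illustrative):

```dart
import 'package:flutter/widgets.dart';

// Eager: every child widget exists up front, regardless of visibility.
final SliverChildDelegate listDelegate = SliverChildListDelegate(
  [for (int i = 0; i < 10; i++) Text('Item $i')],
  addAutomaticKeepAlives: true,
  addRepaintBoundaries: true,
);

// Lazy: children are built on demand as they enter the cacheable region.
final SliverChildDelegate builderDelegate = SliverChildBuilderDelegate(
  (BuildContext context, int index) => Text('Item $index'),
  childCount: 10000,
);

// Usage: SliverList(delegate: builderDelegate) inside a CustomScrollView.
```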
SliverMultiBoxAdaptorWidget
SliverChildDelegate
Keep alive is a mechanism for retaining the render object, element, and state associated with an item when that item might otherwise have been destroyed (i.e., because it is scrolled out of view).
RenderSliverMultiBoxAdaptor
incorporates RenderSliverWithKeepAliveMixin
to ensure that its associated parent data includes the KeepAliveParentDataMixin
. This mixin introduces ākeepAlive
ā / ākeptAlive
ā flags that form the basis of all caching decisions (the former indicates whether keep alive behavior is requested; the latter indicates whether the item is currently being kept alive).
The ākeepAlive
ā flag signals that the associated child is to be retained even when it would have been destroyed. This flag is altered by KeepAlive
, a parent data widget associated with the KeepAliveParentDataMixin
.
If an item is to be marked as keep alive, no additional work is necessary; the item must have previously been alive, else it would have been destroyed.
If an item is to be unmarked as keep alive, its parent must perform layout so that the item may be cleaned up (i.e., if it would have been destroyed).
The KeepAlive
widget can be applied out of turn (i.e., KeepAlive.debugCanApplyOutOfTurn
returns true). This implies that the associated element can alter the childās parent data without triggering a rebuild (via ParentDataElement.applyWidgetOutOfTurn
). This allows parent data to be altered in the middle of building, layout, and painting.
This is useful when handling lists since their children may request to be kept alive during building (via AutomaticKeepAlive
). Since enabling this feature will never invalidate the parentās own layout (see discussion above), this is always safe. The benefit is that the bit can be flipped without requesting another frame.
AutomaticKeepAlive
manages an immediate KeepAlive
child based on the KeepAliveNotifications
emitted by descendent widgets.
These notifications indicate that the subtree is to be kept alive (and must therefore use the KeepAliveParentDataMixin
).
A handler associated with the notification is invoked by the client to indicate that it may be released. If the client is deactivated (i.e., removed from the tree), it must call the handler; otherwise, the widget will be leaked. The handler must also be called first if a new KeepAliveNotification
is generated.
AutomaticKeepAliveClientMixin
provides helpers that allow State
subclasses appropriately manage KeepAliveNotifications
.
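A minimal sketch of a client: ExpensiveTile is a hypothetical list item that asks to be kept alive; the mixin emits and releases the notifications on the state's behalf (super.build must be called):

```dart
import 'package:flutter/material.dart';

class ExpensiveTile extends StatefulWidget {
  const ExpensiveTile({Key? key, required this.label}) : super(key: key);
  final String label;

  @override
  State<ExpensiveTile> createState() => _ExpensiveTileState();
}

class _ExpensiveTileState extends State<ExpensiveTile>
    with AutomaticKeepAliveClientMixin {
  @override
  bool get wantKeepAlive => true; // Request keep alive.

  @override
  Widget build(BuildContext context) {
    super.build(context); // Required by the mixin.
    return ListTile(title: Text(widget.label));
  }
}
```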
The ākeepAlive
ā flag is honored by RenderSliverMultiBoxAdaptor
, which maintains a keep alive bucket and ensures that the associated children remain in the render tree but are not actually rendered.
RenderSliverBoxChildManager.childCount
: the manager must provide an accurate child count if there are finite children.
RenderSliverBoxChildManager.didAdoptChild
: invoked when a child is adopted (i.e., added to the child list) or needs a new index (e.g., moved within the list). This is the only place where the childās index is set.
Note that any operation that might change the index (e.g., RenderSliverMultiBoxAdaptor.move
, which is called indirectly by RenderObjectElement.updateChild
when the child order changes) will always call RenderSliverBoxChildManager.didAdoptChild
to update the index.
RenderSliverBoxChildManager.didStartLayout
, RenderSliverBoxChildManager.didFinishLayout
: invoked at the beginning and end of layout.
In SliverMultiBoxAdaptorElement
, the latter invokes a corresponding method on the delegate providing the first and last indices that are visible in the list.
RenderSliverBoxChildManager.setDidUnderflow
: indicates that the manager could not fill all remaining space with children. Generally invoked unconditionally at the beginning of layout (i.e., without underflow), then again if there is space left over in the viewport (i.e., with underflow).
Used to determine whether additional children will affect whatās visible in the list.
RenderSliverBoxChildManager.estimateMaxScrollOffset
: estimates the maximum scroll extent that can be consumed by all children. It is provided information about the children currently in view (first and last indices and leading and trailing scroll offsets).
In SliverMultiBoxAdaptorElement
, this defers to the associated widget. If no implementation is provided, it multiplies the average extent (calculated for all visible children) by the overall child count to obtain an approximate answer.
Leading and trailing scroll offset describe the leading and trailing edges of the children, even if the leading and trailing children arenāt entirely in view.
RenderSliverBoxChildManager.createChild
: returns a child with the given index after incorporating it into the render object (creates the initial child if a position isnāt specified). May cache previously built children.
In SliverMultiBoxAdaptorElement
, this method utilizes Element.updateChild
to update or inflate a new child using the widget tree built by a SliverChildDelegate
.
All children are cached once built. This cache is cleared whenever the list is rebuilt since the associated delegate may have changed [?].
If a child is updated (e.g., because it was found in the cache), it will already have the correct index.
If the child is inflated, the index will need to be set (via RenderSliverBoxChildManager.didAdoptChild
).
Element.inflateWidget
causes the render object (RenderSliverMultiBoxAdaptor
) to create and mount the child, which inserts it into a slot associated with the childās index (via SliverMultiBoxAdaptorElement.insertChildRenderObject
). This invokes RenderSliverMultiBoxAdaptor.insert
to update the render objectās child list and adopt the new child. This calls back into the manager, which sets the index appropriately (the index is stored in a member variable).
RenderSliverBoxChildManager.removeChild
: removes a child, generally in response to āgarbage collectionā when the child is no longer visible; also used to destroy a child that was previously kept alive.
In SliverMultiBoxAdaptorElement
, this method invokes Element.updateChild
with the new widget set to null, causing the child to be deactivated and detached. This invokes SliverMultiBoxAdaptorElement.removeChildRenderObject
using the childās index as the slot identifier, which calls RenderSliverMultiBoxAdaptor.remove
to update the render objectās child list. Once updated, the child render object is dropped and removed. If the child had been kept alive, it is removed from the keep alive bucket, instead.
Concrete indices are provided to the child manager when producing children (e.g., building via the child delegate). These are computed incrementally, starting from zero. Sequential indices are computed based on items currently in view and where the child is added (before or after the current children). Negative indices are valid.
The only method that updates the index stored in SliverMultiBoxAdaptorParentData
is RenderSliverBoxChildManager.didAdoptChild
.
This method is invoked whenever the render object adds a child to its child list. It is not invoked when the keep alive bucket is altered (i.e., it only considers the effective child list).
Changes are generally driven by the widget layer and the SliverChildDelegate
in particular. As the listās manager (and a convenient integration point between the render tree and the widget), SliverMultiBoxAdaptorElement
facilitates the process.
SliverMultiBoxAdaptorElement
tracks when children are created and destroyed (either explicitly or during a rebuild), setting SliverMultiBoxAdaptorElement._currentlyUpdatingChildIndex
to the intended index just before the render tree is updated. The index also serves as the slot for overridden RenderObjectElement
operations.
Updates pass through the element to the RenderSliverMultiBoxAdaptor
(according to Flutter
ās usual flow), which invokes RenderSliverBoxChildManager.didAdoptChild
any time the index might change.
Finally, the currently updating index is written to the childās parent data by the manager.
The multi-box adaptor effectively has two child lists: the effective child list, as implemented by the ContainerRenderObjectMixin
, and a keep alive bucket, managed separately.
The keep alive bucket associates indices to cached render box children. The children in this list are kept alive, and therefore attached to the render tree, but are not otherwise visible or interactive.
Keep alive children are not included in the effective child list. Therefore, the helpers provided by ContainerRenderObjectMixin
do not interact with these children (though a few operations have been overridden).
Keep alive children remain attached to the render tree in all other respects required by the render object protocol (they are adopted, dropped, attached, detached, etc).
Both child models (i.e., the child list and the keep alive bucket) are considered for non-rendering operations. The keep alive bucket is ignored when iterating for semantics, notifying the manager when a child is adopted, hit testing, painting, and performing layout.
Child render boxes are obtained via RenderSliverMultiBoxAdaptor._createOrObtainChild
, which consults the keep alive bucket before requesting the manager to provide a child. Note that the manager incorporates an additional caching layer (i.e., to avoid rebuilding children unless the entire list has been rebuilt).
If the child is found in the keep alive bucket, it is first dropped (i.e., because it is being moved to a different child model) and then re-inserted into the child list. The list is marked dirty because its children have changed.
If the manager provides the child, it also inserts it into the render objectās child list.
Child render boxes are destroyed by RenderSliverMultiBoxAdaptor._destroyOrCacheChild
, which consults the keep alive flag before requesting the manager to remove the child.
If the child has the keep alive flag enabled, it is first removed from the child list and then readopted (i.e., switched to the keep alive child model). The manager is not notified of this adoption. Last, the list is marked dirty because its children have changed.
If the manager destroys the child, it also removes it from the render objectās child list.
The child list may be manipulated using the usual render object container methods. Insert, remove, remove all, and move have been overridden to support both child models (i.e., keep alive and container).
RenderSliverMultiBoxAdaptor.insert
cannot be used to add children to the keep alive bucket.
The initial child is bootstrapped via RenderSliverMultiBoxAdaptor.addInitialChild
to allow the initial layout offset and starting index to be specified. The child is sourced from the keep alive bucket or, if not found, the manager. If no child can be provided, underflow is indicated. Importantly, this child is not laid out.
Subsequent children are inserted and laid out via RenderSliverMultiBoxAdaptor.insertAndLayoutLeadingChild
and RenderSliverMultiBoxAdaptor.insertAndLayoutChild
. The former method positions children at the head of the child list whereas the latter positions them at its tail.
Newly inserted leading children become RenderSliverMultiBoxAdaptor.firstChild
; non-leading children may or may not become RenderSliverMultiBoxAdaptor.lastChild
depending on where theyāre inserted.
If the child was successfully inserted, it is then laid out with the provided constraints. Otherwise, underflow is indicated.
Children that are no longer visible are removed (or shifted to the keep alive bucket) via RenderSliverMultiBoxAdaptor.collectGarbage
. This method destroys a number of children at the start and end of the list. It also cleans up keep alive overhead, and so must always be called during layout.
Iterates the keep alive bucket to find children that no longer have the keep alive flag set (recall that toggling the flag doesnāt force a rebuild). Such children must be destroyed by the manager; this is what will actually remove them from the keep alive bucket (via RenderSliverMultiBoxAdaptor.remove
).
Any time the adaptor is rebuilt (e.g., because the delegate has changed), all children must also be rebuilt. In some cases, additional children may be built, too.
Rebuilding may update or remove existing children, causing the list to relayout.
Relayout will cause additional children to be built if there is space in the viewport and the new delegate can provide them.
Recall that updating or creating a render object (e.g., in response to Element.updateChild
) will modify fields in that render object. Most render objects will schedule layout, painting, etc., in response to such changes.
The element associated with the multi-box adaptor (SliverMultiBoxAdaptorElement
) maintains an ordered cache of all child elements and widgets that have been built since the list itself was rebuilt. This list includes keep alive entries.
This cache must be recreated whenever the list is rebuilt since the delegate may itself produce different children.
Rebuilding attempts to preserve children (i.e., by matching keys) while gracefully handling index changes. Children are updated with the rebuilt widget so that unchanged subtrees can be reused.
A candidate set of child elements is computed by assigning all the old elements their original indices.
If a child is being moved (as determined by SliverChildDelegate.findIndexByKey
), that child is unconditionally assigned its new index.
If an earlier item was assigned that index, it will be overwritten and deactivated.
All candidate children are built and updated (via SliverMultiBoxAdaptorElement.updateChild
). This will attempt to reuse elements where possible. If a candidate cannot be built (i.e., because the delegate does not build anything at that index), it is deactivated and removed.
The element and widget cache is repopulated with those children that are successfully built.
If at any point the adaptorās child list is mutated (i.e., because a child is added or removed), it will be dirtied for layout, painting, and compositing.
If a child is being kept alive (i.e., not visible but otherwise retained), care is taken to avoid using it when manipulating the child list (e.g., it never serves as the "after" argument when inserting children).
Last, if the list underflowed during the most recent layout attempt, the element will attempt to build one additional child (using the next available index). If this produces a child, it will be inserted into its parentās child list, scheduling a relayout. This, in turn, will provide the delegate an opportunity to build additional children to fill the remaining space in the viewport.
The child associated with the greatest index is tracked as SliverMultiBoxAdaptorElement._currentBeforeChild
. This serves as the "after" argument whenever the child list is manipulated, preserving the index order.
The childās position relative to the visible portion of the viewport is calculated using the incoming scroll offset and the childās layout position in the list. These quantities are mapped to a concrete offset from the top-left corner of the canvas, taking into account the viewportās axis direction. If the child intersects with the viewport, it is painted at the resulting offset.
RenderSliverFixedExtentList
is a trivial RenderSliverFixedExtentBoxAdaptor
subclass that lays out a sequence of children using a fixed main axis extent in order and without gaps (cross axis extent is determined from the incoming constraints).
Calculations are reckoned directly (using index and item extent). Indices start at zero and increase along the sliverās axis direction. Items are laid out at the position corresponding to the itemās index multiplied by item extent. Max scroll extent is trivially the number of children multiplied by item extent. Two calculations, however, are a bit confusing:
RenderSliverFixedExtentBoxAdaptor.getMaxChildIndexForScrollOffset
: returns the maximum index for the child at the current scroll offset or earlier.
Conceptually, this computes the maximum index of all children preceding or intersecting the viewport's leading edge (i.e., including off-screen children).
RenderSliverFixedExtentBoxAdaptor.getMinChildIndexForScrollOffset
: returns the minimum index for the child at the current scroll offset or later.
Conceptually, this computes the minimum index of all children following or intersecting the viewport's leading edge (i.e., including on-screen children).
These are identical except when children do not intersect the leading edge (i.e., the leading edge is coincident with the seam between children).
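As a sketch of that distinction, assuming a fixed item extent and ignoring the floating point tolerance the real implementation applies, the two lookups might be computed as follows:

```dart
// Sketch: index lookups for a fixed-extent list. The real code also
// compensates for floating point precision; this version assumes exact
// arithmetic.
int getMinChildIndexForScrollOffset(double scrollOffset, double itemExtent) {
  // First child at or after the leading edge (i.e., intersecting it).
  return itemExtent > 0.0 ? (scrollOffset / itemExtent).floor() : 0;
}

int getMaxChildIndexForScrollOffset(double scrollOffset, double itemExtent) {
  // Last child at or before the leading edge. When the leading edge falls
  // exactly on a seam, this is the child preceding the seam, which is why
  // the two results differ only in that case.
  if (itemExtent <= 0.0) return 0;
  final int ceil = (scrollOffset / itemExtent).ceil();
  return ceil > 0 ? ceil - 1 : 0;
}

void main() {
  // When the leading edge falls exactly on a seam (offset 300 with extent
  // 100), the two lookups differ by one.
  print(getMinChildIndexForScrollOffset(300, 100)); // 3
  print(getMaxChildIndexForScrollOffset(300, 100)); // 2
}
```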
Layout is performed sequentially, with new children retrieved from the manager until space runs out or no more children are available. Depending on the SliverChildDelegate
in use, this might lead to the interleaving of build and layout phases (i.e., if children are built on demand rather than provided up front). Indices are assigned sequentially as layout progresses and relevant lifecycle methods within the manager are invoked (e.g., RenderSliverBoxChildManager.didStartLayout
).
Target first and last indices are computed based on the children that would be visible given the scroll offset and the remaining paint extent.
Shrink wrapping viewports have infinite extent. In this case, there is no last index.
Any children that were visible but are now outside of the target index range are garbage collected (via RenderSliverMultiBoxAdaptor.collectGarbage
). This also cleans up any expired keep alive children.
If an initial child hasnāt been attached to the list, it is created but not laid out (via RenderSliverMultiBoxAdaptor.addInitialChild
). This establishes the initial index and layout offset as all other children are positioned relative to this one.
If this fails, layout concludes with scroll extent and maximum paint extent set to the estimated maximum extent. All other geometry is zero.
If indices have become visible that precede the first childās index, the corresponding children are built and laid out (via RenderSliverMultiBoxAdaptor.insertAndLayoutLeadingChild
).
These children will need to be built since they could not have been attached to the list by assumption.
If one of these children cannot be built, layout halts with a scroll offset correction. Though this correction is currently computed incorrectly, the intent is to scroll the viewport such that the leading edge is coincident with the earliest, successfully built child.
Identify the child with the largest index that has been built and laid out so far. This is the trailing child.
If there were leading children, this will be the leading child adjacent to the initial child. If not, this is the initial child itself (which is now laid out if necessary).
Lay out every remaining child until there is no more room (i.e., the target index is reached) or no more children (i.e., a child cannot be built). Update the trailing child as layout progresses.
The trailing child serves as the "after" argument when inserting children (via RenderSliverMultiBoxAdaptor.insertAndLayoutChild
).
The children may have already been attached to the list. If so, the child is laid out without being rebuilt.
Layout offsets are calculated directly using the childās index and the fixed extent.
Compute the estimated maximum extent using the first and last index that were actually reified as well as the enclosing leading and trailing scroll offsets.
Return the resulting geometry:
SliverGeometry.scrollExtent
: estimated maximum extent (this is correct for fixed extent lists).
SliverGeometry.maxPaintExtent
: estimated maximum extent (this is the most that can be painted).
SliverGeometry.paintExtent
: the visible portion of the range defined by the leading and trailing scroll offsets.
SliverGeometry.hasVisualOverflow
: true if the target index wasnāt reached or the list precedes the viewportās leading edge (i.e., incoming scroll offset is greater than zero).
If the list was fully scrolled, it will not have had an opportunity to lay out children. However, it is still necessary to report underflow to the manager.
RenderSliver
is the base class used by render objects that implement scrolling effects. Layout utilizes SliverConstraints
to produce SliverGeometry
(RenderSliver.geometry
); painting is unchanged, with the origin positioned at the top-left corner. Hit testing is implemented similarly to RenderBox
, relying instead on extents within the viewportās main axis.
RenderSliverHelpers
incorporates a variety of helpers, including support for implementing RenderObject.applyPaintTransform
and RenderObject.hitTestChildren
for slivers with box children.
RenderSliverSingleBoxAdapter
RenderSliverToBoxAdapter
adapts a render box to the sliver protocol, allowing it to appear alongside and within other slivers.
Most slivers contain box children. Some, like RenderSliverPadding
, have a single sliver child. Few slivers have multiple sliver children (generally, this is the purview of viewports).
Some slivers manage children dynamically. This is quite complicated and discussed in a subsequent section.
RenderSliver.calculateCacheOffset
and RenderSliver.calculatePaintOffset
accept a region (defined by a āfromā and ātoā relative to the sliverās zero offset) and determine the contained extent that is visible. This value is derived from the current scroll offset and remaining paint/cache extent from the provided constraints.
The result is only meaningful for slivers that paint exactly as much as is scrolled into view. For example, if a sliver paints more than it was scrolled (e.g., a pinned sliver that always paints the same amount), this calculation will be invalid.
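A minimal sketch of this calculation (mirroring the intent described above rather than the exact framework source) clamps the queried region to the currently paintable window:

```dart
import 'dart:math' as math;

double _clamp(double x, double lo, double hi) => math.max(lo, math.min(x, hi));

// Sketch: how much of the region [from, to] (measured from the sliver's zero
// scroll offset) falls inside the currently paintable window, which starts at
// scrollOffset and spans remainingPaintExtent pixels.
double calculatePaintOffsetSketch({
  required double scrollOffset,
  required double remainingPaintExtent,
  required double from,
  required double to,
}) {
  final double a = scrollOffset;
  final double b = scrollOffset + remainingPaintExtent;
  final double visible = _clamp(to, a, b) - _clamp(from, a, b);
  return _clamp(visible, 0.0, remainingPaintExtent);
}

void main() {
  // A 100-pixel sliver scrolled 30 pixels past the leading edge, with ample
  // room remaining, has 70 visible pixels.
  print(calculatePaintOffsetSketch(
      scrollOffset: 30, remainingPaintExtent: 600, from: 0, to: 100)); // 70.0
}
```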
RenderSliver.getAbsoluteSizeRelativeToOrigin
: the size of the sliver in relation to the sliverās leading edge rather than the canvasās origin. This size is derived from the cross axis and paint extents; the paint extent is negated if the sliverās contents precede its leading edge from the perspective of the canvas.
RenderSliver.childScrollOffset
: the distance from the parentās zero scroll offset to a childās zero scroll offset; unaffected by scrolling. Defaults to returning zero (i.e., child and parent are aligned).
RenderSliver.childMainAxisPosition
: the distance from the parentās visible leading edge to the childās visible leading edge, in the axis direction. If the actual leading edge is not visible, the edge defined by the intersection of the sliver with the viewportās leading edge is used instead. The child must be visible, though the parent need not be (e.g., a persistent header that pins its child even when the anchoring sliver is scrolled away).
Slivers that contain box children cannot measure distance using the childās visible edge since box children are viewport naive. As a result, the boxās actual leading edge is always used.
RenderSliver.childCrossAxisPosition
: for vertical axes, this is the distance from the parentās left side to the childās left side. For horizontal axes, this is the distance from parentās top side to the childās top side.
RenderSliver.geometry
is analogous to RenderBox.size
and contains the output of layout.
RenderSliver.centerOffsetAdjustment
is an offset applied to the viewportās center sliver (e.g., to center it). Implicitly shifts neighboring slivers. Positive offsets shift the sliver opposite the axis direction.
There are two options for encoding parent data (i.e., a sliverās position within its parent): using absolute coordinates from the parentās visible top left corner (via SliverPhysicalParentData.paintOffset
) or using a delta from the parentās zero scroll offset to the nearest child edge (via SliverLogicalParentData.layoutOffset
). The former optimizes painting at the expense of layout whereas the latter makes the opposite trade-off.
When thinking about paint offsets, itās crucial to note the distinction between a sliverās top-left corner and its visible top-left corner. If a sliver straddles the leading edge in a downward-scrollable viewport, its children are positioned relative to the top-left corner of the viewport, not the top-left corner that precedes the leading edge. This is always the edge that is nearest the top-left corner regardless of axis or growth direction.
Logical offsets are relative to the parentās zero scroll offset. This is either the parentās leading or trailing edge, depending on growth direction.
Slivers follow a similar model to RenderBox
, tracking whether a point is contained within the current sliver or one of its descendents. The current sliver determines whether this is the case by first testing its children and then itself (RenderSliver.hitTest
invokes RenderSliver.hitTestChildren
then RenderSliver.hitTestSelf
).
If a hit is detected, the sliver adds a SliverHitTestEntry
to the SliverHitTestResult
, and returns true. The sliver will then receive subsequent pointer events via RenderSliver.handleEvent
.
SliverHitTestEntry
associates a sliver with the coordinate that was hit.
SliverHitTestResult
is a HitTestResult
with a convenience method for mapping to sliver coordinates (SliverHitTestResult.addWithAxisOffset
). This adapts a main and cross axis position in the parentās coordinate space to the childās coordinate space by subtracting the offset to the child. A paint offset is also used since HitTestResult
is also responsible for mapping pointer events, which use canvas coordinates. [?]
Hit testing will only be performed once layout has completed; painting will not have occurred yet.
Coordinates are expressed as main and cross axis positions in the sliverās own coordinate space. Any positions provided to children must be mapped into that childās coordinate space.
Positions are relative to the current scroll offset; that is, a main axis position of zero corresponds to the first visible offset within the sliver, not the leading edge of the sliver.
The interactive region is defined by the current hit test extent anchored at the current scroll offset. The sliverās full cross axis extent is considered interactive, unless the default hit testing behavior is overridden.
Slivers may paint anywhere in the viewport using a non-zero paint origin. Special care must be taken to transform coordinates if this is the case. For instance, a sliver that contains a sliver child that always paints a button at the viewportās trailing edge will filter out events that lie outside of its own interactive bounds unless it is explicitly written to handle its childās special painting behavior.
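The default bounds check can be sketched as follows (plain Dart with illustrative parameter names; the real implementation reads these values from RenderSliver.geometry and RenderSliver.constraints and also records a SliverHitTestEntry on a hit):

```dart
// Sketch of the default sliver hit testing flow. The geometry and constraint
// values would normally come from the sliver itself; here they are plain
// parameters, and children and self are modeled as callbacks.
bool sliverHitTestSketch({
  required double mainAxisPosition,
  required double crossAxisPosition,
  required double hitTestExtent,   // from SliverGeometry.hitTestExtent
  required double crossAxisExtent, // from SliverConstraints.crossAxisExtent
  required bool Function() hitTestChildren,
  required bool Function() hitTestSelf,
}) {
  // Positions are relative to the first visible offset in the sliver, so the
  // interactive region spans [0, hitTestExtent) along the main axis and the
  // full cross axis extent.
  if (mainAxisPosition >= 0.0 &&
      mainAxisPosition < hitTestExtent &&
      crossAxisPosition >= 0.0 &&
      crossAxisPosition < crossAxisExtent) {
    // Children are consulted first, then the sliver itself.
    return hitTestChildren() || hitTestSelf();
  }
  return false;
}
```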
Sliver constraints can be converted to box constraints via SliverConstraints.asBoxConstraints
. This method accepts minimum and maximum bounds for the sliverās main extent (e.g., if the main axis is horizontal, this would correspond to width). The cross axis constraint is always tight.
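A sketch of that conversion (simplified, with a plain-Dart stand-in for BoxConstraints) shows the tight cross axis constraint and the caller-supplied main axis bounds:

```dart
// Sketch: converting sliver constraints to box constraints. For a vertical
// main axis, width is tightly constrained to the cross axis extent while the
// height ranges over the caller-provided bounds (and vice versa for a
// horizontal main axis).
class BoxConstraintsSketch {
  const BoxConstraintsSketch({
    required this.minWidth,
    required this.maxWidth,
    required this.minHeight,
    required this.maxHeight,
  });
  final double minWidth, maxWidth, minHeight, maxHeight;
}

BoxConstraintsSketch asBoxConstraintsSketch({
  required bool verticalAxis,
  required double crossAxisExtent,
  double minExtent = 0.0,
  double maxExtent = double.infinity,
}) {
  return verticalAxis
      ? BoxConstraintsSketch(
          minWidth: crossAxisExtent, maxWidth: crossAxisExtent,
          minHeight: minExtent, maxHeight: maxExtent)
      : BoxConstraintsSketch(
          minWidth: minExtent, maxWidth: maxExtent,
          minHeight: crossAxisExtent, maxHeight: crossAxisExtent);
}
```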
Constraints provide information about scrolling state from the perspective of the sliver being laid out so that it can select appropriate geometry.
SliverConstraints.axis
, SliverConstraints.axisDirection
, SliverConstraints.crossAxisDirection
: the current orientation and directionality of the main and cross axes.
SliverConstraints.growthDirection
: how a sliverās contents are ordered relative to the main axis direction (i.e., first-to-last vs. last-to-first).
SliverConstraints.normalizedGrowthDirection
: the growth direction that would produce the same child ordering assuming a "standard" axis direction (AxisDirection.down
or AxisDirection.right
). This is the opposite of SliverConstraints.growthDirection
if the axis direction is up or left (see the sketch at the end of this list of constraint fields).
SliverConstraints.cacheOrigin
: how far before the current scroll offset to begin painting to support caching (i.e., the number of pixels to "reach back" from the first visible offset in the sliver). Always between -SliverConstraints.scrollOffset
and zero.
SliverConstraints.overlap
: a delta from the current scroll offset to the furthest pixel that has not yet been painted by an earlier sliver, in the axis direction. For example, if an earlier sliver painted beyond its layout extent, this would be the offset to "escape" that painting. May be negative if the furthest pixel painted precedes the current sliver (e.g., due to overscroll).
SliverConstraints.precedingScrollExtent
: the total scrolling distance consumed by preceding slivers (i.e., the sum of all preceding scroll extents). Scroll extent is approximate for slivers that could not fully paint in the available space; thus, this value may be approximate. If a preceding sliver is built lazily (e.g., a list that builds children on demand) and has not finished building, this value will be infinite. It may also be infinite if a preceding sliver is infinitely scrollable.
SliverConstraints.remainingPaintExtent
: the number of pixels available for painting in the viewport. May be infinite (e.g., for shrink-wrapping viewports). Zero when all pixels have been painted (e.g., when a sliver is beyond the viewportās trailing edge).
Whether this extent is actually consumable by the sliver depends on the sliverās intended behavior.
SliverConstraints.remainingCacheExtent
: the number of pixels available for painting in the viewport and cache regions, starting from the scroll offset adjusted by the cache origin. Always at least as large as the remaining paint extent.
Whether this extent is actually consumable by the sliver depends on the sliverās intended behavior.
SliverConstraints.scrollOffset
: the offset of the first visible part of the sliver, as measured from the sliverās leading edge in the axis direction. Equivalently, this is how far the sliverās leading edge precedes its parentās leading edge.
All slivers after the leading edge have a scroll offset of zero, including those after the trailing edge.
The scroll offset may be larger than the sliverās actual extent if the sliver is far before the parentās leading edge.
SliverConstraints.userScrollDirection
: the ScrollDirection
in relation to the axis direction and the contentās logical ordering (as determined by growth direction).
Note that this describes the direction in which the content is actually moving on the userās screen, not the intuitive direction that the user is scrolling (e.g., scrolling down a webpage translates the content up; thus, the scroll direction would be ScrollDirection.reverse
. If the content were reversed within that web page, it would be ScrollDirection.forward
).
SliverConstraints.viewportMainAxisExtent
: the number of pixels that the viewport can display in its main axis.
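To make the directionality fields above concrete, the normalized growth direction can be derived as in the following sketch (plain Dart with stand-in enums; the framework exposes the equivalent via SliverConstraints.normalizedGrowthDirection):

```dart
enum AxisDirectionSketch { up, right, down, left }

enum GrowthDirectionSketch { forward, reverse }

// Sketch: the normalized growth direction is the growth direction that would
// yield the same child ordering if the axis direction were "standard" (down
// or right). For up or left axes, the growth direction is flipped.
GrowthDirectionSketch normalizedGrowthDirection(
    AxisDirectionSketch axisDirection, GrowthDirectionSketch growthDirection) {
  final bool reversedAxis = axisDirection == AxisDirectionSketch.up ||
      axisDirection == AxisDirectionSketch.left;
  if (!reversedAxis) return growthDirection;
  return growthDirection == GrowthDirectionSketch.forward
      ? GrowthDirectionSketch.reverse
      : GrowthDirectionSketch.forward;
}
```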
Slivers define several extents (i.e., regions within the viewport) with scroll, layout, and paint being particularly important. Scroll extent corresponds to the amount of scrolling necessary to go from the sliverās leading edge to its trailing edge. Layout extent corresponds to the number of pixels physically occupied by the sliver in the visible viewport. Paint extent corresponds to the number of pixels painted on by the sliver in the visible viewport. Paint extent and layout extent typically go to zero as the sliver is scrolled out of the viewport, whereas scroll extent generally remains fixed.
A sliver's actual position within its parent (sometimes called its "layout position") is encoded in one of two ways: as an absolute paint offset that can be directly used when painting or as a layout offset relative to the parent's zero scroll offset (SliverPhysicalParentData
and SliverLogicalParentData
, respectively). Though a sliver may have a particular position, it is free to paint anywhere in the viewport.
Sliver geometry captures several such extents as well as the sliverās current relationship with the viewport.
SliverGeometry.paintOrigin
: where to begin painting relative to the sliverās current layout position, expressed as an offset in the axis direction (e.g., a negative value would precede the current position).
The sliver still consumes layout extent at its original layout position. Subsequent layout is unaffected by the paint origin.
Taken into account when determining the next sliverās overlap constraint, particularly if this sliver paints further than any preceding sliver.
SliverGeometry.paintExtent
: the number of contiguous pixels consumed by this sliver in the visible region. Measured from the sliverās layout position, as adjusted by the paint origin (SliverGeometry.paintOrigin
, typically zero).
Paint extent ≤ remaining paint extent: it is inefficient to paint out of bounds.
Typically, but not necessarily, goes to zero when scrolled out of the viewport.
The paint extent combined with the paint origin determines the next sliverās overlap constraint.
SliverGeometry.maxPaintExtent
: the number of pixels this sliver could paint in an unconstrained environment (i.e., a shrink-wrapping viewport). Approximate.
SliverGeometry.layoutExtent
: the number of contiguous pixels consumed by this sliver in the visible region for layout (i.e., the amount of space this sliver will physically occupy in the viewport). Measured from the sliverās layout position regardless of paint origin. Layout positions are advanced by summing layout extents.
Layout extent ≤ paint extent: cannot take up more space than is painted.
Defaults to paint extent.
Typically, but not necessarily, goes to zero when scrolled out of the viewport. Can still take up space even when logically offscreen.
SliverGeometry.scrollExtent
: the amount of scrolling needed to reach this sliver's trailing edge from its leading edge. Used to calculate all subsequent slivers' scroll offsets.
Approximate, except when sliver is able to fully paint.
Typically, but not necessarily, constant.
Typically, but not necessarily, an upper bound on paint extent.
SliverGeometry.cacheExtent
: the number of contiguous pixels consumed by this sliver in the cacheable region. Measured from the sliverās layout position, as adjusted by the cache origin (i.e., the offset into the available cache region from the sliverās current scroll offset).
Layout extent ≤ paint extent ≤ cache extent: the cacheable viewport includes the visible region; thus, any visible pixels consumed count toward the sliver's cache extent.
Slivers that paint at positions other than their layout position will need to calculate how much of the cache region was actually consumed.
SliverGeometry.hitTestExtent
: the number of contiguous pixels painted by this sliver that are interactive in the visible region. Measured from the sliverās layout position. Does not take the paint origin into account.
Hit test extent ≤ paint extent: can only handle events in the painted region.
Defaults to paint extent.
Typically, but not necessarily, goes to zero when scrolled out of the viewport.
Subject to interpretation by RenderSliver.hitTest
which might, for instance, take into account paint origin; if applicable, the parent sliver must be written to forward hit events that fall outside of its own interactive bounds.
SliverGeometry.maxScrollObstructionExtent
: only meaningful for slivers that affix themselves to the leading or trailing edges of the viewport (e.g., a pinned header), effectively reducing the amount of content visible within the viewport (i.e., the viewportās outer dimension). If applicable, this is the extent of that reduction, in pixels.
SliverGeometry.hasVisualOverflow
: whether a clip is necessary due to visual overflow. The viewport will add a clip if any of its children report visual overflow.
SliverGeometry.visible
: whether this sliver should be painted. Defaults to true when paint extent is greater than zero.
SliverGeometry.scrollOffsetCorrection
: a correction indicating that the incoming scroll offset was invalid (e.g., [?]) and therefore must be corrected. If set, no further geometry need be returned as layout will be recomputed from scratch. Must be propagated to the containing viewport.
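The ordering relationships among these extents can be collected into a small consistency check, sketched here in plain Dart (this mirrors the relationships described above, not the framework's exact debug assertions):

```dart
// Sketch: a simplified bundle of sliver geometry values and the ordering
// relationships described above.
class SliverGeometrySketch {
  const SliverGeometrySketch({
    required this.scrollExtent,
    required this.paintExtent,
    required this.maxPaintExtent,
    this.layoutExtent,
    this.hitTestExtent,
    this.cacheExtent,
  });

  final double scrollExtent;
  final double paintExtent;
  final double maxPaintExtent;
  final double? layoutExtent;  // defaults to paintExtent
  final double? hitTestExtent; // defaults to paintExtent
  final double? cacheExtent;

  bool get isConsistent {
    final double layout = layoutExtent ?? paintExtent;
    final double hitTest = hitTestExtent ?? paintExtent;
    final double cache = cacheExtent ?? paintExtent;
    return layout <= paintExtent &&      // cannot occupy more than is painted
        paintExtent <= maxPaintExtent && // cannot exceed the unconstrained max
        hitTest <= paintExtent &&        // only painted pixels are interactive
        paintExtent <= cache;            // visible pixels count toward the cache
  }
}
```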
SliverToBoxAdapter
is a single-child render object widget that incorporates a RenderSliverToBoxAdapter
into the render tree.
RenderSliverSingleBoxAdapter
lays the groundwork for managing a single render box child.
Children are positioned using physical coordinates (SliverPhysicalParentData
); these coordinates are measured from the parentās visible top-left corner to the childās visible top-left corner.
It provides a hook for updating the childās position using the parentās constraints and geometry (RenderSliverSingleBoxAdapter.setChildParentData
). This calculation takes into account that render boxes may be partially off screen and must therefore be offset during painting to ensure that the correct slice of the box is visible.
For instance, assuming a typical downward-oriented viewport, a non-zero scroll offset indicates how much of the parent is currently above the top of the viewport. To paint the correct slice of the box, the render box must be translated above the canvas by that same amount.
To crystallize this, consider an upward-oriented viewport. The childās paint offset is zero as its parent is scrolled into view (the boxās visible top-left corner is coincident with its parentās visible top-left corner). Once the sliver has been fully scrolled across the viewport and begins to scroll off the viewportās top edge, its parentās first visible top-left corner effectively becomes the top of the viewport and remains there as scrolling continues (i.e., this is now the top-left visible corner of the sliver). Therefore, to correctly paint the box shifting out of view, it must be offset to account for the shifted frame of reference.
The childās main axis position is calculated using the boxās actual edge since render box is viewport naive. Thus, the boxās main axis position may be negative if it precedes the viewportās leading edge.
Hit testing and painting trivially wrap the boxās own implementation.
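The paint offset calculation described above can be sketched for a vertical axis (plain Dart; the real hook also handles horizontal axes and growth direction, and writes the result into SliverPhysicalParentData):

```dart
// Sketch: the child's paint offset for a single-box adapter, vertical axis
// only. The returned value is the vertical translation applied when painting
// the box, modeled on the calculation described above.
double childPaintDySketch({
  required bool axisDirectionDown, // true for a typical top-to-bottom viewport
  required double scrollOffset,    // SliverConstraints.scrollOffset
  required double scrollExtent,    // SliverGeometry.scrollExtent (box extent)
  required double paintExtent,     // SliverGeometry.paintExtent (visible part)
}) {
  if (axisDirectionDown) {
    // Scrolling the sliver partially past the leading edge shifts the box up
    // by the same amount so the correct slice remains visible.
    return -scrollOffset;
  }
  // For an upward axis, the frame of reference is the sliver's visible
  // top-left corner, so the offset accounts for the portion already consumed.
  return -(scrollExtent - (paintExtent + scrollOffset));
}
```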
RenderSliverToBoxAdapter
extends RenderSliverSingleBoxAdapter
and does the actual work of adapting the sliver layout protocol to the render box equivalent.
Boxes receive tight cross axis constraints and unbounded main axis constraints (i.e., the box is free to select its main dimension without constraint).
This extent serves as the sliverās scroll extent and max paint extent (i.e., because the size of the box is precisely how much this sliver can scroll and paint).
The adapter calculates how much of the desired extent would be visible given the sliverās current scroll offset and the remaining space in the viewport (via RenderSliver.calculatePaintOffset
).
This quantity is used as the paint, layout, and hit-testing extents (i.e., because this is how much of the box is visible, consuming space, and interactive, respectively).
If the box cannot be rendered in full (i.e., there isn't enough room left in the viewport, or part of the sliver has been scrolled past the viewport's leading edge), a clip is applied by indicating that there is visual overflow.
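Putting the pieces together, a minimal sketch of how the adapter might derive its geometry from the child's main axis extent (plain Dart with illustrative names; the real implementation works in terms of SliverConstraints, SliverGeometry, and calculatePaintOffset):

```dart
import 'dart:math' as math;

// Sketch: deriving a box adapter's geometry from the child's main axis
// extent. Field names mirror SliverGeometry but this is illustrative only.
class AdapterGeometrySketch {
  AdapterGeometrySketch(this.scrollExtent, this.paintExtent,
      this.maxPaintExtent, this.hasVisualOverflow);
  final double scrollExtent, paintExtent, maxPaintExtent;
  final bool hasVisualOverflow;
}

AdapterGeometrySketch layoutBoxAdapterSketch({
  required double childExtent,          // the box's size along the main axis
  required double scrollOffset,         // SliverConstraints.scrollOffset
  required double remainingPaintExtent, // SliverConstraints.remainingPaintExtent
}) {
  // Visible portion of [0, childExtent] given the current scroll offset.
  final double visible = math.max(
      0.0,
      math.min(childExtent, scrollOffset + remainingPaintExtent) -
          math.min(childExtent, scrollOffset));
  return AdapterGeometrySketch(
    childExtent, // scroll extent: the box's full extent
    visible,     // paint (and layout and hit test) extent
    childExtent, // max paint extent: the box's full extent
    childExtent > remainingPaintExtent || scrollOffset > 0.0,
  );
}
```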
RenderSliverPadding
: adds padding around a child sliver.
Lays out child with the incoming constraints modified to account for padding.
SliverConstraints.scrollOffset
: decreased by the leading padding, which consumes scroll distance before the child's content begins.
SliverConstraints.cacheOrigin
: offset to reflect padding consuming cache space.
SliverConstraints.remainingPaintExtent
, SliverConstraints.remainingCacheExtent
: reduced by any leading padding that is visible.
SliverConstraints.crossAxisExtent
: reduced by the total cross axis padding.
Calculate the padding sliverās geometry using the childās geometry.
SliverGeometry.paintExtent
: either the total visible main axis padding plus the childās layout extent or, if larger, the leading padding plus the childās paint extent (this would allow the childās painting to overflow trailing padding). Clamped to remaining paint extent.
SliverGeometry.scrollExtent
: the child's scroll extent plus the main axis padding.
SliverGeometry.layoutExtent
: the total visible main axis padding plus the childās layout extent. Clamped to the paint extent (i.e., cannot consume more space than is painted).
SliverGeometry.hitTestExtent
: the total visible main axis padding plus the childās paint extent or, if larger, the leading padding plus the childās hit testing extent (allowing the childās hit testing region to overflow).
Finally, a position is selected according to axis direction, growth direction, and the leading padding that is currently visible.
RenderSliverFillRemaining
is a single box adapter that causes its child to fill the remaining space in the viewport. Subsequent slivers are never seen as this sliver consumes all usable space.
This sliver calculates the remaining paint extent in the main axis, ensuring that this is passed as a tight constraint to the box child (unlike RenderSliverToBoxAdapter
which leaves the main extent unconstrained). The child is subsequently laid out.
The sliver then calculates its own geometry.
SliverGeometry.scrollExtent
: the full extent of the viewport. This is an upper bound since it isnāt actually possible to scroll beyond this sliver: it consumes all viewport space.
SliverGeometry.paintExtent
, SliverGeometry.layoutExtent
: matches the extent of the child that fits within the viewport.
Finally, the box is positioned (using the incoming constraints and calculated geometry) such that it is correctly offset during painting if it cannot fit in the viewport.
TableColumnWidth
describes the width of a single column in a RenderTable
. Implementations can produce a flex factor for the column (via TableColumnWidth.flex
, which may iterate over every cell) as well as a maximum and minimum intrinsic width (via TableColumnWidth.maxIntrinsicWidth
and TableColumnWidth.minIntrinsicWidth
, which also have access to the incoming maximum width constraint). Intrinsic dimensions are expensive to compute since they typically visit the entire subtree for each cell in the column. Subclasses implement a subset of these methods to provide different mechanisms for sizing columns.
FixedColumnWidth
produces tight intrinsic dimensions, returning the provided constant without additional computation.
FractionColumnWidth
applies a fraction to the incoming maximum width constraint to produce tight intrinsic dimensions. If the incoming constraint is unbounded, the resulting width will be zero.
MaxColumnWidth
and MinColumnWidth
encapsulate two TableColumnWidth
instances, returning the greater or lesser value produced by each method, respectively.
FlexColumnWidth
returns the specified flex value which corresponds to the portion of free space to be utilized by the column (i.e., free space is distributed according to the ratio of the columnās flex factor to the total flex factor). The intrinsic width is set to zero so that the column does not consume any inflexible space.
IntrinsicColumnWidth
is the most expensive strategy, sizing the column according to the contained cellsā intrinsic widths. A flex factor allows columns to expand even further by incorporating a portion of any unclaimed space. The minimum and maximum intrinsic widths are defined as the maximum value reported by all contained render boxes (via RenderBox.getMinIntrinsicWidth
and RenderBox.getMaxIntrinsicWidth
with unbounded height).
TableCellVerticalAlignment
specifies how a cell is positioned within a row. Top and bottom ensure that the corresponding side of the cell and row are coincident, middle vertically centers the cell, baseline aligns cells such that all baselines are coincident (cells lacking a baseline are top aligned), and fill sizes cells to the height of the row (if all cells use fill, the row will have zero height).
TableBorder
describes the appearance of borders around and within a table. Similar to Border
, TableBorder
exposes BorderSide
instances for each of the cardinal directions (TableBorder.top
, TableBorder.bottom
, etc). In addition, TableBorder
describes the seams between rows (TableBorder.horizontalInside
) and columns (TableBorder.verticalInside
). Borders are painted via TableBorder.paint
using row and column offsets determined by layout (e.g., the number of pixels from the bounding rectangleās top and left edges for horizontal and vertical borders; there will be one entry for each interior seam).
RenderTable
accepts a list of decorations to be applied to each row in order. These decorations span the full extent of each row, unlike any cell-based decorations (which would be limited to the dimensions of the cell; cells may be offset within the row due to TableCellVerticalAlignment
).
TableCellParentData
extends BoxParentData
to include the cellās vertical alignment (via TableCellVerticalAlignment
) as well as the most recent zero-indexed row and column numbers (via TableCellParentData.y
and TableCellParentData.x
, respectively). The cellās coordinates are set during RenderTable
layout whereas the vertical alignment is set by TableCell
, a ParentDataWidget
subclass.
RenderTable
is a render box implementing table layout and painting. Columns may be associated with a sizing strategy via a mapping from index to TableColumnWidth
(columnWidths
); a default strategy (defaultColumnWidth
) and default vertical alignment (defaultVerticalAlignment
) round out layout. The table is painted with a border (TableBorder
) and each rowās full extent may be decorated (rowDecorations
, via Decoration
). RenderBox
children are passed as a list of rows; internally, children are stored in row-major order using a single list. The number of columns and rows can be inferred from the child list, or specifically set. If these values are subsequently altered, children that no longer fit in the table will be dropped.
Children are stored in row-major order using a single list. Tables accept a flat list of children (via RenderTable.setFlatChildren
), using a column count to divide cells into rows. New children are adopted (via RenderBox.adoptChild
) and missing children are dropped (via RenderBox.dropChild
); children that are moved are neither adopted nor dropped. Children may also be added using a list of rows (via RenderTable.setChildren
); this clears all children before adding each row incrementally (via RenderTable.addRow
). Note that this may unnecessarily drop children, unlike RenderTable.setFlatChildren
.
Children are visited in row-major order. That is, the first row is iterated in order, then the second row, and so on. This is the order used when painting; hit testing uses the opposite order (i.e., starting from the last item in the last row).
A collection of TableColumnWidth
instances describe how each column consumes space in the table. During layout, these instances are used to produce concrete widths given the incoming constraints (via RenderTable._computeColumnWidths
).
Intrinsic widths and flex factors are computed for each column by locating the appropriate TableColumnWidth
and passing the maximum width constraint as well as all contained cells.
The columnās width is initially defined as its maximum intrinsic width (flex factor only increases this width). Later, column widths may be reduced to satisfy incoming constraints.
Table width is therefore computed by summing the maximum intrinsic width of all columns.
Flex factors are summed for all flexible columns; maximum intrinsic widths are summed for all inflexible columns. These values are used to identify and distribute free space.
If there are flexible columns and room for expansion given the incoming constraints, free space is divided between all such columns. That is, if the table width (i.e., total maximum intrinsic width) is less than the incoming maximum width (or, if unbounded, the minimum width), there is room for flexible columns to expand.
Remaining space is defined as the relevant width constraint minus the maximum intrinsic widths of all inflexible columns.
This space is distributed in proportion to the ratio of the columnās flex factor to the sum of all flex factors.
If this would expand the column, the delta is computed and applied both to the columnās width and the tableās width.
If there were no flexible columns, ensure that the table is at least as wide as the minimum width constraint.
The difference between the table width (i.e., total maximum intrinsic width) and the minimum width is evenly distributed between all columns.
Ensure that the table does not exceed the maximum width constraint.
Columns may be sized using an arbitrary combination of intrinsic widths and flexible space. Some columns also specify a minimum intrinsic width. As a result, itās not possible to resize a table using flex alone. An iterative approach is necessary to resize columns to respect the maximum width constraint without violating their other layout characteristics. The amount by which the table exceeds the maximum width constraint is the deficit.
Flexible columns are repeatedly shrunk until theyāve all reached their minimum intrinsic widths (i.e., no flexible columns remain) or the deficit has been eliminated.
The deficit is divided according to each columnās flex factor (in relation to the total flex factor, which may change as noted below).
If this amount would shrink the column below its minimum width, the column is clamped to this width and the deficit reduced by the corresponding delta. The column is no longer considered flexible (reducing the total flex factor for subsequent calculations). Otherwise, the deficit is reduced by the full amount.
This process is iterative because some columns cannot be shrunk by the full amount.
Any remaining deficit must be addressed using inflexible columns (all flexible space has been consumed). Columns are considered "available" if they haven't reached their minimum width. Available columns are repeatedly shrunk until the deficit is eliminated or there are no more available columns.
The deficit is divided evenly between available columns.
If this amount would shrink the column below its minimum width, the column is clamped to this width and the deficit reduced by the corresponding delta (this reduces the number of available columns). Otherwise, the deficit is reduced by the full amount.
This process is iterative because some columns cannot be shrunk by the full amount.
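A condensed sketch of this even-distribution phase (plain Dart, simplified from the behavior described above; the flex-weighted phase is analogous but divides the deficit according to flex factor rather than evenly):

```dart
// Sketch: eliminate a width deficit by shrinking columns evenly, clamping
// each column to its minimum intrinsic width. Columns that bottom out stop
// being "available", so the remaining deficit is redistributed iteratively.
List<double> shrinkColumnsSketch(
    List<double> widths, List<double> minWidths, double deficit) {
  final List<double> result = List.of(widths);
  var available = [
    for (var i = 0; i < result.length; i++)
      if (result[i] > minWidths[i]) i
  ];
  while (deficit > 0.000001 && available.isNotEmpty) {
    final double delta = deficit / available.length;
    final List<int> stillAvailable = [];
    for (final i in available) {
      final double room = result[i] - minWidths[i];
      if (room <= delta) {
        // Clamp to the minimum; only the actual reduction counts against the
        // deficit, and the column is no longer available.
        result[i] = minWidths[i];
        deficit -= room;
      } else {
        result[i] -= delta;
        deficit -= delta;
        stillAvailable.add(i);
      }
    }
    available = stillAvailable;
  }
  return result;
}

void main() {
  // Columns of width 100/100/100 with minimums 90/50/50 and a deficit of 60:
  // the first column bottoms out at 90; the others absorb the remainder.
  print(shrinkColumnsSketch([100, 100, 100], [90, 50, 50], 60)); // [90, 75, 75]
}
```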
The tableās intrinsic widths are calculated as the sum of each columnās largest intrinsic width (using maximum or minimum dimensions with no height constraint, via TableColumnWidth
).
The tableās minimum and maximum intrinsic heights are equivalent, representing the sum of the largest intrinsic height found in each row (using the calculated column width as input). That is, each row is as tall as its tallest child, with the tableās total height corresponding to the sum of all such heights.
Concrete column widths are computed (via RenderTable._computeColumnWidths
) using the width argument as a tight constraint.
Next, the largest maximum intrinsic height for each row is calculated (via RenderBox.getMaxIntrinsicHeight
) using the calculated column width. The maximum row heights are summed to produce the tableās intrinsic height.
If the table has zero columns or rows, itās as small as possible given the incoming constraints.
First, concrete column widths are calculated. These widths are incrementally summed to produce a list of x-coordinates describing the left edge of each column (RenderTable._columnLefts
). The copy of this list used by layout is flipped for right-to-left locales. The overall table width is defined as the sum of all column widths (i.e., the last column's x-coordinate plus the last column's width).
Next, a list of y-coordinates describing the top edge of each row is calculated incrementally (RenderTable._rowTops
). Child layout proceeds as this list is calculated (i.e., row-by-row).
The list of row tops (RenderTable._rowTops
) is cleared and seeded with an initial y-coordinate of zero (i.e., layout starts from the origin along the y-axis). The current row height is zeroed as are before- and after-baseline distances. These values track the maximum dimensions produced as cells within the row are laid out. The before-baseline distance is the maximum distance from a childās top to its baseline; the after-baseline distance is the maximum distance from a childās baseline to its bottom.
Layout pass: iterate over all non-null children within the row, updating parent data (i.e., x- and y-coordinates within the table) and performing layout based on the childās vertical alignment (read from parent data and set by TableCell
, a ParentDataWidget
subclass).
Children with top, middle, or bottom alignment are laid out with unbounded height and a tight width constraint corresponding to the columnās width.
Children with baseline alignment are also laid out with unbounded height and a tight width constraint.
Children with a baseline (via RenderBox.getDistanceToBaseline
) update the baseline distances to be at least as large as the childās values.
Children without a baseline update the rowās height to be at least as large as the childās height. These children are positioned at the columnās left edge and the rowās top edge (this is the only position set during the first pass).
Children with fill alignment are an exception; these are laid out during the second pass, once row height is known.
If a baseline is produced during the first pass, row height is updated to be at least as large as the total baseline distance (i.e., the sum of before- and after-baseline distances).
The tableās baseline distance is defined as the first rowās before-baseline distance.
Positioning pass: iterate over all non-null children within the row, positioning them based on vertical alignment.
Children with top, middle, and bottom alignment are positioned at the columnās left edge and the rowās top, middle, or bottom edges, respectively.
Children with baseline alignment and an actual baseline are positioned such that all baselines align (i.e., each childās baseline is coincident with the maximum before baseline distance). Those without baselines have already been positioned.
Children with fill alignment are now laid out with tight constraints matching the rowās height and the columnās width; children are positioned at the columnās left edge and the rowās top edge.
Proceed to the next row by calculating the next rowās top using the row height (and adding it to RenderTable._rowTops
).
The table's width and height (i.e., its size) are defined as the sum of column widths and row heights, respectively.
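The two-pass baseline handling described above can be sketched for a single row (plain Dart with illustrative types; real layout also handles fill alignment and horizontal positioning):

```dart
import 'dart:math' as math;

// Sketch: deriving baseline distances, the implied row height, and each
// cell's vertical offset for a single row. Cells without a baseline are top
// aligned (offset zero).
class CellSketch {
  CellSketch(this.height, [this.baseline]); // baseline measured from the top
  final double height;
  final double? baseline;
}

List<double> layoutRowSketch(List<CellSketch> cells) {
  double beforeBaseline = 0.0; // max distance from a cell's top to its baseline
  double afterBaseline = 0.0;  // max distance from a cell's baseline to its bottom
  double rowHeight = 0.0;
  for (final cell in cells) {
    if (cell.baseline != null) {
      beforeBaseline = math.max(beforeBaseline, cell.baseline!);
      afterBaseline = math.max(afterBaseline, cell.height - cell.baseline!);
    } else {
      rowHeight = math.max(rowHeight, cell.height);
    }
  }
  // The row is at least as tall as the total baseline distance.
  rowHeight = math.max(rowHeight, beforeBaseline + afterBaseline);
  // Second pass: cells with baselines are shifted so all baselines coincide.
  return [
    for (final cell in cells)
      cell.baseline == null ? 0.0 : beforeBaseline - cell.baseline!
  ];
}

void main() {
  // Two baseline-aligned cells (baselines at 20 and 30) and one without:
  // vertical offsets are 10, 0, and 0, respectively.
  print(layoutRowSketch(
      [CellSketch(40, 20), CellSketch(50, 30), CellSketch(25)]));
}
```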
If the table has zero columns or rows, its border (if defined) is painted into a zero-height rectangle matching the tableās width.
Each non-null decoration (RenderTable._rowDecorations
) is painted via Decoration.createBoxPainter
. Decorations are positioned using the incoming offset and the list of row tops (RenderTable._rowTops
).
Each non-null child is painted at the position calculated during layout, adjusted by the incoming offset.
Finally, the tableās border is painted using the list of row and column edges (these lists are filtered such that only interior edges are passed to TableBorder.paint
).
The border will be sized to match the total width consumed by columns and total height consumed by rows.
The painted height may fall short of the render objectās actual height (i.e., if the total row height is less than the minimum height constraint). In this case, there will be empty space below the table.
Table layout always satisfies the minimum width constraint, so there will never be empty horizontal space.
A decoration is a high-level description of graphics to be painted onto a canvas; it generally corresponds to a box but can also describe other shapes. In addition to being paintable, decorations support interaction and interpolation.
DecorationImage
describes an image (as obtained via ImageProvider
) to be inscribed within a decoration, accepting many of the same arguments as paintImage
. The alignment, repetition, and box fit determine how the image is laid out within the decoration and, if enabled, horizontal reflection will be applied for right-to-left locales.
DecorationImagePainter
(obtained via DecorationImage.createPainter
) performs the actual painting; this is a thin wrapper around paintImage
that resolves the ImageProvider
and applies any clipping and horizontal reflection.
BoxShadow
is a Shadow
subclass that additionally describes spread distance (i.e., the amount of dilation to apply to the casting elementās mask before computing the shadow). Shadows are typically arranged into a list to support a single decoration casting multiple shadows.
BorderRadiusGeometry
describes the border radii of a particular box (via BorderRadius
or BorderRadiusDirectional
depending on text direction sensitivity). BorderRadiusGeometry
is composed of four immutable Radius
instances.
BorderSide
describes a single side of a border; the precise interpretation is determined by the enclosing ShapeBorder
subclass. Each side has a color, a style (via BorderStyle
), and a width. A width of 0.0
will enable hairline rendering; that is, the border will be 1 physical pixel wide (BorderStyle.none
is necessary to prevent the border from rendering). When hairline rendering is utilized, pixels may appear darker if they are painted multiple times by the given path. Border sides may be merged provided that they share a common style and color. Doing so produces a new BorderSide
having a width equal to the sum of its constituents.
ShapeBorder
is the base class of all shape outlines, including those used by box decorations; in essence, it describes a single shape with edges of defined width (typically via BorderSide
). Shape borders can be interpolated and combined (via the addition operator or ShapeBorder.add
). Additionally, borders may be scaled (affecting properties like border width and radii) and painted directly to a canvas (via ShapeBorder.paint
); painting may be adjusted based on text direction. Paths describing the shapeās inner and outer edges may also be queried (via ShapeBorder.getInnerPath
and ShapeBorder.getOuterPath
).
BoxBorder
is a subclass of ShapeBorder
that is further specialized by Border
and BorderDirectional
(the latter adding text direction sensitivity). These instances describe a set of four borders corresponding to the cardinal directions; their precise arrangement is left undefined until rendering. Borders may be combined (via Border.merge
) provided that all associated sides share a style and color. If so, the corresponding widths are added together.
Borders must be made concrete by providing a rectangle and, optionally, a BoxShape
. The provided rectangle determines how the borders are actually rendered; uniform borders are more efficient to paint.
BoxShape
describes how a box decoration (or border) is to be rendered into its bounds. If rectangular, painting is coincident with the bounds. If circular, the box is painted as a uniform circle with diameter matching the smaller of the bounding dimensions.
Decoration describes an adaptive collection of graphical effects that may be applied to an arbitrary rectangle (e.g., box). Decorations optionally specify a padding (via Decoration.padding
) to ensure that any additional painting within a box (e.g., from a child widget; note that decorations do not perform clipping) does not overlap with the decorationās own painting. Additionally, certain decorations can be marked as complex (via Decoration.isComplex
) to enable caching.
Decorations support hit testing (via Decoration.hitTest
). A size is provided so that the decoration may be scaled to a particular box. The given offset describes a position within this box relative to its top-left corner. An optional TextDirection
supports containers that are sensitive to this parameter.
Decorations support linear interpolation (via Decoration.lerp
, Decoration.lerpFrom
, and Decoration.lerpTo
). The "t" parameter represents a position on a timeline with 0 corresponding to 0% (i.e., the pre-state) and 1 corresponding to 100% (i.e., the post-state); note that values outside of this range are possible. If the source or destination value is null (indicating that a true interpolation isn't possible), a default interpolation should be computed that reasonably approximates a true interpolation.
BoxDecoration
is a Decoration
subclass that describes the appearance of a graphical box. Boxes are composed of a number of elements, including a border, a drop shadow, and a background. The background is itself comprised of color, gradient, and image layers. While typically rectangular, boxes may be given rounded corners or even a circular shape (via BoxDecoration.shape
). BoxDecorations
provide a BoxPainter
subclass capable of rendering the described box given different ImageConfigurations
.
ShapeDecoration
is analogous to BoxDecoration
but supports rendering into any shape (via ShapeBorder
). Rendering occurs in layers: first a fill color is painted, then a gradient, and finally an image. Next, the ShapeBorder
is painted (clipping the previous layers); the border also serves as the casting element for all associated shadows. ShapeDecoration
also uses a BoxPainter
subclass for rendering.
Shape decorations may be obtained from box decorations (via ShapeDecoration.fromBoxDecoration
) since box decorations describe a subset of what shape decorations can express. In general, box decorations are more efficient since they do not need to represent arbitrary shapes; however, shape decorations support a wider range of interpolation (e.g., rectangle to circle).
DecoratedBox
incorporates a decoration into the widget hierarchy. Decorations can be painted in the foreground or background via DecorationPosition
(i.e., in front of or behind the child, respectively). Generally, Container
is used to incorporate a DecoratedBox
into the UI.
NotchedShape
describes the difference of two shapes (i.e., a guest shape is subtracted from a host shape). A path describing this shape is obtained by specifying two bounding rectangles (i.e., the host and the guest) sharing a coordinate space. The AutomaticNotchedShape
subclass uses these bounds to determine the concrete dimensions of ShapeBorder
instances before computing their difference.
BoxPainter
provides a base class for instances capable of rendering a Decoration
to a canvas given an ImageConfiguration
. The configuration specifies the final size, scale, and locale to be used when rendering; this information allows an otherwise abstract decoration to be made concrete. Since decorations may rely on asynchronous image providers, BoxPainter.onChanged
notifies client code when the associated resources have changed (i.e., so that painting may be repeated).
RenderSliverList
positions children of varying extents in a linear array along the main axis. All positioning is relative to adjacent children in the list. Since only visible children are materialized, and earlier children may change extent, positions are occasionally corrected to maintain a consistent state.
RenderSliverGrid
positions children in a two dimensional arrangement determined during layout by a SliverGridDelegate
. The positioning, spacing, and size of each item is generally static, though the delegate is free to compute an arbitrarily complex layout. The current layout strategy does not support non-fixed extents.
When extent is not fixed, position cannot be directly computed from a childās index. Instead, position must be measured by laying out all preceding children. Since this would be quite expensive and eliminate the benefit of viewport culling, an optimization is used whereby previously laid out children serve to anchor newly revealed children (i.e., positions are determined relative to previously laid out children).
Note that this can lead to inconsistencies if off-screen children change size or are removed; scroll offset corrections are applied to address these errors.
Layout proceeds by establishing the range of children currently visible in the viewport; that is, the range of children beginning from the child starting at or intersecting the viewportās leading edge to the child ending at or intersecting its trailing edge. Unlike a fixed extent list, this range cannot be computed directly but must be measured by traversing and building children starting from the first known child. Layout is computed as follows:
Ensure that thereās a first known child (since, as mentioned, layout is performed relative to previously laid out children).
If there isnāt, create (but do not layout) this initial child with an offset and index of zero. If this fails, the list has zero extent and layout is complete.
If the first known child does not precede or start at the viewportās leading edge, build children toward the leading edge until such a child is identified.
Leading children must be built as the walk progresses since they must not have existed (else, there would have been a different first known child). As part of building, the item is inserted into the listās child model.
If there are no more children to build, the last successfully processed child is positioned at offset zero (i.e., bumped up to the top of the list).
If the incoming scroll offset is already zero, then the walk ends; this child satisfies the search criteria.
If the scroll offset is non-zero, a scroll offset correction is needed.
A non-zero offset implies that a portion of the list precedes the leading edge. Given that there arenāt even enough children to reach the leading edge, this cannot be the case.
The correction ensures that the incoming scroll offset will be zero when layout is reattempted. Since the earliest child is now positioned at offset zero, the inconsistency is corrected.
The newly built childās scroll offset is computed by subtracting its paint extent from the last childās scroll offset.
If the resulting offset is negative, a scroll offset correction is needed.
A negative offset implies that there is insufficient room for the child. The last childās offset represents the number of pixels available before that child; if this is less than the new childās extent, the list is too small and the incoming scroll offset must be invalid.
All preceding children that do not fit (including the one that triggered this case) are built and measured to determine their total extent. The earliest such child is positioned at offset zero.
A scroll offset correction is calculated to allow sufficient room for the overflowing children while ensuring that the last processed child appears at the same visual location.
This quantity is the total extent needed minus the last walked itemās scroll offset.
Position the child at the calculated scroll offset.
The first child in the list must now precede or start at the viewportās leading edge. Ensure that it has been laid out (e.g., if the preceding walk wasnāt necessary).
Advance to find the child starting at or intersecting with the viewportās leading edge (there may have already been several such children in the child list). Then, advance to find the child ending at or intersecting with the viewportās trailing edge.
Advance by identifying the next child in the list while incrementing an index counter to detect gaps.
If the next child hasnāt been built or a gap is detected, build and layout the child at the current index.
If no more children can be built, report failure.
If the next child hasnāt been laid out, lay it out now.
While advancing, position each child directly after the preceding child.
If advancing fails before the leading edge is reached, remove all but the latest such child (via garbage collection). Maintain this child as it captures the listās total dimensions (i.e., its position plus its paint extent corresponds to the listās scroll extent).
Complete layout with zero paint extent, using the last item to compute overall scroll and maximum paint extents.
Count the children that wholly precede the viewportās leading edge. Once the trailing child is found, count all children following it. Remove the corresponding children via garbage collection since they are no longer visible.
Return the resulting geometry:
SliverGeometry.scrollExtent
: estimated maximum extent (approximate, since child extents vary and only visible children are measured).
SliverGeometry.maxPaintExtent
: estimated maximum extent (this is the most that can be painted).
SliverGeometry.paintExtent
: the portion of reified children that are actually visible (via RenderSliver.calculatePaintOffset
).
SliverGeometry.hasVisualOverflow
: true if the trailing child extends beyond the viewport's trailing edge, or the list precedes the viewport's leading edge (i.e., the incoming scroll offset is greater than zero).
If the list was fully scrolled, it will not have had an opportunity to lay out children. However, it is still necessary to report underflow to the manager.
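To ground the geometry computation, the following is a minimal sketch of how a list sliver might assemble its SliverGeometry from values gathered during the walk. The helper and parameter names (visiblePortion, estimatedMaxExtent, leadingScrollOffset, trailingScrollOffset) are hypothetical stand-ins; visiblePortion merely approximates RenderSliver.calculatePaintOffset.

```dart
import 'dart:math' as math;

import 'package:flutter/rendering.dart';

// Hand-written approximation of RenderSliver.calculatePaintOffset: the
// portion of the range [from, to) that falls within the painted region.
double visiblePortion(SliverConstraints constraints, double from, double to) {
  final double start = math.max(from, constraints.scrollOffset);
  final double end = math.min(
      to, constraints.scrollOffset + constraints.remainingPaintExtent);
  return math.max(0.0, end - start);
}

// Hypothetical stand-ins for values gathered during the walk: the estimated
// maximum extent plus the leading and trailing scroll offsets of the
// reified children.
SliverGeometry listGeometry(
    SliverConstraints constraints,
    double estimatedMaxExtent,
    double leadingScrollOffset,
    double trailingScrollOffset) {
  return SliverGeometry(
    scrollExtent: estimatedMaxExtent,
    maxPaintExtent: estimatedMaxExtent,
    paintExtent:
        visiblePortion(constraints, leadingScrollOffset, trailingScrollOffset),
    // Overflow if the trailing child extends past the trailing edge or if
    // part of the list precedes the leading edge (scroll offset > 0).
    hasVisualOverflow: trailingScrollOffset >
            constraints.scrollOffset + constraints.remainingPaintExtent ||
        constraints.scrollOffset > 0.0,
  );
}
```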
SliverGridGeometry
captures the geometry of an item within a grid. This encompasses a child's scroll offset, cross axis offset, main axis extent, and cross axis extent.
SliverGridParentData
extends SliverMultiBoxAdaptorParentData
to include the child's cross axis offset. This is necessary since multiple children within a grid can share the same scroll offset while appearing at different cross axis offsets.
SliverGridLayout
encapsulates positioning, sizing, and spacing logic. It is consulted during layout to determine the position and size of each grid item. This information is provided by returning the minimum and maximum index for a given scroll offset (via SliverGridLayout.getMinChildIndexForScrollOffset
and SliverGridLayout.getMaxChildIndexForScrollOffset
), as well as the grid geometry for a given index (via SliverGridLayout.getGeometryForChildIndex
).
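To make the interface concrete, the sketch below queries a SliverGridRegularTileLayout directly; the tile sizes, spacing, and scroll offsets are arbitrary illustrative values.

```dart
import 'package:flutter/rendering.dart';

void main() {
  // A regular tile layout: three columns of 100x100 tiles with 8px spacing
  // (the kind of layout SliverGridDelegateWithFixedCrossAxisCount produces).
  const SliverGridRegularTileLayout layout = SliverGridRegularTileLayout(
    crossAxisCount: 3,
    mainAxisStride: 108.0, // tile extent + spacing along the scroll axis
    crossAxisStride: 108.0, // tile extent + spacing across the scroll axis
    childMainAxisExtent: 100.0,
    childCrossAxisExtent: 100.0,
    reverseCrossAxis: false,
  );

  // Which indices fall within a 600px viewport scrolled to offset 216?
  final int first = layout.getMinChildIndexForScrollOffset(216.0);
  final int last = layout.getMaxChildIndexForScrollOffset(216.0 + 600.0);
  print('visible indices: $first..$last');

  // Absolute geometry for a single child; no siblings need to be measured.
  final SliverGridGeometry geometry = layout.getGeometryForChildIndex(first);
  print('offset ${geometry.scrollOffset}, cross ${geometry.crossAxisOffset}');
}
```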
SliverGridDelegate
builds a SliverGridLayout
subclass on demand (i.e., during grid layout). This allows the calculated layout to adjust to incoming sliver constraints.
SliverGridRegularTileLayout
calculates a layout wherein children are equally sized and spaced. As such, all aspects of layout are computed directly (i.e., without measuring adjacent children).
SliverGridDelegateWithFixedCrossAxisCount
configures a SliverGridRegularTileLayout
such that the same number of children appear at a given scroll offset. Children are sized to share the available cross axis extent equally.
SliverGridDelegateWithMaxCrossAxisExtent
configures a SliverGridRegularTileLayout
such that tiles are no larger than the provided maximum cross axis extent. A candidate extent is calculated that divides the available space evenly (i.e., without a remainder) and that is as large as possible.
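In application code, these delegates are usually handed to a grid view rather than turned into layouts by hand. The snippet below is only an illustrative configuration; the item counts and builders are placeholders.

```dart
import 'package:flutter/material.dart';

// Always three tiles per row; tiles share the cross axis extent equally.
Widget fixedCountGrid() => GridView.builder(
      gridDelegate: const SliverGridDelegateWithFixedCrossAxisCount(
        crossAxisCount: 3,
        mainAxisSpacing: 8.0,
        crossAxisSpacing: 8.0,
      ),
      itemCount: 60,
      itemBuilder: (BuildContext context, int index) =>
          Center(child: Text('$index')),
    );

// Tiles are never wider than 150 logical pixels; the delegate picks the
// largest extent that divides the available space evenly.
Widget maxExtentGrid() => GridView.builder(
      gridDelegate: const SliverGridDelegateWithMaxCrossAxisExtent(
        maxCrossAxisExtent: 150.0,
      ),
      itemCount: 60,
      itemBuilder: (BuildContext context, int index) => const Placeholder(),
    );
```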
Grids are laid out similarly to fixed extent lists. Once a layout is computed (via SliverGridDelegate.getLayout
), it is treated as a static description of the grid (i.e., positioning is absolute). As a result, children do not need to be reified to measure extent and position. Layout proceeds as follows:
Compute a new SliverGridLayout
via SliverGridDelegate
, providing the incoming constraints.
Target first and last indices are computed based on the children that would be visible given the scroll offset and the remaining paint extent (via SliverGridLayout.getMinChildIndexForScrollOffset
and SliverGridLayout.getMaxChildIndexForScrollOffset
, respectively).
Shrink wrapping viewports have infinite extent. In this case, there is no last index.
Any children that were visible but are now outside of the target index range are garbage collected (via RenderSliverMultiBoxAdaptor.collectGarbage
). This also cleans up any expired keep alive children.
If there are no children attached to the grid, insert (but do not lay out) an initial child at the first index.
If this child cannot be built, layout is completed with scroll extent and maximum paint extent set to the calculated max scroll offset (via SliverGridLayout.computeMaxScrollOffset
); all other geometry remains zero.
All children still attached to the grid fall in the visible index range and there is at least one such child.
If indices have become visible that precede the first childās index, the corresponding children are built and laid out (via RenderSliverMultiBoxAdaptor.insertAndLayoutLeadingChild
).
These children must be built since, by assumption, they could not already have been attached to the grid.
If one of these children cannot be built, layout will fail. This is likely a bug.
Identify the child with the largest index that has been built and laid out so far. This is the trailing child.
If there were leading children, this will be the leading child adjacent to the initial child. If not, this is the initial child itself (which is now laid out if necessary).
Lay out every remaining child until there is no more room (i.e., the target index is reached) or no more children (i.e., a child cannot be built). Update the trailing child as layout progresses.
The trailing child serves as the "after" argument when inserting children (via RenderSliverMultiBoxAdaptor.insertAndLayoutChild
).
A child may already be attached to the grid; if so, it is laid out without being rebuilt.
Layout offsets for both the main and cross axes are assigned according to the geometry reported by the SliverGridLayout
(via SliverGridLayout.getGeometryForChildIndex
).
Compute the estimated maximum extent using the first and last index that were actually reified as well as the enclosing leading and trailing scroll offsets.
Return the resulting geometry:
SliverGeometry.scrollExtent
: estimated maximum extent (this is correct for grids with fixed extent).
SliverGeometry.maxPaintExtent
: estimated maximum extent (this is the most that can be painted).
SliverGeometry.paintExtent
: the visible portion of the range defined by the leading and trailing scroll offsets.
SliverGeometry.hasVisualOverflow
: always true, unfortunately.
If the grid was fully scrolled, it will not have had an opportunity to lay out children. However, it is still necessary to report underflow to the manager.
WORK IN PROGRESS
Let's start by talking about what State even IS.
In the broadest possible sense, the state of an app is everything that exists in memory when the app is running. This includes the app's assets, all the variables that the Flutter framework keeps about the UI, animation state, textures, fonts, and so on. While this broadest possible definition of state is valid, it's not very useful for the architecture of an app.
When we discuss managing state, we're really discussing best practices for structuring the code that changes your application's state. There are many libraries that provide techniques to help manage state, many of which predate Flutter. As this guide is developed, we intend to discuss those different approaches.
TODO
The common Color
type is generally used throughout the framework. These may be organized into swatches with a single primary ARGB
value and a mapping from arbitrary keys to Color
instances. Material provides a specialization called MaterialColor
which uses an index value as key and limits the map to ten entries (50, 100, 200, ... 900), with larger indices being associated with darker shades. These are further organized into a standard set of colors and swatches within the Colors
class.
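For example (an illustrative snippet; the particular shades are arbitrary):

```dart
import 'package:flutter/material.dart';

void main() {
  // A MaterialColor is itself a Color (its primary shade)...
  const MaterialColor swatch = Colors.blue;
  final Color primary = swatch; // same value as Colors.blue.shade500

  // ...and also a swatch keyed by index, with larger indices being darker.
  final Color darker = Colors.blue[700]!;
  final Color lighter = Colors.blue.shade100;
  print('$primary $darker $lighter');
}
```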
Flex
is the base class for Row
and Column
. It implements the flex layout protocol in an axis-agnostic manner.
Row
is identical to Flex
with a default axis of Axis.horizontal
.
Column
is identical to Flex
with a default axis of Axis.vertical
.
Flexible
is the base class for Expanded
. It is a parent data widget that alters its child's flex value. Its default fit is FlexFit.loose
, which causes its child to be laid out with loose constraints.
Expanded
is identical to Flexible
with a default fit of FlexFit.tight
. Consequently, its child receives tight constraints, requiring it to fill the space apportioned to it along the main axis.
All flexible containers follow the same protocol.
Lay out children without flex factors using unbounded main axis constraints and the incoming cross axis constraints (if stretching, the cross axis constraints are tight).
Apportion remaining space among flex children using flex factors.
Main axis size = myFlex * (freeSpace / totalFlex)
Lay out each flex child as above, with the resulting size as the main axis constraint. Use tight constraints for FlexFit.tight
; else, use loose.
The cross extent is the max of all child cross extents.
If using MainAxisSize.max
, the main extent is the incoming max constraint. Else, the main extent is the sum of all child extents in that dimension (subject to constraints).
Children are positioned according to MainAxisAlignment
and CrossAxisAlignment
.
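A worked example may help. The sketch below assumes the Row is given 600 logical pixels along the main axis, so the inflexible child leaves 500 pixels to divide among a total flex of 5 (100 pixels per flex unit); the widths and widgets are arbitrary.

```dart
import 'package:flutter/material.dart';

// Assume the Row receives 600 logical pixels of horizontal space. The
// SizedBox has no flex factor and is laid out first, leaving 500px of free
// space: 500 / (2 + 3) = 100px per flex unit.
Widget example() => Row(
      children: const <Widget>[
        SizedBox(width: 100), // fixed: 100px
        Expanded(             // tight fit: exactly 2 * 100 = 200px
          flex: 2,
          child: Placeholder(),
        ),
        Flexible(             // loose fit: at most 3 * 100 = 300px
          flex: 3,
          child: SizedBox(width: 50), // a smaller child is allowed
        ),
      ],
    );
```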
In short, containers size to their child plus any padding; in so doing, they respect any additional constraints provided directly or via a width or height. Decorations may be painted over this entire region. Next, a margin is added around the resulting box and, if specified, a transformation applied to the entire container.
If there is no child and no explicit size, the container shrinks in unbounded environments and expands in bounded ones.
The container widget delegates to a number of sub-widgets based on its configuration. Each behavior is layered atop all previous layers (thus, child refers to the accumulation of widgets). If a width or height is provided, these are transformed into extra constraints.
If there is no child and no explicit size:
Shrink when the incoming constraints are unbounded (via LimitedBox
); else, expand (via ConstrainedBox
).
If there is an alignment:
Align the child within the parent (via Align
).
If there is padding or the decoration has padding:
Apply the total padding to the child (via Padding
).
If there is a decoration:
Wrap the child in the decoration (via DecoratedBox
).
If there is a foreground decoration:
Wrap the child in the foreground decoration (via DecoratedBox
, using DecorationPosition.foreground
).
If there are extra constraints:
Apply the extra constraints to the incoming constraints (via ConstrainedBox
).
If there is a margin:
Apply the margin to the child (via Padding
).
If there is a transform:
Transform the child accordingly (via Transform
).
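As an illustrative sketch (not the literal implementation), a configured Container and a hand-built equivalent following the layering above might look like this; the concrete sizes and colors are arbitrary.

```dart
import 'package:flutter/material.dart';

// A single Container combining several of the behaviors described above.
Widget container() => Container(
      alignment: Alignment.center,
      padding: const EdgeInsets.all(8),
      margin: const EdgeInsets.all(16),
      width: 200,
      height: 100,
      decoration: const BoxDecoration(color: Colors.amber),
      child: const Text('hello'),
    );

// Roughly the same widget stack built by hand, outermost to innermost:
// margin, extra constraints (from width/height), decoration, padding,
// alignment, child.
Widget equivalent() => Padding(
      padding: const EdgeInsets.all(16), // margin
      child: ConstrainedBox(
        constraints: BoxConstraints.tightFor(width: 200, height: 100),
        child: DecoratedBox(
          decoration: const BoxDecoration(color: Colors.amber),
          child: Padding(
            padding: const EdgeInsets.all(8), // padding
            child: Align(
              alignment: Alignment.center,
              child: Text('hello'),
            ),
          ),
        ),
      ),
    );
```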
FocusManager
, stored in the WidgetsBinding
, tracks the currently focused node and the most recent node to request focus. It handles updating focus to the new primary (if any) and maintaining the consistency of the focus tree by sending appropriate notifications. The manager also bubbles raw keyboard events up from the focused node and tracks the current highlight mode.
FocusTraversalPolicy
dictates how focus moves within a focus scope. Traversal can happen in a TraversalDirection
(TraversalDirection.left
, TraversalDirection.up
) or to the next, previous, or first node (i.e., the node to receive focus when nothing else is focused). All traversal is limited to the node's closest enclosing scope. The default traversal policy is the ReadingOrderTraversalPolicy
but can be overridden using the DefaultFocusTraversal
inherited widget. FocusNode.skipTraversal
can be used to allow a node to be focusable without being eligible for traversal.
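As a hedged sketch of how these pieces surface in application code (assuming a framework version in which FocusNode exposes nextFocus and focusInDirection alongside skipTraversal):

```dart
import 'package:flutter/material.dart';

// A node that can hold focus but is skipped when tabbing through the UI;
// the name and label are illustrative only.
final FocusNode statusNode =
    FocusNode(skipTraversal: true, debugLabel: 'status');

// Programmatic traversal within the enclosing scope, e.g. from a "Next"
// button's onPressed handler.
void moveFocus(BuildContext context) {
  FocusScope.of(context).nextFocus(); // next node per the traversal policy
  // Or move directionally:
  // Focus.of(context).focusInDirection(TraversalDirection.down);
}
```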
FocusAttachment
is a helper used to attach, detach, and reparent a focus node as it moves around the focus tree, typically in response to changes in the underlying widget tree. This ensures that the node's focus parent is associated with a context (BuildContext
) that is an ancestor of its own context. The attachment instance delegates to the underlying focus node, using inherited widgets to supply defaults; i.e., when reparenting, Focus.of
/ FocusScope.of
are used to find the node's new parent.
FocusNode
represents a discrete focusable element within the UI. Focus nodes can request focus, discard focus, respond to focus changes, and receive keyboard events when focused. Focus nodes are grouped into collections using scopes which are themselves a type of focus node. Conceptually, focus nodes are entities in the UI that can receive focus whereas focus scopes allow focus to shift amongst a group of descendant nodes. As program state, focus nodes must be associated with a State
instance.
FocusScopeNode
is a focus node subclass that organizes focus nodes into a traversable group. Conceptually, a scope corresponds to the subtree (including nested scopes) rooted at the focus scope node. Descendant nodes add themselves to the nearest enclosing scope when receiving focus (FocusNode._reparent
calls FocusNode._removeChild
to ensure scopes forget nodes that have moved). Focus scopes maintain a history stack with the top corresponding to the most recently focused node. If the focused child is removed, the most recently focused remaining child takes its place; this process does not update the primary focus.
Focus is a stateful widget that manages a focus node that is either provided (i.e., so that an ancestor can control focus) or created automatically. Focus.of
establishes an inherited relationship causing the dependent widget to be rebuilt when focus changes. Autofocus allows the node to request focus within the enclosing scope if no other node is already focused.
FocusScope
is the same as above, but manages a focus scope node.
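A minimal sketch of the Focus widget in use; the names, colors, and layout are illustrative only.

```dart
import 'package:flutter/material.dart';

// A focusable tile: the Focus widget manages its own FocusNode, notifies on
// focus changes, and receives key events while focused.
class FocusableTile extends StatelessWidget {
  const FocusableTile({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Focus(
      autofocus: true, // request focus if nothing else is focused
      onFocusChange: (bool focused) => debugPrint('focused: $focused'),
      child: Builder(
        builder: (BuildContext context) {
          // Focus.of establishes an inherited dependency, so this subtree
          // rebuilds when the node's focus state changes.
          final bool hasFocus = Focus.of(context).hasFocus;
          return Container(
            height: 48,
            color: hasFocus ? Colors.blue : Colors.grey,
          );
        },
      ),
    );
  }
}
```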
FocusHighlightMode
and FocusHighlightStrategy
determine how focusable widgets respond to focus with respect to highlighting. By default the highlight mode is updated automatically by tracking whether the last interaction was touch-based. The highlight mode is either FocusHighlightMode.traditional
, indicating that all controls are eligible, or FocusHighlightMode.touch
, indicating that only those controls that summon the soft keyboard are eligible.
Focus determines the UI elements that will receive raw keyboard events.
The primary focused node is unique within the application. As such, it is the default recipient of keyboard events, as forwarded by the FocusManager
.
All nodes along the path from the root focus scope to the primary focused node have focus. Keyboard events will bubble along this path until they are handled.
Moving focus (via traversal or by requesting focus) does not call FocusNode.unfocus
, which would remove the node from the enclosing scope's stack of recently focused nodes and ensure that a pending update doesn't refocus it.
Controls managing focus nodes may choose to render with a highlight in response to FocusNode.highlightMode
.
The focus tree is a sparse representation of focusable elements in the UI consisting entirely of focus nodes. Focus nodes maintain a list of children, as well as accessors that return the descendants and ancestors of a node in furthest-first and closest-first order, respectively.
Some focus nodes represent scopes which demarcate a subtree rooted at that node. Conceptually, scopes serve as a container for focus traversal operations.
Both scopes and ordinary nodes can receive focus, but scopes pass focus to the first descendant node that isn't a scope. Even if a scope is not along the path to the primary focus, it tracks a focused child (FocusScopeNode.focusedChild
) that would receive focus if it were.
Scopes maintain a stack of previously focused children with the top of the stack being the first to receive focus if that scope receives focus (the focused child). When the focused child is removed, focus is shifted to the previous holder. If a focused child is a scope, it is called the scope's first focus.
When a node is focused, all enclosing scopes ensure that their focused child is either the target node or a scope that's one level closer to the target node.
As focus changes, affected nodes receive focus notifications that clients can use to update the UI. The focused node also receives raw keyboard events which bubble up the focus path.
A FocusAttachment
ensures a FocusNode
is anchored to the correct parent when the focus tree changes (e.g., because the widget tree rebuilds). This ensures that the associated build contexts are consistent with the actual widget tree.
As a form of program state, every focus node must be hosted by a StatefulWidget
's State
instance. When state is initialized (State.initState
), the focus node is attached to the current BuildContext
(via FocusNode.attach
), returning a FocusAttachment
handle.
When the widget tree rebuilds (State.build
, State.didChangeDependencies
), the attachment must be updated via FocusAttachment.reparent
. If a widget is configured to use a new focus node (State.didUpdateWidget
), the previous attachment must be detached (FocusAttachment.detach
, which calls FocusNode.unfocus
as needed) before attaching the new node. Finally, the focus node must be disposed when the host state is itself disposed (FocusNode.dispose
, which calls FocusAttachment.detach
as needed).
Reparenting is delegated to FocusNode._reparent
, using Focus.of
/ FocusScope.of
to locate the nearest parent node. If the node being reparented previously had focus, focus is restored through the new path via FocusNode._setAsFocusedChild
.
A focus node can have multiple attachments, but only one attachment may be active at a time.
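The lifecycle described above might look roughly like the following in a host State. This is a sketch, not the canonical pattern; a production widget would also listen for focus changes so it can rebuild its visuals.

```dart
import 'package:flutter/widgets.dart';

class FocusableText extends StatefulWidget {
  const FocusableText({Key? key}) : super(key: key);

  @override
  State<FocusableText> createState() => _FocusableTextState();
}

class _FocusableTextState extends State<FocusableText> {
  final FocusNode _node = FocusNode(debugLabel: 'FocusableText');
  late FocusAttachment _attachment;

  @override
  void initState() {
    super.initState();
    // Anchor the node to this element's position in the tree.
    _attachment = _node.attach(context);
  }

  @override
  Widget build(BuildContext context) {
    // Keep the node's focus parent consistent with the (possibly rebuilt)
    // widget tree.
    _attachment.reparent();
    return GestureDetector(
      onTap: _node.requestFocus,
      child: const Text('tap to focus'),
    );
  }

  @override
  void dispose() {
    // Detaches the attachment and unfocuses the node if necessary.
    _node.dispose();
    super.dispose();
  }
}
```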
FocusManager
tracks the root focus scope as well as the current (primary) and requesting (next) focus nodes. It also maintains a list of dirty nodes that require update notifications.
As nodes request focus (FocusNode.requestFocus
), the manager is notified that the node is dirty (FocusNode._markAsDirty
) and that a focus update is needed (FocusManager._markNeedsUpdate
), passing the requesting node. This sets the next focus node and schedules a microtask to actually update focus. This can delay focus updates by up to one frame.
This microtask promotes the requesting node to primary (FocusManager._applyFocusChange
), marking all nodes from the root to the incoming and outgoing nodes, inclusive, as being dirty. Then, all dirty nodes are notified that focus has changed (FocusNode._notify
) and marked clean.
The new primary node updates all enclosing scopes such that they describe a path toward it via each scope's focused child (FocusNode._setAsFocusedChild
).
When a node requests focus (FocusNode.requestFocus
), every enclosing scope is updated to focus toward the requesting node directly or via an intermediate scope (FocusNode._setAsFocusedChild
). The node is then marked dirty (FocusNode._markAsDirty
) which updates the manager's next node and requests an update.
When a scope requests focus (FocusScopeNode.requestFocus
), the scope tree is traversed to find the first descendant node that isn't a scope. Once found, that node is focused. Otherwise, the deepest scope is focused.
The focus manager resolves focus via FocusManager._applyFocusChange
, promoting the next node to the primary and sending notifications to all dirty nodes.
When a node is unfocused, the enclosing scope "forgets" the node and the manager is notified (via FocusManager._willUnfocusNode
).
If the node had been tracked as primary or next, the corresponding property is cleared and an update scheduled. If a next node is available, that node becomes primary. If there is no primary and no next node, the root scope becomes primary.
The deepest non-scope node will not be automatically focused. However, future traversal will attempt to identify the current first focus (FocusTraversalPolicy.findFirstFocus
) when navigating.
When a node is detached, if it had been primary, it is unfocused as above. It is then removed from the focus tree.
When a node is disposed, the manager is notified (via FocusManager._willDisposeFocusNode
) which calls FocusManager._willUnfocusNode
, as above. Finally, it is detached.
Focus updates are scheduled once per frame. As a result, state will not stack but resolve to the most recent request.
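A small sketch of the asynchronous nature of focus changes, assuming the microtask-based scheduling described above; the function and its use are hypothetical.

```dart
import 'dart:async';

import 'package:flutter/widgets.dart';

// Requesting focus only marks the node dirty; the FocusManager applies the
// change in a scheduled microtask, so the effect is observable shortly
// afterwards rather than synchronously.
void toggleFocus(FocusNode node) {
  if (node.hasFocus) {
    node.unfocus(); // the enclosing scope forgets this node
  } else {
    node.requestFocus(); // becomes primary once the manager runs its update
  }
  scheduleMicrotask(() {
    debugPrint('primary focus now: ${node.hasPrimaryFocus}');
  });
}
```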
Some controls display the soft keyboard in response to focus. The keyboard token is a boolean tracking whether focus was requested explicitly (via FocusNode.requestFocus
) or assigned automatically due to another node losing focus.
Controls that display the keyboard consume the token (FocusNode.consumeKeyboardToken
), which returns its value and sets it to false. This ensures that the keyboard is shown exactly once in response to explicit user interaction.