`HitTestable`, which implies the presence of a `hitTest` method. The default implementation defers to `RenderView`, which itself implements `hitTest` to visit the entire render tree. Each render object is given an opportunity to add itself to a shared `HitTestResult`.
- The `GestureBinding` (which also implements `HitTestDispatcher`) uses the `PointerRouter` to pass the original event to all render objects that were hit (each implements `HitTestTarget`, and therefore provides `handleEvent`). If a `GestureRecognizer` is utilized, the event's pointer is passed via `GestureRecognizer.addPointer`, which registers with the `PointerRouter` to receive future events.
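As a sketch of the render-object side of this flow, consider a hypothetical `RenderProxyBox` subclass that opts into hit testing and then receives the dispatched events (the class name and logging behavior are invented for illustration; `hitTestSelf` and `handleEvent` are the real `RenderBox` hooks):

```dart
import 'package:flutter/rendering.dart';

// Hypothetical render object: opts into hit testing and logs pointer downs.
class RenderTapLogger extends RenderProxyBox {
  // Returning true adds this object to the shared HitTestResult
  // whenever the position falls within our bounds.
  @override
  bool hitTestSelf(Offset position) => true;

  // GestureBinding later dispatches the event to every entry in the
  // HitTestResult; render objects receive it here.
  @override
  void handleEvent(PointerEvent event, covariant BoxHitTestEntry entry) {
    if (event is PointerDownEvent) {
      debugPrint('pointer down at ${entry.localPosition}');
    }
  }
}
```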
- Related events are sent to all of the original `HitTestTarget`s, as well as to any routes registered with the `PointerRouter`.
- A `PointerDownEvent` will close the gesture arena, barring additional entries, whereas a `PointerUpEvent` will sweep it, resolving competing gestures and preventing indecisive gestures from locking up input.
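The dispatch-then-close/sweep ordering can be sketched as a simplified model of the binding's per-event loop (this is not the actual `GestureBinding` source, but the `close`/`sweep` methods on `GestureArenaManager` are real):

```dart
import 'package:flutter/gestures.dart';

// Simplified model of GestureBinding's per-event dispatch.
void dispatchPointerEvent(
    PointerEvent event, HitTestResult result, GestureArenaManager arena) {
  // Every hit target sees the event first, giving recognizers a chance
  // to enter the arena for this pointer.
  for (final HitTestEntry entry in result.path) {
    entry.target.handleEvent(event.transformed(entry.transform), entry);
  }
  if (event is PointerDownEvent) {
    // Close the arena: no further entries may join; if there is only one
    // member, it resolves immediately.
    arena.close(event.pointer);
  } else if (event is PointerUpEvent) {
    // Sweep the arena: force a resolution so an indecisive gesture
    // cannot lock up input.
    arena.sweep(event.pointer);
  }
}
```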
- `Window.onPointerDataPacket` captures pointer updates from the engine and generates `PointerEvent`s from the raw data. `PointerEventConverter` is utilized to map physical coordinates from the engine to logical coordinates, taking into account the device's pixel ratio (via `Window.devicePixelRatio`).
- A pointer is added to the gesture recognizer by client code in response to a `PointerDownEvent`. Gesture recognizers determine whether a pointer is allowed by overriding `GestureRecognizer.isPointerAllowed`. If so, the recognizer subscribes to future pointer events via the `PointerRouter` and adds the pointer to the gesture arena via `GestureArenaManager.add`.
- The recognizer will process incoming events, outcomes from the gesture arena (`acceptGesture` and `rejectGesture`), spontaneous decisions about the gesture (`GestureArenaEntry.resolve`), and other externalities. Typically, recognizers watch the stream of events delivered to `HitTestTarget.handleEvent`, looking for terminating events like `PointerUpEvent`, or for criteria that will cause acceptance or rejection. If the gesture is accepted, the recognizer will continue to process events to characterize the gesture, invoking user-provided callbacks at key moments.
- The gesture recognizer must unsubscribe from the pointer when rejecting or when done processing, removing itself from the `PointerRouter`.
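The last three bullets can be combined into one minimal recognizer sketch. `QuickTapRecognizer`, its callback, and its thresholds are invented, but the overridden members (`addAllowedPointer`, `handleEvent`, `resolve`, `stopTrackingPointer`) are the real `OneSequenceGestureRecognizer` API:

```dart
import 'package:flutter/gestures.dart';

// Hypothetical single-pointer recognizer: accepts if the pointer goes up
// within 200 ms without moving beyond the touch slop; otherwise rejects.
class QuickTapRecognizer extends OneSequenceGestureRecognizer {
  QuickTapRecognizer({required this.onQuickTap});

  final void Function() onQuickTap;
  Duration? _downTime;
  Offset? _downPosition;

  @override
  void addAllowedPointer(PointerDownEvent event) {
    _downTime = event.timeStamp;
    _downPosition = event.position;
    // Subscribes handleEvent with the PointerRouter and enters the arena.
    startTrackingPointer(event.pointer, event.transform);
  }

  @override
  void handleEvent(PointerEvent event) {
    if (event is PointerMoveEvent &&
        (event.position - _downPosition!).distance > kTouchSlop) {
      // Spontaneous decision: too much movement, give up our claim.
      resolve(GestureDisposition.rejected);
    } else if (event is PointerUpEvent) {
      if (event.timeStamp - _downTime! < const Duration(milliseconds: 200)) {
        resolve(GestureDisposition.accepted);
        onQuickTap();
      } else {
        resolve(GestureDisposition.rejected);
      }
    } else if (event is PointerCancelEvent) {
      resolve(GestureDisposition.rejected);
    }
    if (event is PointerUpEvent || event is PointerCancelEvent) {
      // Unsubscribe from the PointerRouter once we're done with the pointer.
      stopTrackingPointer(event.pointer);
    }
  }

  @override
  void didStopTrackingLastPointer(int pointer) {}

  @override
  String get debugDescription => 'quick tap';
}
```

In practice such a recognizer would be attached to the tree via `RawGestureDetector`, which calls `addPointer` for each `PointerDownEvent` it receives.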
- The arena disambiguates multiple gestures in a way that allows single gestures to resolve immediately if there is no competition. A recognizer “wins” if it declares itself the winner or if it’s the last/sole survivor.
- A `GestureArenaTeam` combines multiple recognizers into a group.
- Captained teams cause the captain recognizer to win when all unaffiliated recognizers reject or a constituent accepts.
- A non-captained team causes the first added recognizer to win when all unaffiliated recognizers reject. However, if a constituent accepts, that recognizer still takes the win.
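A sketch of team membership using real recognizer classes (`team` is a property on `OneSequenceGestureRecognizer`; wiring these into a widget is elided):

```dart
import 'package:flutter/gestures.dart';

void configureTeam() {
  final GestureArenaTeam team = GestureArenaTeam();
  final TapGestureRecognizer tap = TapGestureRecognizer()..team = team;
  final HorizontalDragGestureRecognizer drag =
      HorizontalDragGestureRecognizer()..team = team;

  // Captained: drag wins as soon as all unaffiliated recognizers reject,
  // unless a constituent (e.g., tap) accepts outright first.
  team.captain = drag;

  // Without setting a captain, the first recognizer added to the arena
  // would win by default when all unaffiliated recognizers reject.
}
```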
- There are two major categories of gesture recognizers: multi-touch recognizers (e.g., `MultiTapGestureRecognizer`) that simultaneously process multiple pointers (i.e., tapping with two fingers will register twice), and single-touch recognizers (e.g., `OneSequenceGestureRecognizer`) that will only consider events from a single pointer (i.e., tapping with two fingers will register once).
- There is a helper `Drag` object that is used to communicate drag-related updates to other parts of the framework (like `Scrollable`).
- There's a `VelocityTracker` that generates fairly accurate estimates of drag velocity using curve fitting.
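A sketch of the `VelocityTracker` API (the sample points are invented, and the `VelocityTracker.withKind` constructor assumes a reasonably recent Flutter):

```dart
import 'package:flutter/gestures.dart';

void estimateVelocity() {
  final VelocityTracker tracker =
      VelocityTracker.withKind(PointerDeviceKind.touch);

  // Feed timestamped positions, e.g., from a stream of PointerMoveEvents:
  // here, 10 px every 16 ms along the x axis.
  for (int i = 0; i < 10; i++) {
    tracker.addPosition(Duration(milliseconds: 16 * i), Offset(10.0 * i, 0));
  }

  // The tracker fits a polynomial to the recent samples; for this linear
  // motion the estimate is roughly 625 px/s along x.
  final Velocity velocity = tracker.getVelocity();
  print(velocity.pixelsPerSecond);
}
```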
- There are local and global routes in the `PointerRouter`. Local routes are used as described above; global routes are used to react to any interaction (e.g., to dismiss a tooltip).
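A sketch of a global route, mirroring the tooltip-dismissal pattern (the helper function is hypothetical; `addGlobalRoute`/`removeGlobalRoute` are the real `PointerRouter` API):

```dart
import 'package:flutter/gestures.dart';

// Invoke [onAnyPointerDown] once for the next pointer-down anywhere
// on screen, then unregister.
void onNextGlobalPointerDown(void Function() onAnyPointerDown) {
  void route(PointerEvent event) {
    if (event is PointerDownEvent) {
      onAnyPointerDown();
      GestureBinding.instance.pointerRouter.removeGlobalRoute(route);
    }
  }

  // Global routes see every pointer event, regardless of hit testing.
  GestureBinding.instance.pointerRouter.addGlobalRoute(route);
}
```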