Images
How are images represented?
At the lowest level, images are represented as a Uint8List (i.e., an opaque list of unsigned bytes). These bytes can be expressed in any number of image formats, and must be decoded to a common representation by a codec.

instantiateImageCodec accepts a list of bytes and returns the appropriate codec from the engine, already bound to the provided image. This function accepts an optional width and height; if these do not match the image’s intrinsic size, the image is scaled accordingly. If only one dimension is provided, the other dimension remains the intrinsic dimension. PaintingBinding.instantiateImageCodec provides a thin wrapper around this function with the intention of eventually supporting additional processing.

Codec represents the application of a codec to a pre-specified image array. Codecs process both single frames and animated images. Once the Codec is retrieved via instantiateImageCodec, the decoded FrameInfo (which contains the image) may be requested via Codec.getNextFrame; this may be invoked repeatedly for animations, and will automatically wrap to the first frame. The Codec must be disposed when no longer needed (the image data remains valid).

DecoderCallback provides a layer of indirection between image decoding (via the Codec returned by instantiateImageCodec) and any additional decoding necessary for an image (e.g., resizing). It is primarily used with ImageProvider to encapsulate decoding-specific implementation details.

FrameInfo corresponds to a single frame in an animated image (single images are considered one-frame animations). Duration, if applicable, is exposed via FrameInfo.duration. The decoded Image may be read via FrameInfo.image.

Image is an opaque handle to decoded image pixels managed by the engine, with a width and a height. The decoded bytes can be obtained via Image.toByteData, which accepts an ImageByteFormat specifying the desired encoding (e.g., ImageByteFormat.rawRgba, ImageByteFormat.png). However, the raw bytes are often not required, as the Image handle is sufficient to paint images to the screen.

ImageInfo associates an Image with a pixel density (i.e., ImageInfo.scale). Scale describes the number of image pixels per side of a logical pixel (e.g., a scale of 2.0 implies that each 1x1 logical pixel corresponds to 2x2 image pixels; that is, a 100x100 pixel image would be painted into a 50x50 logical pixel region and would therefore have twice the resolution, depending on the display).
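As a concrete sketch of this flow (not framework code; the byte source and function names are illustrative), the snippet below decodes a buffer of encoded bytes into a ui.Image and reads back its raw pixels:

```dart
import 'dart:typed_data';
import 'dart:ui' as ui;

/// Decode the first (and possibly only) frame of an encoded image.
Future<ui.Image> decodeFirstFrame(Uint8List bytes) async {
  // Obtain a codec bound to these bytes; targetWidth/targetHeight could be
  // passed here to decode at a non-intrinsic size.
  final ui.Codec codec = await ui.instantiateImageCodec(bytes);
  final ui.FrameInfo frame = await codec.getNextFrame();
  // The codec may be disposed once the needed frames have been decoded;
  // the decoded image remains valid.
  codec.dispose();
  return frame.image;
}

Future<void> inspectImage(Uint8List bytes) async {
  final ui.Image image = await decodeFirstFrame(bytes);
  print('decoded ${image.width}x${image.height}');
  // The raw bytes are rarely needed, but can be extracted if desired.
  final ByteData? rgba =
      await image.toByteData(format: ui.ImageByteFormat.rawRgba);
  print('raw byte length: ${rgba?.lengthInBytes}');
}
```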
What are the building blocks for managing image data?
The image framework must account for a variety of cases that complicate image handling. Some images are obtained asynchronously; others are arranged into image sets so that an optimal variant can be selected at runtime (e.g., for the current resolution). Others correspond to animations that update at regular intervals. Any of these images may be cached to avoid unnecessary loading.
ImageStream provides a consistent handle to a potentially evolving image resource; changes may be due to loading, animation, or explicit mutation. Changes are driven by a single ImageStreamCompleter, which notifies the ImageStream whenever concrete image data is available or changes (via ImageInfo). The ImageStream forwards notifications to one or more listeners (i.e., ImageStreamListener instances), which may be invoked multiple times as the image loads or mutates. Each ImageStream is associated with a key that can be used to determine whether two ImageStream instances are backed by the same completer [?].

ImageStreamListener encapsulates a set of callbacks for responding to image events. If the image is being loaded (e.g., via the network), an ImageChunkListener is invoked with an ImageChunkEvent describing overall progress. If an image has become available, an ImageListener is invoked with the final ImageInfo (including a flag indicating whether the image was loaded synchronously). Last, if the image has failed to load, an ImageErrorListener is invoked.

The chunk listener is only called when an image must be loaded (e.g., via NetworkImage). It may also be called after the ImageListener if the image is an animation (i.e., another frame is being fetched). The ImageListener may be invoked multiple times if the associated image is an animation (i.e., once per frame). ImageStreamListeners are compared on the basis of the contained callbacks.
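For illustration, a minimal sketch (not framework code) of attaching an ImageStreamListener to a stream resolved from an arbitrary provider; the provider and the logging are placeholders:

```dart
import 'package:flutter/widgets.dart';

/// Resolve a provider and log its frames, progress, and errors.
void listenToImage(BuildContext context, ImageProvider provider) {
  final ImageStream stream =
      provider.resolve(createLocalImageConfiguration(context));
  late final ImageStreamListener listener;
  listener = ImageStreamListener(
    // Invoked once per decoded frame (repeatedly for animations).
    (ImageInfo info, bool synchronousCall) {
      debugPrint('frame ${info.image.width}x${info.image.height}, '
          'scale ${info.scale}, sync=$synchronousCall');
      stream.removeListener(listener); // stop after the first frame
    },
    // Invoked with loading progress when the source reports it (e.g., network).
    onChunk: (ImageChunkEvent event) {
      debugPrint('${event.cumulativeBytesLoaded}/'
          '${event.expectedTotalBytes ?? '?'} bytes');
    },
    onError: (Object error, StackTrace? stackTrace) {
      debugPrint('image failed to load: $error');
    },
  );
  stream.addListener(listener);
}
```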
ImageStreamCompleter manages image loading for an ImageStream from an asynchronous source (typically a Codec). A list of ImageStreamListener instances is notified whenever image data becomes available (i.e., the completer “completes”), either in part (via ImageStreamListener.onChunk) or in whole (via ImageStreamListener.onImage). Listeners may be invoked multiple times (e.g., as chunks are loaded or with multiple animation frames). The completer notifies listeners when an image becomes available (via ImageStreamCompleter.setImage). Adding listeners after the image has been loaded will trigger a synchronous notification; this is how the ImageCache avoids refetching images unnecessarily. The corresponding Image must be resolved to an ImageInfo (i.e., by incorporating scale); the scale is often provided explicitly.

OneFrameImageStreamCompleter handles one-frame (i.e., single) images. The corresponding ImageInfo is provided as a future; when this future resolves, OneFrameImageStreamCompleter.setImage is invoked, notifying listeners.

MultiFrameImageStreamCompleter handles multi-frame images (e.g., animations or engine frames), completing once per animation frame as long as there are listeners. If the image is only associated with a single frame, that frame is emitted immediately. An optional stream of ImageChunkEvents allows loading status to be conveyed to the attached listeners. Note that adding a new listener will attempt to decode the next frame; this is safe, if inefficient, as Codec.getNextFrame automatically cycles.

The next frame is eagerly decoded by the codec (via Codec.getNextFrame). Once available, a non-repeating callback is scheduled to emit the frame after the corresponding duration has elapsed (via FrameInfo.duration); the first frame is emitted immediately. If there are additional frames (via Codec.frameCount), or the animation cycles (via Codec.repetitionCount), this process is repeated. Frames are emitted via MultiFrameImageStreamCompleter.setImage, notifying all subscribed listeners. In this way, the next frame is decoded eagerly but only emitted during the first application frame after the duration has elapsed. If at any point there are no listeners, the process is paused; no frames are decoded or emitted until a listener is added.
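The sketch below (not framework code) shows how a future of raw bytes might be wired into a MultiFrameImageStreamCompleter, roughly as the built-in providers do; the byte source is hypothetical and error handling is omitted:

```dart
import 'dart:typed_data';
import 'dart:ui' as ui;

import 'package:flutter/painting.dart';

/// Build a completer that decodes frames from a future of encoded bytes.
ImageStreamCompleter completerFromBytes(
  Future<Uint8List> bytes, {
  double scale = 1.0,
}) {
  return MultiFrameImageStreamCompleter(
    // The completer drives this codec, eagerly decoding frames and emitting
    // them (via setImage) as long as there are listeners.
    codec: bytes.then(ui.instantiateImageCodec),
    scale: scale,
  );
}
```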
A singleton ImageCache is created by the PaintingBinding during initialization (via PaintingBinding.createImageCache). The cache maps keys to ImageStreamCompleters, retaining only the most recently used entries. Once a maximum number of entries or bytes is reached, the least recently accessed entries are evicted. Note that any images actively retained by the application (e.g., Image, ImageInfo, ImageStream, etc.) cannot be invalidated by this cache; the cache is only useful when locating an ImageStreamCompleter for a given key. If a completer is found, and the image has already been loaded, the listener is notified with the image synchronously.

ImageCache.putIfAbsent serves as the main interface to the cache. If a key is found, the corresponding ImageStreamCompleter is returned. Otherwise, the completer is built using the provided closure. In both cases, the timestamp is updated.

Because images are loaded asynchronously, the cache policy can only be enforced once the image loads. Thus, the cache maintains two maps: ImageCache._pendingImages and ImageCache._cache. On a cache miss, the newly built completer is added to the pending map and assigned an ImageStreamListener; when the listener is notified, the final image size is calculated, the listener is removed, and the cache policy is applied. The completer is then moved to the cache map. If an image fails to load, it does not contribute to cache size, but it does consume an entry. If an image is too large for the cache, the cache is expanded to accommodate the image with some headroom.
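For illustration, a sketch of interacting with the singleton cache directly; in practice both the key and the loader come from an ImageProvider (obtainKey/load), so the parameters here are placeholders. It assumes a binding (e.g., WidgetsFlutterBinding) has been initialized.

```dart
import 'package:flutter/painting.dart';

/// Look up a completer by key, building and tracking it on a miss.
ImageStreamCompleter? lookupOrLoad(
    Object key, ImageStreamCompleter Function() loader) {
  final ImageCache cache = PaintingBinding.instance.imageCache;
  // Tune the eviction policy: at most 500 entries or ~50 MiB of decoded data.
  cache.maximumSize = 500;
  cache.maximumSizeBytes = 50 << 20;
  // Returns the existing completer for this key, or invokes the loader and
  // tracks the new completer (pending until its image actually resolves).
  return cache.putIfAbsent(key, loader);
}
```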
ImageConfiguration describes the operating environment so that the best image can be selected from a set of alternatives (e.g., a double-resolution image for a retina display); this is the primary input to ImageProvider. A configuration can be extracted from the element tree via createLocalImageConfiguration.

ImageProvider identifies an image without committing to a specific asset. This allows the best variant to be selected according to the current ImageConfiguration. Any images managed via ImageProvider are passed through the global ImageCache. ImageProvider.obtainKey produces a key that uniquely identifies a specific image (including scale) given an ImageConfiguration and the provider’s settings. ImageProvider.load builds an ImageStreamCompleter for a given key. The completer begins fetching the image immediately and decodes the resulting bytes via the DecoderCallback. ImageProvider.resolve wraps both methods to (1) obtain a key (via ImageProvider.obtainKey), (2) query the cache using the key, and (3) if no completer is found, create an ImageStreamCompleter (via ImageProvider.load) and update the cache.

precacheImage provides a convenient wrapper around ImageProvider so that a given image can be added to the ImageCache. So long as the same key is used for subsequent accesses, the image will be available immediately (provided that it has fully loaded).
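A common usage sketch: warming the cache from a widget so a later lookup with the same provider resolves synchronously (the asset path is hypothetical).

```dart
import 'package:flutter/widgets.dart';

/// Precache a background image so the build below can paint it immediately.
class SplashBackground extends StatefulWidget {
  const SplashBackground({super.key});

  @override
  State<SplashBackground> createState() => _SplashBackgroundState();
}

class _SplashBackgroundState extends State<SplashBackground> {
  @override
  void didChangeDependencies() {
    super.didChangeDependencies();
    // Resolves the provider against the local ImageConfiguration and adds
    // the resulting completer to the global ImageCache.
    precacheImage(const AssetImage('images/background.png'), context);
  }

  @override
  Widget build(BuildContext context) =>
      Image.asset('images/background.png', fit: BoxFit.cover);
}
```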
How are images provided and painted?
ImageProvider federates access to images, selecting the best image given the current environment (i.e., ImageConfiguration). The provider computes a key that uniquely identifies the asset to be loaded; this key is used to create or retrieve an ImageStreamCompleter from the cache. Various provider subclasses override ImageProvider.load to customize how the completer is configured; most use SynchronousFuture to try to provide the image without needing to wait for the next frame. The ImageStreamCompleter is constructed with a future resolving to a bound codec (i.e., one associated with raw image bytes). These bytes may be obtained in a variety of ways: from the network, from memory, from an AssetBundle, etc. The completer accepts an optional stream of ImageChunkEvents so that any listeners are notified as the image loads. Once the raw image has been read into memory, an appropriate codec is provided by the engine (via a DecoderCallback, which generally delegates to PaintingBinding.instantiateImageCodec). This codec is used to decode frames (potentially multiple times for animated images). As frames are decoded, listeners (e.g., an image widget) are notified with the finalized ImageInfo (which includes decoded bytes and scale data). These bytes may be painted directly via paintImage.
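To make this flow concrete, here is a hedged sketch of a custom provider written against the ImageProvider.load/DecoderCallback API described in this section (newer Flutter releases rename these hooks, e.g. loadImage, so treat this as illustrative rather than canonical); fetchBytes is a hypothetical byte source.

```dart
import 'dart:typed_data';

import 'package:flutter/foundation.dart';
import 'package:flutter/painting.dart';

/// A provider that fetches bytes from an arbitrary asynchronous source.
class BytesImage extends ImageProvider<BytesImage> {
  BytesImage(this.fetchBytes, {this.scale = 1.0});

  final Future<Uint8List> Function() fetchBytes;
  final double scale;

  @override
  Future<BytesImage> obtainKey(ImageConfiguration configuration) {
    // This provider acts as its own key; SynchronousFuture avoids waiting
    // for another frame before the cache can be consulted.
    return SynchronousFuture<BytesImage>(this);
  }

  @override
  ImageStreamCompleter load(BytesImage key, DecoderCallback decode) {
    return MultiFrameImageStreamCompleter(
      // Fetch the raw bytes, then hand them to the engine-backed decoder.
      codec: key.fetchBytes().then(decode),
      scale: key.scale,
    );
  }
}
```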
What image providers are available?
FileImage provides images from the file system. As its own key, FileImage overrides the equality operator to compare the target file name and scale. A MultiFrameImageStreamCompleter is configured with the provided scale and a Codec instantiated using bytes loaded from the file (via File.readAsBytes). The completer will only notify listeners when the image is fully loaded.

MemoryImage provides images directly from an immutable array of bytes. As its own key, MemoryImage overrides the equality operator to compare scale as well as the actual bytes. A MultiFrameImageStreamCompleter is configured with the provided scale and a Codec instantiated using the provided bytes. The completer will only notify listeners when the image is fully loaded.

NetworkImage defines a thin interface to support different means of providing images from the network; it relies on instances of itself for a key. io.NetworkImage implements this interface using Dart’s standard HttpClient to retrieve images. As its own key, io.NetworkImage overrides the equality operator to compare the target URL and scale. A MultiFrameImageStreamCompleter is configured with the provided scale and a Codec instantiated using the consolidated bytes produced by HttpClient.getUrl. Unlike the other providers, io.NetworkImage will report loading status to its listeners via a stream of ImageChunkEvents. This relies on the “Content-Length” header being correctly reported by the remote server.
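A brief sketch of constructing each provider described above; the paths, URL, and byte buffer are placeholders.

```dart
import 'dart:io';
import 'dart:typed_data';

import 'package:flutter/painting.dart';

ImageProvider fromFile(String path) =>
    FileImage(File(path), scale: 2.0); // keyed by file path + scale

ImageProvider fromBytes(Uint8List bytes) =>
    MemoryImage(bytes); // keyed by the bytes + scale

ImageProvider fromNetwork(String url) =>
    NetworkImage(url); // keyed by URL + scale; reports ImageChunkEvents
```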
AssetBundleImageProvider provides images from an AssetBundle using an AssetBundleImageKey. The key comprises a specific asset bundle, asset key, and image scale. A MultiFrameImageStreamCompleter is configured with the provided scale and a Codec instantiated using bytes loaded from the bundle (via AssetBundle.load). The completer will only notify listeners when the image is fully loaded.

ExactAssetImage is a subclass that allows the bundle, asset, and image scale to be set explicitly, rather than read from an ImageConfiguration.

AssetImage is a subclass that resolves to the most appropriate asset given a set of alternatives and the current runtime environment. Primarily, this subclass selects assets optimized for the device’s pixel ratio using a simple naming convention. Assets are organized into logical directories within a given parent. Directories are named “Nx/”, where N corresponds to the image’s intended scale; the default asset (with 1:1 scaling) is rooted within the parent itself. The variant that most closely matches the current pixel ratio is selected. The main difference from the superclass is the method by which keys are produced; all other functionality (e.g., AssetImage.load, AssetImage.resolve) is inherited.

A JSON-encoded asset manifest is produced from the pubspec file during building. This manifest is parsed to locate variants of each asset according to the scheme described above; from this list, the variant nearest the current pixel ratio is identified. A key is produced using this asset’s scale (which may not match the device’s pixel ratio), its fully qualified name, and the bundle that was used. The completer is configured by the superclass. The equality operator is overridden such that only the unresolved asset name and bundle are consulted; scale (and the best-fitting asset name) are excluded from the comparison.
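As an illustration of the naming convention (the asset layout below is hypothetical):

```dart
import 'package:flutter/painting.dart';

/// Assuming assets laid out as:
///   images/icon.png        (1.0x default, rooted in the parent directory)
///   images/2.0x/icon.png   (for ~2.0 device pixel ratios)
///   images/3.0x/icon.png   (for ~3.0 device pixel ratios)
/// AssetImage references the unresolved name and picks the closest variant
/// for the current ImageConfiguration at resolve time.
const ImageProvider icon = AssetImage('images/icon.png');

/// ExactAssetImage skips variant selection and pins the asset and scale.
const ImageProvider icon3x =
    ExactAssetImage('images/3.0x/icon.png', scale: 3.0);
```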
ResizeImage wraps another ImageProvider to support size-aware caching. Ordinarily, images are decoded using their intrinsic dimensions (via instantiateImageCodec); consequently, the version of the image stored in the ImageCache corresponds to the full-size image. This is inefficient for images that are displayed at a different size. ResizeImage addresses this by augmenting the underlying key with the requested dimensions; it also applies a DecoderCallback that forwards these dimensions via instantiateImageCodec.

The first time an image is provided, it is loaded using the underlying provider (via ImageProvider.load, which does not update the cache). The resulting ImageStreamCompleter is cached using the ResizeImage’s key (i.e., _SizeAwareCacheKey). Subsequent accesses will hit the cache, which returns an image with the corresponding dimensions. Usages with different dimensions will result in additional entries being added to the cache.
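A sketch of size-aware caching in practice (the asset name is hypothetical):

```dart
import 'package:flutter/painting.dart';

/// Wrap a provider so the decoded (and cached) image is at most 200x200
/// pixels rather than its intrinsic size.
ImageProvider thumbnail() {
  const ImageProvider full = AssetImage('images/photo.png');
  return const ResizeImage(full, width: 200, height: 200);
}

/// ResizeImage.resizeIfNeeded returns the original provider unchanged when
/// no dimensions are supplied (as the Image widget does for its
/// cacheWidth/cacheHeight parameters).
ImageProvider maybeResized(ImageProvider provider, {int? width, int? height}) =>
    ResizeImage.resizeIfNeeded(width, height, provider);
```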
What are the building blocks for image rendering?
There are several auxiliary classes allowing image rendering to be customized.
BlendMode specifies how pixels from source and destination images are combined during compositing (e.g., BlendMode.multiply, BlendMode.overlay, BlendMode.difference). ColorFilter specifies a function combining two colors into an output color; this function is applied before any blending. ImageFilter provides a handle to an image filter applied during rendering (e.g., a Gaussian blur or a scaling transform). FilterQuality allows the quality/performance tradeoff of such filters to be broadly customized.

Canvas exposes the lowest-level API for painting images into layers. The principal methods include Canvas.drawImage, which paints an image at a particular offset; Canvas.drawImageRect, which copies pixels from a source rectangle to a destination rectangle; Canvas.drawAtlas, which does the same for a variety of rectangles using a “sprite atlas”; and Canvas.drawImageNine, which slices an image into a non-uniform 3x3 grid, scaling the cardinal and center boxes to fill a destination rectangle (the corners are copied directly). Each of these methods accepts a Paint instance to be used when compositing the image (e.g., allowing a BlendMode to be specified); each also calls directly into the engine to perform any actual painting.

paintImage wraps the canvas API to provide an imperative API for painting images in a variety of styles. It adds support for applying a box fit (e.g., BoxFit.cover to ensure the image covers the destination) and repeated painting (e.g., ImageRepeat.repeat to tile an image over the destination), managing layers as necessary.
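For example, a painting routine (e.g., inside CustomPainter.paint or RenderBox.paint) might use these helpers as follows; the ui.Image is assumed to have been decoded elsewhere.

```dart
import 'dart:ui' as ui;

import 'package:flutter/painting.dart';

/// Paint an image so it covers the destination rectangle.
void paintCover(Canvas canvas, Size size, ui.Image image) {
  paintImage(
    canvas: canvas,
    rect: Offset.zero & size,          // destination rectangle
    image: image,
    fit: BoxFit.cover,                 // scale/crop the image to fill it
    repeat: ImageRepeat.noRepeat,
    filterQuality: FilterQuality.low,  // bilinear sampling
  );
}

/// A related effect with the raw canvas API: stretch the full source
/// rectangle into the destination (no aspect-ratio preservation).
void drawStretched(Canvas canvas, Size size, ui.Image image) {
  final Rect src = Rect.fromLTWH(
      0, 0, image.width.toDouble(), image.height.toDouble());
  canvas.drawImageRect(image, src, Offset.zero & size,
      Paint()..filterQuality = FilterQuality.low);
}
```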
How are images integrated with the render tree?
Image encapsulates a variety of widgets, providing a high-level interface to the image rendering machinery. This widget configures an ImageProvider (selected based on the named constructor, e.g., Image.network, Image.asset, Image.memory), which it resolves to obtain an ImageStream. Whenever this stream emits an ImageInfo instance, the widget is rebuilt and repainted. Conversely, if the widget is reconfigured, the ImageProvider is re-resolved and the process repeats. From this flow, Image extracts the necessary data to fully configure a RawImage widget, which manages the actual RenderImage.

If a cache width or cache height is provided, the underlying ImageProvider is wrapped in a ResizeImage (via Image._resizeIfNeeded). This ensures that the image is decoded and cached using the provided dimensions, potentially limiting the amount of memory used. Image adds support for image chrome (e.g., a loading indicator) and semantic annotations.
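For illustration, a typical configuration of the high-level widget: the named constructor selects the provider, cacheWidth feeds a ResizeImage, and the loadingBuilder surfaces ImageChunkEvents as chrome (the URL is a placeholder).

```dart
import 'package:flutter/widgets.dart';

Widget networkThumbnail() {
  return Image.network(
    'https://example.com/photo.jpg',
    width: 120,
    height: 120,
    fit: BoxFit.cover,
    cacheWidth: 240, // decode and cache at 240px wide, not intrinsic size
    loadingBuilder: (context, child, progress) {
      if (progress == null) return child; // fully loaded
      return const Center(child: Text('loading…'));
    },
  );
}
```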
If animations are disabled by TickerMode, Image pauses rendering of any new animation frames provided by the ImageStream for consistency. The ImageConfiguration passed to ImageProvider is retrieved from the widget environment via createLocalImageConfiguration.
RawImage is a LeafRenderObjectWidget wrapping a RenderImage and all necessary configuration data (e.g., the ui.Image, scale, dimensions, blend mode). RenderImage is a RenderBox leaf node that paints a single image; as such, it relies on the widget system to repaint whenever the associated ImageStream emits a new frame. Painting is performed by paintImage using a destination rectangle sized by layout and positioned at the current offset. Alignment, box fit, and repetition determine how the image fills the available space.

There are two types of dimensions considered during layout: the image’s intrinsic dimensions (i.e., its pixel width and height divided by scale) and the requested dimensions (i.e., the width and height specified by the caller). During layout, the incoming constraints are applied to the requested dimensions (via RenderImage._sizeForConstraints): first, the requested dimensions are clamped to the constraints. Next, the result is adjusted to match the image’s intrinsic aspect ratio while remaining as large as possible. If there is no image associated with the render object, the smallest possible size is selected. The intrinsic dimension methods apply the same logic; however, instead of using the incoming constraints, one dimension is fixed (i.e., corresponding to the method’s parameter) whereas the other is left unconstrained.
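And a sketch of the lower layer: once a ui.Image has been obtained (e.g., from an ImageStream listener), RawImage can configure a RenderImage directly, bypassing the provider/stream plumbing of the Image widget.

```dart
import 'dart:ui' as ui;

import 'package:flutter/widgets.dart';

/// Wrap an already-decoded ui.Image in a RawImage widget.
Widget rawImageFor(ui.Image image, double scale) {
  return RawImage(
    image: image,
    scale: scale,            // converts image pixels to logical pixels
    fit: BoxFit.contain,     // forwarded to paintImage during paint
    alignment: Alignment.center,
    repeat: ImageRepeat.noRepeat,
  );
}
```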