How are images represented?

  • At the lowest level, images are represented as a Uint8List (i.e., an opaque list of unsigned bytes). These bytes can be expressed in any number of image formats, and must be decoded to a common representation by a codec.

  • instantiateImageCodec accepts a list of bytes and returns the appropriate codec from the engine already bound to the provided image. This function accepts an optional width and height; if these do not match the image’s intrinsic size, the image is scaled accordingly. If only one dimension is provided, the other dimension remains the intrinsic dimension. PaintingBinding.instantiateImageCodec provides a thin wrapper around this function with the intention of eventually supporting additional processing.

  • Codec represents the application of a codec to a pre-specified byte array. Codecs process both single frames and animated images. Once the Codec is retrieved via instantiateImageCodec, the decoded FrameInfo (which contains the image) may be requested via Codec.getNextFrame; this may be invoked repeatedly for animations and will automatically wrap to the first frame. The Codec must be disposed when no longer needed (previously decoded images remain valid).
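The decode flow above can be sketched as follows (a minimal sketch; the function name is illustrative, and the bytes are assumed to hold an encoded image such as a PNG or GIF):

```dart
import 'dart:typed_data';
import 'dart:ui' as ui;

// Decode every frame of an encoded image in one pass.
Future<List<ui.Image>> decodeAllFrames(Uint8List bytes) async {
  final ui.Codec codec = await ui.instantiateImageCodec(bytes);
  final List<ui.Image> frames = <ui.Image>[];
  // Codec.frameCount is 1 for still images; getNextFrame wraps around
  // for animations, so stop after one full cycle.
  for (int i = 0; i < codec.frameCount; i += 1) {
    final ui.FrameInfo frame = await codec.getNextFrame();
    frames.add(frame.image);
  }
  codec.dispose(); // Previously decoded images remain valid after disposal.
  return frames;
}
```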

  • DecoderCallback provides a layer of indirection between image decoding (via the Codec returned by instantiateImageCodec) and any additional decoding necessary for an image (e.g., resizing). It is primarily used with ImageProvider to encapsulate decoding-specific implementation details.

  • FrameInfo corresponds to a single frame in an animated image (single images are treated as one-frame animations). The frame’s duration, if applicable, is exposed via FrameInfo.duration; the decoded Image may be read via FrameInfo.image.

  • Image is an opaque handle to decoded image pixels managed by the engine, with a width and a height. The decoded bytes can be obtained via Image.toByteData, which accepts an ImageByteFormat specifying the desired encoding (e.g., ImageByteFormat.rawRgba, ImageByteFormat.png). However, the raw bytes are often not needed, as the Image handle is sufficient to paint images to the screen.

  • ImageInfo associates an Image with a pixel density (i.e., ImageInfo.scale). Scale describes the number of image pixels per side of a logical pixel (e.g., a scale of 2.0 implies that each 1x1 logical pixel corresponds to 2x2 image pixels; that is, a 100x100 pixel image would be painted into a 50x50 logical pixel region, appearing at twice the density on a capable display).
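The scale convention can be expressed directly (a trivial sketch; the helper name is illustrative):

```dart
import 'dart:ui' show Size;

// The logical paint size implied by an image's pixel dimensions and scale.
Size logicalSize(int imageWidth, int imageHeight, double scale) {
  return Size(imageWidth / scale, imageHeight / scale);
}
// e.g., a 100x100 image with scale 2.0 paints into a 50x50 logical region.
```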

What are the building blocks for managing image data?

  • The image framework must account for a variety of cases that complicate image handling. Some images are obtained asynchronously; others are arranged into image sets so that an optimal variant can be selected at runtime (e.g., for the current resolution). Others correspond to animations which update at regular intervals. Any of these images may be cached to avoid unnecessary loading.

  • ImageStream provides a consistent handle to a potentially evolving image resource; changes may be due to loading, animation, or explicit mutation. Changes are driven by a single ImageStreamCompleter, which notifies the ImageStream whenever concrete image data is available or changes (via ImageInfo). The ImageStream forwards notifications to one or more listeners (i.e., ImageStreamListener instances), which may be invoked multiple times as the image loads or mutates. Each ImageStream is associated with a key that can be used to determine whether two ImageStream instances are backed by the same completer.

  • ImageStreamListener encapsulates a set of callbacks for responding to image events. If the image is being loaded (e.g., via the network), an ImageChunkListener is invoked with an ImageChunkEvent describing overall progress. If an image has become available, an ImageListener is invoked with the final ImageInfo (including a flag indicating whether the image was loaded synchronously). Finally, if the image fails to load, an ImageErrorListener is invoked.

    • The chunk listener is only called when an image must be loaded (e.g., via NetworkImage). It may also be called after the ImageListener if the image is an animation (i.e., another frame is being fetched).

    • The ImageListener may be invoked multiple times if the associated image is an animation (i.e., once per frame).

    • ImageStreamListeners are compared on the basis of the contained callbacks.
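Attaching these callbacks typically looks like the following (a hedged sketch; the provider and configuration are assumed to come from elsewhere):

```dart
import 'package:flutter/painting.dart';

// Resolve a provider to a stream and subscribe to its events.
void listenToImage(ImageProvider provider, ImageConfiguration configuration) {
  final ImageStream stream = provider.resolve(configuration);
  stream.addListener(ImageStreamListener(
    (ImageInfo info, bool synchronousCall) {
      // Invoked once per frame for animations; synchronousCall is true
      // when the image was already available and delivered immediately.
    },
    onChunk: (ImageChunkEvent event) {
      // Loading progress (e.g., for NetworkImage); expectedTotalBytes
      // may be null if the server omits Content-Length.
    },
    onError: (Object error, StackTrace? stackTrace) {
      // The image failed to load.
    },
  ));
}
```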

  • ImageStreamCompleter manages image loading for an ImageStream from an asynchronous source (typically a Codec). A list of ImageStreamListener instances are notified whenever image data becomes available (i.e., the completer “completes”), either in part (via ImageStreamListener.onImageChunk) or in whole (via ImageStreamListener.onImage). Listeners may be invoked multiple times (e.g., as chunks are loaded or with multiple animation frames). The completer notifies listeners when an image becomes available (via ImageStreamCompleter.setImage). Adding listeners after the image has been loaded will trigger synchronous notifications; this is how the ImageCache avoids refetching images unnecessarily.

    • The corresponding Image must be resolved to an ImageInfo (i.e., by incorporating scale); the scale is often provided explicitly.

    • OneFrameImageStreamCompleter handles one-frame (i.e., single) images. The corresponding ImageInfo is provided as a future; when this future resolves, OneFrameImageStreamCompleter.setImage is invoked, notifying listeners.

    • MultiFrameImageStreamCompleter handles multi-frame images (e.g., animations or engine frames), completing once per animation frame as long as there are listeners. If the image is only associated with a single frame, that frame is emitted immediately. An optional stream of ImageChunkEvents allows loading status to be conveyed to the attached listeners. Note that adding a new listener will attempt to decode the next frame; this is safe, if inefficient, as Codec.getNextFrame automatically cycles.

      • The next frame is eagerly decoded by the codec (via Codec.getNextFrame). Once available, a non-repeating callback is scheduled to emit the frame after the corresponding duration has lapsed (via FrameInfo.duration); the first frame is emitted immediately. If there are additional frames (via Codec.frameCount), or the animation cycles (via Codec.repetitionCount), this process is repeated. Frames are emitted via MultiFrameImageStreamCompleter.setImage, notifying all subscribed listeners.

      • In this way, the next frame is decoded eagerly but only emitted during the first application frame after the duration has lapsed. If at any point there are no listeners, the process is paused; no frames are decoded or emitted until a listener is added.
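The eager-decode, delayed-emit loop above can be approximated as follows (a deliberately simplified sketch: repetition counts and listener bookkeeping are omitted, and the real implementation schedules against application frames rather than a plain delay):

```dart
import 'dart:ui' as ui;

// Decode each frame eagerly, but emit it only after the previous
// frame's duration has elapsed.
Future<void> emitFrames(
    ui.Codec codec, void Function(ui.Image) setImage) async {
  Duration previousDuration = Duration.zero;
  for (int i = 0; i < codec.frameCount; i += 1) {
    final ui.FrameInfo frame = await codec.getNextFrame(); // eager decode
    if (i > 0) {
      // Hold the previous frame on screen for its full duration.
      await Future<void>.delayed(previousDuration);
    }
    setImage(frame.image); // notify listeners
    previousDuration = frame.duration;
  }
}
```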

  • A singleton ImageCache is created by the PaintingBinding during initialization (via PaintingBinding.createImageCache). The cache maps keys to ImageStreamCompleters, retaining only the most recently used entries. Once a maximum number of entries or bytes is reached, the least recently accessed entries are evicted. Note that any images actively retained by the application (e.g., Image, ImageInfo, ImageStream, etc.) cannot be invalidated by this cache; the cache is only useful when locating an ImageStreamCompleter for a given key. If a completer is found, and the image has already been loaded, the listener is notified with the image synchronously.

    • ImageCache.putIfAbsent serves as the main interface to the cache. If a key is found, the corresponding ImageStreamCompleter is returned. Otherwise, the completer is built using the provided closure. In both cases, the timestamp is updated.

    • Because images are loaded asynchronously, the cache policy can only be enforced once the image loads. Thus, the cache maintains two maps: ImageCache._pendingImages and ImageCache._cache. On a cache miss, the newly built completer is added to the pending map and assigned an ImageStreamListener; when the listener is notified, the final image size is calculated, the listener removed, and the cache policy applied. The completer is then moved to the cache map.

    • If an image fails to load, it does not contribute to cache size but it does consume an entry. If an image is too large for the cache, the cache is expanded to accommodate the image with some headroom.
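The putIfAbsent pattern described above can be sketched as follows (the key and loader closure are illustrative; the real cache also accepts an optional error listener):

```dart
import 'package:flutter/painting.dart';

// On a miss, the loader runs and the completer is tracked as pending
// until the image loads; on a hit, the entry's timestamp is refreshed.
ImageStreamCompleter? obtainCompleter(
    Object key, ImageStreamCompleter Function() loader) {
  return PaintingBinding.instance.imageCache.putIfAbsent(key, loader);
}
```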

  • ImageConfiguration describes the operating environment so that the best image can be selected from a set of alternatives (e.g., a double-resolution image for a high-DPI display); this is the primary input to ImageProvider. A configuration can be extracted from the element tree via createLocalImageConfiguration.

  • ImageProvider identifies an image without committing to a specific asset. This allows the best variant to be selected according to the current ImageConfiguration. Any images managed via ImageProvider are passed through the global ImageCache.

    • ImageProvider.obtainKey produces a key that uniquely identifies a specific image (including scale) given an ImageConfiguration and the provider’s settings.

    • ImageProvider.load builds an ImageStreamCompleter for a given key. The completer begins fetching the image immediately and decodes the resulting bytes via the DecoderCallback.

    • ImageProvider.resolve wraps both methods to (1) obtain a key (via ImageProvider.obtainKey), (2) query the cache using the key, and (3) if no completer is found, create an ImageStreamCompleter (via ImageProvider.load) and update the cache.

  • precacheImage provides a convenient wrapper around ImageProvider so that a given image can be added to the ImageCache. So long as the same key is used for subsequent accesses, the image will be available immediately (provided that it has fully loaded).
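Warming the cache ahead of use is a one-liner (the asset name is illustrative):

```dart
import 'package:flutter/widgets.dart';

// Load and cache an image before it is first displayed; subsequent
// accesses with the same key resolve synchronously from the ImageCache.
Future<void> warmUp(BuildContext context) {
  return precacheImage(const AssetImage('images/logo.png'), context);
}
```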

How are images provided and painted?

  • ImageProvider federates access to images, selecting the best image given the current environment (i.e., ImageConfiguration). The provider computes a key that uniquely identifies the asset to be loaded; this creates or retrieves an ImageStreamCompleter from the cache. Various provider subclasses override ImageProvider.load to customize how the completer is configured; most use SynchronousFuture to try to provide the image without needing to wait for the next frame. The ImageStreamCompleter is constructed with a future resolving to a bound codec (i.e., associated with raw image bytes). These bytes may be obtained in a variety of ways: from the network, from memory, from an AssetBundle, etc. The completer accepts an optional stream of ImageChunkEvents so that any listeners are notified as the image loads. Once the raw image has been read into memory, an appropriate codec is provided by the engine (via a DecoderCallback, which generally delegates to PaintingBinding.instantiateImageCodec). This codec is used to decode frames (potentially multiple times for animated images). As frames are decoded, listeners (e.g., an image widget) are notified with the finalized ImageInfo (which includes decoded bytes and scale data). These bytes may be painted directly via paintImage.

What image providers are available?

  • FileImage provides images from the file system. As its own key, FileImage overrides the equality operator to compare the target file name and scale. A MultiFrameImageStreamCompleter is configured with the provided scale, and a Codec instantiated using bytes loaded from the file (via File.readAsBytes). The completer will only notify listeners when the image is fully loaded.

  • MemoryImage provides images directly from an immutable array of bytes. As its own key, MemoryImage overrides the equality operator to compare scale as well as the actual bytes. A MultiFrameImageStreamCompleter is configured with the provided scale, and a Codec instantiated using the provided bytes. The completer will only notify listeners when the image is fully loaded.

  • NetworkImage defines a thin interface to support different means of providing images from the network; it relies on instances of itself for a key.

    • io.NetworkImage implements this interface using Dart’s standard HttpClient to retrieve images. As its own key, io.NetworkImage overrides the equality operator to compare the target URL and scale. A MultiFrameImageStreamCompleter is configured with the provided scale, and a Codec instantiated using the consolidated bytes produced by HttpClient.getUrl. Unlike the other providers, io.NetworkImage will report loading status to its listeners via a stream of ImageChunkEvents. This relies on the “Content-Length” header being correctly reported by the remote server.

  • AssetBundleImageProvider provides images from an AssetBundle using AssetBundleImageKey. The key is composed of a specific asset bundle, asset key, and image scale. A MultiFrameImageStreamCompleter is configured with the provided scale, and a Codec instantiated using bytes loaded from the bundle (via AssetBundle.load). The completer will only notify listeners when the image is fully loaded.

    • ExactAssetImage is a subclass that allows the bundle, asset, and image scale to be set explicitly, rather than read from an ImageConfiguration.

    • AssetImage is a subclass that resolves to the most appropriate asset given a set of alternatives and the current runtime environment. Primarily, this subclass selects assets optimized for the device’s pixel ratio using a simple naming convention. Assets are organized into logical directories within a given parent. Directories are named “Nx/”, where N corresponds to the image’s intended scale; the default asset (with 1:1 scaling) is rooted within the parent itself. The variant that most closely matches the current pixel ratio is selected.

      • The main difference from the superclass is the method by which keys are produced; all other functionality (e.g., AssetImage.load, AssetImage.resolve) is inherited.

      • A JSON-encoded asset manifest is produced from the pubspec file during building. This manifest is parsed to locate variants of each asset according to the scheme described above; from this list, the variant nearest the current pixel ratio is identified. A key is produced using this asset’s scale (which may not match the device’s pixel ratio), its fully qualified name, and the bundle that was used. The completer is configured by the superclass.

      • The equality operator is overridden such that only the unresolved asset name and bundle are consulted; scale (and the best fitting asset name) are excluded from the comparison.

  • ResizeImage wraps another ImageProvider to support size-aware caching. Ordinarily, images are decoded using their intrinsic dimensions (via instantiateImageCodec); consequently, the version of the image stored in the ImageCache corresponds to the full size image. This is inefficient for images that are displayed at a different size. ResizeImage addresses this by augmenting the underlying key with the requested dimensions; it also applies a DecoderCallback that forwards these dimensions via instantiateImageCodec.

    • The first time an image is provided, it is loaded using the underlying provider (via ImageProvider.load, which doesn’t update the cache). The resulting ImageStreamCompleter is cached using the ResizeImage’s key (i.e., _SizeAwareCacheKey).

    • Subsequent accesses will hit the cache, which returns an image with the corresponding dimensions. Usages with different dimensions will result in additional entries being added to the cache.
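Wrapping a provider this way is straightforward (a hedged sketch with illustrative dimensions):

```dart
import 'package:flutter/painting.dart';

// Decode (and cache) at 128x128 rather than the intrinsic size; the
// wrapped dimensions become part of the cache key.
ImageProvider thumbnail(ImageProvider base) {
  return ResizeImage(base, width: 128, height: 128);
}
```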

What are the building blocks for image rendering?

  • There are several auxiliary classes allowing image rendering to be customized. BlendMode specifies how pixels from source and destination images are combined during compositing (e.g., BlendMode.multiply, BlendMode.overlay, BlendMode.difference). ColorFilter specifies a function combining two colors into an output color; this function is applied before any blending. ImageFilter provides a handle to an image filter applied during rendering (e.g., Gaussian blur, scaling transforms). FilterQuality allows the quality/performance of said filter to be broadly customized.

  • Canvas exposes the lowest level API for painting images into layers. The principal methods include Canvas.drawImage, which paints an image at a particular offset, Canvas.drawImageRect, which copies pixels from a source rectangle to a destination rectangle, Canvas.drawAtlas, which does the same for a variety of rectangles using a “sprite atlas,” and Canvas.drawImageNine, which slices an image into a non-uniform 3x3 grid, scaling the cardinal and center boxes to fill a destination rectangle (the corners are copied directly). Each of these methods accept a Paint instance to be used when compositing the image (e.g., allowing a BlendMode to be specified); each also calls directly into the engine to perform any actual painting.

  • paintImage wraps the canvas API to provide an imperative API for painting images in a variety of styles. It adds support for applying a box fit (e.g., BoxFit.cover to ensure the image covers the destination) and repeated painting (e.g., ImageRepeat.repeat to tile an image to cover the destination), managing layers as necessary.
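A typical invocation, e.g. from within a custom painter, might look like this (a sketch; the fit and repeat values are illustrative):

```dart
import 'dart:ui' as ui;
import 'package:flutter/painting.dart';

// Paint a decoded image so that it fully covers the destination rect,
// cropping as needed, without tiling.
void paintCover(Canvas canvas, Rect rect, ui.Image image) {
  paintImage(
    canvas: canvas,
    rect: rect,
    image: image,
    fit: BoxFit.cover,
    repeat: ImageRepeat.noRepeat,
  );
}
```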

How are images integrated with the render tree?

  • Image encapsulates a variety of widgets, providing a high-level interface to the image rendering machinery. This widget configures an ImageProvider (selected based on the named constructor, e.g., Image.asset, Image.memory), which it resolves to obtain an ImageStream. Whenever this stream emits an ImageInfo instance, the widget is rebuilt and repainted. Conversely, if the widget is reconfigured, the ImageProvider is re-resolved and the process repeated. From this flow, Image extracts the necessary data to fully configure a RawImage widget, which manages the actual RenderImage.

    • If a cache width or cache height are provided, the underlying ImageProvider is wrapped in a ResizeImage (via Image._resizeIfNeeded). This ensures that the image is decoded and cached using the provided dimensions, potentially limiting the amount of memory used.

    • Image adds support for image chrome (e.g., a loading indicator) and semantic annotations.

    • If animations are disabled by TickerMode, Image pauses rendering of any new animation frames provided by the ImageStream for consistency.

    • The ImageConfiguration passed to ImageProvider is retrieved from the widget environment via createLocalImageConfiguration.

  • RawImage is a LeafRenderObjectWidget wrapping a RenderImage and all necessary configuration data (e.g., the ui.Image, scale, dimensions, blend mode).

  • RenderImage is a RenderBox leaf node that paints a single image; as such, it relies on the widget system to repaint whenever the associated ImageStream emits a new frame. Painting is performed by paintImage using a destination rectangle sized by layout and positioned at the current offset. Alignment, box fit, and repetition determine how the image fills the available space.

    • There are two types of dimensions considered during layout: the image’s intrinsic dimensions (i.e., its pixel dimensions divided by its scale) and the requested dimensions (i.e., the width and height specified by the caller).

    • During layout, the incoming constraints are applied to the requested dimensions (via RenderImage._sizeForConstraints): first, the requested dimensions are clamped to the constraints. Next, the result is adjusted to match the image’s intrinsic aspect ratio while remaining as large as possible. If there is no image associated with the render object, the smallest possible size is selected.

    • The intrinsic dimension methods apply the same logic. However, instead of using the incoming constraints, one dimension is fixed (i.e., corresponding to method’s parameter) whereas the other is left unconstrained.
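The sizing logic described above can be sketched as a standalone helper (hedged; the function name is illustrative, though the two steps mirror the described behavior):

```dart
import 'package:flutter/rendering.dart';

// First clamp the requested dimensions to the incoming constraints, then
// adjust the result toward the image's intrinsic aspect ratio while
// remaining as large as possible.
Size sizeForConstraints(BoxConstraints constraints, double? width,
    double? height, double intrinsicWidth, double intrinsicHeight) {
  constraints = BoxConstraints.tightFor(width: width, height: height)
      .enforce(constraints);
  return constraints.constrainSizeAndAttemptToPreserveAspectRatio(
      Size(intrinsicWidth, intrinsicHeight));
}
```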
