Used the `misspell` tool available at
https://github.com/client9/misspell, then applied hand-corrections. It's
possible we could adopt this as a presubmit, but there are still enough
false positives that it may not be worth the effort.
Corrects uses of setup as a verb to 'set up', and leaves
noun/noun-phrase forms of setup as 'setup'. Also settles on 'teardown'
as opposed to 'tear-down' for consistency across the codebase.
A few other minor comment/wording corrections.
This replaces the custom Obj-C TextInputModel implementation used on
macOS with the common C++ implementation used on Linux and Windows. Note
that as a side-effect, this change enables *some* direct IME input for
CJK languages, but full support will land in follow-up patches that:
1. Add handling for TextInput.setMarkedTextRect message
2. Add handling for TextInput.setEditableSizeAndTransform message
3. Implement firstRectForCharacterRange:actualRange: using the above.
4. Add NSTextInputContext handling.
This updates the Win32 desktop embedder to support input method (abbreviated IM
or IME) composing regions.
In contrast to languages such as English, where keyboard input is
managed keystroke-by-keystroke, languages such as Japanese require a
multi-step input process wherein the user begins a composing sequence,
during which their keystrokes are captured by a system input
method and converted into a text sequence. During composing, the user is
able to edit the composing range and manage the conversion from keyboard
input to text before eventually committing the text to the underlying
text input field.
To illustrate this, in Japanese, this sequence might look something like
the following:
1. User types 'k'. The character 'k' is added to the composing region.
Typically, the text 'k' will be inserted inline into the underlying
text field but the composing range will be highlighted in some manner,
frequently with a highlight or underline.
2. User types 'a'. The composing range is replaced with the phonetic
kana character 'か' (ka). The composing range continues to be
highlighted.
3. User types 'k'. The character 'k' is appended to the composing
range such that the highlighted text is now 'かk'.
4. User types 'u'. The trailing 'k' is replaced with the phonetic kana
character 'く' (ku) such that the composing range now reads 'かく'.
The composing range continues to be highlighted.
5. The user presses the space bar to convert the kana characters to
kanji. The composing range is replaced with '書く' (kaku: to write).
6. The user presses the space bar again to show other conversions. The
user's configured input method (for example, ibus) pops up a
completions menu populated with alternatives such as 各 (kaku:
every), 描く (kaku: to draw), 核 (kaku: pit of a fruit, nucleus), 角
(kaku: angle), etc.
7. The user uses the arrow keys to navigate the completions menu and
select the alternative to input. As they do, the inline composing
region in the text field is updated. It continues to be highlighted
or underlined.
8. The user hits enter to commit the composing region. The text is
committed to the underlying text field and the visual highlighting is
removed.
9. If the user presses another key, a new composing sequence begins.
If a selection is present when composing begins, it is preserved until
the first keypress of input is received, at which point the selection is
deleted. If a composing sequence is aborted before the first keypress,
the selection is preserved. Creating a new selection (with the mouse,
for example) aborts composing and the composing region is automatically
committed. A composing range and selection, both with an extent, are
not permitted to co-exist.
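The selection rules above can be sketched as a small state machine. This is an illustrative simplification, not the engine's implementation: the `ComposingSketch` class, its method names, and the collapse-to-base behavior are assumptions for the example, and actual text deletion is omitted.

```cpp
#include <cassert>
#include <cstddef>

// Simplified range: base/extent, collapsed when equal.
struct Range {
  std::size_t base;
  std::size_t extent;
  bool collapsed() const { return base == extent; }
};

// Hypothetical sketch of the composing/selection interaction described
// above. Real models also edit the underlying text; this tracks only
// the selection state.
class ComposingSketch {
 public:
  void BeginComposing() {
    composing_ = true;
    received_input_ = false;
  }
  // Called on each keypress of composing input.
  void AddComposingInput() {
    if (composing_ && !received_input_ && !selection_.collapsed()) {
      // First keypress: the preserved selection is deleted
      // (collapsed to its base here, for illustration).
      selection_ = Range{selection_.base, selection_.base};
    }
    received_input_ = true;
  }
  // Aborting before any input preserves the selection.
  void AbortComposing() { composing_ = false; }
  // Creating a new selection aborts composing; the region is committed.
  void SetSelection(Range r) {
    composing_ = false;
    selection_ = r;
  }
  bool composing() const { return composing_; }
  Range selection() const { return selection_; }

 private:
  bool composing_ = false;
  bool received_input_ = false;
  Range selection_{0, 0};
};
```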
During composing, keyboard navigation via the arrow keys, or home and
end (or equivalent shortcuts) is restricted to the composing range, as
are deletions via backspace and the delete key. This patch adds two new
private convenience methods, `editing_range` and `text_range`. The
former returns the range for which editing is currently active -- the
composing range, if composing, otherwise the full range of the text. The
latter returns a range from position 0 (inclusive) to `text_.length()`
(exclusive).
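The two helpers can be sketched roughly as follows. The `TextRange` and model shapes here are simplified stand-ins, not the actual engine classes:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Minimal stand-in for the engine's TextRange.
struct TextRange {
  std::size_t base;
  std::size_t extent;
  std::size_t start() const { return base < extent ? base : extent; }
  std::size_t end() const { return base < extent ? extent : base; }
};

// Illustrative model state; member names mirror the commit message.
struct TextModelSketch {
  std::string text_;
  bool composing_ = false;
  TextRange composing_range_{0, 0};

  // Full range of the text: positions [0, text_.length()).
  TextRange text_range() const { return TextRange{0, text_.length()}; }

  // Range in which editing is active: the composing range while
  // composing, otherwise the whole text.
  TextRange editing_range() const {
    return composing_ ? composing_range_ : text_range();
  }
};
```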
Windows IME support revolves around two main UI windows: the composition window
and the candidate window. The composition window is a system window overlaid
within the current window bounds which renders the composing string. Flutter
already renders this string itself, so we request that this window be hidden.
The candidate window is a system-rendered dropdown that displays all possible
conversions for the text in the composing region. Since the contents of this
window are specific to the particular IME in use, and because the user may have
installed one or more third-party IMEs, Flutter does not attempt to render this
as a widget itself, but rather delegates to the system-rendered window.
The lifecycle of IME composing follows this event order:
1. WM_IME_SETCONTEXT: received on window creation. We strip the
ISC_SHOWUICOMPOSITIONWINDOW bit from the event lparam before passing it to
DefWindowProc() in order to hide the composition window, which Flutter
already renders itself.
2. WM_IME_STARTCOMPOSITION: triggered whenever the user begins inputting new
text. We use this event to set Flutter's TextInputModel into composing mode.
3. WM_IME_COMPOSITION: triggered on each keypress as the user adds, replaces,
or deletes text in the composing region, navigates with their cursor within
the composing region, or selects a new conversion candidate from the
candidates list.
4. WM_IME_ENDCOMPOSITION: triggered when the user has finished editing the text
in the composing region and decides to commit or abort the composition.
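The bit-stripping in step 1 amounts to clearing a single flag in the lparam. A minimal sketch: `ISC_SHOWUICOMPOSITIONWINDOW` is defined in the Win32 `<imm.h>` header; its value is reproduced here so the snippet compiles standalone, and the helper name is illustrative.

```cpp
#include <cassert>
#include <cstdint>

// Value of ISC_SHOWUICOMPOSITIONWINDOW from <imm.h>.
constexpr std::uint32_t kShowUICompositionWindow = 0x80000000;

// Clears the flag so that DefWindowProc() does not show the system
// composition window, which Flutter already renders itself.
std::uint32_t StripCompositionWindowBit(std::uint32_t lparam) {
  return lparam & ~kShowUICompositionWindow;
}
```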
Additionally, the following IME-related events are emitted but not yet handled:
* WM_INPUTLANGCHANGE: triggered whenever the user selects a new language using
the system language selection menu. Since there are some
language-specific behaviours in IMEs, we may want to make use of this in
the future.
* WM_IME_NOTIFY: triggered to notify of various status events such as opening
or closing the candidate window, setting the conversion mode, etc. None of
these are relevant to Flutter at the moment.
* WM_IME_REQUEST: triggered to notify of various commands/requests, such as a
request to reconvert text, which should begin composition mode, insert the
selected text into the composing region, and allow the user to select new
alternative candidates for the text in question before re-committing their
new selection. This patch doesn't support reconversion, but it's an important
feature that we should support in the future.
During multi-step text input composing, such as with Chinese, Japanese,
and Korean text input, the framework sends embedders cursor rect updates
in the form of two messages:
* TextInput.setMarkedTextRect: notifies the embedder of the size and
position of the composing text rect (or cursor when not composing) in
local coordinates.
* TextInput.setEditableSizeAndTransform: notifies the embedder of the
size of the EditableText and 4x4 transform matrix from local to
PipelineOwner.rootNode coordinates.
On receipt of either message, we cache a local copy on the
TextInputPlugin and notify the Win32FlutterWindow of the updated cursor
rect. In a follow-up patch, we update Win32FlutterWindow to implement the
Win32 input manager (IMM) calls required to position the IME candidates
window while editing.
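Positioning the candidates window requires mapping the cached local rect through the cached transform. Flutter's Matrix4 is stored column-major, so a point transform (ignoring perspective) looks roughly like this; the `Point` type and function name are illustrative:

```cpp
#include <array>
#include <cassert>

struct Point {
  double x;
  double y;
};

// Applies a column-major 4x4 transform (the layout the framework sends
// in TextInput.setEditableSizeAndTransform) to a 2D point, ignoring
// the perspective row. A full implementation would transform the whole
// rect and handle rotation/skew.
Point TransformPoint(const std::array<double, 16>& m, Point p) {
  return Point{m[0] * p.x + m[4] * p.y + m[12],
               m[1] * p.x + m[5] * p.y + m[13]};
}
```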
This removes most of the remaining FLUTTER_NOLINT comments and opts
these files back into linter enforcement.
I've filed https://github.com/flutter/flutter/issues/68273 to require
that all FLUTTER_NOLINT comments be followed by a GitHub issue URL
describing the problem to be fixed.
* Add multi-step IME support to TextInputModel
This updates the platform-independent TextInputModel to add support for
input method (abbreviated IM or IME) composing regions.
In contrast to languages such as English, where keyboard input is
managed keystroke-by-keystroke, languages such as Japanese require a
multi-step input process wherein the user begins a composing sequence,
during which their keystrokes are captured by a system input
method and converted into a text sequence. During composing, the user is
able to edit the composing range and manage the conversion from keyboard
input to text before eventually committing the text to the underlying
text input field.
To illustrate this, in Japanese, this sequence might look something like
the following:
1. User types 'k'. The character 'k' is added to the composing region.
Typically, the text 'k' will be inserted inline into the underlying
text field but the composing range will be highlighted in some manner,
frequently with a highlight or underline.
2. User types 'a'. The composing range is replaced with the phonetic
kana character 'か' (ka). The composing range continues to be
highlighted.
3. User types 'k'. The character 'k' is appended to the composing
range such that the highlighted text is now 'かk'.
4. User types 'u'. The trailing 'k' is replaced with the phonetic kana
character 'く' (ku) such that the composing range now reads 'かく'.
The composing range continues to be highlighted.
5. The user presses the space bar to convert the kana characters to
kanji. The composing range is replaced with '書く' (kaku: to write).
6. The user presses the space bar again to show other conversions. The
user's configured input method (for example, ibus) pops up a
completions menu populated with alternatives such as 各 (kaku:
every), 描く (kaku: to draw), 核 (kaku: pit of a fruit, nucleus), 角
(kaku: angle), etc.
7. The user uses the arrow keys to navigate the completions menu and
select the alternative to input. As they do, the inline composing
region in the text field is updated. It continues to be highlighted
or underlined.
8. The user hits enter to commit the composing region. The text is
committed to the underlying text field and the visual highlighting is
removed.
9. If the user presses another key, a new composing sequence begins.
If a selection is present when composing begins, it is preserved until
the first keypress of input is received, at which point the selection is
deleted. If a composing sequence is aborted before the first keypress,
the selection is preserved. Creating a new selection (with the mouse,
for example) aborts composing and the composing region is automatically
committed. A composing range and selection, both with an extent, are
not permitted to co-exist.
During composing, keyboard navigation via the arrow keys, or home and
end (or equivalent shortcuts) is restricted to the composing range, as
are deletions via backspace and the delete key. This patch adds two new
private convenience methods, `editing_range` and `text_range`. The
former returns the range for which editing is currently active -- the
composing range, if composing, otherwise the full range of the text. The
latter returns a range from position 0 (inclusive) to `text_.length()`
(exclusive).
* Move SetComposingLength to TextRange::set_*
Adds set_base, set_extent, set_start, set_end methods to TextRange.
Adds tests for TextRange::Contains(const TextRange&) where the range
being tested spans the base/extent of the testing range.
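The directional setters might look like the sketch below. The behaviour assumed here (that `set_start`/`set_end` update whichever of base/extent currently holds that endpoint, preserving the range's direction) is an illustration, not necessarily the engine's exact semantics:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch of TextRange's setters.
struct TextRangeSketch {
  std::size_t base;
  std::size_t extent;
  std::size_t start() const { return base < extent ? base : extent; }
  std::size_t end() const { return base < extent ? extent : base; }
  void set_base(std::size_t pos) { base = pos; }
  void set_extent(std::size_t pos) { extent = pos; }
  // Update whichever endpoint currently holds start/end, so a
  // reversed range stays reversed.
  void set_start(std::size_t pos) { (base <= extent ? base : extent) = pos; }
  void set_end(std::size_t pos) { (base <= extent ? extent : base) = pos; }
};
```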
This was originally intended to land in #21854, but it seems I didn't
push the additional tests before landing.
The C++ wrapper's plugin registrar can own plugins to provide lifetime
management. However, plugins expect the registrar to be valid for the
life of the object, including during destruction, so any owned plugins
must be explicitly cleared before any registrar-specific destruction
happens.
Replaces selection_base() and selection_extent() with selection() and
SetSelection(int, int) with SetSelection(range).
This also adds the following convenience methods to TextRange:
* reversed()
* Contains(size_t position)
* Contains(const TextRange& range)
as well as operator== for use in unit tests. When Flutter migrates to
C++20, we can replace that method with a default declaration.
Extracts a TextRange class with a base and extent, and start(), end(),
collapsed(), and length() getters.
The possibility of reversed base and extent in selections and composing
ranges makes reasoning about them complex and increases the chances of
errors in the code. This change migrates most uses of base and extent in
the text model to start()/end() or position(). The position() method is
intended purely as an aid to readability to indicate that a collapsed
selection is expected at the call site; it also enforces a debug-time
assertion that this is the case.
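Taken together, the extracted class can be sketched as below. This is a simplified illustration of the getters described above, not the engine source; the inclusive-end `Contains` is an assumption:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the extracted TextRange and its getters.
struct TextRangeSketch {
  std::size_t base;
  std::size_t extent;
  std::size_t start() const { return base < extent ? base : extent; }
  std::size_t end() const { return base < extent ? extent : base; }
  bool collapsed() const { return base == extent; }
  bool reversed() const { return base > extent; }
  std::size_t length() const { return end() - start(); }
  // Readability aid: the caller expects a collapsed selection, which is
  // enforced with a debug-time assertion.
  std::size_t position() const {
    assert(collapsed());
    return extent;
  }
  bool Contains(std::size_t pos) const {
    return pos >= start() && pos <= end();
  }
  bool operator==(const TextRangeSketch& other) const {
    return base == other.base && extent == other.extent;
  }
};
```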
Previously, the selection base and extent were stored internally as
iterators over text_. Since iterators must be treated as invalidated
whenever the underlying container changes, this requires that
selection_base_ and selection_extent_ be re-assigned after every change
to text_.
This is not currently particularly problematic, but once we add fields
to track the base and extent of the composing region for multi-step
input method support, as well as support for the sub-range within the
composing region to which edits/completions apply, we end up having to
regenerate a lot of iterators with each change, many of which are
logically unchanged in position.
A side benefit is that this simplifies inspection of these fields when
debugging.
Previously, TextInputModel's SetEditingState method was a 1:1 mapping of
the underlying protocol used on the text input channel between the
framework and the engine. This breaks it up into two methods, which
allows the selection to be updated independently of the text, and avoids
tying the API to the underlying protocol.
This will become more important when we add additional state to support
composing regions for multi-step input methods such as those used for
Japanese.
SetText resets the selection rather than making a best-effort attempt
to preserve it. This choice was primarily to keep the code simple and
make the API easier to reason about. An alternative would have been to
make a best-effort attempt to preserve the selection, potentially
clamping one or both to the end of the new string. In all cases where an
embedder resets the text, it is expected that it also has the new
selection, and so can call SetSelection with it if needed.
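The split API can be sketched as follows. Types and validation details are simplified stand-ins for illustration, not the engine's signatures:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

struct Selection {
  std::size_t base;
  std::size_t extent;
};

// Hypothetical sketch of the two-method API replacing SetEditingState.
class TextInputModelSketch {
 public:
  // Replaces the text and resets the selection, rather than making a
  // best-effort attempt to preserve it.
  void SetText(const std::string& text) {
    text_ = text;
    selection_ = Selection{0, 0};
  }
  // Updates the selection independently of the text; rejects
  // out-of-range positions.
  bool SetSelection(Selection s) {
    if (s.base > text_.length() || s.extent > text_.length()) return false;
    selection_ = s;
    return true;
  }
  Selection selection() const { return selection_; }

 private:
  std::string text_;
  Selection selection_{0, 0};
};
```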
Replaces the (temporary) compile-time option to pass engine switches
with the ability to pass them temporarily at runtime via environment
variables. This moves the recently-added code for doing this on Windows
to a shared location for use by all desktop embeddings.
This is enabled only for debug/profile to avoid potential issues with
tampering with released applications, but if there is a need for that in
the future it could be added (potentially with a whitelist, as is
currently used for Dart VM flags).
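A sketch of reading switches from the environment is below. The variable names (`FLUTTER_ENGINE_SWITCHES` for the count, `FLUTTER_ENGINE_SWITCH_<N>` for each value) and the debug-only guard are illustrative assumptions, not a definitive description of the mechanism:

```cpp
#include <cassert>
#include <cstdlib>
#include <string>
#include <vector>

// Collects engine switches from environment variables. Assumed scheme:
// FLUTTER_ENGINE_SWITCHES holds the count, FLUTTER_ENGINE_SWITCH_<N>
// holds each switch (without the leading "--").
std::vector<std::string> GetSwitchesFromEnvironment() {
  std::vector<std::string> switches;
#ifndef NDEBUG  // Only honored in debug/profile builds.
  const char* count_str = std::getenv("FLUTTER_ENGINE_SWITCHES");
  if (!count_str) {
    return switches;
  }
  int count = std::atoi(count_str);
  for (int i = 1; i <= count; ++i) {
    std::string name = "FLUTTER_ENGINE_SWITCH_" + std::to_string(i);
    if (const char* value = std::getenv(name.c_str())) {
      switches.push_back(std::string("--") + value);
    }
  }
#endif
  return switches;
}
```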
Temporarily adds a way to enable mirrors as a compile time option,
as is already provided in the Linux embedding, to provide a migration
path for the one remaining known need for compile-time options
that has been raised in flutter/flutter#38569.
When the EncodableValue implementation changed, the old version was
temporarily kept behind an #ifdef to allow temporarily using the old
version, so that the roll would not be blocked. All known existing
clients have migrated, so the legacy version is no longer necessary.
Cleans up header order/grouping for consistency: associated header, C/C++ system/standard library headers, library headers, platform-specific #includes.
Adds <cstring> where strlen and memcpy are used; there are a number of places where we were relying on transitive includes.
Applies linter-required cleanups. Disables linter on one file due to included RapidJson header. See https://github.com/flutter/flutter/issues/65676
This patch does not cover flutter/shell/platform/darwin. There's a separate, slightly more intensive cleanup for those in progress.
We currently use a mix of C standard includes (e.g. limits.h) and their
C++ variants (e.g. climits). This migrates to a consistent style for all
cases where the C++ variants are acceptable, but leaves the C
equivalents in place where they are required, such as in the embedder
API and other headers that may be used from C.
Add copyright headers in a few files where they were missing.
Trim trailing blank comment line where present, for consistency with
other engine code.
Use the standard libtxt copyright header in one file where it differed
(extra (C) and comma compared to other files in libtxt).
This also amends tools/const_finder/test/const_finder_test.dart to look
for a const an additional four lines down to account for the copyright
header added to the test fixture.
The C++ wrapper makes heavy use of templates to support arbitrary types
in the platform channel classes, but in practice EncodableValue is what
essentially all code will use. This defaults those template types to
reduce boilerplate in plugin code (e.g., allowing the use of
MethodChannel<> instead of MethodChannel<EncodableValue>).
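The pattern is a defaulted class template parameter. A self-contained sketch, using a hypothetical `FakeEncodableValue` stand-in rather than the real EncodableValue:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Stand-in for EncodableValue so the sketch is self-contained.
struct FakeEncodableValue {};

// Defaulting T lets callers write MethodChannelSketch<> instead of
// MethodChannelSketch<FakeEncodableValue>, mirroring the change to the
// wrapper's MethodChannel.
template <typename T = FakeEncodableValue>
class MethodChannelSketch {
 public:
  explicit MethodChannelSketch(std::string name) : name_(std::move(name)) {}
  const std::string& name() const { return name_; }

 private:
  std::string name_;
};
```

Usage: `MethodChannelSketch<> channel("demo");` instantiates the default, while explicit types such as `MethodChannelSketch<int>` remain available.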
The response APIs for method channels and event channels used pointers
for optional parameters; this kept the API surface simple, but meant
that they couldn't take rvalues. As a result, returning success values
or error details often took an extra line, declaring a variable for the
result just to have something to pass the address of.
This converts them to using references, with function overloading to
allow for optional parameters, so that values can be inlined.
For now the pointer versions are still present, so that conversion can
be done before it becomes a breaking change; they will be removed soon.
Part of https://github.com/flutter/flutter/issues/63975
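The reference-plus-overload pattern can be sketched as below. `ResultSketch` is a simplified stand-in (the real MethodResult is templated and also carries error details), shown only to illustrate why references allow inlined rvalues:

```cpp
#include <cassert>
#include <string>

// Hypothetical result sketch: a const-reference overload accepts inline
// temporaries, and a no-argument overload covers the optional case,
// replacing the pointer-based API that forced a named variable.
class ResultSketch {
 public:
  void Success(const std::string& value) {
    last_ = value;
    has_value_ = true;
  }
  void Success() { has_value_ = false; }
  bool has_value() const { return has_value_; }
  const std::string& last() const { return last_; }

 private:
  std::string last_;
  bool has_value_ = false;
};
```

With the pointer API, callers had to declare a variable just to take its address; here `result.Success("done")` passes the temporary directly.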
Relands https://github.com/flutter/engine/pull/20399
Makes BinaryMessenger available from FlutterEngine, rather than just the plugin registrar. This allows for method channels directly in applications without building them as plugins, and matches the other platforms.
Requires some restructuring of code and GN targets in the client wrappers to make the internals in the shared section usable by the implementations of platform-specific parts of the wrappers. Also fixes a latent issue with EnableInputBlocking symbols being declared but not defined for Windows that came up during testing of the restructuring.
Fixes https://github.com/flutter/flutter/issues/62871
Makes BinaryMessenger available from FlutterEngine, rather than just the plugin registrar. This allows for method channels directly in applications without building them as plugins, and matches the other platforms.
Requires some restructuring of code and GN targets in the client wrappers to make the internals in the shared section usable by the implementations of platform-specific parts of the wrappers. Also fixes a latent issue with EnableInputBlocking symbols being declared but not defined for Windows that came up during testing of the restructuring.
Fixes https://github.com/flutter/flutter/issues/62871
A recent refactoring broke the USE_LEGACY_ENCODABLE_VALUE codepath in
standard_codec.cc, which went unnoticed since it wasn't being compiled.
This fixes the breakage, and also adds a temporary minimal unit test
target that ensures that all the USE_LEGACY_ENCODABLE_VALUE paths are
being compiled.