When an accessibility service connects we get a callback in
which we either add the service, if it is in the list of
connecting services (we still want it to connect), or unbind
and clear the state, if it is no longer in that list (we do
not want it to connect because something changed between the
bind request and the connection callback).
The problem was that when a service connected while no longer
in the list of connecting services, we called the cleanup code
before the connection was complete. However, the cleanup code
expects a fully configured service. Now we fully connect the
service and, in case there is a problem, disconnect it.
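Roughly, the new flow looks like this (a sketch only; the field and
method names are illustrative, not the actual implementation):

    import android.content.ComponentName;
    import android.os.IBinder;

    // Called when the accessibility service binder connects.
    public void onServiceConnected(ComponentName name, IBinder service) {
        if (mConnectingServices.remove(name)) {
            // Still wanted: configure the service and add it.
            initializeService(name, service);
            mBoundServices.add(name);
        } else {
            // No longer wanted. The cleanup path expects a fully
            // configured service, so fully connect first, then
            // unbind and clear the state.
            initializeService(name, service);
            unbindAndClearStateLocked(name);
        }
    }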
bug:8232627
Change-Id: I939e544e31ffc1406035265a012c180f2ca95d7c
On user switch the transient state of the old user was not cleared,
which means that when we switched back to this user the operational
state, such as which event types were dispatched, what state was sent
to local managers, etc., was stale. This leads to semi-updated state
and broken behavior. Now when a user becomes inactive we clear all
transient state, which is recreated when the user becomes active
again.
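A minimal sketch of the idea (UserState and its methods are
illustrative, not the actual implementation):

    // Drop the operational state when the user goes to the background;
    // it is rebuilt when the user becomes active again.
    void onUserInactive(UserState userState) {
        // e.g. dispatched event types, state sent to local managers
        userState.clearTransientState();
    }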
bug:8196652
Change-Id: Ie9e0d712b6d567e5074b328f1bb87afaa5395c06
The UI test automation service was not removed from the lists of
enabled and installed services to which it was explicitly added on
registration. This was leaving the accessibility manager service
in an inconsistent state.
bug:8185435
Change-Id: Ice17cdef361fe98ce34f8dd01ec11dbad6c4d0c2
1. The accessibility manager service updates its internal state
based on which settings are enabled, which accessibility services
are installed, and which features are requested by the enabled
services. It was trying to do the minimal amount of work to
react to contextual changes like these, which resulted in missed
cases and complex code. Now there is a single method that reads
the contextual information and a single method that reacts to
contextual changes (see the sketch after this list). This makes
the code much easier to maintain.
2. The accessibility manager service was not updating its internal
state when the features requested by accessibility services changed.
It was relying on changing system settings and reacting to the
settings change. This is problematic since the internal state is
not updated atomically, which leads to race condition bugs. For
example, if touch exploration is enabled and a service requests
that it be disabled, the internal state is not updated but a
request for a settings change is made. While the settings change
is propagating, another request from the same service comes in to
enable touch exploration, but the system incorrectly thinks touch
exploration is still enabled. In the end the feature is disabled
even though it was requested.
3. Fixed a potential NPE if the accessibility input filter's event
handler was nullified between processing two event batches.
4. Fixed a bug where, if magnification was enabled, it did not work
on the settings screen since the magnified bounds were not pushed
from the window manager to the accessibility manager.
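A minimal sketch of the read-then-react shape referenced in item 1
(all names are illustrative of the pattern, not the real methods):

    void onStateChangeLocked(UserState userState) {
        // One method reads all contextual information: settings,
        // installed and enabled services, requested features.
        readConfigurationLocked(userState);
        // One method reacts to whatever changed: bind/unbind services,
        // update the input filter, push state to local managers.
        updateStateLocked(userState);
    }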
Change-Id: Idf629a06480e12f0d88372762df6c024fe0d7856
Currently text editing is pretty hard (certain operations are even
impossible) for a blind person. To address the issue this change
adds APIs that enable an accessibility service to perform basic
text editing operations such as copy, paste, cut, set selection,
and extend selection while moving at a given granularity.
The new APIs enable an accessibility service to expose an efficient,
gesture-driven text editing facility.
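For example, a service could drive the new node actions roughly like
this (assuming 'node' is an editable AccessibilityNodeInfo the service
obtained; the constants are the ones this change introduces):

    import android.os.Bundle;
    import android.view.accessibility.AccessibilityNodeInfo;

    // Select the first five characters of the node's text.
    Bundle args = new Bundle();
    args.putInt(AccessibilityNodeInfo.ACTION_ARGUMENT_SELECTION_START_INT, 0);
    args.putInt(AccessibilityNodeInfo.ACTION_ARGUMENT_SELECTION_END_INT, 5);
    node.performAction(AccessibilityNodeInfo.ACTION_SET_SELECTION, args);

    // Extend the selection while moving at word granularity.
    Bundle granularityArgs = new Bundle();
    granularityArgs.putInt(
            AccessibilityNodeInfo.ACTION_ARGUMENT_MOVEMENT_GRANULARITY_INT,
            AccessibilityNodeInfo.MOVEMENT_GRANULARITY_WORD);
    granularityArgs.putBoolean(
            AccessibilityNodeInfo.ACTION_ARGUMENT_EXTEND_SELECTION_BOOLEAN, true);
    node.performAction(
            AccessibilityNodeInfo.ACTION_NEXT_AT_MOVEMENT_GRANULARITY,
            granularityArgs);

    // Basic clipboard operations on the selection.
    node.performAction(AccessibilityNodeInfo.ACTION_COPY);
    node.performAction(AccessibilityNodeInfo.ACTION_CUT);
    node.performAction(AccessibilityNodeInfo.ACTION_PASTE);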
bug:8098384
Change-Id: I82b200138a3fdf4c0c316b774fc08a096ced29d0
Currently we have an "enhance web accessibility" setting that has to be
enabled to make sure web content is accessible. We added the setting to
get user consent because we are injecting a JavaScript-based screen
reader pulled from the Google infrastructure. However, many users do
not know that and (as expected) do not read the user documentation,
resulting in criticism for lacking accessibility support in WebViews
with JavaScript enabled (Browser, Gmail, etc.).
To smooth the user experience, "enhance web accessibility" is now a
feature an accessibility plug-in can request, similarly to explore by
touch. A user no longer needs to know to explicitly enable the setting,
and web accessibility works out of the box.
Before, we showed a dialog when a plug-in tried to put the device
in touch exploration mode. However, now that there is one more feature
a plug-in can request, showing two dialogs (assuming a plug-in wants
both features) would mean a user potentially has to deal with three
dialogs: one for enabling the service and one for each feature. We
could merge the dialogs, but the user would still have to poke through
two of them. The permission mechanism is a perfect fit for getting
user consent for an app to do something, in this case to enable
an accessibility feature. We need a separate permission for each of
explore by touch and enhance web accessibility, since the former
changes the interaction model and the latter injects JavaScript into
web pages. It is critical to get user consent for the script injection
part, so we need a well-documented permission rather than a vague
umbrella permission for poking accessibility features. To allow better
grouping of the accessibility permissions, this patch adds a
permission group as well.
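A minimal sketch of how the system side could gate a requested feature
on such a permission (the exact permission string shown is an
assumption based on this description, as is the helper itself):

    import android.content.pm.PackageManager;
    import android.content.pm.ServiceInfo;

    // Grant the requested feature only if the accessibility service's
    // package holds the corresponding feature permission.
    boolean canRequestFeature(ServiceInfo serviceInfo, String permission) {
        return mContext.getPackageManager().checkPermission(
                permission, serviceInfo.packageName)
                        == PackageManager.PERMISSION_GRANTED;
    }

    // e.g. canRequestFeature(serviceInfo,
    //         "android.permission.CAN_REQUEST_TOUCH_EXPLORATION_MODE");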
bug:8089372
Change-Id: Ic125514c34f191aea0416a469e4b3481ab3200b9
This change adds API support for implementing UI tests. Such tests do
not rely on internal application structure and can span application
boundaries. UI automation APIs are encapsulated in the UiAutomation
object, which is provided by an Instrumentation object. It is
initialized by the system and can be used both for introspecting the
screen and for performing interactions simulating a user. UI tests are
normal instrumentation tests and are executed on the device.
UiAutomation uses the accessibility APIs to introspect the screen and
a special delegate object to perform privileged operations such as
injecting input events. Since instrumentation tests are invoked by a
shell command, the shell program launching the tests creates a delegate
object and passes it as an argument to the started instrumentation.
This delegate allows the APK that runs the tests to access some
privileged operations protected by signature-level permissions which
are explicitly granted to the shell user.
The UiAutomation object also supports running tests the legacy way,
where the tests are run as a Java shell program. This enables existing
UiAutomator tests to keep working while new ones should be implemented
using the new APIs. The UiAutomation object exposes lower-level APIs
which allow simulating arbitrary user interactions and writing complete
UI test cases. Clients, such as UiAutomator, are encouraged to
implement higher-level APIs which minimize development effort and can
be used as a helper library by the test developer.
The benefit of this change is decoupling UiAutomator from the system,
since the former was calling hidden APIs which required that it be
bundled in the system image. This prevented UiAutomator from being
evolved separately from the system. UiAutomator was also creating
additional API surface in the system image. Another benefit of the new
design is that test cases now have access to a context and can use
public platform APIs in addition to the UiAutomator ones. Further,
third parties can develop their own higher-level test APIs on top
of the lower-level ones exposed by UiAutomation.
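A minimal usage sketch from an instrumentation test (the surrounding
test class context is assumed):

    import android.app.Instrumentation;
    import android.app.UiAutomation;
    import android.view.accessibility.AccessibilityNodeInfo;

    // Inside an instrumentation test:
    Instrumentation instrumentation = getInstrumentation();
    UiAutomation uiAutomation = instrumentation.getUiAutomation();

    // Introspect the screen via the accessibility APIs.
    AccessibilityNodeInfo root = uiAutomation.getRootInActiveWindow();

    // Simulate a user by injecting input events (event construction
    // omitted; 'true' requests synchronous injection).
    // uiAutomation.injectInputEvent(motionEvent, true);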
bug:8028258
This change also adds the fully qualified resource name of the view's
id to the emitted AccessibilityNodeInfo if a special flag is set while
configuring the accessibility service. Also added is an API for looking
up node infos by this id. The id resource name is more stable than the
generated id number, which may change from one build to another. This
API facilitates reusing already defined ids for UI automation.
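A sketch of how a service might opt in and use the lookup (the id
string is hypothetical):

    import java.util.List;
    import android.accessibilityservice.AccessibilityServiceInfo;
    import android.view.accessibility.AccessibilityNodeInfo;

    // Request that node infos carry the fully qualified view id name.
    AccessibilityServiceInfo info = getServiceInfo();
    info.flags |= AccessibilityServiceInfo.FLAG_REPORT_VIEW_IDS;
    setServiceInfo(info);

    // Look up nodes by the stable resource name instead of the
    // build-dependent numeric id ("com.example.app:id/submit" is
    // a hypothetical id).
    List<AccessibilityNodeInfo> nodes = getRootInActiveWindow()
            .findAccessibilityNodeInfosByViewId("com.example.app:id/submit");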
bug:7678973
Change-Id: I589ad14790320dec8a33095953926c2a2dd0228b
1. This patch takes care of the case where a magnified window is covering an unmagnified
one. One example is a dialog that covers the IME window.
bug:7634430
2. Ensuring that the UI automator tool can connect and correctly dump the screen.
bug:7694696
3. Removed the partial implementation for multi display magnification. It adds
unnecessary complexity since it cannot be implemented without support for
input from multiple screens. We will revisit when necessary.
4. Moved the magnified border window into the window manager as a surface.
5. Moved the mediator APIs onto the window manager and the policy methods onto the
WindowManagerPolicy.
6. Implemented batch event processing for the accessibility input filter.
Change-Id: I4ebf68b94fb07201e124794f69611ece388ec116
1. The screen magnification feature was implemented entirely as a part of the accessibility
manager. To achieve that, the window manager had to implement a bunch of hooks for an
external client to observe its internal state. This was problematic since it dilutes
the window manager interface and allows code that is deeply coupled with the window
manager to reside outside of it. Also, the observer callbacks were IPCs which cannot
be called with the window manager's lock held. To avoid that, the window manager had
to post messages requesting notification of interested parties, which makes the code
consuming the callbacks run asynchronously from the window manager. This causes timing
issues and adds unnecessary complexity.
Now the magnification logic is split in two halves. The first half is responsible for
tracking the magnified portion of the screen and serving as a policy for which windows
can be magnified, and it is a part of the window manager. This part exposes higher-level
APIs allowing interested parties with the right permissions to control the magnification
of a given display. The APIs also allow a client to register for callbacks on
interesting changes such as a resize of the magnified region, etc. This part serves
as a mediator between magnification controllers and the window manager.
The second half is a controller that is responsible for driving the magnification
state based on touch interactions. It also presents a highlight when magnified to
suggest the magnified portion of the screen. The controller is responsible for auto
zooming out in case the user context changes - rotation, new activity. The controller
also auto pans if a dialog appears and does not intersect the magnified frame.
bug:7410464
2. By design, screen magnification and touch exploration work separately and together. If
magnification is enabled the user sees a larger version of the widgets and a subsection
of the screen content. Accessibility services use the introspection APIs to "see" what
is on the screen so they can speak it, navigate to the next item in response to a
gesture, etc. Hence, the information returned to accessibility services has to reflect
what a sighted user would see on the screen. Therefore, if the screen is magnified
we need to adjust the bounds and position of the infos describing views in a magnified
window such that the info bounds are equivalent to what the user sees.
To improve performance we keep accessibility node info caches in the client process.
However, when the magnification state changes we have to clear these caches since the
bounds of the cached infos no longer reflect the screen content, which just got smaller
or larger.
This patch propagates not only the window scale as before but also the X/Y pan and the
bounds of the magnified portion of the screen to the introspected app. This information
is used to adjust the bounds of the node infos coming from this window such that the
reported bounds are what the user sees, not what the app thinks they are. Note that
if magnification is enabled we zoom the content and pan it along the X and Y axes. Also
recomputed is the isVisibleToUser property of the reported infos, since in a magnified
state the user sees a subset of the window content and the views not in the magnified
viewport should be reported as not visible to the user.
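A minimal sketch of the client-side adjustment, assuming illustrative
names for the propagated scale, pan, and magnified bounds:

    import android.graphics.Rect;

    // Map a node's bounds from app coordinates to what the user sees
    // under the window scale and X/Y pan.
    void adjustBounds(Rect bounds, float scale, float panX, float panY) {
        bounds.left = (int) (bounds.left * scale + panX);
        bounds.top = (int) (bounds.top * scale + panY);
        bounds.right = (int) (bounds.right * scale + panX);
        bounds.bottom = (int) (bounds.bottom * scale + panY);
    }

    // isVisibleToUser is then recomputed against the magnified region:
    // only views intersecting the magnified viewport remain visible.
    boolean isVisibleToUser(Rect bounds, Rect magnifiedViewport) {
        return Rect.intersects(bounds, magnifiedViewport);
    }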
bug:7344059
Change-Id: I6f7832c7a6a65c5368b390eb1f1518d0c7afd7d2
1. In touch exploration mode the system clicks in the center of the
accessibility focus rectangle. However, if this rectangle is only
partially shown in the window or on the screen, the system may not
be able to perform the click (if the accessibility focus center
is not on the screen) or may click on the wrong window (if the
accessibility focus center is outside of the window).
This change clips the rectangle to the window bounds and the
display bounds. This ensures no clicks are sent to the wrong
window and no clicks are sent outside of the screen.
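A minimal sketch of the clipping (names are illustrative):

    import android.graphics.Rect;

    // Constrain the accessibility focus bounds to the window and the
    // display before computing the click point.
    Rect clickBounds = new Rect(accessibilityFocusBounds);
    if (!clickBounds.intersect(windowBounds)) {
        return; // no part of the focus is inside the window
    }
    if (!clickBounds.intersect(0, 0, displayWidth, displayHeight)) {
        return; // no part of the focus is on the screen
    }
    final int clickX = clickBounds.centerX();
    final int clickY = clickBounds.centerY();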
bug:7453839
Change-Id: I79f98971e7ebcbb391c37284467dc76076172c5f
1. The accessibility layer has a back door for UI test automation code
running from the shell to attach. The unregister code did an incorrect
identity check, and as a result the registered UI test automation
service was not disconnected until its process was killed. The fix is
super safe and simple.
bug:7409261
Change-Id: I4b1da18be6c5619dadd4a58fca6724529bc59dea
1. The active window is the one that the user touches or the one
that has input focus. We recognize the user touching a window
by the received accessibility hover events and the user not
touching the screen by a call from the touch explorer. It is
possible that the user touches a window that does not have
input focus; as soon as he lifts his finger the active window
becomes the one that has input focus, but then we get the
hover accessibility events from the touched window, which
incorrectly changes the active window back to the touched one.
Note that at this point the user is no longer touching the screen.
bug:7298484
Change-Id: Ife035a798a6e68133f9220eeeabdfcd35a431b56
1. We are showing a warning dialog if the user enables an accessibility
service that requests explore by touch. This dialog was shown only
for the owner but should be shown for the current user.
bug:7304437
Change-Id: I692b5112df16405e6d2e4890aafbfde79981f973
1. The active window is the one the user is touching or the one
that has input focus. It has to be made current immediately
after the user has stopped touching the screen, because if the
user types with the IME he should get feedback for the
letter typed in the text view which is in the input-focused
window. Note that we always deliver hover accessibility events
(they are a result of the user touching the screen), so changing
the active window before all hover accessibility events from
the touched window are delivered is fine.
bug:7296890
Change-Id: I1ae87c8419e2f19bd8eb68de084c7117c66894bc
1. Accessibility events for changes in the content of a given window, such as
click, focus, etc., are dispatched to clients only if they come from the
active window.
Events for changes in the state of a window, such as a window getting input
focus or a notification appearing, are always dispatched. The notification
events do not contain a source, so a client cannot introspect the notification
area (unless the user explicitly touches it, which generates hover events).
The events for a window getting input focus change the active window, so they
have to be dispatched.
Events that are a result of the user touching the screen, such as hover
enter, first touch, etc., should always be dispatched.
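An illustrative dispatch rule implementing the policy above (the helper
itself is a sketch, not the actual code, and the event types shown are
abbreviated):

    import android.view.accessibility.AccessibilityEvent;

    boolean shouldDispatch(AccessibilityEvent event, int activeWindowId) {
        switch (event.getEventType()) {
            // Window-state and user-touch events are always dispatched.
            case AccessibilityEvent.TYPE_WINDOW_STATE_CHANGED:
            case AccessibilityEvent.TYPE_NOTIFICATION_STATE_CHANGED:
            case AccessibilityEvent.TYPE_VIEW_HOVER_ENTER:
            case AccessibilityEvent.TYPE_VIEW_HOVER_EXIT:
                return true;
            default:
                // Content-change events only from the active window.
                return event.getWindowId() == activeWindowId;
        }
    }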
bug:7282006
Change-Id: I96b79189f8571285175d9660a22394cc84f39559
1. The accessibility layer announces user switches. Even though
the initial switch to the owner on a single-user device is a
valid user switch, we should not announce it for accessibility.
bug:7264693
Change-Id: Idf022fab6b74c84b7a96bc4ed7c7fee2b83029a6
1. A recently added check was preventing touch exploration from being
disabled when the last touch-exploring service was turned off.
As a consequence, enabling explore by touch was initializing the
input filter with both touch exploration and the screen
magnification feature, which had not been disabled.
bug:7256223
Change-Id: I9ed5457705d625805462e4d316b2c8a5af9aabca
1. If an accessibility service did not specify that it handles any
event types, it was never added to the list of services, yet
the system remained bound to it. Since the service is not in the
list of enabled services we never unbind it, hence it consumes
resources without doing anything. This is also semantically
incorrect because a service may not want to receive events while
handling only gestures.
bug:5648345
Change-Id: Id478a4704cdeeb1729330f6ae4b8ff9e06320952
1. This change adds a global gesture for enabling accessibility.
To enable this gesture the user has to allow it from the
accessibility settings or use the setup wizard to enable
accessibility. When the global gesture is enabled, the user
can long press on power to bring up the global actions dialog
and then hold with two fingers for a few seconds to enable
accessibility. Appropriate feedback is also provided.
2. The global gesture writes directly into the settings for
the current user if performed when the keyguard is not on. If
the keyguard is on and the current user has no accessibility
enabled, the gesture will temporarily enable accessibility
for the current user, i.e. no settings are changed, to allow
a blind user to log into his account. As soon as a user
switch happens, the new user's settings are inherited. If no
user change happens after temporarily enabling accessibility,
the temporary changes are undone when the keyguard goes
away and the device works as expected by the current user.
bug:6171929
3. The initialization code for the owner was not executed due
to a redundant check, thus putting the accessibility layer in
an inconsistent state which breaks pretty much everything.
bug:7240414
Change-Id: Ie7d7aba80f5867b7f88d5893b848b53fb02a7537
IStatusBarService.collapseQuickSettings is gone;
collapseNotifications is now collapsePanels, which does what
collapse() used to do. Similarly,
IStatusBar.animateCollapseQuickSettings is now simply
IStatusBar.animateCollapse().
Bug: 7245229
Change-Id: Id157d2fdf34926d3c85ffa8b81c741a5359aede4
1. Added APIs for opening the quick settings to the StatusBarManagerService
and the local StatusBarManager. The new APIs are protected by the old
EXPAND_STATUS_BAR permission.
Renamed the non-public expand* and collapse* APIs that expand the
notifications to expandNotifications*/collapseNotifications* to
better convey what they do, given that this change adds
expandQuickSettings* and collapseQuickSettings*.
Added a global action to the accessibility layer to expand the quick
settings, which calls into the new status bar manager APIs.
bug:7030487
Change-Id: Ic7b46e1a132f1c0d71355f18e7c5a9a2424171c3
1. The initial user was set to USER_NULL but some clients were registering
before the user change callback happened. Since the initial user is
the owner, the current user id now defaults to USER_OWNER.
2. The check for global clients and window connections was using the
calling UID, but there are processes that run on a per-user basis
as the system UID (Settings, for example). Now the check is stronger
and compares the caller's PID with that of the system process.
3. The code for finding the focused window id was not checking the
global window token list in addition to that of the current user.
4. The code updating the active window id was calling out into the
window manager with a lock held.
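A minimal sketch of the stronger check (the accessibility manager
service lives in the system process, so its own PID is the system PID):

    import android.os.Binder;
    import android.os.Process;

    boolean isCallerSystemProcess() {
        // Compare the caller's PID with the system process PID instead
        // of relying on the calling UID, which per-user processes such
        // as Settings can share.
        return Binder.getCallingPid() == Process.myPid();
    }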
bug:7224670
Change-Id: I9f4b7ea67eb5598b30ee7d1b68a1d3ce0cf8cfb4
1. The active window for accessibility purposes is either the
window the user is touching or the window that has input focus. We
were using the touch exploration gesture end event to figure out
when the user stops touching the screen so we can set the active
window to the input-focused one. However, we do not send such a
gesture end if the user does not touch explore. If the user only
taps, we do not consider this touch exploring. We now have dedicated
accessibility events for the first and last touch, and this change
uses them as a guide for when to update the active window.
bug:6523219
Change-Id: I6262c0c5f408b02dbaa127664e4b426935d7f81f
1. Since adb is restarted on user switch, it makes no sense to
try to reconnect the UI automation service since it will
be killed on a user switch.
We also disable touch exploration when the UI automation
service connects, since it can explicitly put the device in this
state if needed.
bug:6967373
Change-Id: I8cfde74f28f3f03d4ccf24746d43b8178ae2b5ef
1. This change converts the accessibility manager service to
maintain a state per user. When the user changes, the services
for the user that is going away are disconnected, the local
accessibility managers in the processes for this user are
disabled, the state is swapped with the new user's one, and
the new user state is refreshed.
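A minimal sketch of the switch flow; the names are assumed for
illustration, not taken from the real code:

    void switchUserLocked(int newUserId) {
        UserState oldUserState = getUserStateLocked(mCurrentUserId);
        unbindAllServicesLocked(oldUserState);    // disconnect services
        disableLocalManagersLocked(oldUserState); // old user's processes
        mCurrentUserId = newUserId;               // swap the state
        refreshStateLocked(getUserStateLocked(newUserId));
    }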
This change updates all calls into the system to use their
user-specific versions when applicable, for example registering
content observers, package monitors, calls into other system
services, etc.
There are some components that are shared across users, such
as UI created by the system process and the SystemUI package.
Such components are managed as a global state shared across
all users and are updated accordingly on a user switch. Since
the SystemUI runs in a normal app process, this change
adds hidden APIs on the local window manager to allow the
SystemUI to notify the accessibility layer that it will run
across users.
Calls to AccessibilityManager's isEnabled(), isTouchExplorationEnabled()
and sendAccessibilityEvent return false or are a no-op for a
background user, since it should not send accessibility events
and should not perform touch exploration.
Updated the internal accessibility tests for the changes in the
AccessibilityManager.
This change also fixes several issues that were encountered,
such as calling out of the accessibility manager service with a
lock held.
Removed some incorrect debugging code from the TouchExplorer
that was leading to a system crash.
bug:6967373
Change-Id: I2cf32ffdee1d827a8197ae4ce717dc0ff798b259
1. Currently the system fires accessibility events to announce the
start and end of a touch exploration gesture. However, such a
gesture starts after we have decided that the user is not
performing a gesture, which is achieved by measuring the speed of
movement during a threshold distance. This allows an accessibility
service to provide some feedback to the user so he knows that
he is touch exploring.
This change adds event types for the first and last touches
of the user. Note that the first touch does not coincide with
the start of a touch exploration gesture, since some time or
distance has to pass before we know whether the user explores
or gestures. However, it is very useful for an accessibility
service to know when the user starts to interact with the
touch screen so it can turn speech off, to name one
compelling use case.
This change also provides event types for the start and end
of gesture detection. If the user has moved over the threshold
distance with a speed greater than X, then the system detects
gestures. It is useful for an accessibility service to know the
beginning and end of gesture detection so it can provide a given
feedback type for such a gesture; say, it may produce haptic
feedback or a sound that differs from the one for touch
exploration.
The main benefit of announcing these new events is that an
accessibility service can provide feedback for each touch
state, allowing the user to always know what he is doing.
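For example, a service could react to the new event types roughly like
this (the helper methods are hypothetical):

    import android.view.accessibility.AccessibilityEvent;

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        switch (event.getEventType()) {
            case AccessibilityEvent.TYPE_TOUCH_INTERACTION_START:
                stopSpeech(); // hypothetical: user touched the screen
                break;
            case AccessibilityEvent.TYPE_GESTURE_DETECTION_START:
                playGestureFeedback(); // hypothetical: distinct feedback
                break;
            case AccessibilityEvent.TYPE_TOUCH_EXPLORATION_GESTURE_START:
                playExplorationFeedback(); // hypothetical
                break;
        }
    }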
bug:7166935
Change-Id: I26270d774cc059cb921d6a4254bc0aab0530c1dd
1. This change requires that an accessibility service be protected by
the system-defined BIND_ACCESSIBILITY_SERVICE permission.
bug:6507771
Change-Id: If5e16bb4fa97891be0ccbb35e343773712e33b98
This change is the initial check-in of the screen magnification
feature. This feature enables magnification of the screen via
global gestures (assuming it has been enabled from settings)
to allow a low vision user to efficiently use an Android device.
Interaction model:
1. Triple tap toggles permanent screen magnification, which magnifies
the area around the location of the triple tap. One can think of the
location of the triple tap as the center of the magnified viewport.
For example, a triple tap when not magnified would magnify the screen
and leave it in a magnified state. Triple tapping when magnified would
clear magnification and leave the screen in a not magnified state.
2. Triple tap and hold would magnify the screen if not magnified and enable
viewport dragging mode until the finger goes up. One can think of this
mode as a way to move the magnified viewport, since the area around the
moving finger will be magnified to fit the screen. For example, if the
screen was not magnified and the user triple taps and holds, the screen
would magnify and the viewport would follow the user's finger. When the
finger goes up, the screen zooms back out. If the same user interaction
is performed when the screen is magnified, the viewport movement is
the same but when the finger goes up the screen stays magnified.
In other words, the initial magnified state is sticky.
3. Pinching with any number of additional fingers when viewport dragging
is enabled, i.e. the user triple taps and holds, would adjust the
magnification scale which will become the current default magnification
scale. The next time the user magnifies the same magnification scale
would be used.
4. When in a permanent magnified state the user can use two or more fingers
to pan the viewport. Note that in this mode the content is panned as
opposed to the viewport dragging mode in which the viewport is moved.
5. When in a permanent magnified state the user can use three or more
fingers to change the magnification scale which will become the current
default magnification scale. The next time the user magnifies the same
magnification scale would be used.
6. The magnification scale will be persisted in settings and in the cloud.
Note: Since two fingers are used to pan the content in a permanently magnified
state, no other two-finger gestures in touch exploration or applications
will work unless the user zooms out to the normal state, where all gestures
work as expected. This is an intentional tradeoff to allow efficient
panning, since in a permanently magnified state this would be the dominant
action to be performed.
Design:
1. The window manager exposes APIs for setting an accessibility transformation,
which is a scale and offsets for the X and Y axes. The window manager queries
the window policy for which windows will not be magnified. For example,
the IME windows and the navigation bar are not magnified, including windows
that are attached to them.
2. Accessibility features such as screen magnification and touch
exploration are now implemented as a sequence of transformations on the
event stream. The accessibility manager service may request each
of these features or both. The behavior of a feature does not change
based on whether another one is enabled.
3. The screen magnifier keeps a viewport of the content that is magnified
which is surrounded by a glow in a magnified state. Interactions outside
of the viewport are delegated directly to the application without
interpretation. For example, a triple tap on the letter 'a' of the IME
would type three letters instead of toggling magnified state. The viewport
is updated on screen rotation and on window transitions. For example,
when the IME pops up the viewport shrinks.
4. The glow around the viewport is implemented as a special type of window
that does not take input focus, cannot be touched, and is laid out in
screen coordinates with width and height matching those of the screen.
When the magnified region changes, the root view of the window draws the
highlight, but the size of the window does not change - unless a rotation
happens. All changes of the viewport size, and showing or hiding it, are
animated.
5. The viewport is encapsulated in a class that knows how to show,
hide, and resize the viewport - potentially animating that.
This class uses the new animation framework for animations.
6. The magnification is handled by a magnification controller that
keeps track of the current transformation to be applied to the screen
content and the desired one. If these two are not the same, it is the
responsibility of the magnification controller to reconcile them by
potentially animating the transition from one to the other (see the
sketch after this list).
7. A display content observer watches for window transitions, screen
rotations, and requests for a rectangle on the screen to be shown. This
class is responsible for handling interesting state changes such
as changing the viewport bounds on IME pop up or screen rotation,
panning the content to make a requested rectangle visible on the
screen, etc.
8. To implement viewport updates, the window manager was updated with APIs
to watch for window transitions and for requests to show a rectangle
on the screen. These APIs are protected by a signature-level permission.
Also, a parcelable and poolable window info class has been added, with
APIs for getting the window info given a window token. This enables
getting some useful information about a window. These APIs are also
signature-protected.
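A minimal sketch of the reconciliation described in item 6; all names
are assumed, not taken from the real code:

    // The controller tracks the current and the desired transformation
    // (scale plus X/Y offsets) and animates between them on divergence.
    void setDesiredTransformation(float scale, float offsetX, float offsetY) {
        mDesiredSpec.set(scale, offsetX, offsetY);
        if (!mCurrentSpec.equals(mDesiredSpec)) {
            // Reconcile by animating current toward desired using the
            // new animation framework.
            animateTransformation(mCurrentSpec, mDesiredSpec);
        }
    }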
bug:6795382
Change-Id: Iec93da8bf6376beebbd4f5167ab7723dc7d9bd00
1. The window manager was not notifying a window when the latter
has been moved. This was causing incorrect coordinates of the
nodes reported to accessibility services. To work around that,
we carried the correct window location when making a
call from the accessibility layer into a window. Now the
window manager notifies the window when it is moved, and the
workaround is no longer needed. This change takes it out.
2. The left and right values in the attach info were not updated
properly after a report that the window has moved.
3. The accessibility manager service was calling methods directly
on the window manager service without going through the latter's
interface. This leads to unnecessary coupling and, in the
long run, increases system complexity and reduces maintainability.
bug:6623031
Change-Id: Iacb734b1bf337a47fad02c827ece45bb2f53a79d
1. There was a misspelled duplicate member in the accessibility service
class which was causing inconsistent behavior because one field was
updated and another checked.
2. When the set of services that can put the device in explore by touch
mode changed, we were disconnecting and reconnecting all services,
which is not correct. Now only the state of explore by touch is
updated appropriately.
bug:6798860
Change-Id: Ib3c119cef8e71c3458d56e4ce6fbde2c2f750dcd
1. The touch explorer was notified of accessibility events from
a binder thread, which was poking the touch explorer's internal
state, which by design is not thread safe. Since the touch
explorer is expected to run only on the main thread, the
accessibility manager service now delivers the accessibility
events to the explorer on that thread.
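A minimal sketch of the marshaling (the field and method names are
illustrative):

    import android.os.Handler;
    import android.os.Looper;
    import android.view.accessibility.AccessibilityEvent;

    private final Handler mMainHandler = new Handler(Looper.getMainLooper());

    // Called on a binder thread; hop to the main thread where the touch
    // explorer runs before touching its (not thread safe) state.
    void notifyTouchExplorer(final AccessibilityEvent event) {
        mMainHandler.post(new Runnable() {
            @Override
            public void run() {
                mTouchExplorer.onAccessibilityEvent(event);
            }
        });
    }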
bug:6635496
Change-Id: Ifdc5329e4be8e485d7f77f0fb472184494fa0d15
1. When typing into an auto-completion edit field, a list of completions pops up, and
if the user touch explores the list and tries to double tap to select the touched
completion, the latter is not selected.
The auto-completion is a popup that does not take input focus and is overlaid on
top of the window that has input focus. The touch explorer was clicking on the
location of the accessibility focus if the last touch-explored location was within
the bounds of the active window. In this case that was the window with the edit
text into which the user is typing. The check performed by the touch explorer
was missing the case where the last touch-explored location was within the bounds
of the active window but was actually delivered to another, overlaid window.
Now we poke the accessibility focus location only if the last explored
location is within the active window and was delivered to it.
bug:6629535
Change-Id: Ie66d5bb81ab021f2bb0414339b7de26d96826191