1. Since adb is restarted on a user switch, it makes no sense to
try to reconnect the UI automation service since it will
be killed by the switch anyway.
Also disabling touch exploration on UI automation service
connect, since the service can explicitly put the device in this
state if needed.
bug:6967373
Change-Id: I8cfde74f28f3f03d4ccf24746d43b8178ae2b5ef
1. This change converts the accessibility manager service to
maintain a state per user. When the user changes, the services
for the user that is going away are disconnected, the local
accessibility managers in the processes for this user are
disabled, the state is swapped with the new user's one, and
the new user state is refreshed.
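A minimal sketch of the per-user bookkeeping described above;
UserState, the field names, and onUserSwitched() are illustrative,
not the actual members of AccessibilityManagerService:

    import android.util.SparseArray;

    class PerUserStateSketch {
        static class UserState { /* enabled services, bindings, settings */ }

        private final SparseArray<UserState> mUserStates =
                new SparseArray<UserState>();
        private int mCurrentUserId;

        void onUserSwitched(int newUserId) {
            // Disconnect the services of the user that is going away and
            // disable the local accessibility managers in its processes
            // (elided), then swap in (or lazily create) the new state.
            UserState newState = mUserStates.get(newUserId);
            if (newState == null) {
                newState = new UserState();
                mUserStates.put(newUserId, newState);
            }
            mCurrentUserId = newUserId;
            // Refresh the new user's state: re-read settings, rebind
            // the enabled services.
        }
    }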
This change updates all calls into the system to use their
user specific versions when applicable. For example, registering
content observers, package monitors, calls into other system
services, etc.
There are some components that are shared across users such
as UI created by the system process and the SystemUI package.
Such components are managed as a global state shared across
all users and are updated accordingly on a user switch. Since
the SystemUI is running in a normal app process this change
adds hidden APIs on the local window manager to allow the
SystemUI to notify the accessibility layer that it will run
across users.
Calls to AccessibilityManager's isEnabled(), isTouchExplorationEnabled()
and sendAccessibilityEvent return false or are a no-op for a
background user, since such a user should not send accessibility
events and should not perform touch exploration.
Update the internal accessibility tests due to changes in the
AccessibilityManager.
This change also fixes several issues that were encountered
such as calling out of the accessibility manager service with a
lock held.
Removed some incorrect debugging code from the TouchExplorer
that was leading to a system crash.
bug:6967373
Change-Id: I2cf32ffdee1d827a8197ae4ce717dc0ff798b259
1. Currently the system fires accessibility events to announce the
start and end of a touch exploration gesture. However, such a
gesture starts only after we have decided that the user is not
performing a gesture, which is achieved by measuring speed of
movement within a threshold distance. This allows an accessibility
service to provide some feedback to the user so the user knows
that they are touch exploring.
This change adds event types for the first and last touches
of the user. Note that the first touch does not coincide with
the start of a touch exploration gesture since some time
or distance has to pass before we know whether the user explores
or gestures. However, it is very useful for an accessibility
service to know when the user starts to interact with the
touch screen so it can turn the speech off, to name one
compelling use case.
This change also provides event types for the start and end
of gesture detection. If the user has moved over the threshold
with a speed greater than X, then the system detects gestures.
It is useful for an accessibility service to know the beginning
and end of gesture detection so it can provide a given feedback
type for such a gesture, say haptic feedback
or a sound that differs from the one for touch exploration.
The main benefit of announcing these new events is that an
accessibility service can provide feedback for each touch
state, allowing the user to always know what they are doing.
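In the final public APIs these event types surface as constants on
AccessibilityEvent; a sketch of how a service might consume them
(the feedback helpers are hypothetical):

    import android.accessibilityservice.AccessibilityService;
    import android.view.accessibility.AccessibilityEvent;

    public class FeedbackServiceSketch extends AccessibilityService {
        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) {
            switch (event.getEventType()) {
                case AccessibilityEvent.TYPE_TOUCH_INTERACTION_START:
                    stopSpeech(); // hypothetical: user touched the screen
                    break;
                case AccessibilityEvent.TYPE_GESTURE_DETECTION_START:
                    playGestureEarcon(); // hypothetical: detection began
                    break;
                case AccessibilityEvent.TYPE_TOUCH_INTERACTION_END:
                    resumeSpeech(); // hypothetical: interaction is over
                    break;
            }
        }

        @Override
        public void onInterrupt() { /* stop feedback */ }

        private void stopSpeech() {}
        private void playGestureEarcon() {}
        private void resumeSpeech() {}
    }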
bug:7166935
Change-Id: I26270d774cc059cb921d6a4254bc0aab0530c1dd
1. This change enforces that an accessibility service require the
system defined BIND_ACCESSIBILITY_SERVICE permission.
bug:6507771
Change-Id: If5e16bb4fa97891be0ccbb35e343773712e33b98
This change is the initial check in of the screen magnification
feature. This feature enables magnification of the screen via
global gestures (assuming it has been enabled from settings)
to allow a low vision user to efficiently use an Android device.
Interaction model:
1. Triple tap toggles permanent screen magnification, magnifying
the area around the location of the triple tap. One can think of the
location of the triple tap as the center of the magnified viewport.
For example, a triple tap when not magnified would magnify the screen
and leave it in a magnified state. A triple tap when magnified would
clear magnification and leave the screen in a not magnified state.
(A minimal sketch of the tap detection follows this list.)
2. Triple tap and hold would magnify the screen if not magnified and enable
viewport dragging mode until the finger goes up. One can think of this
mode as a way to move the magnified viewport since the area around the
moving finger will be magnified to fit the screen. For example, if the
screen was not magnified and the user triple taps and holds, the screen
magnifies and the viewport follows the user's finger. When the
finger goes up, the magnification is cleared. If the same user interaction
is performed when the screen is magnified, the viewport movement will
be the same but when the finger goes up the screen will stay magnified.
In other words, the initial magnified state is sticky.
3. Pinching with any number of additional fingers when viewport dragging
is enabled, i.e. the user triple tapped and holds, would adjust the
magnification scale which will become the current default magnification
scale. The next time the user magnifies the same magnification scale
would be used.
4. When in a permanent magnified state the user can use two or more fingers
to pan the viewport. Note that in this mode the content is panned as
opposed to the viewport dragging mode in which the viewport is moved.
5. When in a permanent magnified state the user can use three or more
fingers to change the magnification scale which will become the current
default magnification scale. The next time the user magnifies the same
magnification scale would be used.
6. The magnification scale will be persisted in settings and in the cloud.
Note: Since two fingers are used to pan the content in a permanently magnified
state, no other two finger gestures in touch exploration or applications
will work unless the user zooms out to the normal state where all gestures
work as expected. This is an intentional tradeoff to allow efficient
panning since in a permanently magnified state this would be the dominant
action to be performed.
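A minimal sketch of the triple tap detection from point 1 of the
interaction model; MULTI_TAP_TIMEOUT_MS and toggleMagnification()
are illustrative names, not the actual implementation:

    class TripleTapDetectorSketch {
        private static final long MULTI_TAP_TIMEOUT_MS = 250; // assumed value
        private int mTapCount;
        private long mLastTapTimeMs;

        // Called for each completed tap with its event time.
        void onTap(long eventTimeMs) {
            if (eventTimeMs - mLastTapTimeMs > MULTI_TAP_TIMEOUT_MS) {
                mTapCount = 0; // too slow - start counting from scratch
            }
            mLastTapTimeMs = eventTimeMs;
            if (++mTapCount == 3) {
                mTapCount = 0;
                toggleMagnification(); // hypothetical: magnify/clear around the tap
            }
        }

        private void toggleMagnification() {}
    }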
Design:
1. The window manager exposes APIs for setting accessibility transformation
which is a scale and offsets for X and Y axis. The window manager queries
the window policy for which windows will not be magnified. For example,
the IME windows and the navigation bar, including windows attached to
them, are not magnified.
2. The accessibility features such as screen magnification and touch
exploration are now implemented as a sequence of transformations on the
event stream. The accessibility manager service may request each
of these features or both. The behavior of each feature does not change
based on whether another one is enabled.
3. The screen magnifier keeps a viewport of the content that is magnified
which is surrounded by a glow in a magnified state. Interactions outside
of the viewport are delegated directly to the application without
interpretation. For example, a triple tap on the letter 'a' of the IME
would type three letters instead of toggling magnified state. The viewport
is updated on screen rotation and on window transitions. For example,
when the IME pops up the viewport shrinks.
4. The glow around the viewport is implemented as a special type of window
that does not take input focus, cannot be touched, and is laid out in
screen coordinates with width and height matching those of the screen.
When the magnified region changes, the root view of the window draws the
highlight but the size of the window does not change - unless a rotation
happens. All changes in the viewport size or showing or hiding it are
animated.
5. The viewport is encapsulated in a class that knows how to show,
hide, and resize the viewport - potentially animating that.
This class uses the new animation framework for animations.
6. The magnification is handled by a magnification controller that
keeps track of the current transformation applied to the screen
content and the desired one. If these two are not the same, it is the
responsibility of the magnification controller to reconcile them by
potentially animating the transition from one to the other (see the
sketch after this list).
7. A display content observer watches for window transitions, screen
rotations, and requests to make a rectangle on the screen visible. This
class is responsible for handling interesting state changes such
as changing the viewport bounds on IME pop up or screen rotation,
panning the content to make a requested rectangle visible on the
screen, etc.
8. To implement viewport updates the window manager was updated with APIs
to watch for window transitions and for when a rectangle has been requested
on the screen. These APIs are protected by a signature level permission.
Also a parcelable and poolable window info class has been added with
APIs for getting the window info given the window token. This enables
getting some useful information about a window. These APIs are also
signature protected.
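A rough sketch of the reconciliation in design point 6, using the new
animation framework; the class and the apply step are illustrative:

    import android.animation.ValueAnimator;

    class MagnificationControllerSketch {
        private float mCurrentScale = 1.0f; // transformation currently applied
        private float mTargetScale = 1.0f;  // desired transformation

        void setScale(float scale, boolean animate) {
            mTargetScale = scale;
            if (!animate) {
                mCurrentScale = scale;
                apply();
                return;
            }
            ValueAnimator animator =
                    ValueAnimator.ofFloat(mCurrentScale, mTargetScale);
            animator.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
                @Override
                public void onAnimationUpdate(ValueAnimator animation) {
                    mCurrentScale = (Float) animation.getAnimatedValue();
                    apply();
                }
            });
            animator.start();
        }

        private void apply() {
            // Push the scale (and similarly the X/Y offsets) to the window
            // manager's accessibility transformation APIs (design point 1).
        }
    }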
bug:6795382
Change-Id: Iec93da8bf6376beebbd4f5167ab7723dc7d9bd00
1. The window manager was not notifying a window when the latter
had been moved. This was causing incorrect coordinates of the
nodes reported to accessibility services. To work around that
we carried the correct window location when making a
call from the accessibility layer into a window. Now the
window manager notifies the window when it is moved and the
workaround is no longer needed. This change takes it out.
2. The left and right in the attach info were not updated properly
after a report that the window has moved.
3. The accessibility manager service was calling directly methods
on the window manager service without going through the interface
of the latter. This leads to unnecessary coupling and in the
long run increases system complexity and reduces maintainability.
bug:6623031
Change-Id: Iacb734b1bf337a47fad02c827ece45bb2f53a79d
1. There was a misspelled duplicate member in the accessibility service
class which was causing inconsistent behavior because one field was
updated and another checked.
2. When the set of services that can put the device in explore by touch
mode changes we were disconnecting and reconnecting all services
and this is not correct. Now only the state of explore by touch is
updated appropriately.
bug:6798860
Change-Id: Ib3c119cef8e71c3458d56e4ce6fbde2c2f750dcd
1. The touch explorer was notified about accessibility events from
a binder thread, which was poking the internal state of the
explorer, which by design is not thread safe. Since the touch
explorer is expected to run only on the main thread,
the accessibility manager service now delivers the accessibility
events to the explorer on that thread.
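A sketch of the fix pattern: events arriving on a binder thread are
posted to the main looper before the explorer's state is touched (the
TouchExplorer stand-in is illustrative):

    import android.os.Handler;
    import android.os.Looper;
    import android.view.accessibility.AccessibilityEvent;

    class MainThreadDeliverySketch {
        interface TouchExplorer { // stand-in for the real class
            void onAccessibilityEvent(AccessibilityEvent event);
        }

        private final Handler mMainHandler = new Handler(Looper.getMainLooper());
        private final TouchExplorer mTouchExplorer;

        MainThreadDeliverySketch(TouchExplorer explorer) {
            mTouchExplorer = explorer;
        }

        // Invoked on an arbitrary binder thread.
        public void onAccessibilityEvent(final AccessibilityEvent event) {
            mMainHandler.post(new Runnable() {
                @Override
                public void run() {
                    // Now on the main thread - safe to poke explorer state.
                    mTouchExplorer.onAccessibilityEvent(event);
                }
            });
        }
    }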
bug:6635496
Change-Id: Ifdc5329e4be8e485d7f77f0fb472184494fa0d15
1. When typing into an auto completion edit field, a list of completions pops up. If
the user touch explores the list and tries to double tap to select the touched
completion, the latter is not selected.
The auto completion is a popup that does not take input focus and is overlaid on
top of the window that has input focus. The touch explorer was clicking on the
location of the accessibility focus if the last touch explored location is within
the bounds of the active window. In this case this was the window with the edit
text into which the user is typing. The check performed by the touch explorer
was missing the case when the last touch explored location was within the bounds
of the active window but the event was actually delivered to another overlaid
window. Now we poke on the accessibility focus location only if the last explored
location is within the active window and was delivered to it.
bug:6629535
Change-Id: Ie66d5bb81ab021f2bb0414339b7de26d96826191
1. If the last touch explored location is within the active window we
used to click on the exact location if it is within the accessibility
focus, otherwise on the accessibility focus center. If the last touch
explored location is not within the active window we used to just
click there. This breaks in the case where one has touch explored
at a given place in the current window and now a dialog opens *not*
covering the touch explored location. If one uses swipes to move
accessibility focus, i.e. to traverse the dialog without touching
it, one cannot activate anything because the touch explorer is using
the last touch explored location that is outside of the active
window, e.g. the dialog.
The solution is to clear the last touch explored location when a
window opens or accessibility focus moves. If the last touch
explored location is null we are clicking in the accessibility
focus location.
bug:6620911
2. There is a bug in the window manager whereby a window is not notified
that its location has changed (bug:6623031). This breaks
accessibility interaction with dialogs that have input because
when the IME is up the dialog is moved but not notified. Now
the accessibility layer gets incorrect location for the
accessibility focus and the window bounds.
The solution is for the accessibility manager service, when it calls
into the remote process to obtain some accessibility node infos,
to pass the window left and top which it gets from the
window manager. These values are used to update the attach info
window left and top so all accessibility node infos emitted
from that window have correct bounds in screen coordinates.
bug:6620796
Change-Id: I18914f2095c55cfc826acf5277bd94b776bda0c8
1. The global action to open recent apps shows the old dialog style recent
apps panel. Apparently the key code to open recent apps is not opening the new
UI so the AccessibilityManagerService is calling directly the method on
the IStatusBarService to do so.
bug:6607664
Change-Id: I94c1963b07947776bf1c2448903b26f3603f9a59
1. Touch exploration gestures are demarcated by start and end
events. Due to a bug in the AccessibilityManagerService
the gesture end event was not dispatched. This caused the
AccessibilityNodeInfoCache to be out of sync since it relies
on getting such events, not to mention that the clients were
getting only the start but not the end event. The issue
was that the notified service types variable was not reset
after every event, so when the manager sent the last hover
exit it flagged the service type as already notified,
resulting in the following gesture end event being dropped
on the floor.
bug:6539306
Change-Id: I2b96bcecea3b2240199d67f01afa6a033afce1de
1. Now we are asking the user to grant permission to the service to enable
touch exploration only the first time this service is enabled. If the
service was uninstalled and then later installed we ask the user again.
This avoids the scenario in which rebooting the device or upgrading an
accessibility service leaves the device in a state with which the user
cannot interact.
bug:6582088
Change-Id: I51d24e4892b3b48c9fb11dfb09ec1118502ba526
1. If a UI test automation accessibility service is connected to the
system we postpone state updates in the AccessibilityManagerService
until the UI automation service dies or is disconnected.
bug:6540522
Change-Id: I48ddf603b53d2158a00edcf8ad05cfe2575d4d75
1. We are passing the interrogating process id in the remote
accessibility requests so we can catch queries coming from the same
thread. While all other methods were doing this correctly,
somehow perform action was using the incorrect process id.
bug:6534935
Change-Id: Icef50833903c562758d51ef316b60c53c7a336c0
1. The internal service instance created by AccessibilityManagerService
was getting the looper of the current thread when created. This works
for real accessibility services, but since the UI automation service is
registered via IPC, the binder thread has no looper. Now we explicitly
get the correct looper.
bug:6535435
Change-Id: I63a2ada1b65c4b3c71c3d1e6deb3dfdeb7a3d6d6
1. Now the user has to double tap to activate the last
item. If the last touched window is not active because
it does not take input focus, the click is on the last
touch explored location. Otherwise the click is on the
accessibility focus location.
bug:5932640
Change-Id: Ibb7b97262a7c5f2f94abef429e02790fdc91a8dd
1. Every accessibility service targeting JellyBean or higher has
to request a special permission for the system to bind to it.
Change-Id: I6e579326bdf3597f148d6c67317455701ec8af68
1. Since the API version has been finalized this change
updates the SDK version checks to use the JellyBean
version number.
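For example, a check that previously compared against a raw number can
now use the finalized constant (a sketch):

    import android.content.pm.ApplicationInfo;
    import android.os.Build;

    // Sketch: gate JellyBean-only behavior on the finalized constant.
    boolean targetsJellyBeanOrHigher(ApplicationInfo appInfo) {
        return appInfo.targetSdkVersion >= Build.VERSION_CODES.JELLY_BEAN;
    }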
bug:5947249
Change-Id: Ie22fa7e18a7ea7b0c7077d80246a26c17f327ceb
1. The initial design was to have some accessibility gestures
handled by the system if the gesture handling accessibility
service does not consume the gesture. However, we are not
sure what a good default is, and once we add a default handler
we cannot remove it since people may rely on it. Thus, we
take the simplest approach and let the accessibility service
handle the gestures. If no gestures are handled, the system
will work in explore by touch as before.
bug:5932640
Change-Id: I865a83549fa03b0141d27ce9713e9b7bb45a57b4
1. Scrolling actions are crucial for enabling gesture based
traversal of the UI, specifically of scrollable containers,
especially lists and anything backed by an adapter. Since
accessibility focus can land only on attached views, it cannot
visit views for adapter items not shown on the screen.
Auto scrolling the list as a result of putting accessibility focus
on a list item does not work well since the user may get
trapped in a long list. Adding an accessibility node provider
to emit virtual views for one view before the first and one
after the last is complex and suffers the same limitation of trapping
the user. Accessibility services need explicit scroll actions
which may be performed upon an explicit user action (see the
sketch below). Hence, the user is informed about reaching the
start/end of the visible part of the list and makes a deliberate
choice to scroll. This will also benefit people developing Braille
devices since they can scroll the content without telling the user
to stop using the Braille controller, take the device out of their
pocket to scroll, and go back to the Braille controller.
NOTE: Without these actions large portions of the screen will be
hard to access since users will have to touch and explore to
find and scroll the list.
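A sketch of a service using the new actions, assuming it already
holds a scrollable node (the action constant is the real
AccessibilityNodeInfo one):

    import android.view.accessibility.AccessibilityNodeInfo;

    // Sketch: scroll a list forward in response to a deliberate
    // user action.
    boolean scrollForward(AccessibilityNodeInfo node) {
        if (node != null && node.isScrollable()) {
            return node.performAction(
                    AccessibilityNodeInfo.ACTION_SCROLL_FORWARD);
        }
        return false;
    }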
Change-Id: Iafcf54d4967893205872b3649025a4e347a299ed
1. Delegating activation gestures has several issues that we should
decide how to handle if possible before allowing an accessibility
service to take over them:
A) Every view that can be clicked or long pressed would need to
react to performClick and performLongClick calls, which is not
necessarily true since the view may watch the touch
events and do its own click/long click detection. As a result
there may be views a user cannot interact with in
touch exploration mode but can interact with outside that mode.
B) Clicking or long pressing on a different location in a view may yield
different results, for example NumberPicker. Ideally such views should
implement AccessibilityNodeProvider, whose provider correctly handles
click/long click requests on virtual nodes. Some apps however
just fire different hover accessibility events when the user is over
a specific semantic portion of the view but do not provide virtual
nodes. Hence, a user would not be able to interact with such semantic
regions, but the system can achieve that by sending the click/long click
at the precise location in the view that was last touch explored.
2. Adding a flag on accessibility service info to request explore by touch
mode (see the sketch after this list). There is no need to put the device
in this mode if none of the currently enabled accessibility services
supports it. Now the problem is inverted and the service has to
explicitly state its capability.
3. Fixing a bug where includeImportantViews was ignored for automation
services.
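A sketch of the opt-in from point 2; in the final public APIs the
capability surfaces as FLAG_REQUEST_TOUCH_EXPLORATION_MODE on
AccessibilityServiceInfo:

    import android.accessibilityservice.AccessibilityService;
    import android.accessibilityservice.AccessibilityServiceInfo;
    import android.view.accessibility.AccessibilityEvent;

    public class TouchExploringServiceSketch extends AccessibilityService {
        @Override
        protected void onServiceConnected() {
            // Explicitly state the capability to put the device in
            // explore by touch mode.
            AccessibilityServiceInfo info = getServiceInfo();
            info.flags |= AccessibilityServiceInfo.FLAG_REQUEST_TOUCH_EXPLORATION_MODE;
            setServiceInfo(info);
        }

        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) {}

        @Override
        public void onInterrupt() {}
    }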
Change-Id: I3b29a19f24ab5e26ee29f974bbac2197614c9e2a
This problem was introduced in I74df9c24. The intention of the
change was to still let UiTestAutomationBridge see the
non-important views, but there were bugs in the implementation:
1. AccessibilityManagerService was not really updating
mIncludeNotImportantViews when mIsAutomation is true.
2. A wrong constant was used to set the flag.
Change-Id: Ia0a2e9ed9720bd0ea3a563e0b492e870a6ec1586
1. Since we are using a stateless proxy accessibility service to
perform default accessibility gesture handling, it should not
operate against non-important views.
bug:6422069
Change-Id: I74df9c2415ab3b164d9ac5873f7004c0459e2bfa
1. Changed all references to granularity to movement
granularity. BTW, to be more precise it should be
text movement granularity.
bug:6435232
Change-Id: If6366b002ca3390f74918995b342baff2cbcfd01
1. The event of setting an accessibility focus on a view should not
make the host window the currently active one.
bug:6400648
Change-Id: Ib45c255f441c38489ee9d4ab5f284550ac5f6b01
1. The checks for action arguments are removed since they
may cause trouble for developers if we add more arguments to
an action.
bug:6414006
Change-Id: Ia4212b52be183b1ef1cfd2561ce618cef2b015e4
1. The granularities for traversing the text content of an accessibility
node info are now predefined constants and custom ones will not be
supported. This is the simplest solution - we can always add namespaced
user defined ones (unlikely).
2. Added actions for traversing web content (see the sketch after
this list). These actions can be used by an accessibility service to
transparently drive the JavaScript based screen reader that is used
for handling web content.
3. Added a new accessibility event type for traversing the content of a
view. This event is needed to announce to the user what is the next
element, i.e. the one next to the cursor, after the view's text was
traversed.
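A sketch of driving web traversal with the actions from point 2,
using the constants as they appear in the final AccessibilityNodeInfo
APIs; the element name is just an example:

    import android.os.Bundle;
    import android.view.accessibility.AccessibilityNodeInfo;

    // Sketch: move to the next HTML button inside web content.
    boolean nextButton(AccessibilityNodeInfo webNode) {
        Bundle args = new Bundle();
        args.putString(
                AccessibilityNodeInfo.ACTION_ARGUMENT_HTML_ELEMENT_STRING,
                "BUTTON");
        return webNode.performAction(
                AccessibilityNodeInfo.ACTION_NEXT_HTML_ELEMENT, args);
    }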
bug:5932640
bug:6389591
Change-Id: I144647da55bc4005c64f89865ef333af8359e145
1. The accessibility focus directions are not needed since an
accessibility service can just get the root, first child, next
sibling, or previous sibling and execute the action that
gives it accessibility focus. Now the accessibility node
info tree is properly ordered taking into account layout
direction, both for layout managers that we report
and for ones that we have determined as not important for
accessibility. Also the positions of node infos are ordered
properly based on their coordinates after all transformations,
as opposed to child index.
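With the tree ordered this way, a service can walk it directly and
use the focus action, for example (a sketch):

    import android.view.accessibility.AccessibilityNodeInfo;

    // Sketch: put accessibility focus on the first child of a node.
    boolean focusFirstChild(AccessibilityNodeInfo root) {
        if (root.getChildCount() == 0) {
            return false;
        }
        AccessibilityNodeInfo child = root.getChild(0);
        return child != null && child.performAction(
                AccessibilityNodeInfo.ACTION_ACCESSIBILITY_FOCUS);
    }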
bug:5932640
Change-Id: I994a8297cb1e57c829ecbac73a937c2bcbe0bac7
1. A view that creates an accessibility node info may add to the info
a list of granularity labels. These are granularities by which the
source view can iterate over its content. For example a text view
may support character, word, and link while a web view may additionally
support buttons, tables, etc. There are actions on accessibility
node info to go to the next/previous at a given granularity which
is passed as an argument (see the sketch after this list).
2. Added Bundle argument to the APIs for performing accessibility
actions. This is generic and extensible.
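A sketch combining the two: the granularity is passed in the Bundle
argument of the action, with the constants as they ended up in the
final AccessibilityNodeInfo APIs (where granularities are int
constants rather than labels):

    import android.os.Bundle;
    import android.view.accessibility.AccessibilityNodeInfo;

    // Sketch: move the cursor forward by one word in the node's text.
    boolean nextWord(AccessibilityNodeInfo node) {
        Bundle args = new Bundle();
        args.putInt(
                AccessibilityNodeInfo.ACTION_ARGUMENT_MOVEMENT_GRANULARITY_INT,
                AccessibilityNodeInfo.MOVEMENT_GRANULARITY_WORD);
        return node.performAction(
                AccessibilityNodeInfo.ACTION_NEXT_AT_MOVEMENT_GRANULARITY, args);
    }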
bug:5932640
Change-Id: I328cbbb4cddfdee082ab2a8b7ff1bd7477d8d6f9
1. The gesture dispatcher thread was not waiting in a loop
that checks for complete initialization; therefore it was
susceptible to missed signals and unexpected interrupts (see
the sketch below).
2. In the gesture processing message handler, the interaction id
was read from the wrong message argument.
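The canonical shape of the fix for point 1 (a sketch; the lock and
flag names are illustrative):

    class InitWaitSketch {
        private final Object mLock = new Object();
        private boolean mInitialized; // set to true, then mLock.notifyAll()

        void awaitInitialization() {
            synchronized (mLock) {
                // wait() must sit in a loop that re-checks the condition
                // so spurious wakeups, interrupts, and missed signals are
                // all harmless.
                while (!mInitialized) {
                    try {
                        mLock.wait();
                    } catch (InterruptedException ie) {
                        /* ignore and re-check the condition */
                    }
                }
            }
        }
    }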
bug:5932640
Change-Id: Ic65ecc01a7fe7d43929c6c07d0759ae9001cf515
1. An accessibility service has to explicitly opt in to be notified
for gestures by the system. There is only one accessibility service
that handles gestures and in case it does not handle a gesture
the system performs default handling. This default handling ensures
that we have gesture navigation even if no accessibility service
would like to participate/customize the interaction model.
bug:5932640
Change-Id: Id8194293bd94097b455e9388b68134a45dc3b8fa
1. Added more gestures for accessibility. After a meeting
with the access-eng team we decided that the current
set of gestures may be smaller than needed, considering
that we will use four gestures for home, back, recents,
and notifications.
2. Adding actions for going back, home, opening the recents,
and opening the notifications.
3. Added a preliminary mapping from some of the new gestures
to the new actions (see the sketch after this list).
4. Fixed a bug in the accessibility interaction controller
which was trying to create a handler on the main looper
thread, which may be null if the queried UI is in the
system process. Now the context looper of the root view
is used.
5. Fixed a bug of using an incorrect constant.
6. Added a missing locking in a couple of places.
7. Fixed view comparison for accessibility since it was
not antisymmetric.
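A sketch of the gesture-to-action mapping from points 1-3, using the
constants as they appear in the final AccessibilityService APIs; the
particular pairing is illustrative since the mapping was preliminary:

    import android.accessibilityservice.AccessibilityService;
    import android.view.accessibility.AccessibilityEvent;

    public class GlobalGestureServiceSketch extends AccessibilityService {
        @Override
        protected boolean onGesture(int gestureId) {
            switch (gestureId) {
                case GESTURE_SWIPE_UP:
                    return performGlobalAction(GLOBAL_ACTION_HOME);
                case GESTURE_SWIPE_DOWN:
                    return performGlobalAction(GLOBAL_ACTION_NOTIFICATIONS);
                case GESTURE_SWIPE_LEFT:
                    return performGlobalAction(GLOBAL_ACTION_BACK);
                case GESTURE_SWIPE_RIGHT:
                    return performGlobalAction(GLOBAL_ACTION_RECENTS);
            }
            return false; // unhandled - fall back to default processing
        }

        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) {}

        @Override
        public void onInterrupt() {}
    }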
bug:5932640
bug:5605641
Change-Id: Icc983bf4eafefa42b65920b3782ed8a25518e94f
Usefulness: Keep track of the user's current location on the screen when
            traversing it, enabling structural and directional
            navigation over all elements on the screen. This enables
            blind users who know the application layout to efficiently
            locate desired elements, as opposed to touch exploring the
            region where the element should be - very tedious.
Rationale: There are two ways to implement accessibility focus. One is
to let accessibility services keep track of it since they
have access to the screen content, and another to let the view
hierarchy keep track of it. While the first approach would
require almost no work on our part it poses several challenges
which make it a sub-optimal choice. Having the accessibility focus
in the accessibility service would require that service to scrape
the window content every time it changes to sync the view tree
state and the accessibility focus location. Pretty much the service
will have to keep an off screen model of the screen content. This
could be quite challenging to get right and would incur performance
cost for the multiple IPCs to repeatedly fetch the screen content.
Further, keeping virtual accessibility focus (i.e. in the service)
would require sync of the input and accessibility focus. This could
be challenging to implement right as well. Also, with an unlimited
number of accessibility services we cannot guarantee that they will
have a proper implementation, if any, to allow users to perform structural
navigation of the screen content. Assuming two accessibility
services implement structural navigation via accessibility focus,
there is no guarantee that they will behave similarly by default,
i.e. provide some standard way to navigate the screen content.
Also feedback from experienced accessibility researchers, specifically
T.V. Raman, provides evidence that having virtual accessibility focus
creates many issues and it is very hard to get right.
Therefore, keeping accessibility focus in the system will avoid
keeping an off-screen model in accessibility services, it will always
be in sync with the state of the view hierarchy and the input focus.
Also this will allow having a default behavior for traversing the
screen via this accessibility focus that is consistent in all
accessibility services. We provide accessibility services with APIs to
override this behavior but all of them will perform screen traversal
in a consistent way by default.
Behavior: If accessibility is enabled the accessibility focus is the leading one
and the input follows it. Putting accessibility focus on a view moves
the input focus there. Clearing the accessibility focus of a view, clears
the input focus of this view. If accessibility focus is on a view that
cannot take input focus, then no other view should have input focus.
In accessibility mode we initially give accessibility focus to the topmost
view and no view has input focus. This ensures consistent behavior across
all apps. Note that accessibility focus can move hierarchically in the
view tree and having it at the root is better than putting it where the
input focus would be - at the first input focusable which could be at
an arbitrary depth in the view tree. By default not all views are reported
for accessibility, only the important ones. A view may be explicitly labeled
as important or not for accessibility, or the system determines which one
is such - the default. Views important for accessibility are all views that are
not dumb layout managers used only to arrange their children. Since the same
content arrangement can be obtained via different combinations of layout
managers, such managers cannot be used to reliably determine the application
structure. For example, a user should see a list as a list view with several
list items and each list item as a text view and a button as opposed to seeing
all the layout managers used to arrange the list item's content.
By default only views important for accessibility are regarded for accessibility
purposes. Views not regarded for accessibility neither fire accessibility events
nor are reported as being on the screen. An accessibility service may request the
system to regard all views. If the target SDK of an accessibility service is
less than JellyBean, then all views are regarded for accessibility.
Note that an accessibility service that requires all views to be regarded for
accessibility may put accessibility focus on any view. Hence, it may implement
any navigational paradigm if desired, especially considering the fact that
the system detects some standard gestures and delegates their processing
to an accessibility service. The default implementation of an accessibility
service performs the default navigation.
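In the final public APIs the request to regard all views surfaces as
a flag on AccessibilityServiceInfo; a sketch:

    import android.accessibilityservice.AccessibilityService;
    import android.accessibilityservice.AccessibilityServiceInfo;

    // Sketch: typically called from a service's onServiceConnected().
    void requestAllViews(AccessibilityService service) {
        AccessibilityServiceInfo info = service.getServiceInfo();
        info.flags |= AccessibilityServiceInfo.FLAG_INCLUDE_NOT_IMPORTANT_VIEWS;
        service.setServiceInfo(info);
    }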
bug:5932640
bug:5605641
Change-Id: Ieac461d480579d706a847b9325720cb254736ebe
1. Before, there were two caches: one in the app process that
kept track of only the ids of infos that were given to a
querying client, and one in the querying client that
holds the infos. This design requires precise sync
between the caches. Doing that is somewhat complicated
since the app has a cache for each window and it has
to intercept all accessibility events from that window
to manage the cache. Each app would have to have a cache for
each querying client. This approach would guarantee that
no infos are fetched twice, but due to its stateful nature
and the two caches it is tricky to implement and adds
unnecessary complexity. Now there is only one cache in
the client and the apps are stateless. The client
passes flags to the app that hint at which nodes to
prefetch. This approach may occasionally fetch a node
twice but it is considerably simpler and stateless
from the app perspective - there is only one cache.
Fetching a node more than once does not add much
overhead compared to the IPC.
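A rough sketch of the single client-side cache; the id type, the map,
and the IPC helper are illustrative:

    import java.util.HashMap;
    import java.util.Map;
    import android.view.accessibility.AccessibilityNodeInfo;

    class ClientNodeCacheSketch {
        private final Map<Long, AccessibilityNodeInfo> mCache =
                new HashMap<Long, AccessibilityNodeInfo>();

        AccessibilityNodeInfo findNode(long nodeId) {
            AccessibilityNodeInfo cached = mCache.get(nodeId);
            if (cached != null) {
                return cached; // hit: no IPC at all
            }
            // Miss: one IPC that also carries prefetch flags so the app
            // returns likely-next nodes; all results land in this cache.
            return fetchWithPrefetchHint(nodeId);
        }

        private AccessibilityNodeInfo fetchWithPrefetchHint(long nodeId) {
            return null; // stand-in for the real IPC
        }
    }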
Change-Id: Ia02f6fe4f82cff9a9c2e21f4a36747de0f414c6f
1. UiTestAutomationBridge was accessing the root node in the
active window by tracking the accessibility event stream
and keeping the last active window change event. Now
the bridge is stateless and the root node is fetched by
passing a special window and view id with the request to
the system.
2. AccessibilityNodeInfos that are cached were not finished,
i.e. not sealed, causing exceptions when trying to access
their children or predecessors.
3. AccessibilityManagerService was not properly restoring its
state after the UI automation bridge disconnects from it.
In particular, the device was still in explore by touch mode
even if no services were enabled and the automation bridge
was disconnected.
4. ViewRootImpl for the focused window now fires accessibility
events when accessibility is enabled to allow accessibility
services to determine the current user location.
5. Several missing null checks in ViewRootImpl are fixed since
there were scenarios in which an NPE can occur.
6. Update the internal window content querying tests.
7. ViewRootImpl was firing one extra focus event.
bug:6009813
bug:6026952
Change-Id: Ib2e058d64538ecc268f9ef7a8f36ead047868a05
1. The AccessibilityManagerService used to disable the UI
automation service on package change. This behavior
was incorrect since the automation service has to
survive package installations.
bug:5975207
Change-Id: Idb5e76d02625c333a5842a6b5c5bc90c9b9634c9
1. Now when an interrogating client requests an AccessibilityNodeInfo
we aggressively prefetch all the predecessors of that node and its
descendants. The number of fetched nodes in one call is limited to
keep the APIs responsive. The prefetched node infos are cached in
the client process. The node info cache is invalidated partially or
completely based on the fired accessibility events. For example,
TYPE_WINDOW_STATE_CHANGED clears the cache while
TYPE_VIEW_FOCUSED removes the focused node from the cache, etc.
(see the sketch after this list).
Note that the cache is only for the currently active window.
The ViewRootImpl also keeps track of only the ids of the node
infos it has sent to each querying process to avoid duplicating
work. Usually only one process will query the screen content
but we support the general case. Also all the caches are
automatically invalidated so no additional bookkeeping is
required. This simple strategy leads to a 10X improvement in the
speed of the querying APIs.
2. The Monkey and the UI test automation framework were registering a
raw event listener for accessibility events and hence performing
connection and cache management in a similar way to an AccessibilityService.
This is fragile and requires the implementer to know internal framework
stuff. Now the functionality required by the Monkey and the UI automation
is encapsulated in a new UiTestAutomationBridge class. Enabling this
required some refactoring of AccessibilityService.
3. Removed the *doSomething*InActiveWindow methods from the
AccessibilityInteractionClient and the AccessibilityInteractionConnection.
The function of these methods is implemented by the non-*InActiveWindow
versions while passing appropriate constants.
4. Updated the internal window querying tests to use the new
UiTestAutomationBridge.
5. If the ViewRootImpl was not initialized, the querying APIs of
the IAccessibilityInteractionConnection implementation were
returning immediately without calling the callback with null.
This was causing the client side to wait until it timed out. Now
the client is notified as soon as the call fails.
6. Added a check to guarantee that Views with AccessibilityNodeProvider
do not have children.
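A sketch of the event-driven invalidation from point 1; the cache
member and the removal helper are illustrative:

    import android.view.accessibility.AccessibilityEvent;

    class CacheInvalidationSketch {
        void onAccessibilityEvent(AccessibilityEvent event) {
            switch (event.getEventType()) {
                case AccessibilityEvent.TYPE_WINDOW_STATE_CHANGED:
                    clearCache(); // new active window: everything is stale
                    break;
                case AccessibilityEvent.TYPE_VIEW_FOCUSED:
                    removeCachedNode(event); // only the focused node changed
                    break;
                // ... other event types invalidate the affected subtree ...
            }
        }

        private void clearCache() {}
        private void removeCachedNode(AccessibilityEvent event) {}
    }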
bug:5879530
Change-Id: I3ee43718748fec6e570992c7073c8f6f1fc269b3
1. AccessibilityInteractionConnections were removed from the
AccessibilityManagerService but their DeathRecipients were
not unregistered, thus every removed interaction connection
was essentially leaking. Such a connection is registered in
the system for every ViewRootImpl when accessibility is
enabled and unregistered when disabled.
2. Every AccessibilityEvent and AccessibilityNodeInfo obtained
from a window content querying accessibility service had a
handle to a binder proxy over which to make queries. However,
holding a proxy to a remote binder prevents the latter from
being garbage collected. Therefore, now the events and infos
have a connection id instead, and the hidden singleton
AccessibilityInteractionClient via which queries are made
has a registry with the connections. This class looks up
the connection given its id before making an IPC. Now the
connection is stored in one place and when an accessibility
service is disconnected the system sets the connection to
null so the binder object in the system process can be GCed.
Note that before this change a badly implemented accessibility
service could cache events or infos, causing a leak in the
system process. This should never happen.
3. SparseArray was not clearing the reference to the last moved
element while garbage collecting, thus causing a leak.
bug:5664337
Change-Id: Id397f614b026d43bd7b57bb7f8186bca5cdfcff9