An accessibility service may register to observe the interactive windows
on the primary display. These are the window that has input focus and the
windows a sighted user can touch. It is sometimes beneficial for an
accessibility service to overlay a window to intercept user interaction
and, based on that, introspect and perform actions on the windows on the
screen. This is problematic because overlaying a full-screen touchable
window prevents the accessibility service from introspecting the content
under it.
This change adds a special type of window that only an accessibility service
can place and that does not affect what an accessibility service can "see" on
the screen. Hence, even when such a window covers the full screen, the service
can still interact with the other interactive windows it covers.
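
A minimal sketch of how a service might place such a window, assuming the
new type is exposed as WindowManager.LayoutParams.TYPE_ACCESSIBILITY_OVERLAY:

    import android.accessibilityservice.AccessibilityService;
    import android.graphics.PixelFormat;
    import android.view.View;
    import android.view.WindowManager;
    import android.view.accessibility.AccessibilityEvent;

    public class OverlayService extends AccessibilityService {
        @Override
        protected void onServiceConnected() {
            WindowManager wm = (WindowManager) getSystemService(WINDOW_SERVICE);
            WindowManager.LayoutParams lp = new WindowManager.LayoutParams(
                    WindowManager.LayoutParams.MATCH_PARENT,
                    WindowManager.LayoutParams.MATCH_PARENT,
                    // The special type only an accessibility service may use.
                    // It is transparent to the introspection APIs, so the
                    // service still "sees" the windows it covers.
                    WindowManager.LayoutParams.TYPE_ACCESSIBILITY_OVERLAY,
                    WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,
                    PixelFormat.TRANSLUCENT);
            wm.addView(new View(this), lp);
        }

        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) { }

        @Override
        public void onInterrupt() { }
    }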
Change-Id: I053ccc3a5c6360a98dc40bdb172b54dab35d8b31
In touch exploration mode an accessibility service can move
accessibility focus in response to user gestures. In this case,
when the user double-taps, the system sends down and up events
at the center of the accessibility focused view. This works fine
until the clicked view's center is covered by another clickable
view. In such a scenario the user thinks they are clicking on
one view but the click is handled by another. Terrible.
This change solves the problem of clicking on the wrong view
and also solves the problem of clicking on the wrong window.
The key idea is that when the system detects a double tap or
a double tap and hold, it asks the accessibility focused node
(if any) to compute a point at which a click can be performed.
In response, the node asks its source view to compute this.
If a view is partially covered by clickable siblings, or by
clickable siblings of its predecessors, the click point is
computed to ensure the click lands on the desired view. The
click point is also clamped to the interactive part of the
host window.
The current approach has rare edge cases that may produce false
positives or false negatives. For example, a portion of the
view may be covered by an interactive descendant of a
predecessor, which we do not detect (we check only siblings of
predecessors). Also, a view may handle raw touch events instead
of registering click listeners, which we cannot detect. Despite
these limitations, this approach works most of the time and is
a huge improvement over blindly sending the down and up events
at the center of the view.
Note that the additional computational complexity is incurred
only when the user wants to click on the accessibility focused
view, which is a very rare event, so this is a good tradeoff.
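
A simplified sketch of the idea, not the framework code itself: subtract
the bounds of covering clickable views from the view's bounds, clamp to
the interactive window area, and pick a point in what remains.

    import android.graphics.Point;
    import android.graphics.Rect;
    import android.graphics.Region;
    import android.graphics.RegionIterator;
    import java.util.List;

    final class ClickPointComputer {
        /** Returns a clickable point inside viewBounds, or null if covered. */
        static Point compute(Rect viewBounds, List<Rect> coveringClickableBounds,
                Rect interactiveWindowBounds) {
            Region region = new Region(viewBounds);
            // Bound the click point to the interactive part of the window.
            region.op(interactiveWindowBounds, Region.Op.INTERSECT);
            // Remove the parts covered by clickable views drawn on top.
            for (Rect covering : coveringClickableBounds) {
                region.op(covering, Region.Op.DIFFERENCE);
            }
            // Pick the center of the first uncovered rectangle, if any.
            // (A false negative is still possible, as noted above.)
            RegionIterator iterator = new RegionIterator(region);
            Rect uncovered = new Rect();
            if (iterator.next(uncovered)) {
                return new Point(uncovered.centerX(), uncovered.centerY());
            }
            return null;
        }
    }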
bug:15696993
Change-Id: I85927a77d6c24f7550b0d5f9f762722a8230830f
We are caching the window data in the accessibility service process.
When windows changed we were sending the delta relative to the windows
the service knows about. To make this work when the service asked for
all windows, we had to call into the system, as new windows may have
appeared. This was slow.
Now we tell the service that some windows changed, and when it fetches
the windows we cache them. We call into the system only on a cache
miss and evict all windows from the cache on a window change event.
We do not evict the nodes of the windows, as the windows may have
just moved. If views in a window change, they fire accessibility
events that trigger the correct eviction.
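
A minimal sketch of this caching policy, with hypothetical names; the
real cache lives in the service process:

    import android.util.SparseArray;
    import android.view.accessibility.AccessibilityWindowInfo;
    import java.util.ArrayList;
    import java.util.List;

    final class WindowCache {
        private final SparseArray<AccessibilityWindowInfo> mWindows =
                new SparseArray<>();
        private boolean mValid;

        /** Called on a windows changed event: evict all windows. */
        synchronized void onWindowsChanged() {
            mWindows.clear();
            mValid = false;
            // Node caches are NOT evicted here; if views inside a window
            // change they fire accessibility events that evict the nodes.
        }

        /** Returns the cached windows, or null on a cache miss. */
        synchronized List<AccessibilityWindowInfo> getWindows() {
            if (!mValid) {
                return null; // Caller fetches from the system, then setWindows().
            }
            List<AccessibilityWindowInfo> result = new ArrayList<>(mWindows.size());
            for (int i = 0; i < mWindows.size(); i++) {
                result.add(mWindows.valueAt(i));
            }
            return result;
        }

        synchronized void setWindows(List<AccessibilityWindowInfo> windows) {
            mWindows.clear();
            for (AccessibilityWindowInfo window : windows) {
                mWindows.put(window.getId(), window);
            }
            mValid = true;
        }
    }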
Change-Id: I586a72a2497b0d44a75288fa758e7e88817f3300
Also fixes an infinite recursion bug in the WindowManagerService
implementation of WindowManagerInternal.
BUG: 16129909
Change-Id: I4f9d32f4e6c3ad460652c5e5271540fa5032a1f5
1. The APIs for introspecting interactive windows were reporting only
the touchable windows but were missing the focused window. The user
can interact with the latter by typing, hence it should always be
reported. This was also breaking backwards compatibility: if the
focused window is covered by a modal one, the focused window was not
reported, which put the active window in a bad state, as the latter
is either the focused window or the one the user is touching.
2. Window change events were too frequent, as on window transitions
things change a lot. Now we throttle the windows changed events at
the standard recurring accessibility event interval (see the sketch
after this list).
3. Fixed a wrong flag comparison and removed some unneeded code.
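
A minimal sketch of such interval-based throttling, with hypothetical
names and an illustrative interval; the framework uses its standard
recurring accessibility event interval:

    import android.os.Handler;
    import android.os.SystemClock;

    final class WindowsChangedThrottler {
        private static final long INTERVAL_MILLIS = 100; // illustrative value
        private final Handler mHandler;
        private final Runnable mNotifier;
        private final Runnable mDelayedSend = this::sendNow;
        private boolean mPending;
        private long mLastSentMillis;

        WindowsChangedThrottler(Handler handler, Runnable notifier) {
            mHandler = handler;
            mNotifier = notifier;
        }

        /** Call on every window change; assumes use on mHandler's thread. */
        void onWindowsChanged() {
            if (mPending) {
                return; // A delivery is already scheduled; bursts coalesce.
            }
            long elapsed = SystemClock.uptimeMillis() - mLastSentMillis;
            if (elapsed >= INTERVAL_MILLIS) {
                sendNow();
            } else {
                mPending = true;
                mHandler.postDelayed(mDelayedSend, INTERVAL_MILLIS - elapsed);
            }
        }

        private void sendNow() {
            mPending = false;
            mLastSentMillis = SystemClock.uptimeMillis();
            mNotifier.run();
        }
    }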
bug:15434666
bug:15432989
Change-Id: I825b33067e8cbf26396a4d38642bde4907b6427a
Now that we have APIs to query all interactive windows and allow
an accessibility service to put accessibility focus in each of
them, we have to guarantee that there is a single accessibility
focus. This is required for correct operation of the touch
explorer, since on double tap it clicks at the center of the
accessibility focused area, so having more than one focus is an
issue. Also, the system maintains a single input focus, so
accessibility focus now behaves consistently with that.
bug:13965563
Change-Id: I0b5c26dadfabbf80dbed8dc4602073aa575ac179
1. The old introspection model allowed querying only the active window,
which is the one the user is touching or the focused one if no window is
touched. This was limiting: auto-completion drop-downs were not inspectable,
there was no way to know when the IME toggles, non-focusable windows were
not inspectable when the user taps them (by the time a screen reader starts
introspecting, the user's finger is up), accessibility focus was limited to
a single window so the user couldn't use gestures to visit the whole UI,
and other things I can't remember right now.
The new APIs allow getting all interactive windows, i.e. the ones a
sighted user can interact with. This prevents an accessibility service
from interacting with content a sighted user cannot. The list of windows
can be obtained from an accessibility service, or the host window from an
accessibility node info. Introspecting windows obeys the same rules as
introspecting nodes, i.e. the service has to declare this capability
in its manifest. (See the first sketch after this list.)
When some windows change, accessibility services receive a new type
of event. Initially the set of reported window types is very limited. We
provide the bounds in screen, the layer, and some other properties, which
are enough for a client to determine the spatial and hierarchical
relationship of the windows.
2. Update the documentation in AccessibilityService for newer event types.
3. LongArray was not removing elements properly.
4. Composite accessibility node ids were not properly constructed: they
are composed of two ints, each taking 32 bits, but the value for
undefined was -1, so composing a 64-bit long from (-1, -1) prevented
getting these values back when unpacking. (See the second sketch after
this list.)
5. Some apps were generating inconsistent AccessibilityNodeInfo trees.
Added a check that enforces such trees to be well formed on dev builds.
6. Removed unnecessary code for piping the touch exploration state to
the policy, as it should just use the AccessibilityManager from context.
7. When a view's visibility changed, it was not firing an event to notify
clients that it disappeared/appeared. Also, ViewGroup was sending
accessibility events for changes only if the view is important for
accessibility, but this is wrong, as there may be a service that wants
all nodes and hence events from them. The accessibility manager service
takes care of delivering events from nodes that are not important for
accessibility only to the services that want them.
8. Several places were asking for prefetching of sibling but not
predecessor nodes, which resulted in prefetching of unconnected subtrees.
9. The local AccessibilityManager implementation relied on the backing
service being ready when the manager is created, but the manager can be
fetched from a context before that. If that happened, the local manager
was in a broken state forever. Now it is more robust and starts working
properly once the backing service is up. Several places were also
lacking locking.
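
First sketch: a minimal consumer of the new window APIs from a service,
assuming the service has declared the capability to retrieve window
content and set AccessibilityServiceInfo.FLAG_RETRIEVE_INTERACTIVE_WINDOWS:

    import android.accessibilityservice.AccessibilityService;
    import android.graphics.Rect;
    import android.util.Log;
    import android.view.accessibility.AccessibilityEvent;
    import android.view.accessibility.AccessibilityWindowInfo;
    import java.util.List;

    public class WindowInspectorService extends AccessibilityService {
        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) {
            if (event.getEventType() == AccessibilityEvent.TYPE_WINDOWS_CHANGED) {
                List<AccessibilityWindowInfo> windows = getWindows();
                Rect bounds = new Rect();
                for (AccessibilityWindowInfo window : windows) {
                    window.getBoundsInScreen(bounds);
                    // Bounds and layer are enough to determine the spatial
                    // and hierarchical relationship of the windows.
                    Log.d("WindowInspector", "id=" + window.getId()
                            + " layer=" + window.getLayer()
                            + " focused=" + window.isFocused()
                            + " bounds=" + bounds);
                }
            }
        }

        @Override
        public void onInterrupt() { }
    }

Second sketch: the node id packing problem from item 4 in isolation, with
the usual fix of masking the low half to avoid sign extension; the names
are illustrative, not the framework's. Without the mask, a -1 low half
sign-extends to all 64 bits and clobbers the high half; with it, (-1, -1)
round-trips correctly.

    final class NodeIds {
        static long composeNodeId(int accessibilityViewId, int virtualDescendantId) {
            return ((long) virtualDescendantId << 32)
                    | (accessibilityViewId & 0xFFFFFFFFL); // mask the low half
        }

        static int getAccessibilityViewId(long nodeId) {
            return (int) nodeId;
        }

        static int getVirtualDescendantId(long nodeId) {
            return (int) (nodeId >> 32);
        }
    }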
bug:13331285
Change-Id: Ie51166d4875d5f3def8d29d77973da4b9251f5c8
Since older versions of the platform will crash if the mode is
set to 3, this CL adds an extra bit and uses 4 as the mode. This
will also cleanly default the mode to AUTO on unsupported
platforms.
BUG: 11031947
Change-Id: I48035abbf35523d65cead6f27be10f8087973ceb
Since the capability to enable touch exploration is dynamically granted by
the user for apps targeting a pre-JellybeanMR2 API level, we have to properly
update the accessibility service info for that service and also avoid
caching copies of the service info.
bug:8633951
Change-Id: I83dd1c852706ec55d40cda7209ad842889fb970a
1. The UiAutomation#executeAndWaitForEvent method was invoking the passed
runnable while holding the lock, which may lead to a deadlock. For
example, a runnable that calls getActivity() gets us into such a state.
2. UI automation services did not get all capabilities such a
service can have. Now a UI test service gets all of them.
3. When UiAutomation was waiting for an event fired as a result of a
performed action, it was accepting the received event only if its
timestamp is strictly after the time of executing the command that
should fire the event. However, if the execution is fast enough, i.e.
takes less than one millisecond, the event time and the execution
time are the same. This was leading to a missed signal in rare
cases. (See the sketch after this list.)
4. AccessibilityNodeInfoCache was not clearing the relevant state
on an accessibility focus clearing event.
5. Accessibility text traversal in TextView was partially using the text
and partially the content description - broken. Now we use the
text for text views and the content description for other views. In
other words, we use the most precise text we have.
6. AccessibilityManagerService was not granting capabilities to a
UiAutomation service - plainly wrong.
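
The fix in item 3, in isolation (a sketch, not the actual framework code):

    final class EventTimeCheck {
        /** Accepts events fired in the same millisecond as the command. */
        static boolean eventMatchesExecution(long eventTimeMillis,
                long executionTimeMillis) {
            // Was effectively: eventTimeMillis > executionTimeMillis, which
            // misses events when execution takes under a millisecond.
            return eventTimeMillis >= executionTimeMillis;
        }
    }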
CTS change:https://googleplex-android-review.googlesource.com/#/c/300693/
bug:8695422
bug:8657560
Change-Id: I9afc5c3c69eb51f1c01930959232f44681b15e86
Accessibility services can perform special operations such as retrieve
the screen content, enable explore by touch, etc. To ensure the user
is aware that the service will perform special operations we were using
permissions. However, the special operations cannot be performed unless
the service is really enabled by the user and it is at this point that
we want to notify the user about the service capabilities.
This change adds capability attributes to the accessibility service's
meta-data XML file. The service has to declare each capability, and when
the service is enabled we show the user its capabilities in the warning
dialog.
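
A minimal sketch of reading the declared capabilities at runtime through
the public AccessibilityServiceInfo API (the capability itself is declared
in the service's meta-data XML):

    import android.accessibilityservice.AccessibilityServiceInfo;

    final class Capabilities {
        static boolean canRetrieveWindowContent(AccessibilityServiceInfo info) {
            // Granted only if declared in the service's meta-data XML.
            return (info.getCapabilities()
                    & AccessibilityServiceInfo.CAPABILITY_CAN_RETRIEVE_WINDOW_CONTENT)
                    != 0;
        }
    }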
bug:8633951
Change-Id: Id3442dc71dad018e606888afdc40834682fdb037
Now that we have gestures which are detected by the system and
interpreted by an accessibility service, there is inconsistent
behavior between using the gestures and the keyboard. Some devices
have both. Therefore, an accessibility service should be able to
interpret keys in addition to gestures to provide a consistent user
experience. Now an accessibility service can expose shortcuts for
each gestural action.
This change adds APIs for an accessibility service to observe and
intercept at will key events before they are dispatched to the
rest of the system. The service can return true or false from its
onKeyEvent to either consume the event or to let it be delivered
to the rest of the system. However, the service will *not* be
able to inject key events or modify the observed ones.
Previous ideas of allowing the service to say it "tracks" the event,
so the latter is not delivered to the system until a subsequent
event is either "handled" or "not handled", will not work. If the
service tracks a key but no other key is pressed, that key is
essentially not delivered to the app, and this stashed event may be
delivered at a potentially much later point, in maybe a completely
different context. The correct way of implementing shortcuts is a
combination of modifier keys plus some other key or key sequence.
Key events already contain information about which modifier keys
are down, and the service can track them itself as well.
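
A minimal sketch of a shortcut built on this API, assuming the service
has also requested key-event filtering (FLAG_REQUEST_FILTER_KEY_EVENTS)
and the corresponding capability; the key choice is hypothetical:

    import android.accessibilityservice.AccessibilityService;
    import android.view.KeyEvent;
    import android.view.accessibility.AccessibilityEvent;

    public class ShortcutService extends AccessibilityService {
        @Override
        protected boolean onKeyEvent(KeyEvent event) {
            if (event.getAction() == KeyEvent.ACTION_DOWN
                    && event.isAltPressed()
                    && event.getKeyCode() == KeyEvent.KEYCODE_B) {
                performGlobalAction(GLOBAL_ACTION_BACK);
                return true; // Consumed: not delivered to the system.
            }
            return false; // Everything else is delivered unmodified.
        }

        @Override
        public void onAccessibilityEvent(AccessibilityEvent event) { }

        @Override
        public void onInterrupt() { }
    }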
bug:8088812
Change-Id: I81ba9a7de9f19ca6662661f27fdc852323e38c00
1. An accessibility service may set a flag to request a compressed
view of the node tree representing the screen. The compressed
state does not contain nodes that do not react to user actions
and do not draw content, i.e. they are dumb view managers. This
compressed hierarchy was very beneficial to the test team, and
this change exposes the APIs. The compression has to be
configurable, since old tests are written against the uncompressed
view tree. Basically, we do not hide the fact that UIAutomation
is simply an accessibility service with some other useful APIs.
bug:8051095
2. An accessibility service can perform global actions such as opening
the notifications, opening recent apps, etc. These are also needed
for UI testing, since there is no other way to do this via the
existing UIAutomation APIs. (See the sketch after this list.)
bug:8117582
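
A minimal sketch of both items from a UI test, using public UiAutomation
APIs; the compression flag is exposed as FLAG_INCLUDE_NOT_IMPORTANT_VIEWS,
which requests the uncompressed tree:

    import android.accessibilityservice.AccessibilityService;
    import android.accessibilityservice.AccessibilityServiceInfo;
    import android.app.UiAutomation;

    final class UiTestShortcuts {
        /** Item 1: request the full, uncompressed node tree. */
        static void includeNotImportantViews(UiAutomation uiAutomation) {
            AccessibilityServiceInfo info = uiAutomation.getServiceInfo();
            info.flags |= AccessibilityServiceInfo.FLAG_INCLUDE_NOT_IMPORTANT_VIEWS;
            uiAutomation.setServiceInfo(info);
        }

        /** Item 2: a global action otherwise unavailable to tests. */
        static boolean openNotifications(UiAutomation uiAutomation) {
            return uiAutomation.performGlobalAction(
                    AccessibilityService.GLOBAL_ACTION_NOTIFICATIONS);
        }
    }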
Change-Id: I7b6e24b5f7a973fdada0cc199cff9f882b10720b
1. The accessibility manager service updates its internal state
based on which settings are enabled, what accessibility services
are installed and what features are requested by the enabled
services. It was trying to do the minimal amount of work to
react to contextual changes like these, which resulted in missed
cases and complex code. Now there is a single method that reads
the contextual information and a single method that reacts to
contextual changes. This makes the code much easier to maintain.
2. The accessibility manager service was not updating its internal
state when the features requested by accessibility services change.
It was relying on changing system settings and reacting to the
settings change. This is problematic since the internal state is
not updated atomically, which leads to race condition bugs. For
example, if touch exploration is enabled and a service requests
that it be disabled, the internal state is not updated but a
request for a settings change is made. Now, while the settings
change is propagating, another request from the same service
comes to enable touch exploration, but the system incorrectly
thinks touch exploration is enabled. In the end the feature is
disabled even though it was requested.
3. Fixed a potential NPE if the accessibility input filter's event
handler was nullified between processing two event batches.
4. Fixed a bug where, if magnification is enabled, it does not work
on the settings screen since the magnified bounds are not pushed
from the window manager to the accessibility manager.
Change-Id: Idf629a06480e12f0d88372762df6c024fe0d7856
Currently we have an "enhance web accessibility" setting that has to be
enabled to make sure web content is accessible. We added the setting to
get user consent, because we are injecting a JavaScript-based screen
reader pulled from the Google infrastructure. However, many users do not
know that and (as expected) do not read the user documentation, resulting
in criticism for lacking accessibility support in WebViews with JavaScript
enabled (Browser, Gmail, etc).
To smooth the user experience, "enhance web accessibility" is now a
feature an accessibility plug-in can request, similarly to explore by
touch. Now a user does not need to know that she has to explicitly
enable the setting, and web accessibility works out of the box.
Before, we were showing a dialog when a plug-in tries to put the device
in touch exploration mode. However, now that we have one more feature
a plug-in can request, showing two dialogs (assuming a plug-in wants both
features) would mean that a user potentially deals with three
dialogs: one for enabling the service, and one for each feature. We
could merge the feature dialogs, but the user would still have to poke
through two dialogs.
It seems that the permission mechanism is a perfect fit for getting
user permission for an app to do something, in this case to enable
an accessibility feature. We need a separate permission for explore
by touch and for enhance web accessibility, since the former changes the
interaction model and the latter injects JavaScript into web pages. It
is critical to get user consent for the script injection part, so we
need a well-documented permission rather than a vague umbrella permission
for poking accessibility features. To allow better grouping of the
accessibility permissions, this patch adds a permission group as well.
bug:8089372
Change-Id: Ic125514c34f191aea0416a469e4b3481ab3200b9
This change adds API support for implementing UI tests. Such tests do
not rely on internal application structure and can span application
boundaries. The UI automation APIs are encapsulated in the UiAutomation
object that is provided by an Instrumentation object. It is initialized by
the system and can be used both for introspecting the screen and for
performing interactions simulating a user. UI tests are normal
instrumentation tests and are executed on the device.
UiAutomation uses the accessibility APIs to introspect the screen and
a special delegate object to perform privileged operations such as
injecting input events. Since instrumentation tests are invoked by a shell
command, the shell program launching the tests creates a delegate object and
passes it as an argument to the started instrumentation. This delegate
allows the APK that runs the tests to access some privileged operations
protected by signature-level permissions which are explicitly granted
to the shell user.
The UiAutomation object also supports running tests in the legacy way
where the tests are run as a Java shell program. This enables existing
UiAutomator tests to keep working while new ones should be implemented
using the new APIs. The UiAutomation object exposes lower-level APIs which
allow simulating arbitrary user interactions and writing complete UI test
cases. Clients, such as UiAutomator, are encouraged to implement higher-
level APIs which minimize development effort and can be used as a helper
library by the test developer.
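
A minimal sketch of the lower-level APIs in use from an instrumentation
test: injecting a tap at given screen coordinates.

    import android.app.Instrumentation;
    import android.app.UiAutomation;
    import android.os.SystemClock;
    import android.view.InputDevice;
    import android.view.MotionEvent;

    final class TapInjector {
        /** Injects a down/up pair at (x, y) in screen coordinates. */
        static void tap(Instrumentation instrumentation, float x, float y) {
            UiAutomation uiAutomation = instrumentation.getUiAutomation();
            long downTime = SystemClock.uptimeMillis();
            MotionEvent down = MotionEvent.obtain(downTime, downTime,
                    MotionEvent.ACTION_DOWN, x, y, 0);
            down.setSource(InputDevice.SOURCE_TOUCHSCREEN);
            MotionEvent up = MotionEvent.obtain(downTime,
                    SystemClock.uptimeMillis(), MotionEvent.ACTION_UP, x, y, 0);
            up.setSource(InputDevice.SOURCE_TOUCHSCREEN);
            uiAutomation.injectInputEvent(down, true /* sync */);
            uiAutomation.injectInputEvent(up, true /* sync */);
            down.recycle();
            up.recycle();
        }
    }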
The benefit of this change is decoupling UiAutomator from the system,
since the former was calling hidden APIs, which required that it be
bundled in the system image. This prevented UiAutomator from being
evolved separately from the system. Also, UiAutomator was creating
additional API surface in the system image. Another benefit of the new
design is that test cases now have access to a context and can use
public platform APIs in addition to the UiAutomator ones. Further,
third parties can develop their own higher-level test APIs on top
of the lower-level ones exposed by UiAutomation.
bug:8028258
This change also adds the fully qualified resource name of the view's
id to the emitted AccessibilityNodeInfo if a special flag is set while
configuring the accessibility service. Also added is an API for looking
up node infos by this id. The id resource name is more stable than the
generated id number, which may change from one build to another. This
API facilitates reusing the already defined ids for UI automation.
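
A minimal sketch of the lookup path from a service; the resource name in
the comment is hypothetical:

    import android.accessibilityservice.AccessibilityService;
    import android.accessibilityservice.AccessibilityServiceInfo;
    import android.view.accessibility.AccessibilityNodeInfo;
    import java.util.Collections;
    import java.util.List;

    final class ViewIdLookup {
        static List<AccessibilityNodeInfo> findByViewId(
                AccessibilityService service,
                String fullyQualifiedId) { // e.g. "com.example.app:id/submit"
            // Ids are reported only if the service sets the special flag.
            AccessibilityServiceInfo info = service.getServiceInfo();
            info.flags |= AccessibilityServiceInfo.FLAG_REPORT_VIEW_IDS;
            service.setServiceInfo(info);
            AccessibilityNodeInfo root = service.getRootInActiveWindow();
            if (root == null) {
                return Collections.emptyList();
            }
            return root.findAccessibilityNodeInfosByViewId(fullyQualifiedId);
        }
    }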
bug:7678973
Change-Id: I589ad14790320dec8a33095953926c2a2dd0228b
Improves the throughput of IME event handling by ensuring that
input events do not get serialized behind UI traversal and
drawing messages such as when the UI is animating.
Added support for creating an asynchronous Handler as part of a
HandlerCaller. It turns out we should be using an asynchronous
Handler not only in IME dispatch but also in accessibility and
wallpaper events where HandlerCaller is used, so those services
were fixed to also use an asynchronous Handler.
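
What "asynchronous" buys here, as a minimal sketch using the modern
public equivalent (Handler.createAsync, API 28); the framework change
itself went through HandlerCaller internals:

    import android.os.Handler;
    import android.os.Looper;

    final class AsyncHandlers {
        static Handler create(Looper looper) {
            // Messages posted to an asynchronous handler are exempt from
            // sync barriers, so they are not serialized behind UI traversal
            // and drawing messages while the UI is animating.
            return Handler.createAsync(looper);
        }
    }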
Change-Id: I0b19140c9d5ca6ee300c1a150c48312fd55ed8eb
1. The screen magnification feature was implemented entirely as a part of the accessibility
manager. To achieve that, the window manager had to implement a bunch of hooks for an
external client to observe its internal state. This was problematic since it dilutes
the window manager interface and allows code that is deeply coupled with the window
manager to reside outside of it. Also, the observer callbacks were IPCs which cannot
be made with the window manager's lock held. To avoid that, the window manager had
to post messages requesting notification of interested parties, which makes the code
consuming the callbacks run asynchronously of the window manager. This causes timing
issues and adds unnecessary complexity.
Now the magnification logic is split into two halves. The first half is responsible
for tracking the magnified portion of the screen and serving as a policy for which
windows can be magnified, and it is a part of the window manager. This part exposes
higher-level APIs allowing interested parties with the right permissions to control
the magnification of a given display. The APIs also allow a client to register for
callbacks on interesting changes such as a resize of the magnified region, etc. This
part serves as a mediator between magnification controllers and the window manager.
The second half is a controller that is responsible for driving the magnification
state based on touch interactions. It also presents a highlight when magnified to
suggest the magnified portion of the screen. The controller is responsible for auto
zooming out in case the user context changes - rotation, new activity. The controller
also auto pans if a dialog appears that does not intersect the magnified frame.
bug:7410464
2. By design, screen magnification and touch exploration work separately and together.
If magnification is enabled, the user sees a larger version of the widgets and a
subsection of the screen content. Accessibility services use the introspection APIs
to "see" what is on the screen so they can speak it, navigate to the next item in
response to a gesture, etc. Hence, the information returned to accessibility services
has to reflect what a sighted user would see on the screen. Therefore, if the screen
is magnified, we need to adjust the bounds and position of the infos describing views
in a magnified window such that the info bounds are equivalent to what the user sees.
To improve performance we keep accessibility node info caches in the client process.
However, when the magnification state changes we have to clear these caches, since
the bounds of the cached infos no longer reflect the screen content, which just got
smaller or larger.
This patch propagates not only the window scale, as before, but also the X/Y pan and
the bounds of the magnified portion of the screen to the introspected app. This
information is used to adjust the bounds of the node infos coming from this window
such that the reported bounds are the same as the user sees them, not as the app
thinks they are. Note that if magnification is enabled, we zoom the content and pan
it along the X and Y axes. The isVisibleToUser property of the reported info is also
recomputed, since in a magnified state the user sees only a subset of the window
content, and the views not in the magnified viewport should be reported as not
visible to the user.
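
The adjustment in isolation, as a sketch with hypothetical parameter
names: with scale s and pan (panX, panY), a point at (x, y) in app
coordinates appears on screen at (x * s + panX, y * s + panY).

    import android.graphics.Rect;

    final class MagnificationAdjuster {
        /** Maps app-space bounds to the screen-space bounds the user sees. */
        static void adjustToScreen(Rect bounds, float scale,
                float panX, float panY) {
            bounds.left = (int) (bounds.left * scale + panX);
            bounds.top = (int) (bounds.top * scale + panY);
            bounds.right = (int) (bounds.right * scale + panX);
            bounds.bottom = (int) (bounds.bottom * scale + panY);
        }

        /** isVisibleToUser must also account for the magnified viewport. */
        static boolean isVisibleToUser(Rect adjustedBounds,
                Rect magnifiedViewport) {
            return Rect.intersects(adjustedBounds, magnifiedViewport);
        }
    }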
bug:7344059
Change-Id: I6f7832c7a6a65c5368b390eb1f1518d0c7afd7d2
1. Added APIs for opening the quick settings to the StatusBarManagerService
and the local StatusBarManager. The new APIs are protected by the old
EXPAND_STATUS_BAR permission.
Renamed the expand* and collapse* non-public APIs that expand the
notifications to expandNotifications*/collapseNotifications* to
better convey what they do, given that this change adds
expandQuickSettings* and collapseQuickSettings*.
Added a global action to the accessibility layer to expand the quick
settings, which calls into the new status bar manager APIs.
bug:7030487
Change-Id: Ic7b46e1a132f1c0d71355f18e7c5a9a2424171c3
1. The feedback type provided by braille devices differs from all
the already predefined types and does not fit in the existing
categories, so this change defines a braille feedback type.
bug:6998945
Change-Id: Ide6043bed03fdecd6d2ee45a08762f5bd07a2118
Okay, this isn't entirely docs. I have un-hidden the new permission
for binding to accessibility services; we were going to go out with
it hidden since it didn't go into factory ROM, but now that we are
doing other things we might as well expose it.
Also changed a log from E to W, since it is not an error.
Change-Id: I9226c95e3b63e12218fe41dc3f4290d9824a9e5c
1. Every accessibility service targeting JellyBean or higher has
to request a special permission for the system to bind to it.
Change-Id: I6e579326bdf3597f148d6c67317455701ec8af68
1. The initial design was to have some accessibility gestures
handled by the system if the gesture-handling accessibility
service does not consume them. However, we are not
sure what a good default is, and once we add a default handler
we cannot remove it since people may rely on it. Thus, we
take the simplest approach and let the accessibility service
handle the gestures. If no gestures are handled, the system
works in explore by touch as before.
bug:5932640
Change-Id: I865a83549fa03b0141d27ce9713e9b7bb45a57b4
1. Delegating activation gestures has several issues that we should
decide how to handle, if possible, before allowing an accessibility
service to take them over:
A) It requires that every view that can be clicked or long pressed
react to performClick and performLongClick, which is not necessarily
true, since the view may watch the touch events and do its own
click/long-click detection. As a result, there may be views a user
cannot interact with in touch exploration mode but can if not in
that mode.
B) Clicking or long pressing at different locations in a view may yield
different results, for example NumberPicker. Ideally, such views should
implement AccessibilityNodeProvider, which correctly handles the
request for a click/long press on virtual nodes. Some apps, however,
just fire different hover accessibility events when the user is over
a specific semantic portion of the view but do not provide virtual
nodes. Hence, a user will not be able to interact with such semantic
regions, but the system can achieve that by sending the click/long click
at the precise location in the view that was last touch explored.
2. Adding a flag on the accessibility service info to request explore by
touch mode. There is no need to put the device in this mode if none of
the currently enabled accessibility services supports it. Now the problem
is inverted and the service has to explicitly state its capability.
3. Fixing a bug where includeImportantViews was ignored for automation
services.
Change-Id: I3b29a19f24ab5e26ee29f974bbac2197614c9e2a
This problem was introduced in I74df9c24. The intention of the
change was to still let UiTestAutomationBridge see the
non-important views, but there were bugs in the implementation:
1. AccessibilityManagerService was not really updating
mIncludeNotImportantViews when mIsAutomation is true.
2. The wrong constant was used to set the flag.
Change-Id: Ia0a2e9ed9720bd0ea3a563e0b492e870a6ec1586
1. Since we are using a stateless proxy accessibility service to
perform default accessibility gesture handling, it should not
operate against views that are not important for accessibility.
bug:6422069
Change-Id: I74df9c2415ab3b164d9ac5873f7004c0459e2bfa
1. The clockwise and counterclockwise circle gestures are difficult to
perform for a blind user. Some user studies have shown that, so nuke them.
bug:6435260
Change-Id: I81d23bd3c450239fe17b4b8c60bafab00daa480a
1. A view that creates an accessibility node info may add to the info
a list of granularity labels. These are granularities by which the
source view can iterate over its content. For example, a text view
may support character and word granularities, while a web view may
additionally support buttons, tables, etc. There are actions on the
accessibility node info to go to the next/previous element at a given
granularity, which is passed as an argument.
2. Added a Bundle argument to the APIs for performing accessibility
actions. This is generic and extensible. (See the sketch after this
list.)
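
A minimal sketch of driving granular traversal with the new Bundle
argument, from a service:

    import android.os.Bundle;
    import android.view.accessibility.AccessibilityNodeInfo;

    final class GranularTraversal {
        /** Moves to the next word in the node's content, if supported. */
        static boolean nextWord(AccessibilityNodeInfo node) {
            Bundle args = new Bundle();
            args.putInt(
                    AccessibilityNodeInfo.ACTION_ARGUMENT_MOVEMENT_GRANULARITY_INT,
                    AccessibilityNodeInfo.MOVEMENT_GRANULARITY_WORD);
            return node.performAction(
                    AccessibilityNodeInfo.ACTION_NEXT_AT_MOVEMENT_GRANULARITY,
                    args);
        }
    }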
bug:5932640
Change-Id: I328cbbb4cddfdee082ab2a8b7ff1bd7477d8d6f9
1. An accessibility service has to explicitly opt in to be notified
of gestures by the system. Only one accessibility service handles
gestures, and in case it does not handle a gesture the system
performs default handling. This default handling ensures that we
have gesture navigation even if no accessibility service would
like to participate in/customize the interaction model.
bug:5932640
Change-Id: Id8194293bd94097b455e9388b68134a45dc3b8fa
1. This API is useful for accessibility services, since very often
they want to get the root node of the current window to determine
where accessibility focus is and subsequently find the next view
to take accessibility focus.
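
The typical usage pattern this enables, as a sketch:

    import android.accessibilityservice.AccessibilityService;
    import android.view.accessibility.AccessibilityNodeInfo;

    final class FocusFinder {
        static AccessibilityNodeInfo findAccessibilityFocus(
                AccessibilityService service) {
            AccessibilityNodeInfo root = service.getRootInActiveWindow();
            if (root == null) {
                return null;
            }
            // From the root, locate the accessibility focused node; a
            // service can then search from there for the next view to focus.
            return root.findFocus(AccessibilityNodeInfo.FOCUS_ACCESSIBILITY);
        }
    }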
Change-Id: Ide5231f832d9ce2e9b14dc411dfd675f753a4edb
1. Added more gestures for accessibility. After a meeting
with the access-eng team we decided that the current
set of gestures may be smaller than needed, considering
that we will use four gestures for home, back, recents,
and notifications.
2. Added actions for going back, going home, opening the
recents, and opening the notifications.
3. Added a preliminary mapping from some of the new gestures
to the new actions.
4. Fixed a bug in the accessibility interaction controller
which was trying to create a handler on the main looper
thread, which may be null if the queried UI is in the
system process. Now the context looper of the root view
is used.
5. Fixed a bug of using an incorrect constant.
6. Added missing locking in a couple of places.
7. Fixed view comparison for accessibility since it was
not antisymmetric.
bug:5932640
bug:5605641
Change-Id: Icc983bf4eafefa42b65920b3782ed8a25518e94f
Usefulness: Keeps track of the current user location on the screen when
            traversing it, enabling structural and directional
            navigation over all elements on the screen. This enables
            blind users that know the application layout to efficiently
            locate desired elements, as opposed to touch exploring the
            region where the element should be - very tedious.
Rationale: There are two ways to implement accessibility focus. One is
           to let accessibility services keep track of it, since they
           have access to the screen content, and another is to let the
           view hierarchy keep track of it. While the first approach
           would require almost no work on our part, it poses several
           challenges which make it a sub-optimal choice. Keeping the
           accessibility focus in the accessibility service would require
           that service to scrape the window content every time it changes
           to sync the view tree state and the accessibility focus
           location. Pretty much the service would have to keep an
           off-screen model of the screen content. This could be quite
           challenging to get right and would incur a performance cost
           for the multiple IPCs to repeatedly fetch the screen content.
           Further, keeping a virtual accessibility focus (i.e. in the
           service) would require syncing the input and accessibility
           focus. This could be challenging to implement right as well.
           Also, with an unlimited number of accessibility services, we
           cannot guarantee that they will have a proper implementation,
           if any, to allow users to perform structural navigation of
           the screen content. Assuming two accessibility services
           implement structural navigation via accessibility focus,
           there is no guarantee that they will behave similarly by
           default, i.e. provide some standard way to navigate the
           screen content. Also, feedback from experienced accessibility
           researchers, specifically T.V. Raman, provides evidence that
           having virtual accessibility focus creates many issues and
           is very hard to get right.
           Therefore, keeping accessibility focus in the system avoids
           keeping an off-screen model in accessibility services, it is
           always in sync with the state of the view hierarchy and the
           input focus, and it allows a default behavior for traversing
           the screen via this accessibility focus that is consistent
           across all accessibility services. We provide accessibility
           services with APIs to override this behavior, but all of them
           will perform screen traversal in a consistent way by default.
Behavior: If accessibility is enabled, the accessibility focus is the
          leading one and the input focus follows it. Putting accessibility
          focus on a view moves the input focus there. Clearing the
          accessibility focus of a view clears the input focus of that
          view. If accessibility focus is on a view that cannot take
          input focus, then no other view should have input focus. In
          accessibility mode we initially give accessibility focus to
          the topmost view and no view has input focus. This ensures
          consistent behavior across all apps. Note that accessibility
          focus can move hierarchically in the view tree, and having it
          at the root is better than putting it where the input focus
          would be - at the first input focusable, which could be at an
          arbitrary depth in the view tree. By default not all views are
          reported for accessibility, only the important ones. A view
          may be explicitly labeled as important or not for
          accessibility, or the system determines which it is - the
          default. Important views for accessibility are all views that
          are not dumb layout managers used only to arrange their
          children. Since the same content arrangement can be obtained
          via different combinations of layout managers, such managers
          cannot be used to reliably determine the application structure.
          For example, a user should see a list as a list view with
          several list items and each list item as a text view and a
          button, as opposed to seeing all the layout managers used to
          arrange the list item's content.
          By default only views important for accessibility are regarded
          for accessibility purposes. Views not regarded for
          accessibility neither fire accessibility events nor are
          reported as being on the screen. An accessibility service may
          request the system to regard all views. If the target SDK of
          an accessibility service is less than JellyBean, then all
          views are regarded for accessibility.
          Note that an accessibility service that requires all views to
          be regarded for accessibility may put accessibility focus on
          any view. Hence, it may implement any navigational paradigm if
          desired, especially considering that the system detects some
          standard gestures and delegates their processing to an
          accessibility service. The default implementation of an
          accessibility service performs the default navigation.
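
A minimal sketch of how a service drives this focus model:

    import android.view.accessibility.AccessibilityNodeInfo;

    final class FocusMover {
        static void moveAccessibilityFocus(AccessibilityNodeInfo from,
                AccessibilityNodeInfo to) {
            if (from != null) {
                from.performAction(
                        AccessibilityNodeInfo.ACTION_CLEAR_ACCESSIBILITY_FOCUS);
            }
            // The system keeps a single accessibility focus, and input
            // focus follows it when the view can take input focus.
            to.performAction(AccessibilityNodeInfo.ACTION_ACCESSIBILITY_FOCUS);
        }
    }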
bug:5932640
bug:5605641
Change-Id: Ieac461d480579d706a847b9325720cb254736ebe
1. When the feedback type is set to TYPES_ALL_MASK, the method
feedbackTypeToString was not generating a string with all
feedback types. Also, the generated string could sometimes
have a comma appended after the last feedback type name
- this happens if the feedback type contains an unknown type
flag.
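
A sketch of the fixed behavior (not the actual framework code): iterate
all set bits, skip unknown ones, and join known names without a trailing
comma.

    import android.accessibilityservice.AccessibilityServiceInfo;

    final class FeedbackTypes {
        static String feedbackTypeToString(int feedbackType) {
            StringBuilder builder = new StringBuilder("[");
            while (feedbackType != 0) {
                int bit = 1 << Integer.numberOfTrailingZeros(feedbackType);
                feedbackType &= ~bit;
                String name = singleTypeToString(bit);
                if (name == null) {
                    continue; // Unknown bit: skip, no stray comma.
                }
                if (builder.length() > 1) {
                    builder.append(", ");
                }
                builder.append(name);
            }
            return builder.append("]").toString();
        }

        private static String singleTypeToString(int type) {
            switch (type) {
                case AccessibilityServiceInfo.FEEDBACK_SPOKEN:
                    return "FEEDBACK_SPOKEN";
                case AccessibilityServiceInfo.FEEDBACK_HAPTIC:
                    return "FEEDBACK_HAPTIC";
                case AccessibilityServiceInfo.FEEDBACK_AUDIBLE:
                    return "FEEDBACK_AUDIBLE";
                case AccessibilityServiceInfo.FEEDBACK_VISUAL:
                    return "FEEDBACK_VISUAL";
                case AccessibilityServiceInfo.FEEDBACK_GENERIC:
                    return "FEEDBACK_GENERIC";
                default:
                    return null;
            }
        }
    }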
Change-Id: I848c520cf7dae16dcd882f68275869c7ab656dd7
1. Now the thread is terminated in the disconnect() method,
and it is also made a daemon thread since it has no purpose
outside the context of the bridge client.
bug:6053108
Change-Id: Idc25373fddf501eda4f875fea3e944367e4f04bf