Process any pending menu invalidations on the animation tick, before
traversals are performed. Collapse multiple menu invalidations
together.
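A minimal sketch of this coalescing pattern, assuming a hypothetical
pending flag drained once per animation tick (all names here are
illustrative, not the actual implementation):

    // Collapse any number of invalidation requests into a single flag.
    private boolean mMenuInvalidationPending;

    void invalidateMenu() {
        // Multiple calls before the next tick collapse into one pending bit.
        mMenuInvalidationPending = true;
    }

    void doAnimationTick() {
        // Drain the pending invalidation before traversals are performed.
        if (mMenuInvalidationPending) {
            mMenuInvalidationPending = false;
            rebuildMenu(); // hypothetical helper doing the actual work
        }
        performTraversals();
    }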
Bug 7189372
Change-Id: I7a33ae9813980eb8fbcc958804de2c03328ecca8
Launcher occasionally crashes with a stack trace indicating that the memory
of a Layer object is corrupt. It is possible for us to delete a Layer
structure and then, briefly, use it to draw a DisplayList again before
that DisplayList gets recreated (without the layer that got deleted).
When this happens, if the memory got corrupted, it's possible to crash.
The fix is to add Layer to the other objects which we currently refcount
(bitmaps, shaders, etc.). We increment the refcount when creating a Layer,
and increment it again when it is referenced from a DisplayList. Instead
of deleting a Layer we decrement its refcount, and we also decrement it
when we clear a DisplayList that refers to it. When the refcount reaches
0, we delete the Layer.
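In Java-flavored form (the real Layer lives in native code; the names
here are illustrative), the lifecycle looks like:

    // Sketch of the refcount lifecycle described above.
    final class Layer {
        private int mRefCount;

        Layer() {
            mRefCount = 1; // creating the layer takes the first reference
        }

        void incrementRef() {
            mRefCount++;   // a DisplayList starts referring to this layer
        }

        void decrementRef() {
            if (--mRefCount == 0) {
                destroy(); // deletion happens only when nothing refers to the layer
            }
        }

        private void destroy() { /* free the layer's resources */ }
    }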
Issue #6994632 Native crash in launcher when trying to launch all apps screen
Change-Id: I0627be8d49bb2f9ba8d158a84b764bb4e7df934c
Limit music volume when headphones or a headset are inserted.
Display a warning message when the user tries to increase the volume
above a platform-specific level, and request user acknowledgement
before proceeding.
TODO: exact wording of the warning message must be defined by UX.
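A rough sketch of the gating logic, assuming a hypothetical safe-volume
config value and confirmation dialog (the real logic lives in the audio
service):

    // Block volume increases above a platform-specific limit until the
    // user acknowledges the warning while a headset is connected.
    void adjustMusicVolume(int step) {
        int newVolume = mCurrentVolume + step;
        if (mHeadsetConnected && newVolume > mSafeVolumeLimit // hypothetical config value
                && !mWarningAcknowledged) {
            showVolumeWarningDialog(); // hypothetical: asks the user to confirm
            return;                    // keep the volume unchanged until confirmed
        }
        mCurrentVolume = newVolume;
    }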
Change-Id: I00f429f602534c6d8783126b929371c4d432e6e2
Take touch point size into account during the detection of scaling
gestures. Average together the major/minor axes of the size.
Add a config resource for tuning the minimum span required to begin
(or end) a scale. This may be altered in a device-specific overlay
for devices that deviate too far from their assigned density bucket.
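Roughly, the span computation now folds in the averaged touch size along
these lines (a sketch, not the actual ScaleGestureDetector code;
pointerDistance and mMinSpan are illustrative names):

    // Include the averaged major/minor touch size in the span and gate
    // scale begin/end on the configured minimum span.
    void onTouchMove(MotionEvent event) {
        float touchSize = (event.getTouchMajor() + event.getTouchMinor()) / 2f;
        float span = pointerDistance(event) + touchSize; // hypothetical helper
        if (!mInProgress && span >= mMinSpan) {          // mMinSpan from the config resource
            mInProgress = true;  // begin the scale gesture
        } else if (mInProgress && span < mMinSpan) {
            mInProgress = false; // end the scale gesture
        }
    }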
Change-Id: I2986888e9427a7e4cb56717d59c4fa7858ba9ca7
1. Currently the system fires accessibility events to announce the
start and end of a touch exploration gesture. However, such a
gesture starts only after we have decided that the user is not
performing a gesture, which is determined by measuring the speed of
movement over a threshold distance. This allows an accessibility
service to provide some feedback to the user so they know that
they are touch exploring.
This change adds event types for the first and last touches
of the user. Note that the first touch does not coincide with
the start of a touch exploration gesture since some time
or distance has to pass before we know whether the user is exploring
or gesturing. However, it is very useful for an accessibility
service to know when the user starts to interact with the
touch screen so it can turn speech off, to name one
compelling use case.
This change also provides event types for the start and end
of gesture detection. If the user has moved over the threshold
distance with a speed greater than X, the system detects gestures.
It is useful for an accessibility service to know the beginning
and end of gesture detection so it can provide a distinct feedback
type for such a gesture, say haptic feedback or a sound
that differs from the one for touch exploration.
The main benefit of announcing these new events is that an
accessibility service can provide feedback for each touch
state, allowing the user to always know what they are doing.
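For example, a service could react to the new event types along these
lines (a sketch inside an AccessibilityService subclass; the feedback
calls are placeholders):

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        switch (event.getEventType()) {
            case AccessibilityEvent.TYPE_TOUCH_INTERACTION_START:
                stopSpeech();        // placeholder: the user began touching the screen
                break;
            case AccessibilityEvent.TYPE_TOUCH_INTERACTION_END:
                resumeSpeech();      // placeholder
                break;
            case AccessibilityEvent.TYPE_GESTURE_DETECTION_START:
                playGestureEarcon(); // placeholder: feedback distinct from exploration
                break;
            case AccessibilityEvent.TYPE_GESTURE_DETECTION_END:
                stopGestureEarcon(); // placeholder
                break;
        }
    }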
bug:7166935
Change-Id: I26270d774cc059cb921d6a4254bc0aab0530c1dd
Bug #7165793
A ViewRootImpl's root view can be null. Check for this condition to
prevent an NPE in invalidateWorld(). Other messages perform a similar
check to properly handle the case where mView == null.
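That is, a guard along these lines in the message handler (a sketch):

    case MSG_INVALIDATE_WORLD: {
        if (mView != null) {       // the root view can be null at this point
            invalidateWorld(mView);
        }
    } break;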
Change-Id: I5bcfc41c48a469d38b21be74df2f6c715b0f9352
1. For accessibility purposes it is important to be able to associate
a view that has content with a view that labels it. For example, if
an accessibility service knows that a TextView is associated with
an EditText, it can provide much richer feedback.
This change adds APIs for setting a view to be the label for another
one and setting the label for a view, i.e. the reverse association.
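Usage is along these lines (a sketch; the ids are hypothetical):

    // Mark a TextView as the label for an EditText, either from code
    // or via the android:labelFor XML attribute.
    TextView label = (TextView) findViewById(R.id.username_label);
    label.setLabelFor(R.id.username_edit);

An accessibility service can then follow the association in both
directions through the corresponding AccessibilityNodeInfo APIs.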
bug:5016937
Change-Id: I7b837265c5ed9302e3ce352396dc6e88413038b5
The mAttachInfo.mSystemUiVisibility value was changing in
View.dispatchAttachedToWindow but
mAttachInfo.mRecomputeGlobalAttributes was not being set.
Consequently ViewRootImpl.collectViewAttributes was returning
without updating subtreeSystemUiVisibility. This is fixed
by calling needGlobalAttributesUpdate in dispatchAttachedToWindow.
Within ViewRootImpl.collectViewAttributes, the assignment to
subtreeSystemUiVisibility was only being made if
mAttachInfo.mSystemUiVisibility changed within
collectViewAttributes. But mAttachInfo.mSystemUiVisibility
was changing outside of collectViewAttributes, in
dispatchAttachedToWindow, so subtreeSystemUiVisibility
was never updated. By looking for a mismatch between
subtreeSystemUiVisibility and mSystemUiVisibility,
subtreeSystemUiVisibility now gets reassigned whenever it is out of
sync.
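That is, a check roughly of this shape (a sketch, not the exact code):

    // Resync whenever the cached subtree value disagrees with the
    // attach-info value, regardless of where the latter changed.
    if (params.subtreeSystemUiVisibility != mAttachInfo.mSystemUiVisibility) {
        params.subtreeSystemUiVisibility = mAttachInfo.mSystemUiVisibility;
    }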
Fixes bug 7091817.
Change-Id: I1e97a7dec14dc9594876175ae26370fb9030a8a6
Seems like we should never be dispatching input events to windows
that currently don't have focus.
Change-Id: I524796df69f3fdda4147cb37e7b85b79bad88762
1. The way of computing the magnified region was simplistic and
incorrect: it ignored window layering, resulting in broken
behavior. For example, if the IME is up, everything else
is magnified but the IME is not. If the keyguard then appears and
covers the IME, the magnified region should expand, since the keyguard
completely covers the unmagnified IME window, but it did not.
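A layering-aware computation walks the windows from top to bottom and
only subtracts the part of a non-magnified window that is not already
covered from above, along these lines (a sketch; WindowInfo here is an
illustrative type with a bounds Rect and a magnifiable flag):

    Region computeMagnifiedRegion(Rect screenBounds, List<WindowInfo> windowsTopToBottom) {
        Region magnified = new Region(screenBounds);
        Region coveredAbove = new Region();
        for (WindowInfo w : windowsTopToBottom) {
            Region visible = new Region(w.bounds);
            visible.op(coveredAbove, Region.Op.DIFFERENCE); // part not hidden by windows above
            if (!w.magnifiable) {
                // Subtract only the visible part; an IME fully covered by
                // the keyguard no longer shrinks the magnified region.
                magnified.op(visible, Region.Op.DIFFERENCE);
            }
            coveredAbove.op(w.bounds, Region.Op.UNION);
        }
        return magnified;
    }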
bug:7138937
Change-Id: I21414635aefab700ce75d40f3e913c1472cba202
1. The implementation was not taking into account the transformation
matrices of the views.
2. The rectangle that was passed as an argument to
ViewParent.requestChildRectangleOnScreen was modified by
some implementations - now care is taken to prevent that.
3. The child's scroll was used when a rectangle in its coordinate
system was mapped to one in the parent's system. However, the
scroll reflects how much a parent has scrolled its descendants, so
the parent's scroll has to be used, not the child's.
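The corrected mapping looks roughly like this (a sketch; copying into a
local RectF also addresses point 2):

    // Map a rectangle from child coordinates into parent coordinates,
    // applying the child's transform and the PARENT's scroll.
    RectF mapped = new RectF(rect);        // copy so the caller's rect is untouched
    if (!child.getMatrix().isIdentity()) {
        child.getMatrix().mapRect(mapped); // apply the child's transformation matrix
    }
    mapped.offset(child.getLeft(), child.getTop());            // into parent coordinates
    mapped.offset(-parent.getScrollX(), -parent.getScrollY()); // the parent's scroll, not the child's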
bug:7139556
Change-Id: I5b09eb7f105047e95282f74308968d5465831c84
The window manager now has a facility to provide a full-screen
animation, which the activity manager uses every time a user
switch happens.
The current animation is just a simple dumb slide until we get
a design from UX.
Also some cleanup: moved the portrait task animations to the
default config so we always have an animation for them, and finally
got the java symbol stuff out of public.xml.
Change-Id: I726f77422b2ef5f2d98f961f8da003e045f0ebe8
We now support scaling the logical display to fit the
physical display, whatever size it is. So we can allow
adb shell am display-size to use more or less arbitrary sizes,
although we do enforce upper and lower bounds to
protect the user.
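For example (the size shown is arbitrary; reset restores the physical
display size):

    adb shell am display-size 1280x800
    adb shell am display-size reset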
Change-Id: I5fe6ba32ad1f9e4fbcd6915f7d36850b987bbcc0
Trackball and generic motion events now pass through the IME in case
it would like to handle them before passing them on to the view
hierarchy.
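An IME can intercept these along the following lines (a sketch inside an
InputMethodService subclass):

    @Override
    public boolean onTrackballEvent(MotionEvent event) {
        // Placeholder: an IME could consume trackball motion here.
        // Falling through to super passes the event on to the view hierarchy.
        return super.onTrackballEvent(event);
    }

    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        if ((event.getSource() & InputDevice.SOURCE_CLASS_POINTER) != 0) {
            // Placeholder: react to generic pointer motion over the keyboard.
        }
        return super.onGenericMotionEvent(event);
    }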
While I was at it, I also...
...fixed the documentation on InputMethodService.onKeyUp()
...added documentation to InputMethodService.onTrackballEvent()
...added trackball and generic motion events to the "input" command
...fixed input consistency verification involving ACTION_OUTSIDE
Bug: 7050005
Change-Id: I40ab68df4a9542af6df25de6ec2ec500e4c02902
Previously, to draw a layered view with a changed Paint object for the
drawLayer operation, you'd have to invalidate the parent view, to get the
native DisplayList to pick up the new Paint properties. This change adds
API and functionality so that the developer can call setLayerPaint(), which
does the proper invalidation (lightweight, doesn't cause redrawing the view).
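For example, animating a layer's alpha each frame without redrawing the
view (a sketch):

    final Paint paint = new Paint();
    view.setLayerType(View.LAYER_TYPE_HARDWARE, paint);
    ValueAnimator animator = ValueAnimator.ofInt(255, 0);
    animator.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
        @Override
        public void onAnimationUpdate(ValueAnimator animation) {
            paint.setAlpha((Integer) animation.getAnimatedValue());
            view.setLayerPaint(paint); // lightweight invalidation, no parent redraw
        }
    });
    animator.start();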
Issue #6923810 Make it easy to efficiently animate a layer's Paint
Change-Id: I7fea79788d50f6d9c86dd5e5b2a4490cb95142bb
1. Get the value of the content description via the getter method, since
there is nothing preventing developers from overriding that method to
return a desired value (they should not do that, but it is feasible).
bug:7079008
Change-Id: Iaf5848e9b065454ebfefccf685415fbf034ae475
This change is the initial check in of the screen magnification
feature. This feature enables magnification of the screen via
global gestures (assuming it has been enabled from settings)
to allow a low vision user to efficiently use an Android device.
Interaction model:
1. Triple tap toggles permanent screen magnification, which magnifies
the area around the location of the triple tap. One can think of the
location of the triple tap as the center of the magnified viewport.
For example, a triple tap when not magnified would magnify the screen
and leave it in a magnified state. Triple tapping when magnified would
clear magnification and leave the screen in an unmagnified state.
2. Triple tap and hold would magnify the screen if not magnified and enable
viewport dragging mode until the finger goes up. One can think of this
mode as a way to move the magnified viewport since the area around the
moving finger will be magnified to fit the screen. For example, if the
screen was not magnified and the user triple taps and holds, the screen
would magnify and the viewport would follow the user's finger. When the
finger goes up the screen zooms back out. If the same user interaction
is performed when the screen is magnified, the viewport movement is
the same, but when the finger goes up the screen stays magnified.
In other words, the initial magnified state is sticky.
3. Pinching with any number of additional fingers when viewport dragging
is enabled, i.e. after the user triple taps and holds, would adjust the
magnification scale, which will become the current default magnification
scale. The next time the user magnifies, the same magnification scale
will be used.
4. When in a permanent magnified state the user can use two or more fingers
to pan the viewport. Note that in this mode the content is panned as
opposed to the viewport dragging mode in which the viewport is moved.
5. When in a permanent magnified state the user can use three or more
fingers to change the magnification scale, which will become the current
default magnification scale. The next time the user magnifies, the same
magnification scale will be used.
6. The magnification scale will be persisted in settings and in the cloud.
Note: Since two fingers are used to pan the content in a permanently magnified
state, no other two-finger gestures in touch exploration or applications
will work unless the user zooms out to the normal state, where all gestures
work as expected. This is an intentional tradeoff to allow efficient
panning, since in a permanently magnified state this would be the dominant
action to be performed.
Design:
1. The window manager exposes APIs for setting an accessibility
transformation, which is a scale plus offsets for the X and Y axes
(see the sketch after this list). The window manager queries the window
policy for which windows will not be magnified. For example, the IME
windows and the navigation bar are not magnified, including windows
that are attached to them.
2. The accessibility features such as screen magnification and touch
exploration are now implemented as a sequence of transformations on the
event stream. The accessibility manager service may request each
of these features or both. The behavior of each feature does not change
based on whether the other one is enabled.
3. The screen magnifier keeps a viewport of the content that is magnified
which is surrounded by a glow in a magnified state. Interactions outside
of the viewport are delegated directly to the application without
interpretation. For example, a triple tap on the letter 'a' of the IME
would type three letters instead of toggling magnified state. The viewport
is updated on screen rotation and on window transitions. For example,
when the IME pops up the viewport shrinks.
4. The glow around the viewport is implemented as a special type of window
that does not take input focus, cannot be touched, and is laid out in
screen coordinates with width and height matching those of the screen.
When the magnified region changes, the root view of the window draws the
highlight but the size of the window does not change - unless a rotation
happens. All changes in the viewport size, and showing or hiding it, are
animated.
5. The viewport is encapsulated in a class that knows how to show,
hide, and resize the viewport - potentially animating that.
This class uses the new animation framework for animations.
6. The magnification is handled by a magnification controller that
keeps track of the current transformation applied to the screen
content and the desired one. If the two are not the same, it is the
responsibility of the magnification controller to reconcile them by
potentially animating the transition from one to the other.
7. A display content observer watches for window transitions, screen
rotations, and requests to make a rectangle on the screen visible. This
class is responsible for handling interesting state changes such
as changing the viewport bounds on IME pop up or screen rotation,
panning the content to make a requested rectangle visible on the
screen, etc.
8. To implement viewport updates, the window manager was updated with APIs
to watch for window transitions and for when a rectangle has been requested
on the screen. These APIs are protected by a signature level permission.
Also, a parcelable and poolable window info class has been added with
APIs for getting the window info given the window token. This enables
getting some useful information about a window. These APIs are also
signature protected.
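A sketch of the transformation from (1) and the reconciliation in (6):
the magnification state reduces to a scale plus X/Y offsets, and the
controller moves from the current values to the desired ones, animating
if requested (all names here are illustrative):

    // Current magnification spec: a uniform scale plus X/Y offsets.
    float mScale = 1f, mOffsetX, mOffsetY;

    Matrix getMagnificationTransform() {
        Matrix transform = new Matrix();
        transform.setScale(mScale, mScale);
        transform.postTranslate(mOffsetX, mOffsetY);
        return transform;
    }

    void setDesiredMagnification(float scale, float offsetX, float offsetY, boolean animate) {
        if (animate) {
            // Reconcile current vs. desired state over time; the real
            // controller uses the new animation framework for this.
            animateTo(scale, offsetX, offsetY); // hypothetical helper
        } else {
            mScale = scale; mOffsetX = offsetX; mOffsetY = offsetY;
        }
    }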
bug:6795382
Change-Id: Iec93da8bf6376beebbd4f5167ab7723dc7d9bd00