First step of improving app screen size compatibility mode. When
running in compat mode, an application's windows are scaled up on
the screen rather than being small with 1:1 pixels.
Currently we scale the application to fill the entire screen, so
we don't use an even pixel scaling. Though this may have some
negative impact on the appearance (it looks okay to me), it has a
big benefit of allowing us to now treat these apps as normal
full-screen apps and do the normal transition animations as you
move in, out, and around in them.
This introduces fun stuff in the input system to take care of
modifying pointer coordinates to account for the app window
surface scaling. The input dispatcher is told about the scale
that is being applied to each window and, when there is one,
adjusts pointer events appropriately as they are being sent
to the transport.
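For illustration, a minimal sketch of the coordinate adjustment
(the scale parameter is a stand-in for the per-window factor the
dispatcher is told about; this is not the actual dispatcher code):

    import android.view.MotionEvent;

    class WindowScaleSketch {
        // Map a pointer event from screen coordinates into the unscaled
        // coordinate space of a window whose surface is drawn "scale"
        // times larger on screen.
        static void unscalePointerEvent(MotionEvent event, float scale) {
            if (scale != 1.0f) {
                // The app expects coordinates divided back down by the
                // factor its surface is being scaled up by.
                event.setLocation(event.getX() / scale, event.getY() / scale);
            }
        }
    }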
Also modified is CompatibilityInfo, which has been greatly
simplified to not be so insane and incomprehensible. It is
now simple -- when constructed it determines if the given app
is compatible with the current screen size and density, and
that is that.
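To give a feel for the new shape (field and parameter names here
are hypothetical, not the actual CompatibilityInfo members):

    // Everything is decided once, at construction time.
    class CompatibilityInfoSketch {
        final boolean supportsScreen;  // app is compatible as-is
        final float applicationScale;  // 1.0f when compatible

        CompatibilityInfoSketch(boolean appClaimsSupport,
                int screenWidthPx, int compatWidthPx) {
            supportsScreen = appClaimsSupport;
            // An incompatible app's windows get scaled up to fill
            // the screen.
            applicationScale = supportsScreen
                    ? 1.0f : (float) screenWidthPx / compatWidthPx;
        }
    }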
There are new APIs on ActivityManagerService to put applications
that we would traditionally consider compatible with larger screens
in compatibility mode. This is the start of a facility to have
a UI affordance for a user to switch apps in and out of
compatibility.
To test switching of modes, there is a new variation of the "am"
command to do this: am screen-compat [on|off] [package]
This mode switching has the fundamentals of restarting activities
when it is changed, though the state still needs to be persisted
and the overall mode switch cleaned up.
For the few small apps I have tested, things mostly seem to be
working well. I know of one problem with the text selection
handles being drawn at the wrong position because at some point
the window offset is being scaled incorrectly. There are
probably other similar issues around the interaction between
two windows because the different window coordinate spaces are
done in a hacky way instead of being formally integrated into
the window manager layout process.
Change-Id: Ie038e3746b448135117bd860859d74e360938557
Add a method on ViewGroup to determine whether it supports scrolling.
This allows us to show the pressed feedback immediately in many cases,
improving responsiveness of buttons, etc.
This patch also lengthens the timeout in order to reduce flashes
when the user is scrolling.
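Assuming the new method is ViewGroup#shouldDelayChildPressedState()
(an inference from the description above, so treat the name as an
assumption), a container that never scrolls would use it like this:

    import android.content.Context;
    import android.view.ViewGroup;

    public class NonScrollingContainer extends ViewGroup {
        public NonScrollingContainer(Context context) {
            super(context);
        }

        @Override
        protected void onLayout(boolean changed, int l, int t, int r, int b) {
            // Child layout omitted; irrelevant to the example.
        }

        @Override
        public boolean shouldDelayChildPressedState() {
            // This container never scrolls, so children can show their
            // pressed state immediately instead of waiting out the tap
            // timeout that guards against scroll gestures.
            return false;
        }
    }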
Change-Id: Ieb91ae7a1f8e8f7e87448f2a730381a53947996f
Clicking on a node in hierarchyviewer1 and hierarchyviewer2 and then
clicking the new "Dump DisplayList" button will cause the display
list for the selected node (including its children) to be output into
logcat.
Change-Id: Iad05f5f6cca0f8b465dccd962b501dc18fe6e053
Clicking on a node in hierarchyviewer1 and hierarchyviewer2 and then
clicking the new "Dump DisplayList" button will cause the display
list for the selected node (including its children) to be output into
logcat.
Change-Id: Id32f62569ad1ab4d533bc62987f3a7390c1bb4e6
Bug #4343984
TextureView can be used to render media content (video, OpenGL,
RenderScript) inside a View.
The key difference with SurfaceView is that TextureView does
not create a new Surface. This gives the ability to seamlessly
transform, animate, fade, etc. a TextureView, which was hard
if not impossible to do with a SurfaceView.
A TextureView also interacts perfectly with ScrollView,
ListView, etc. It allows applications to embed media content
in a much more flexible way than before.
For instance, to render the camera preview at 50% opacity,
all you need to do is the following:
    mTextureView.setAlpha(0.5f);
    Camera c = Camera.open();
    try {
        c.setPreviewTexture(mTextureView.getSurfaceTexture());
        c.startPreview();
    } catch (IOException e) {
        // setPreviewTexture is declared to throw IOException.
        c.release();
    }
TextureView uses a SurfaceTexture to get the job done. More
APIs are required to make it easy to create OpenGL contexts
for a TextureView. It can currently be done with a bit of
JNI code.
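One caveat about the snippet above: getSurfaceTexture() only returns
a usable SurfaceTexture once the view is attached and its surface has
been created, so real code typically waits for the listener callback.
A sketch against the interface as it ended up in the public API:

    mTextureView.setSurfaceTextureListener(
            new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture surface,
                int width, int height) {
            // Safe to hook the camera up now.
            Camera c = Camera.open();
            try {
                c.setPreviewTexture(surface);
                c.startPreview();
            } catch (IOException e) {
                c.release();
            }
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture surface,
                int width, int height) { }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
            return true;
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture surface) { }
    });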
Change-Id: Iaa7953097ab5beb8437bcbbfa03b2df5b7f80cd7
In Honeycomb we introduced navigation controls in the status
bar, for xlarge devices without physical buttons. What about
phones? The status bar is pretty cramped already, and
besides, it's at the top of the display most of the time,
not at the bottom where your thumb is likely to be.
Enter the navigation bar. It's a new window type that
appears atop almost everything (including the keyguard); the
window manager subtracts its rectangle from the default
visible rectangle of other windows (including the status bar
and notification shade).
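In effect the layout pass does something like this to every other
window's available space (illustrative only, assuming a
bottom-anchored bar in portrait; the real logic lives in the window
manager's layout code):

    // Carve the navigation bar's strip out of the space other
    // windows may occupy.
    Rect visible = new Rect(0, 0, displayWidth, displayHeight);
    visible.bottom -= navBarHeight;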
However, it behaves (on phones) like the status bar in that
applications that request fullscreen windows can get access
to those pixels. Well, almost; they need cooperation from
the navigation bar implementation to make the navbar
disappear, just like the status bar.
The current SystemUI implementation of the navigation bar on
phones is still rough, but it has the basics:
+ back, home, and menu keys (NB: we're showing menu all the
time right now because checking the API level of the
package owning the top window is currently a poor
indicator of whether the app requires the menu key)
+ it tries to stick to the same physical end of the device,
regardless of device orientation (on a phone, this is
the strip of land closest to the microphone)
Change-Id: Ic613a3351220af0bbfbdef63e1d99cbefd5ed1c2
1. Added an Input Filter that interprets the touch screen motion
events to perform accessibility exploration. One finger explores.
Tapping within a given time and distance slop of the last explored
location does click and long press, respectively. Two fingers close
together and moving in the same direction drag. More than two
fingers, two fingers moving in different directions, or two fingers
too far apart are delegated to the view hierarchy. Non-moving
fingers (e.g. accidentally grabbing the device by the screen) are
ignored.
2. Added accessibility events for hover enter, hover exit, touch
exploration gesture start, and end. Accessibility hover events
are fired by the hover pipeline. An accessibility event is
dispatched up the view tree and the topmost view fires it.
Thus ancestors can augment the fired event. An accessibility
event has several records, and an ancestor can optionally
modify, delete, or add records to the event.
3. Added onPopulateAccessibilityEvent and refactored the existing
accessibility code to use it.
4. Added API for querying the currently enabled accessibility services
by feedback type.
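A sketch of the new hook from point 3 (the subclass here is made up
for illustration):

    import android.content.Context;
    import android.view.accessibility.AccessibilityEvent;
    import android.widget.TextView;

    public class StatusView extends TextView {
        public StatusView(Context context) {
            super(context);
        }

        @Override
        public void onPopulateAccessibilityEvent(AccessibilityEvent event) {
            super.onPopulateAccessibilityEvent(event);
            // Augment the event on its way up the tree; ancestors may
            // add further records of their own.
            event.getText().add("status: " + getText());
        }
    }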
Change-Id: Iea2258c07ffae9491071825d966dc453b07e5134
This reverts commit ac84d3ba81.
There seems to be a problem with this API change. Reverting for now to
fix the build.
Change-Id: Ifa7426b080651b59afbcec2d3ede09a3ec49644c
Not yet working, so turned off.
Also fix a bug where the display size configuration became inconsistent
after a configuration change -- we now figure out everything about the
display size when computing a new configuration.
1. Added an Input Filter that interprets the touch screen motion
events to perform accessibility exploration. One finger explores.
Tapping within a given time and distance slop of the last explored
location does click and long press, respectively. Two fingers close
together and moving in the same direction drag. More than two
fingers, two fingers moving in different directions, or two fingers
too far apart are delegated to the view hierarchy. Non-moving
fingers (e.g. accidentally grabbing the device by the screen) are
ignored.
2. Added accessibility events for hover enter, hover exit, touch
exploration gesture start, and end. Accessibility hover events
are fired by the hover pipeline. An accessibility event is
dispatched up the view tree and the topmost view fires it.
Thus ancestors can augment the fired event. An accessibility
event has several records, and an ancestor can optionally
modify, delete, or add records to the event.
3. Added onPopulateAccessibilityEvent and refactored the existing
accessibility code to use it.
4. Added API for querying the currently enabled accessibility services
by feedback type.
Change-Id: Iec03c6c3fe298de3f14cb6efdbb9b198cd531a0c
Applications now get the display size from the window manager. No
behavior should be changed yet, this is just prep for some real
changes.
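For reference, the application-side read looks like this (a sketch;
at this point the public calls are still Display#getWidth() and
getHeight(), they are simply now backed by the window manager):

    WindowManager wm = (WindowManager)
            context.getSystemService(Context.WINDOW_SERVICE);
    Display display = wm.getDefaultDisplay();
    int width = display.getWidth();
    int height = display.getHeight();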
Change-Id: I2958a6660895c1cba2b670509600014e55ee9273
This fixes spurious verification errors that would be generated
when a view declined an initial event such as ACTION_DOWN. Since
the view would not receive the rest of the event stream, it would
not see the corresponding ACTION_UP and the next ACTION_DOWN would
trigger a spurious verification error.
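For example, a view declines a stream by returning false for the
initial down, like this:

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            return false; // decline the stream; the UP goes elsewhere
        }
        return super.onTouchEvent(event);
    }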
Change-Id: I2386acf378cd1765d5446faed5ad9c6525f8b400
Added a new PointerIcon API (hidden for now) for loading
pointer icons.
Fixed a starvation problem in the native Looper's sendMessage
implementation which caused new messages to be posted ahead
of old messages sent with sendMessageDelayed.
Redesigned the touch pad gestures to be defined in terms of
more fluid finger / spot movements. The objective is to reinforce
the natural mapping between fingers and spots which means there
must not be any discontinuities in spot motion relative to
the fingers.
Removed the SpotController stub and folded its responsibilities
into PointerController.
Change-Id: I5126b1e69d95252fda7f2a684c9287e239a57163
Multiple threads are adding messages about the current
state of the views to the main looper. This can cause
onDetachedFromWindow to be posted on the looper before
onAttachedToWindow. This change will make sure to only
dispatch onDetachedFromWindow if we have previously
dispatched onAttachedToWindow.
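The guard amounts to this (names are illustrative, not the actual
fields):

    private boolean mAttached;

    void dispatchAttached() {
        mAttached = true;
        onAttachedToWindow();
    }

    void dispatchDetached() {
        if (!mAttached) {
            return; // never attached; drop the out-of-order detach
        }
        mAttached = false;
        onDetachedFromWindow();
    }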
Change-Id: Ibc7cbcafb098bc000d2ef5480d2110d3fff4d55a
Some drivers report individual finger updates one at a time
instead of all at once. When 10 fingers are down, this can
cause the framework to have to handle 10 times as many events
each with 10 times as much data. Applications like
PointerLocation would get significantly bogged down by all
of the redundant samples.
This change coalesces samples that are closely spaced in time,
before they are dispatched, as part of the motion event batching
protocol.
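The policy, roughly (the time window is a made-up value for
illustration; the real threshold lives in the native batching code):

    // A sample arriving within a small window of the previous one is
    // folded into it instead of being appended, so a burst of
    // per-finger updates collapses into one batched sample.
    static final long COALESCE_WINDOW_NS = 2000000L; // ~2 ms, hypothetical

    static boolean shouldCoalesce(long previousEventTimeNs, long eventTimeNs) {
        return eventTimeNs - previousEventTimeNs < COALESCE_WINDOW_NS;
    }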
Increased the size of the InputChannel shared memory buffer so
that applications can catch up faster if they accumulate a
backlog of samples.
Added logging code to help measure input dispatch and drawing
latency issues in the view hierarchy. See ViewDebug.DEBUG_LATENCY.
Change-Id: Ia5898f781f19901d2225c529a910c32bdf4f504f
Alt-TAB should have different semantics from the APP_SWITCH key
or long-press on HOME. Accordingly, remove the fallback action
for Alt-TAB and initiate the task switching behavior directly
in the policy.
Modified RecentApplicationsDialog to be more precise about the
initial modifier keys that it treats as holding the dialog open.
The dialog is now dismissed by a second press on the APP_SWITCH
key or by a second long press on HOME.
Change-Id: Idf4d803f51103819057cb655ff3b770b7729e4be
The idea is to assist with debugging by identifying cases in which
the input event stream is corrupted.
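The checks are simple state-machine invariants along these lines
(illustrative, not the verifier's actual code):

    private static final String TAG = "InputVerify";
    private boolean mTouchDown;

    void verify(MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
                if (mTouchDown) Log.w(TAG, "two downs without an up");
                mTouchDown = true;
                break;
            case MotionEvent.ACTION_UP:
                if (!mTouchDown) Log.w(TAG, "up without a down");
                mTouchDown = false;
                break;
        }
    }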
Change-Id: I0a00e52bbe2716be1b3dfc7c02a754492d8e7f1f
This patch adds a mechanism for capturing, filtering, transforming
and injecting input events at a very low level before the input
dispatcher attempts to deliver them to applications. At this time,
the mechanism is only intended to be used by the accessibility
system to implement built-in system-level accessibility affordances.
The accessibility input filter is currently just a stub.
It logs the input events it receives and reinjects them unchanged,
except that it transforms KEYCODE_Q into KEYCODE_Z.
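The stub's transform amounts to this (a sketch; the real filter runs
below the dispatcher and reinjects through its own plumbing):

    KeyEvent filterKey(KeyEvent event) {
        if (event.getKeyCode() == KeyEvent.KEYCODE_Q) {
            // Rebuild the event with Z in place of Q.
            return new KeyEvent(event.getDownTime(), event.getEventTime(),
                    event.getAction(), KeyEvent.KEYCODE_Z,
                    event.getRepeatCount());
        }
        return event; // everything else passes through unchanged
    }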
Currently, the accessibility input filter is installed whenever
accessibility is enabled. We'll probably want to change that
so it only enables the input filter when a screen reader is
installed and we want touch exploration.
Change-Id: I35764fdf75522b69d09ebd78c9766eb7593c1afe