UI Improvements
From Openmoko
Obviously the tools are out in the wild to build interfaces that could rival (or, IMO, surpass) anything Apple comes up with. We just need to organize this stuff. This would need hardware that can support dynamic interfaces. I can help here, too. sean@openmoko.com
It has been said that the lack of a multitouch screen leaves less freedom for innovation. Maybe we could get something out of our touchscreen drivers anyway.
Why? Think of Apple's two-finger scrolling on MacBook touchpads (which aren't multitouch; it's a clever driver hack, iScroll2):
To scroll, just place two fingers on your trackpad instead of one. Both fingers need to be placed next to each other horizontally (not vertically, the trackpad cannot detect that). Some people get better results with their finger spaced a little bit apart, while others prefer having the fingers right next to each other.
iScroll2 provides two scrolling modes: Linear and circular scrolling.
For linear scrolling, move the two fingers up/down or left/right in a straight line, respectively, to scroll in that direction.
Circular scrolling works in a way similar to the iPod's scroll wheel: Move the two fingers in a circle to scroll up or down, depending on whether you move in a clockwise or counterclockwise direction.
Maybe we can port, adapt, or take inspiration from this Macintosh driver.
When we want to navigate files, MP3s in an MP3 player, etc., every control the application needs is a button. What about looking at polygons and polyhedra? We could find one for each usage, with as many surrounding subzones as may be used as controls. Ex: you need 5 buttons, so take a pentagon with 5 zones all around. That way, the layout is always optimized...
http://en.wikipedia.org/wiki/Polyhedra http://en.wikipedia.org/wiki/List_of_uniform_polyhedra
We can't improve the human-machine interface without knowing the strengths and weaknesses of our hardware; some apparent weaknesses might turn out to be exploitable features, and some strengths limiting constraints.
Question:
What exactly does the touchscreen see when you touch the screen with two fingers at the same time, when you move them, when you move only one of the two, etc.? I'm also interested in knowing how precise the touchscreen is (e.g. refresh rate, possible pressure indication, ...).
Answer:
Conclusions:
Question:
What does one see when sliding two fingers in parallel up(L,R)->down(L,R)?
Answer:
Question:
What does one see when sliding two fingers towards each other (= the zoom effect on the iPhone)?
Answer:
It would be good to report what performance the current hardware allows:
Please report here your impressions.
If we want to add eye candy & usability to the UI (such as smooth, realistic list scrolling, as seen in Apple's iPhone demo on contact lists), we'll need a physics engine, so that movements & animations aren't all linear.
The most common technique for calculating trajectories and systems of related geometrical objects seems to be Verlet integration; it is an alternative to Euler's integration method, using a fast approximation.
We may have no need for such a mathematical method at first, but perhaps there are other use cases. For instance, it may be useful for gesture recognition (I'm not aware whether existing gesture recognition engines measure speed, acceleration...).
The akamaru library is the code behind kiba-dock's fun, dynamic behaviour. Its dependencies are light (it needs just GLib). It takes elasticity, friction and gravity into account.
If you want to take a quick look at the code: svn co http://svn.kiba-dock.org/akamaru/ akamaru
The only application (AFAIK) using this library is kiba-dock, a *fun* app launcher, but we may find other uses for it in the future.
As suggested on the mailing list, it is mostly overkill for the uses we intend, but the library may already be optimized, and the API may save some time too. Furthermore, as the French saying goes, he who can do more can do less.
There's an ongoing Verlet integration implementation in the e17 project (by rephorm), see http://rephorm.com/news/tag/physics , so we may see some UI physics integrated into e17 someday.
http://www.robertpenner.com/easing/
See the demo: it implements non-linear behaviour (ActionScript), but may give inspiration.
If we got it right, when touching the screen in a second place, the cursor oscillates between the two points depending on the relative pressure distribution. Using averaging algorithms, we may have the opportunity to detect peculiar behaviours.
We need raw data (x,y,t) from the real hardware for the following behaviours:
When touching the screen with two fingers at the same time, we don't necessarily see the two points, but we may be able to extrapolate the position of the second one. This solution can add features, but will probably be a little erratic...
The warping can be used along the 4 diagonals, plus the up/down/left/right cross:
 ----------------   ----------------   ----------------   ----------------
 - 1            -   - 1          2 -   -      1       -   -            2 -
 -              -   -              -   -              -   -              -
 -              -   -              -   -              -   -              -
 -            2 -   -              -   -      2       -   - 1            -
 ----------------   ----------------   ----------------   ----------------
We need some sort of T9. When typing a word, the first letter determines the next possible ones. Therefore, we may fade out the less probable following letters. Ex: type an L; there's no way an X follows...
Hints:
The most critical point is the initial layout of the letters, before any letter is typed. We may also want to use a horizontal two-part keyboard (holding the Neo in both hands, like a PSP...).
As an e17 user for quite some time, I have always been astonished by its level of efficiency, elegance and eye candy without using any fancy compositing X features: e17 will work on any X server. It seems that using GTK+ for the base UI is a problem: slow, sluggish... So, why not use an EFL-based UI?
These libs are quite embedded-oriented and could be a great match for Openmoko. Some people have already worked on the subject, with mixed results, but these libs definitely work on handheld devices.
There are 2 ways to go:
In fact, I started thinking about this some time ago (2006-04), and some other people might be interested in participating. Some have already started, in the mkezx project for instance.
So what needs to be done?
This is definitely a direction worth examining further.
EEM is a proof of concept: a simple UI showing an app launcher/menu on an embedded device.
Links:
It has already been partially ported on:
In developing DR17 it was made clear that we needed an entirely new set of libraries and tools. Raster had a bold vision of what was possible and where he wanted the next release to go, starting with Imlib2 and EVAS, and eventually growing into new libraries largely based on or around EVAS. It became clear that the usefulness of these libraries and tools went far beyond the DR17 release itself, just as Imlib did in DR16. Thus the collective library back-end of DR17 was given the independent title: the Enlightenment Foundation Libraries, or EFL for short.
The EFL contains solutions for almost any graphical interface task, far beyond just rendering images. EVAS provides a highly optimized canvas library. Ecore provides a simple and modular abstraction interface and advanced event management including timers. Etox provides a complex text layout library complete with theme-able text stylization capabilities (previously Estyle). EDB provides a compact database format for intuitive and easy configuration management, including the storing of binaries. EET provides an integrated and flexible container that ends the traditions of providing themes in tarballs. Edje provides a revolutionary library and tool set for completely abstracting application interfaces from their code, including a complex and flexible method of designing interfaces. EWL provides a complete widget library built on all the other components of the EFL. And more!
From: http://www.enlightenment.org/
http://www.rasterman.com/files/efl.png
All descriptions below come from http://www.enlightenment.org/Enlightenment/DR17/ .
Evas is a very core part of the EFL. It is the rendering and display management engine that sits under anything you see on a screen. It does all the work of managing display objects, their state, state changes, layering, rendering and scaling, image loading, text rendering, update handling, optimizing the display pipeline to avoid work and more. It does a lot of the grunt work of display, and is portable beyond X. It even runs in the framebuffer directly without needing X, under Trolltech's Qtopia, on DirectFB, can render into a memory buffer, and use OpenGL to accelerate rendering. It is extremely flexible and very powerful, saving a lot of time writing repetitive drawing routines that often end up not performing optimally as to do so takes a lot of time, care and effort that most programmers would not want to spend, because it distracts from the important work of making their application.
Evas on embedded
But despite all of the things that Evas can do, it is not very large. It has been kept small and lean to make it viable for use on NOT just heavy-weight desktops, but also on limited resource devices such as PDA's, mobile phones and Smart phones, Stereo systems, DVD Players, PVR/DVR Systems and more. It has already been ported to Mobile phones and PDA's, PVR/DVR systems and has proved itself capable of driving these displays very nicely with beautiful effects. The developer does not have to change how they code for a device or their desktop as the API and rendering are the same, so no special development environments or emulators are needed. This saves time and effort, allowing desktop and device code to be shared and maintained easily. Also since Evas hides the details of the devices display format, and virtualizes the display at an object level, the programmer doesn't need to care how to render things. They can use a standard system that is universal across all instances of Evas.
Evas provides alpha blending, high quality scaling of images, anti-aliased truetype text, gradients, lines, polygons and more. The list of supported objects is growing, and can be extended via smart objects. It has an interface mechanism to allow for video data to be efficiently handled (which is what Emotion exploits) and more.
Edje is one of the more unique parts of EFL, combining many things that Shockwave / FLASH can do with some things it can't, but instead of being designed as a player, it is designed as a slave library to be used by an application to enhance the applications content and display via external compressed data files. It is being expanded continuously, and thanks to its clean design is easy to improve. This is the theme engine behind Enlightenment 0.17 and beyond and at last formalizes Enlightenment themes in a simple and consistent manner.
A Quick list of its features:
* Scalable bitmap images
* Highly compressed in-lined images
* Lossless and lossy compression with or without alpha channel
* In-lined compressed truetype fonts
* Multiple inbuilt font effects
* Automatic font sizing based on size or area
* Text compression and ellipsis based cutting
* Rectangle objects
* Configurable color scheme system
* Ability to embed Edje objects within Edje objects
* Embryo scripting language for complex interactions
* Sand-boxed scripts so they cannot do much damage
* Alpha blending
* Completely scalable and re-sizable layout and interface metrics
* Completely calculated tweened animation for ultra-smooth display
Compositing finally seems to make zooming interfaces a reality.
Well, considering recent changes in desktop applications, OpenGL has a definite future. For instance, Exposé (be it Apple's or Beryl's) is a very interesting and usable feature. Compositing allows the physics metaphor: the human brain doesn't like gaps/jumps (for instance while scrolling text), it needs continuity, which OpenGL can provide. When you look at Apple's iPhone prototype, it's not just eye candy; it's maybe the most natural/human way of navigating, because it's sufficiently realistic for the brain to forget the non-physical nature of what's on screen.
So, OpenGL hardware will be needed in a more or less distant hardware revision, for 100% fluid operation.
How cool would solid-based interfaces (polygons, as seen in Beryl) be? :) Real ZUIs...
Clutter, an openedhand project, is an open source software library for creating fast, visually rich graphical user interfaces. The most obvious example of potential usage is in media center type applications. We hope however it can be used for a lot more.
Clutter uses OpenGL (and soon optionally OpenGL ES) for rendering but with an API which hides the underlying GL complexity from the developer. The Clutter API is intended to be easy to use, efficient and flexible.
From the wikipedia article, OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL 3D graphics API designed for embedded devices such as mobile phones, PDAs, and video game consoles.
Please add here any idea that seems of relevance.
Take an item list (ex: address book), print it on a ribbon of paper, and glue it onto a wheel (on the tire). You're looking at the front of it, so when you want to go from A to Z, you touch the wheel and drag it up. When you let the wheel go, it keeps going, carried by its inertia. Stop the wheel when you've got your contact. Got the idea? That's why we may speak of an "infinite wheel", so that the surface is flat. For our case here, we always want to display square content, so the n-sided uniform prism analogy is mathematically more exact.
Important features:
We can add "parallel wheels", symbolizing different sorting methods. A long slide to the left/right switches to a different wheel = item organization.
Effect: scroll in an inverted/negated fashion (slide down = scroll up, slide up = scroll down)
When finger is released (i.e. touchscreen doesn't detect any press):
if (last_speed_seen > 0) then
    keep this speed and acceleration, with friction
else
    stop scrolling
Scrolling here is seen as unidimensional, but can apply to bidimensional situations (ex: zoomed image) too
Having a scroll that isn't a 1:1 map to the user's action isn't hard. It's just an extra calculation in the scroll code.
<---- Where is the scroll code? :)
I'm wondering what layer of Openmoko has to be hacked, i.e. whether working at the Openmoko layer allows enough possibilities for this; if I'm not mistaken, this is part of libmokoui, but I'm afraid that patching GTK+ itself would be needed. Working at the lower level would apply the changes to every application, not only Openmoko's.
TODO:
The same, but for the wheel. It can be very quick to do: you don't have a 1:1 mapping anymore but, for example, 1/4 wheel turn = 1 item. It's geared down, but has inertia.
As discussed on community list:
If you hold down one finger and tap the other, the cursor pops over and back again. If you keep your second finger touching, the cursor follows it. When you release it, the cursor goes back to the first finger's position. This could be a way to set a bounding box or turn on a mode. So the second finger can do something like rotating around the first, or increasing or lowering the distance to the first.
* slide your righthand finger down, it scrolls up
* slide your righthand finger up, it scrolls down
* slide it left, next page/item
* slide it right, previous page/item
* do a circle: rotation
* narrow towards the black circle: zoom -
* go away: zoom +
The advantages of using simple origin-driven cursor warping as the double-touch detection criterion are that:
Doable, but tricky...
which is waaaaay out of reach for us for now
Obviously, real multitouch wouldn't leave as much room for innovation.