Lots, when you consider that almost everyone on every platform has a PDF reader to work with that signature, whereas not many people have GPG, or GPG tools that are friendly to non-technical users.
You might consider that some people won't be able to use this calendar as-is; it isn't accessible. Consider http://www.456bereastreet.com/archive/201302/making_elements... and perhaps change the spans to native HTML elements with visual and keyboard focus. Thanks for sharing!
Still not seeing any technical details about whether this is a native implementation of Android or one virtualized on QNX. One of the key issues with their past approach was the lack of any accessibility at the system API level. I wonder if they will block or interfere with native Android apps' accessibility.
Do you think teams will be able to leverage the same spatial visual information to provide, or pair with, spatial audio technologies?
A lot of game libraries assume and need 3D rendering in order to provide proper sound occlusion (blocking by objects) and in-scene spatial depth. Having that concept considered in these experiments could provide novel, consistent ways for people who are blind or have low vision to gain awareness of a scene (if audio can be attached by design, or even by user preference), and could augment usability and end-user enjoyment.
I've very very recently implemented some positional audio. Combined with animated elements, it's very convincing.
One thing I think might be problematic is the idea of constraining audio to enclosed spaces such as rooms.
For example, with the Web Audio API, it's trivial to make the sound fade off equally in all directions in a circular area around the emitter. How you "contain" sound within a quadrilateral space is something I can't yet see how to implement.
If anyone has any prior art or recommended reading, I'd happily digest.
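For what it's worth, one way I could imagine approximating containment to a rectangular space (a sketch of my own, assuming an axis-aligned room and an inverse-distance rolloff; none of these names are Web Audio API calls): keep the gain at 1 inside the room, and outside it attenuate by the listener's distance to the nearest point of the rectangle.

```typescript
// Sketch: attenuate a sound "contained" in an axis-aligned rectangular room.
// Inside the room the gain is 1; outside, it falls off with the listener's
// distance to the nearest point of the rectangle (inverse-distance model).
// All names here are hypothetical, not part of the Web Audio API.

interface Rect { x: number; y: number; w: number; h: number; }

function clamp(v: number, lo: number, hi: number): number {
  return Math.min(Math.max(v, lo), hi);
}

// Distance from a listener point to the nearest point of the rectangle
// (zero when the listener is inside it).
function distanceToRect(px: number, py: number, r: Rect): number {
  const nx = clamp(px, r.x, r.x + r.w);
  const ny = clamp(py, r.y, r.y + r.h);
  return Math.hypot(px - nx, py - ny);
}

// Inverse-distance gain: 1 inside the room, fading outside.
// `rolloff` controls how quickly the sound dies past the walls.
function roomGain(px: number, py: number, room: Rect, rolloff = 1): number {
  const d = distanceToRect(px, py, room);
  return 1 / (1 + rolloff * d);
}
```

You'd then drive a GainNode's `gain.value` with `roomGain` each frame as the listener moves; the clamp-to-shape trick generalizes to any convex quadrilateral if you swap `distanceToRect` for a distance-to-polygon function.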
Oculus's docs on "Introduction to Virtual Reality Audio" may be of some use[1], specifically the section on environmental modeling[2].
The general idea is to model reverberations and reflections given the distance of a point sound source to the nearest flat surface, giving the impression of a "contained" sound.
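The mirror-image trick behind that can be sketched like so (my own simplification, not Oculus's implementation: a single infinite plane, one first-order reflection, inverse-distance gain):

```typescript
// Sketch of a single early reflection off the nearest flat surface.
// The wall is an infinite plane with unit normal n and offset d, so
// points p on the plane satisfy n·p + d = 0. Mirroring the source
// across the wall gives an "image source" whose straight-line path to
// the listener has the same length as the reflected path.

type Vec3 = [number, number, number];

function dot(a: Vec3, b: Vec3): number {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Mirror a point across the plane n·p + d = 0 (n must be unit length).
function mirrorAcrossPlane(p: Vec3, n: Vec3, d: number): Vec3 {
  const dist = dot(n, p) + d; // signed distance to the plane
  return [p[0] - 2 * dist * n[0], p[1] - 2 * dist * n[1], p[2] - 2 * dist * n[2]];
}

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

// Delay (seconds) and inverse-distance gain of the first reflection:
// the reflected path length equals listener -> mirrored source.
function firstReflection(source: Vec3, listener: Vec3, n: Vec3, d: number,
                         speedOfSound = 343): { delay: number; gain: number } {
  const image = mirrorAcrossPlane(source, n, d);
  const pathLen = distance(listener, image);
  return { delay: pathLen / speedOfSound, gain: 1 / Math.max(pathLen, 1) };
}
```

Feeding that delay and gain into a DelayNode/GainNode pair per wall, mixed with the direct path, is roughly what gives the "contained in a room" impression.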
I hope they find a way to put this in a portable device for end users. Here in DC, I've suspected the air in the Metro stations (the underground train system) is many times worse than the above-ground sampling suggests.
Just wondering: if this library provided keyboard-only manipulation, would you have expected it to provide mouse events?
I'm biased toward the keyboarding-should-be-required perspective: I deal with accessibility day in and day out, and a library such as this had to be ripped out of the organization, simply because junior developers would prototype and ship without thinking about the required use case of keyboard access.
Open access is a recurring problem, addressed to some degree with presidential orders. You might be interested in FASTR (Fair Access to Science and Technology Research Act): https://plus.google.com/u/0/+PeterSuber/posts/G2uebVhVtBv
In my line of accessibility work, I find QA fills the role of testing to educate. If the engineers don't have the experience, looping issues into the backlog gives them the experience to build up the skill set. They might have had verified experience before hiring, but team collaboration helps mature new people and smooth out the rough edges of a dev who might be awesome in one particular area but meh in others.
I suspect this could be the case for any subject matter expert whose skill set is viewed as fringe, or where the team isn't expected to have that exact knowledge before hire.
I'm not confused as to why, and there are pretty good summaries out there: http://josefprusa.cz/open-hardware-meaning/ - it all comes down to trying to market with, and mangle, the definition of open source hardware, IMHO.
I wish they had better explored "2. People Have Limitations" via mental models, and not been biased by "(as humans are 90% visual creatures)." Separating visuals from content in design can strengthen not only the user experience, but also make accessibility elegant instead of merely possible.