Multimodal semantic text editing

MUSTE is a five-year research project financed by Vetenskapsrådet (the Swedish Research Council), running 2015--2020 and led by Peter Ljunglöf. The project funds one PhD student and conducts innovative research on text editing in settings where a physical keyboard is not available or not desirable. MUSTE is a continuation of the GRASP project.

There is a 10-page project description in PDF format, but here is a shorter abstract for the lazy:

Project abstract

Over the last 10--20 years, several new modes of human-computer interaction have emerged, such as speech recognition, touch screens and eye tracking. During the same period, the amount of text interaction has increased enormously; e.g., over 50 billion text messages are sent every day, counting only SMS and chat clients. But the full potential of the new modalities remains largely unexploited: text authoring is still viewed conceptually as an incremental left-to-right process, in which a text is produced by adding new words at the end. This view has some problems, especially when it comes to new modalities such as touch screens.

The basic problem that we want to solve in this project is how to reduce the cognitive load when authoring and editing text on devices with non-traditional input modalities. Our approach is that the user should be able to modify any word or phrase in the text at any time, and the system should be helpful and suggest good alternative formulations.
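To make the idea concrete, here is a minimal sketch (not MUSTE's actual implementation) of the interaction described above: the user selects any word in the text, and the system offers alternative formulations for that position. The hand-made synonym table and the function names are purely illustrative stand-ins for a real grammar-based or statistical suggestion component.

```python
# Illustrative sketch of modify-anywhere editing (hypothetical, not MUSTE code).
# A real system would derive suggestions from a grammar or language model;
# here a tiny synonym table plays that role.
SYNONYMS = {
    "big": ["large", "huge", "enormous"],
    "good": ["fine", "great", "helpful"],
}

def suggest(tokens, index):
    """Return candidate replacements for the word at position `index`."""
    return SYNONYMS.get(tokens[index].lower(), [])

def replace(tokens, index, choice):
    """Return a new token list with the word at `index` replaced by `choice`."""
    return tokens[:index] + [choice] + tokens[index + 1:]

tokens = ["a", "big", "dog"]
print(suggest(tokens, 1))           # ['large', 'huge', 'enormous']
print(replace(tokens, 1, "huge"))   # ['a', 'huge', 'dog']
```

The key design point is that editing is not restricted to the end of the text: any position can be selected and rewritten at any time, with the system proposing well-formed alternatives.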


Currently there is a demo of a language-learning game, hosted on GitHub: