It is easy to imagine machines that can communicate using spoken natural language. Constructing such machines is more difficult. The methods available for developing interactive speech applications are costly, and current research focuses mainly on producing more sophisticated systems rather than on making them easier to build. This thesis describes how components used in interactive speech applications can be automatically derived from natural language grammars written in Grammatical Framework (GF). Using techniques borrowed from programming language implementation, we can generate speech recognition language models, multimodal fusion and fission components, and support code for abstract syntax transformations. By generating these components automatically, we reduce duplicated work, ensure consistency between components, make it easier to build multilingual systems, improve linguistic quality, enable re-use across system domains, and make systems more portable.
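To make the idea concrete, the sketch below shows the kind of GF grammar from which such components could be derived. The domain (a toy food-ordering grammar with the invented names Food, Order, Item, order, pizza, and coffee) is hypothetical and chosen only for illustration: an abstract syntax captures the language-independent meaning of utterances, and a concrete syntax linearizes it into English strings.

    -- Abstract syntax: the language-independent domain semantics.
    abstract Food = {
      flags startcat = Order ;
      cat Order ; Item ;
      fun
        order : Item -> Order ;   -- ordering a single item
        pizza, coffee : Item ;    -- the items in this toy domain
    }

    -- English concrete syntax: how the abstract rules are spoken.
    concrete FoodEng of Food = {
      lincat Order, Item = Str ;
      lin
        order x = "I" ++ "want" ++ x ;   -- "I want a pizza"
        pizza  = "a" ++ "pizza" ;
        coffee = "a" ++ "coffee" ;
    }

Because the abstract syntax is shared by all concrete syntaxes, a speech recognition language model derived from FoodEng covers exactly the utterances the application can interpret, and adding, say, a Swedish concrete syntax extends the system to a new language without touching the dialogue logic; this illustrates the consistency and multilinguality benefits claimed above.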