Traditionally, autonomous systems have been designed to automate tasks for a set of predefined objectives (e.g., reducing energy consumption and minimizing cost). These objectives often need to be prioritized and traded off against one another. What a "good" trade-off looks like depends on the system's context and on the changing preferences of its human stakeholders. It is therefore unrealistic, and undesirable, to assume that autonomous systems can set quality priorities without interacting with humans. In this talk, I present research on how humans can be kept "on the loop" when working with autonomous systems and their quality trade-offs. I also discuss how trade-off explanation and decision-making techniques can be applied to security and privacy.