Recently, we have seen a variety of emerging programming models targeting the next generation of HPC hardware, known as extreme-scale computing systems. Extreme-scale runtime systems must address not only the problems presented by supporting new hardware, but also issues of scalability—whether in small-scale embedded environments or large-scale supercomputing clusters. While a runtime may provide all of the functionality necessary to write software for an extreme-scale system, runtime APIs are rarely a productive interface for application programmers. In this thesis, we present a set of abstractions, designed to be implemented on top of an extreme-scale runtime, that increase programmability and productivity for software developers. These abstractions include support for blocking calls in a fine-grained task-based runtime, a data structure representation for relocatable data chunks, and a hierarchical model for the design and analysis of macro-dataflow applications. We discuss and demonstrate the tradeoffs among implementation choices for these abstractions, since the specific hardware and software details of an application deployment may dictate the ideal method of implementing a given abstraction.