A key challenge for reinforcement learning (RL) is learning in environments with sparse extrinsic rewards. In contrast to current RL methods, humans are able to learn new skills with little or no reward by using various forms of intrinsic motivation. We propose AMIGO, a novel agent that incorporates, as a form of meta-learning, a goal-generating teacher which proposes Adversarially Motivated Intrinsic GOals to train a goal-conditioned “student” policy in the absence of (or alongside) environment reward. Specifically, through a simple but effective “constructively adversarial” objective, the teacher learns to propose increasingly challenging, yet achievable, goals that allow the student to learn general skills for acting in a new environment, independent of the task to be solved. We show that our method generates a natural curriculum of self-proposed goals which ultimately allows the agent to solve challenging procedurally generated tasks on which other forms of intrinsic motivation and state-of-the-art RL methods fail.
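The “constructively adversarial” teacher objective can be sketched as a simple reward rule. The version below is an illustrative toy formulation, not the paper’s exact objective: the function name `teacher_reward`, the difficulty threshold `t_star`, and the reward magnitudes `alpha` and `beta` are all assumed names introduced for exposition.

```python
def teacher_reward(reached_goal: bool, steps_taken: int, t_star: int,
                   alpha: float = 1.0, beta: float = 1.0) -> float:
    """Toy sketch of a 'constructively adversarial' teacher reward.

    The teacher is rewarded only when the student reaches the proposed
    goal after taking at least t_star steps; goals that are solved too
    quickly (too easy) or never solved (too hard) are penalized. This
    pushes the teacher toward goals just beyond the student's current
    ability, yielding a natural curriculum.
    """
    if reached_goal and steps_taken >= t_star:
        return alpha   # challenging yet achievable goal
    return -beta       # goal was too easy or unachievable
```

A curriculum then emerges by raising `t_star` over training, for example whenever the student’s recent success rate on proposed goals exceeds a threshold, so that goals that are currently easy stop paying off for the teacher.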