

Poster

StrokeNet: A Neural Painting Environment

Ningyuan Zheng · Yf Jiang · Dingjiang Huang

Great Hall BC #62

Keywords: [ model based ] [ differentiable model ] [ image generation ] [ reinforcement learning ] [ deep learning ]


Abstract:

Image-generating models have seen tremendous success in recent years. Generating images with a neural network is usually pixel-based, which is fundamentally different from how humans create artwork using brushes. To imitate human drawing, interaction between the agent and the environment is required to allow trial and error. However, the environment is usually non-differentiable, leading to slow convergence and massive computation. In this paper we address the discrete nature of the software environment with an intermediate, differentiable simulation. We present StrokeNet, a novel model in which the agent is trained on a well-crafted neural approximation of the painting environment. With this approach, our agent learns to write characters such as MNIST digits faster than reinforcement learning approaches, in an unsupervised manner. Our primary contribution is the neural simulation of a real-world environment. Furthermore, the agent trained with the emulated environment is able to directly transfer its skills to real-world software.
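The core idea is that a frozen, pre-trained neural renderer stands in for the non-differentiable painting software, so a reconstruction loss can backpropagate through it into the stroke-producing agent. The sketch below is a minimal illustration of that training setup, not the authors' implementation: the class names StrokeGenerator and NeuralRenderer, the stroke parameterization (points of x, y, pressure), and all layer sizes are assumptions for illustration.

```python
# Minimal sketch (assumed architecture): an agent maps a target image to
# stroke parameters, and a frozen neural renderer approximates the painting
# environment differentiably, letting reconstruction loss reach the agent.
import torch
import torch.nn as nn

class StrokeGenerator(nn.Module):
    """Agent: encodes a 64x64 target image into one stroke
    (assumed parameterization: n_points of (x, y, pressure))."""
    def __init__(self, n_points=16):
        super().__init__()
        self.n_points = n_points
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, n_points * 3), nn.Tanh(),
        )

    def forward(self, target):
        return self.encoder(target).view(-1, self.n_points, 3)

class NeuralRenderer(nn.Module):
    """Differentiable stand-in for the painting software: maps stroke
    parameters to a rendered 64x64 image. In the paper's setting this
    would be pre-trained on (stroke, image) pairs from the real
    environment and then frozen."""
    def __init__(self, n_points=16):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(n_points * 3, 256), nn.ReLU(),
            nn.Linear(256, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, stroke):
        return self.decoder(stroke.flatten(1))

# Training step: the renderer's weights stay fixed; gradients flow through
# it to teach the agent which strokes reproduce the target image.
agent, renderer = StrokeGenerator(), NeuralRenderer()
for p in renderer.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(agent.parameters(), lr=1e-4)

targets = torch.rand(8, 1, 64, 64)  # stand-in for MNIST-style digits
canvas = renderer(agent(targets))
loss = nn.functional.mse_loss(canvas, targets)  # unsupervised: no stroke labels
opt.zero_grad()
loss.backward()
opt.step()
```

Once trained this way, the same agent can emit stroke parameters to the real (non-differentiable) drawing software, which is the transfer the abstract describes.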
