Learning to Model Multimodal Semantic Alignment for Story Visualization
Bowen Li and Thomas Lukasiewicz
Abstract
Story visualization aims to generate a sequence of images narrating each sentence of a multi-sentence story, where the images should be realistic and maintain global consistency across dynamic scenes and characters. Existing approaches suffer from semantic misalignment between text and images, owing to their fixed architectures and the diversity of input modalities. To address this problem, we explore the semantic alignment between text and image representations by switching attention and inputs in a GAN-based generative model. More specifically, we introduce learnable layer-wise and cross-layer interactions that select attention from a set of candidate attention modes, learning to dynamically attend to outputs from different modalities and various depths, thereby alleviating the text-image semantic misalignment problem. Extensive experiments on different datasets demonstrate that our approach, which uses neither segmentation masks nor auxiliary captioning networks, improves image quality and story consistency compared with state-of-the-art methods.
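As one possible reading of the learnable mode selection sketched in the abstract, the module below softly mixes the outputs of several candidate attention modes (e.g., attending to word features, sentence features, or features from another depth) via learnable weights. This is a minimal illustrative sketch under our own assumptions (class name, soft-selection mechanism, and shapes are hypothetical), not the paper's actual implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionModeSelector(nn.Module):
    """Softly select among candidate attention modes via learnable
    per-mode logits (a hypothetical sketch of learnable mode selection)."""

    def __init__(self, num_modes: int):
        super().__init__()
        # one learnable logit per candidate attention mode
        self.mode_logits = nn.Parameter(torch.zeros(num_modes))

    def forward(self, mode_outputs):
        # mode_outputs: list of per-mode attended features, all same shape
        weights = F.softmax(self.mode_logits, dim=0)       # (M,)
        stacked = torch.stack(mode_outputs, dim=0)         # (M, ...)
        # broadcast weights over the feature dimensions and sum over modes
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))
        return (w * stacked).sum(dim=0)

# usage: fuse three candidate attention outputs of shape (batch, dim)
selector = AttentionModeSelector(num_modes=3)
outs = [torch.randn(2, 16) for _ in range(3)]
fused = selector(outs)  # shape (2, 16)
```

Since the mixing weights are differentiable, such a selector can be trained end-to-end with the generator, letting each layer learn which modality or depth to attend to.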