We propose a novel aspect-augmented adversarial network for cross-aspect and cross-domain adaptation tasks. The effectiveness of our approach suggests that adversarial networks could improve representation learning in a broader range of NLP tasks, such as machine translation and language generation.

We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to the source and target aspects that indicate sentence relevance instead of document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the encoded source documents and their labels, and applied to the encoded target documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms several baselines and model variants on two datasets, yielding an improvement of 24% on a pathology dataset and 5% on a review dataset.
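
To make the pipeline concrete, the following PyTorch sketch illustrates the three components described above: an encoder that softly selects aspect-relevant sentences via attention, a shared label classifier trained only on the source aspect, and an aspect discriminator trained through a gradient-reversal layer so that the encoder learns aspect-invariant document representations. All layer sizes, module names, and the use of pre-computed sentence vectors are illustrative assumptions, not the exact architecture of our model.

```python
# Minimal sketch (illustrative, not the paper's exact architecture):
# aspect-conditioned soft sentence selection + shared classifier +
# gradient-reversed aspect discriminator for adversarial invariance.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda
    on the backward pass (standard gradient-reversal trick)."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class AspectEncoder(nn.Module):
    """Encodes a document as an attention-weighted sum of its sentence
    vectors, with weights conditioned on an aspect embedding: a soft
    selection of aspect-relevant sentences."""

    def __init__(self, sent_dim, n_aspects, hidden=64):
        super().__init__()
        self.aspect_emb = nn.Embedding(n_aspects, sent_dim)
        self.scorer = nn.Sequential(
            nn.Linear(2 * sent_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, sents, aspect_id):
        # sents: (n_sent, sent_dim) pre-computed sentence vectors (assumed given)
        a = self.aspect_emb(aspect_id).expand_as(sents)
        weights = F.softmax(self.scorer(torch.cat([sents, a], dim=-1)), dim=0)
        return (weights * sents).sum(dim=0)  # document vector, (sent_dim,)


sent_dim, n_aspects, n_classes = 100, 2, 2
encoder = AspectEncoder(sent_dim, n_aspects)
classifier = nn.Linear(sent_dim, n_classes)     # shared label classifier
discriminator = nn.Linear(sent_dim, n_aspects)  # adversarial aspect classifier
opt = torch.optim.Adam(
    [*encoder.parameters(), *classifier.parameters(), *discriminator.parameters()]
)

# One toy update: a labeled source document and an unlabeled target document
# (random tensors stand in for encoded sentences).
src_sents = torch.randn(8, sent_dim)
tgt_sents = torch.randn(8, sent_dim)
src_label = torch.tensor(1)

src_doc = encoder(src_sents, torch.tensor(0))
tgt_doc = encoder(tgt_sents, torch.tensor(1))

# Classification loss on the source aspect only.
cls_loss = F.cross_entropy(classifier(src_doc).unsqueeze(0), src_label.unsqueeze(0))

# Adversarial loss: the discriminator tries to identify the aspect, while the
# reversed gradient trains the encoder to make the aspects indistinguishable.
docs = torch.stack([src_doc, tgt_doc])
adv_logits = discriminator(GradReverse.apply(docs, 0.1))
adv_loss = F.cross_entropy(adv_logits, torch.tensor([0, 1]))

opt.zero_grad()
(cls_loss + adv_loss).backward()
opt.step()
```

At convergence of such a setup, the discriminator cannot tell which aspect a document vector came from, which is what licenses applying the source-trained classifier to target documents.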