CT and MRI image fusion is an active research field that plays a vital role in clinical diagnosis. To retain more salient features and complementary information from the source images, we propose a dual-branch generative adversarial network (DBGAN) to fuse CT and MRI images. The proposed DBGAN adopts a dual-branch structure consisting of paired generators and discriminators. The generators and discriminators are trained adversarially, so that the fused images produced by the generators cannot be distinguished by the discriminators. Furthermore, we employ a multiscale extraction module (MEM) and a self-attention module (SAM) in the generators to enhance the salient features and detailed information of the fused images. Both subjective and objective evaluations demonstrate the superiority of the proposed method over state-of-the-art methods.
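
The abstract does not specify the internals of the self-attention module (SAM); as an illustration only, a minimal NumPy sketch of standard scaled dot-product self-attention over a flattened feature map, with all weight shapes assumed for the example, could look like:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, wq, wk, wv):
    """Scaled dot-product self-attention over a flattened feature map.

    feat: (n, d) array -- n spatial positions, d channels.
    wq, wk, wv: (d, d) projection weights (hypothetical; the abstract
    does not state SAM's exact parameterization).
    """
    q, k, v = feat @ wq, feat @ wk, feat @ wv
    # Each position attends to every other position, so the output
    # mixes globally salient features into each spatial location.
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8))  # e.g. a 4x4 feature map with 8 channels
wq, wk, wv = (rng.standard_normal((8, 8)) * 0.1 for _ in range(3))
out = self_attention(feat, wq, wk, wv)
print(out.shape)  # (16, 8)
```

Because every spatial position aggregates information from the whole map, such a module lets the generators emphasize globally salient structures rather than only local neighborhoods, which is consistent with the role the abstract assigns to SAM.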