This paper addresses the challenge of view synthesis in scattering media, where degradation caused by the medium complicates accurate scene reconstruction. Neural Radiance Fields (NeRF) achieve high-quality view synthesis but struggle in scattering environments, while 3D Gaussian Splatting (3DGS) offers efficient geometric modeling but lacks a volumetric scattering model. To overcome these limitations, we propose a novel approach that integrates the volumetric rendering of NeRF with the sparse geometric representation of 3DGS. Our method mitigates scattering effects and improves the synthesis of both appearance and depth in complex environments such as underwater scenes. Experiments show significant gains in rendering quality and efficiency over existing methods, including reduced training time and stronger performance in scattering scenarios.
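To give a concrete sense of what "volumetric rendering through a scattering medium" means in this setting, the sketch below shows one common way to combine alpha compositing of splatted Gaussian samples with an attenuation-plus-backscatter medium term, in the spirit of standard underwater image formation models. It is a minimal illustration only: the function name and the parameters sigma_attn, sigma_bs, and c_med (per-channel attenuation, backscatter coefficient, and medium color) are assumptions made here for clarity and are not the paper's actual formulation or parameters.

```python
import numpy as np

def composite_ray_with_medium(t, alpha, color, sigma_attn, sigma_bs, c_med):
    """Alpha-composite sorted Gaussian samples along one ray while accounting
    for a homogeneous scattering medium (illustrative sketch, not the paper's
    exact method).

    t          : (N,)   sample depths along the ray, ascending
    alpha      : (N,)   per-sample opacities from the splatted Gaussians
    color      : (N, 3) per-sample RGB colors
    sigma_attn : (3,)   assumed per-channel attenuation of object light
    sigma_bs   : (3,)   assumed per-channel backscatter coefficient
    c_med      : (3,)   assumed medium (e.g., water) color
    """
    # Transmittance of the object component: fraction of light that survives
    # the Gaussians lying in front of each sample.
    T = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])

    # Object term: each sample's color is attenuated by the medium between
    # the camera and the sample depth.
    attn = np.exp(-sigma_attn[None, :] * t[:, None])            # (N, 3)
    obj = np.sum((T * alpha)[:, None] * color * attn, axis=0)   # (3,)

    # Medium term: backscattered light accumulated over ray segments that
    # are not yet occluded by geometry.
    t_next = np.append(t[1:], np.inf)
    seg = (np.exp(-sigma_bs[None, :] * t[:, None])
           - np.exp(-sigma_bs[None, :] * t_next[:, None]))      # (N, 3)
    med = c_med * np.sum(T[:, None] * seg, axis=0)              # (3,)

    return obj + med
```

In such a formulation, setting sigma_attn and sigma_bs to zero recovers ordinary alpha compositing of the Gaussians, which is one way to see how a medium model can be layered on top of an efficient splatting-based representation.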