Achieving high-quality images with a partial PET geometry is a challenging task in medical imaging: incomplete data acquisition with this type of scanner leads to low image quality and high quantitative bias. In this study, we investigated the performance of deep learning (DL) for synthesizing, from the output of a low-resolution partial PET scanner, high-quality breast PET images equivalent to those obtained with a cylindrical PET scanner. Real 18F-FDG breast PET images of 20 patients acquired on an mCT Biograph PET scanner were used as activity maps. A previously validated Monte Carlo (MC) simulation code was used to model a cylindrical and a partial PET scanner dedicated to breast imaging. The cylindrical configuration has a diameter of ~20 cm and comprises 14 detector modules, while the partial configuration consists of two planar detectors separated by ~20 cm and comprises 6 similar detector modules. The activity maps were simulated with both configurations to generate breast PET images in cylindrical and partial modes. A modified cycle-consistent generative adversarial network (CycleGAN) architecture was employed to generate high-resolution, artifact-free images from the partial scanner's output images. Quantitative metrics, including the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and voxel-wise joint histograms, were calculated for the cylindrical, partial, and CycleGAN outputs, with the activity map taken as the reference. Our model performs well in recovering the data missing from the partial configuration, as the PSNR and SSIM increase from 18.35±3.44 and 0.65±0.02 to 31.42±1.22 and 0.91±0.03, respectively. In terms of noise reduction, the CycleGAN reduces the RMSE of the partial configuration from 9.37×10⁻¹±2.95×10⁻¹ to 9.12×10⁻²±3.51×10⁻².
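
For reference, the reported image-quality metrics can be computed with standard tools. The snippet below is a minimal illustrative sketch, not the study's actual evaluation pipeline; it assumes the reference activity map and a reconstructed (or CycleGAN-synthesized) image are 2D NumPy arrays on a common intensity scale, and uses scikit-image for SSIM and PSNR.

```python
# Illustrative sketch (assumed evaluation setup, not the authors' code):
# compute SSIM, PSNR, and RMSE of a predicted PET image against the
# reference activity map.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_image(reference: np.ndarray, predicted: np.ndarray) -> dict:
    """Return SSIM, PSNR, and RMSE of `predicted` with respect to `reference`."""
    data_range = float(reference.max() - reference.min())
    ssim = structural_similarity(reference, predicted, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, predicted, data_range=data_range)
    rmse = float(np.sqrt(np.mean((reference - predicted) ** 2)))
    return {"SSIM": ssim, "PSNR": psnr, "RMSE": rmse}
```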