https://ieee-cog.org/2021/assets/papers/paper_286.pdf

Since the tremendous success of the AlphaGo family (AlphaGo, AlphaGo Zero, and AlphaZero), zero learning has become the baseline method for many board games, such as Gomoku and chess. Recent work on zero learning has shown that improving the performance of the neural networks used in zero learning programs remains nontrivial and challenging. Considering both positional information and multiscale features, this paper presents a novel positional attention-based U-Net-style model (GomokuNet) for Gomoku AI. An encoder-decoder architecture is adopted as the backbone network to guarantee the fusion of multiscale features, and positional information modules are incorporated to further capture the location information of the board. Quantitative ablation results indicate that GomokuNet outperforms previous state-of-the-art zero learning networks on the RenjuNet dataset. Our method shows the potential to improve zero learning efficiency and AI engine performance.