Journal of Guangdong University of Technology ›› 2019, Vol. 36 ›› Issue (03): 47-55. doi: 10.12052/gdutxb.180162


A Two-stage Effect Rendering Method for Art Font Based on CGAN Network

Ye Wu-jian1, Gao Hai-jian1, Weng Shao-wei1, Gao Zhi1, Wang Shan-jin2, Zhang Chun-yu3, Liu Yi-jun1   

1. School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006, China;
    2. School of Electrical Engineering & Intelligentization, Dongguan University of Technology, Dongguan, 523808, China;
    3. School of Information Engineering, Xizang Minzu University, Xianyang, 712082, China
• Received: 2018-11-28    Online: 2019-05-09    Published: 2019-04-04

Abstract: Art font rendering is one of the key techniques in media typesetting. Providing an efficient rendering method that makes the special effects of generated art fonts both diverse and clear remains an urgent problem for practitioners in this field. With the help of Conditional Generative Adversarial Networks (CGAN), a two-stage art font rendering method comprising style-transfer processing and enhancement processing is proposed to achieve high-quality rendering of art fonts. First, in the style-transfer stage, a stylization network model is built to render various 2D or 3D special effects onto a variety of input fonts. Then, in the enhancement stage, a sharpening network model is built to sharpen the generated art font images, overcoming the blurry outputs that a single GAN tends to produce. Experimental results show that, compared with existing methods, the proposed scheme generates more diverse and clearer special-effect art fonts, with relatively rich texture details that are not confined to the text skeleton. The scheme also improves the efficiency of art font rendering in batch processing and thus has high practical value.
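To make the two-stage pipeline concrete, the sketch below chains a CGAN-style stylization generator (stage 1) with a residual sharpening network (stage 2) in PyTorch. The abstract does not specify the actual architectures, so every name, layer width and hyper-parameter here (StylizationGenerator, SharpeningNetwork, base=64, 256x256 inputs) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage rendering pipeline described in the
# abstract. Layer counts, channel widths and image size are assumptions.
import torch
import torch.nn as nn

class StylizationGenerator(nn.Module):
    """Stage 1: a CGAN-style generator that renders a 2D/3D special
    effect onto a plain glyph image (encoder-decoder, assumed shape)."""
    def __init__(self, in_ch=3, out_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),   # 256 -> 128
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),  # 128 -> 64
            nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),  # 64 -> 128
            nn.BatchNorm2d(base),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1),    # 128 -> 256
            nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

class SharpeningNetwork(nn.Module):
    """Stage 2: a small residual refiner that sharpens the stylized
    image, countering the blur of a single GAN's output."""
    def __init__(self, ch=3, base=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, ch, 3, padding=1),
        )

    def forward(self, x):
        # Residual sharpening: predict a correction and add it back.
        return torch.clamp(x + self.body(x), -1.0, 1.0)

if __name__ == "__main__":
    glyph = torch.randn(1, 3, 256, 256)       # placeholder plain-font image
    stylized = StylizationGenerator()(glyph)  # stage 1: effect rendering
    final = SharpeningNetwork()(stylized)     # stage 2: deblur / sharpen
    print(final.shape)                        # torch.Size([1, 3, 256, 256])
```

In training, stage 1 would presumably be optimized with the usual conditional adversarial loss (the discriminator sees the plain glyph paired with real or generated effect images) plus a reconstruction term, and stage 2 with a sharpness-oriented loss on blurry/clean pairs; these training details are likewise assumptions, not taken from the paper.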

Key words: art font, effect rendering, conditional generative adversarial networks, style transfer, sharpness

CLC Number: TP391
[1] YANG S, LIU J, LIAN Z, et al. Awesome typography: statistics-based text effects transfer[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, Hawaii: IEEE, 2017: 7464-7473.
[2] BARNES C, SHECHTMAN E, GOLDMAN D B, et al. The generalized PatchMatch correspondence algorithm[C]//European Conference on Computer Vision. Berlin, Heidelberg: Springer, 2010: 29-43.
[3] BARNES C, SHECHTMAN E, FINKELSTEIN A, et al. PatchMatch: a randomized correspondence algorithm for structural image editing[J]. ACM Transactions on Graphics (TOG), 2009, 28(3): 24.
[4] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press, 2014: 2672-2680.
[5] ARJOVSKY M, CHINTALA S, BOTTOU L. Wasserstein generative adversarial networks[C]//International Conference on Machine Learning. Sydney, Australia: ICML, 2017: 214-223.
[6] CHANG J, SCHERER S. Learning representations of emotional speech with deep convolutional generative adversarial networks[C]//2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). New Orleans: IEEE, 2017: 2746-2750.
[7] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, Hawaii: IEEE, 2017: 1125-1134.
[8] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017: 2223-2232.
[9] ZHU J Y, KRÄHENBÜHL P, SHECHTMAN E, et al. Generative visual manipulation on the natural image manifold[J]. Journal of Zhejiang University SCIENCE C (Computers & Electronics), 2016, 15(4): 312-320.
[10] ZENG B, REN W L, CHEN Y H. An unpaired face illumination normalization method based on CycleGAN[J]. Journal of Guangdong University of Technology, 2018, 35(5): 11-19.
[11] LI Y, FANG C, YANG J, et al. Universal style transfer via feature transforms[C]//Conference and Workshop on Neural Information Processing Systems. Long Beach: NIPS, 2017: 386-396.
[12] JOHNSON J, ALAHI A, FEI-FEI L. Perceptual losses for real-time style transfer and super-resolution[C]//European Conference on Computer Vision. Amsterdam: Springer, 2016: 694-711.
[13] GATYS L A, ECKER A S, BETHGE M. Image style transfer using convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 2414-2423.
[14] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.
[15] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 2818-2826.
[16] SZEGEDY C, IOFFE S, VANHOUCKE V, et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning[C]//Thirty-First AAAI Conference on Artificial Intelligence. San Francisco, California: AAAI, 2017: 4278-4284.