Journal of Guangdong University of Technology ›› 2018, Vol. 35 ›› Issue (03): 107-112. doi: 10.12052/gdutxb.170138

• Comprehensive Research •

Baton-like Attitude Tracking Based on Mixed Reality Interaction

Wu Zhi-min1, He Han-wu2, Wu Yue-ming1

  1. School of Electromechanical Engineering, Guangdong University of Technology, Guangzhou 510006, China;
    2. Guangdong Polytechnic of Industry and Commerce, Guangzhou 510510, China
  • Received: 2017-09-26  Online: 2018-05-09  Published: 2018-05-24
  • Corresponding author: Wu Yue-ming (born 1979), male, assistant researcher; his research focuses on visualization applications of virtual and augmented reality in industry. E-mail: wuyueming@gdut.edu.cn
  • About the first author: Wu Zhi-min (born 1993), male, master's student; his research interests include head-mounted mixed reality, stereo vision, and 3D interaction techniques.
  • Supported by: Science and Technology Program of Guangdong Province (2015B010102011, 2016A040403108)


Abstract: To address the limited naturalness, intuitiveness and accuracy of interaction in current mixed reality systems, an interaction method based on a baton is proposed. The key to its implementation is measuring and tracking the baton's pose accurately and in real time. The focus is therefore a visual tracking method based on two color marks for computing that pose. The idea is to first perform fast contour extraction on each color mark, then patch the contour with the minimum-area-rectangle method to obtain the mark's pose as projected in the image plane, and finally use the principle of binocular vision positioning to compute the pose of the color marks in three-dimensional space, which is the sought three-dimensional pose of the baton. To verify the effectiveness of the method, experiments were conducted to analyze the algorithm's real-time performance and accuracy; the results show that both meet the interaction requirements.
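As a concrete illustration of the pipeline the abstract describes, the following sketch shows how its two stages could be prototyped with OpenCV. This is not the authors' published code: the function names, HSV thresholds, blob-selection heuristic and calibration inputs (P_left, P_right) are all assumptions made for illustration; only the overall flow (color segmentation with contour extraction, minimum-area-rectangle fitting, binocular triangulation) follows the abstract.

```python
# Illustrative sketch only; all names and thresholds are assumptions,
# not the paper's implementation.
import cv2
import numpy as np

def mark_pose_2d(bgr, lower_hsv, upper_hsv):
    """Stage 1: fast contour extraction on one color mark, then a
    minimum-area-rectangle fit that patches the (possibly broken)
    contour and yields its planar pose: ((cx, cy), (w, h), angle)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)        # color segmentation
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                      # mark not visible
    largest = max(contours, key=cv2.contourArea)         # assume largest blob is the mark
    return cv2.minAreaRect(largest)

def triangulate(P_left, P_right, pt_left, pt_right):
    """Stage 2: binocular positioning. Given the 3x4 projection matrices
    of a calibrated stereo pair and the mark center seen in both views,
    recover the corresponding 3D point."""
    pl = np.asarray(pt_left, dtype=np.float64).reshape(2, 1)
    pr = np.asarray(pt_right, dtype=np.float64).reshape(2, 1)
    xh = cv2.triangulatePoints(P_left, P_right, pl, pr)  # 4x1 homogeneous
    return (xh[:3] / xh[3]).ravel()                      # (X, Y, Z)

def baton_pose(p_tip, p_tail):
    """With both color marks triangulated, one center gives the baton's
    position and the unit vector between the two centers its orientation."""
    axis = p_tip - p_tail
    return p_tail, axis / np.linalg.norm(axis)           # position, direction
```

In use, mark_pose_2d would run on both camera images for each of the two mark colors every frame, and the rectangle centers from the left and right views would feed triangulate; the projection matrices come from a one-time stereo calibration (e.g. via cv2.stereoCalibrate).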

Key words: mixed reality, human-computer interaction, attitude calculation

CLC number: TP391

References
[1] LUO C. Mouse and keyboard exit the stage as smart hardware sets off an interaction revolution[J]. IT Times Weekly, 2014(10): 36-37.
[2] TIAN F, MOU S, DAI G Z, et al. Research on a pen-based interaction paradigm in post-WIMP environment[J]. Chinese Journal of Computers, 2004, 27(7): 977-984.
[3] MINE M, YOGANANDAN A, COFFEY D. Principles, interactions and devices for real-world immersive modeling[J]. Computers and Graphics, 2015, 48(C): 84-98.
[4] SCHMITZ M, ENDRES C, BUTZ A. A survey of human-computer interaction design in science fiction movies[C]//The 2nd International Conference on Intelligent Technologies for Interactive Entertainment. Cancun, Mexico: ICST, 2008: 1-10.
[5] HUANG J, HAN D Q, CHEN Y N, et al. A survey on human-computer interaction in mixed reality[J]. Journal of Computer-Aided Design & Computer Graphics, 2016, 28(6): 869-880.
[6] KIM M, LEE J Y. Touch and hand gesture-based interactions for directly manipulating 3D virtual objects in mobile augmented reality[J]. Multimedia Tools and Applications, 2016, 75(23): 16529.
[7] HURST W, WEZEL C V. Gesture-based interaction via finger tracking for mobile augmented reality[J]. Multimedia Tools and Applications, 2013, 62(1): 233-258.
[8] LEE S, CHUN J. A stereo-vision approach for a natural 3D hand interaction with an AR object[C]//16th International Conference on Advanced Communication Technology. Pyeongchang: IEEE, 2014: 315-321.
[9] CHATHURANGA S K, SAMARAWICKRAMA K C, CHANDIMA H M L, et al. Hands free interface for human computer interaction[C]//2010 Fifth International Conference on Information and Automation for Sustainability. Colombo: IEEE, 2011: 359-364.
[10] EHSAN T E, SUNDARARAJAN V. Using brain-computer interfaces to detect human satisfaction in human-robot interaction[J]. International Journal of Humanoid Robotics, 2011, 8(1): 87-101.
[11] MORIMOTO C H, MIMICA M R. Eye gaze tracking techniques for interactive applications[J]. Computer Vision and Image Understanding, 2005, 98(1): 4-24.
[12] PATERAKI M, BALTZAKIS H, TRAHANIAS P E, et al. Visual estimation of pointed targets for robot guidance via fusion of face pose and hand orientation[C]//IEEE International Conference on Computer Vision Workshops. Barcelona: IEEE, 2011: 1060-1067.
[13] XU W P, WANG Y T, LIU Y, et al. Survey on occlusion handling in augmented reality[J]. Journal of Computer-Aided Design & Computer Graphics, 2013, 25(11): 1635-1642.
[14] CAI R T, WU Y H, WANG M J, et al. Survey of visual object tracking algorithms[J]. Video Application and Project, 2010, 34(12): 135-138.
[15] HOU Z Q, HAN C Z. A survey of visual tracking[J]. Acta Automatica Sinica, 2006, 32(4): 603-617.