Chinese Academy of Sciences Institutional Repositories Grid
Research on Body Gesture Based Interaction Techniques

Document Type: Dissertation

Author: Zhong Kang

Degree: Doctor (Ph.D.)

Defense Date: 2011-06-01

Degree Grantor: Graduate University of Chinese Academy of Sciences

Place of Conferral: Beijing

Advisor: Wang Hong'an

Keywords: Computer Applications :: Other Disciplines of Computer Applications

Alternative Title: None

Major: Computer Applied Technology
Chinese Abstract (translated):    In the 21st century, the performance of computer hardware continues to grow steadily each year, as described by Moore's law, making computers ever more capable; yet the way people use computers has not fundamentally changed. Conversely, because the tasks computers can accomplish have become increasingly complex, users face higher demands: extensive background knowledge, long-term training, and skilled operation. Meanwhile, complex tasks require ever more operational steps. All of this greatly increases the cost of using computers. In recent years in particular, with the rapid development of embedded systems, sensors, wireless communication, and distributed technologies, new applications keep emerging, and the computing environment has changed in many ways, from desktop computing to mobile computing to ubiquitous computing, posing new demands and challenges for human-computer interaction.
    Against this background, the traditional keyboard and mouse based on the desktop metaphor, together with the corresponding WIMP (Window, Icon, Menu, Pointing Device) interface, can hardly meet users' interaction needs. Since the early 1990s, researchers have therefore focused on next-generation user interfaces, among which body-gesture-based interaction is an important direction.
    This thesis first discusses the new characteristics of body-gesture-based interaction, such as high input bandwidth, direct three-dimensional spatial interaction, imprecise information input, applying knowledge of the physical world to the information world, and non-fixed human-computer interfaces. It then surveys the state of the art at home and abroad, and proposes a body-gesture-based interaction framework to guide research and development on hand-gesture and foot-gesture interaction techniques. Since hand gestures and foot gestures are two kinds of body movements that people use frequently in daily life, studying these two interaction modalities is highly representative of body-gesture-based interaction research as a whole. Guided by this idea, the thesis carries out several research efforts; the main contributions are as follows:
    1. Based on an in-depth analysis of the state of the art, combined with our own work, we propose a body-gesture-based interaction framework and analyze its key elements and their relationships. The framework clarifies the importance of the body model, movement analysis, and user participation in body-gesture interaction research, and guides the design and development of body-gesture-based user interfaces.
    2. We analyze and summarize in detail the characteristics of hand-gesture-based interaction, pointing out that interface design and selection are closely tied to what users expect in a specific application. On this basis, we design and implement RtHG3DCD (Real-time Hand Gesture based 3D Conceptual Design), a hand-gesture-based 3D conceptual design system. Using this system, through an iterative interaction design process with user participation, we analyze several problems that hand-gesture interaction faces in practical applications and propose preliminary solutions.
    3. We study how two widely used mnemonic strategies, static prompts and context-aware prompts, perform when users learn gesture commands through online learning. We analyze the influence of gesture-set size on users' learning behavior and compare the two strategies under different conditions. The results show that static prompts are better suited to helping and motivating users to make the transition from novice to expert, whereas context-aware prompts are better suited to assisting users during everyday use of the system.
    4. We propose Foot Menu, a novel foot-gesture-based interaction technique. As a hands-free technique, Foot Menu uses the user's heel rotation together with toe lift/drop movements to perform menu selection. We implemented a prototype system with several kinds of sensors and evaluated its usability. In the course of this work we further analyze the advantages and limitations of foot-gesture-based interaction.
English Abstract:  In the 21st century, as Moore's law describes, the performance of computer hardware is still growing steadily every year, making the processing capacity of computers more and more powerful. But the way people use computers has changed little. Because computers can support more complex tasks, the requirements placed on the user have in turn become higher: a great deal of background knowledge, long-term training, and practiced skills. In addition, completing a task requires many steps of operation. All of this results in a great increase in the cost of using computers. In recent years, with the rapid development of embedded systems, sensors, wireless communication, and distributed technology, many new applications are emerging. The computing environment has changed as well, from desktop computing to mobile computing to ubiquitous computing, which poses new requirements for the human-computer interface and greatly expands the scope of human-computer interaction (HCI) research.
  In this context, the traditional WIMP (Window, Icon, Menu, and Pointing Device) interface based on the desktop metaphor has encountered many challenges, and it can hardly meet users' requirements for interaction methods. Thus, since the early 1990s, research on the next generation of user interfaces has become a hot field, and gesture-based interaction is of particular interest.
  Research on gesture-based interaction is still in its infancy today; a complete theoretical foundation and research framework have not yet been established. We focus our study on hand-gesture- and foot-gesture-based interaction, because hands and feet are used very frequently in daily life, and these two types of interaction method are typical of the field. The main research work in this thesis includes:
  1. Based on an analysis of the new characteristics of gesture-based interaction and a detailed literature review, we propose a Gesture Based Interaction Framework (GBIF) and use it to guide the subsequent research on hand-gesture- and foot-gesture-based interaction.
  2. Designing and developing gesture-based interfaces is application-oriented. We summarize the requirements of rapid 3D conceptual design and develop a real-time hand-gesture-based 3D conceptual design system. A user-centered interaction design is conducted on the basis of this system. During this process, we discuss some problems of hand-gesture interaction in practical applications and provide initial solutions.
  3. We explore the learnability of hand-gesture-based interaction. A disadvantage of hand-gesture interaction is the difficulty of learning due to the absence of visual guidance: users need a long training time to memorize the whole set of gesture commands before they can use the system in practice. To address this problem, two widely applied mnemonic strategies, static prompts and context-sensitive prompts, are investigated as ways to improve online learning of gesture commands and to help users' transition from novice to expert. An evaluation is conducted to analyze and understand the performance of these two mnemonic strategies with different sizes of gesture vocabulary. Our studies offer insights into when static and context-sensitive prompts are appropriate for use in such applications.
  4. We present the Foot Menu, a new technique that uses foot gestures to perform selection tasks. As a hands-free technique, the Foot Menu is applicable in hands-busy situations, and users with hand impairments can also benefit from it. A prototype is built, and an experiment is conducted to evaluate the usability of the Foot Menu.
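The selection mechanism described in contribution 4 can be sketched as follows. This is a minimal illustration of the idea only; the function names, the 90-degree rotation span, and the sensor sample format are assumptions for illustration, not the thesis's actual implementation.

```python
# Illustrative sketch of Foot Menu selection: heel rotation highlights
# a menu item, and a toe lift/drop acts as the confirmation "click".
# All names and thresholds here are hypothetical.

def sector_for_heel_angle(angle_deg, n_items, span_deg=90.0):
    """Map a heel rotation angle (0 = straight ahead) to a menu item index."""
    half = span_deg / 2.0
    clamped = max(-half, min(half, angle_deg))
    # Normalize to [0, 1] and scale to the number of menu items.
    t = (clamped + half) / span_deg
    return min(int(t * n_items), n_items - 1)

def foot_menu_select(samples, n_items):
    """Return the item highlighted when the toe-lift confirmation occurs.

    `samples` is a sequence of (heel_angle_deg, toe_lifted) readings,
    e.g. from an orientation sensor and a pressure sensor.
    """
    for angle, toe_lifted in samples:
        if toe_lifted:  # toe lift/drop confirms the current item
            return sector_for_heel_angle(angle, n_items)
    return None  # no selection was confirmed
```

For a four-item menu spanning 90 degrees, each item occupies a 22.5-degree sector, so rotating the heel fully left highlights item 0 and fully right highlights item 3.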
Language: Chinese

Date Available: 2011-06-08

Property Rights Order: None

Funding Information: None

Source URL: [http://124.16.136.157/handle/311060/10209]

Collection: Institute of Software_Laboratory of Human-Computer Interaction and Intelligent Information Processing_Dissertations
Recommended Citation:
GB/T 7714
Zhong Kang. Research on body gesture based interaction techniques[D]. Beijing: Graduate University of Chinese Academy of Sciences, 2011.

Deposit Method: OAI harvesting

Source: Institute of Software


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.