
This AI Paper Aims to Transfer the Hand Motion Semantics Amongst Avatars Based on Each of their Hand Models


In various virtual avatar contexts, including co-speech gesture and sign language synthesis, producing realistic hand gestures has shown great promise. Human hands, a primary channel of non-verbal communication, can convey minute details through particular hand motions, and people are extremely sensitive to hand movements; even minor errors can significantly degrade how a user experiences virtual avatar applications. It is therefore crucial to maintain consistent hand motion semantics across different virtual avatar hands. However, directly copying joint rotations would considerably impair the subtle semantics of hand motions, both because of the human hand's highly articulated structure with numerous degrees of freedom (DoFs) and because different avatars have varying hand shapes and proportions, as seen in Figure 1.

Figure 1: Even though the finger joint rotations are copied exactly, the "thumbs up" gesture becomes impossible to read on the target hand.

It is therefore crucial to establish a system that preserves the semantics of hand gestures while retargeting them to various avatars. Earlier studies focused mainly on motion retargeting and hand-object interaction. Motion retargeting, pioneered by Gleicher, aims to recognize the qualities of source movements and apply them to target motions on different characters. Early research emphasized optimization-based methods; more recently, researchers have proposed data-driven strategies using diverse network designs and semantic measures. These strategies can successfully retarget realistic body motions but do not work for dexterous hand motion retargeting. Researchers have also suggested a rule-based strategy for retargeting sign language gestures; however, that methodology is constrained to a fixed set of pre-defined hand gestures and lacks adequate evaluation.

In the field of hand-object interaction, which includes static grasp synthesis and manipulation motion synthesis, the goal is to simulate realistic hand motions when interacting with objects. These techniques, however, do not preserve the communication-related semantics of hand gestures, nor do they generalize to hand models of different sizes and shapes. Despite the available techniques, it remains difficult to retarget highly accurate, realistic hand motions while maintaining complex motion semantics across many hand models. In this study, researchers from Tsinghua University focus on retargeting dexterous hand motions while preserving the semantics of the original hand motions across several hand models. The task is more demanding than it may appear because hand motion retargeting requires greater semantic measurement accuracy than body motion retargeting.

Because hand joints are densely packed in a small area, finger joints and the palm exhibit strong spatial interactions, so semantic metrics previously used in motion retargeting, such as cycle consistency and distance matrices, fall short. The key finding is therefore that preserving the semantics of hand motion depends on the spatial relationships between the finger joints and the palm. Accordingly, the authors introduce a new anatomy-based semantic matrix (ASM) to represent these spatial correlations and use ASM as the semantic measurement for precise hand motion retargeting. They first construct anatomical local coordinate frames for the finger joints on several hand models, then build ASM on top of these frames: ASM quantifies the positions of the palm and the other joints relative to each finger joint's local frame.
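
To make the construction concrete, here is a minimal sketch of how such an anatomy-based semantic matrix could be computed, assuming each hand is given as an array of joint positions (with the palm treated as one of the joints) plus a per-joint anatomical local frame. The function names, the Gram-Schmidt frame construction, and the array layout are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def local_frame(x_axis, up_hint):
    """Build an orthonormal local frame at a finger joint.
    The paper derives these frames from hand anatomy; this
    Gram-Schmidt version is only an illustrative stand-in."""
    x = x_axis / np.linalg.norm(x_axis)
    z = np.cross(x, up_hint)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return np.stack([x, y, z], axis=0)  # rows are the frame axes

def anatomy_semantic_matrix(joint_pos, frames):
    """For each joint i, express every joint j (including the palm,
    treated here as joint 0) in joint i's anatomical local frame.

    joint_pos: (J, 3) world-space joint positions
    frames:    (J, 3, 3) per-joint local frames (rows = axes)
    returns:   (J, J, 3) matrix of relative positions
    """
    J = joint_pos.shape[0]
    asm = np.zeros((J, J, 3))
    for i in range(J):
        offsets = joint_pos - joint_pos[i]  # positions relative to joint i
        asm[i] = offsets @ frames[i].T      # project into joint i's frame
    return asm

# toy usage: 4 "joints" with identity frames
pos = np.random.rand(4, 3)
frames = np.tile(np.eye(3), (4, 1, 1))
asm = anatomy_semantic_matrix(pos, frames)  # shape (4, 4, 3)
```

Because each entry is a coordinate in a per-joint local frame rather than a raw world position, such a matrix is invariant to the hand's global pose, which is what would let it serve as a semantic measurement comparable across hands of different shapes and proportions.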

Next, they train an anatomy-based semantics reconstruction network (ASRN) to learn a mapping from the source motion's ASM to the target motion's joint rotations, using two heterogeneous hand motion datasets. Unlike template mesh-based methods for semantic correspondence, their solution is independent of template meshes and can be applied to different hand models. Extensive experiments, covering complex hand motion sequences and various hand shapes in both intra-domain and cross-domain retargeting scenarios, evaluate the quality of the hand motions ASRN produces. The qualitative and quantitative results show that ASRN significantly outperforms state-of-the-art motion retargeting techniques.
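
The article does not spell out ASRN's internals, so the following PyTorch sketch only illustrates the shape of the learned mapping: a source-motion ASM goes in, per-joint target rotations come out. The class name, layer sizes, and the 6D rotation parameterization are all hypothetical choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ASRNSketch(nn.Module):
    """Illustrative stand-in for an anatomy-based semantics
    reconstruction network: maps a flattened (J, J, 3) semantic
    matrix to one rotation per joint. All hyperparameters here
    are assumptions for the sake of a runnable example."""

    def __init__(self, num_joints: int, hidden: int = 512):
        super().__init__()
        asm_dim = num_joints * num_joints * 3  # flattened semantic matrix
        self.net = nn.Sequential(
            nn.Linear(asm_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_joints * 6),  # 6D rotation per joint
        )

    def forward(self, asm: torch.Tensor) -> torch.Tensor:
        # asm: (batch, J, J, 3) -> rotations: (batch, J, 6)
        out = self.net(asm.flatten(start_dim=1))
        return out.view(asm.shape[0], -1, 6)

# toy usage with a 21-joint hand
model = ASRNSketch(num_joints=21)
rotations = model(torch.randn(8, 21, 21, 3))  # (8, 21, 6)
```

A plausible training objective, consistent with the description above, would combine a rotation reconstruction term with a penalty on the discrepancy between the source ASM and the ASM recomputed from the predicted target pose, though the paper's exact losses may differ.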

Their three main contributions are summarized below:

• They propose a new task: retargeting dexterous hand motions across several hand models while preserving semantics.

• They introduce an anatomy-based semantic matrix (ASM), which quantifies hand motion semantics without requiring a template mesh and can be applied to different hand models.

• Building on the ASM, they present a novel architecture for semantics-preserving hand motion retargeting. Experimental results on both intra-domain and cross-domain hand motion retargeting tasks confirm that their system outperforms current approaches.

Check out the Paper, Github, and Project Page. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 29k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

If you like our work, please follow us on Twitter
