
Can Computer Vision Systems Infer Your Muscle Activity from Video? Meet Muscles in Action (MIA): A New Dataset to Learn to Incorporate Muscle Activity into Human Motion Representations

by Tanya Malhotra

Artificial Intelligence has been a constant topic of discussion in recent times, whether it is human-imitating Large Language Models such as GPT-3.5, built on Natural Language Processing and Natural Language Understanding, or text-to-image models such as DALL-E, built on computer vision. Computer vision, a sub-field of AI, keeps getting better with every new application. It is now capable of analyzing human motion from video, tackling tasks such as pose estimation, action recognition, and motion transfer.

Though computer vision has advanced in determining human motion, motion is not just about outward appearance. Every action is the consequence of the brain transmitting electrical impulses along the nerves, which cause the muscles to contract and finally produce joint movement. Researchers have therefore been working on approaches that can model the intrinsic muscle activity driving human mobility. To push this research forward, two researchers from Columbia University have introduced a new and unique dataset called “Muscles in Action” (MIA). The dataset includes 12.5 hours of synchronized video and surface electromyography (sEMG) data and captures ten subjects performing various exercises.

Electromyography (EMG) sensors, which come in invasive and non-invasive (surface) versions, are the traditional tool for measuring muscle activity. Using the MIA dataset, the researchers have developed a representation that can predict muscle activation from video and, in the other direction, reconstruct human motion from muscle activation data. The primary aim is to capture the complex connection between the visual information and the underlying muscle activity. By jointly modeling both modalities, the model is conditioned to generate motion that is consistent with the muscle activations.
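To make the two directions concrete, here is a minimal, purely illustrative sketch in PyTorch: two small regressors that map between per-frame pose features extracted from video and per-channel sEMG activations. The dimensions, layer sizes, and class names (PoseToEMG, EMGToPose) are assumptions for illustration only and do not reflect the actual MIA architecture described in the paper.

# Hypothetical sketch (not the authors' model): regressors in both directions
# between video-derived pose features and sEMG channel activations.
import torch
import torch.nn as nn

POSE_DIM = 34   # assumed: e.g., 17 2D joints, flattened
EMG_DIM = 8     # assumed: number of sEMG channels worn by a subject

class PoseToEMG(nn.Module):
    """Predict muscle activation from video-derived pose features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM, 128), nn.ReLU(),
            nn.Linear(128, EMG_DIM), nn.Sigmoid(),  # activations scaled to [0, 1]
        )
    def forward(self, pose):
        return self.net(pose)

class EMGToPose(nn.Module):
    """Reconstruct pose features from sEMG activations (the reverse direction)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMG_DIM, 128), nn.ReLU(),
            nn.Linear(128, POSE_DIM),
        )
    def forward(self, emg):
        return self.net(emg)

# Toy usage: a batch of 4 frames.
pose = torch.randn(4, POSE_DIM)
emg_pred = PoseToEMG()(pose)           # video -> muscle activation
pose_rec = EMGToPose()(emg_pred)       # muscle activation -> motion
print(emg_pred.shape, pose_rec.shape)  # torch.Size([4, 8]) torch.Size([4, 34])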

The main part of this project is a framework for modeling the link between the human motion seen in video and the internal muscle activity reflected by sEMG signals. The paper shared by the team gives a brief overview of related work in human activity analysis, conditional motion generation, multimodal learning, electromyography, and physics-based human motion generation, followed by an in-depth description and analysis of the multimodal dataset.

For evaluation, the researchers tested the model both on in-distribution subjects and exercises and on out-of-distribution subjects and exercises, i.e., on data similar to what the model was trained on as well as data that differs from the training distribution. This assessment helps validate the generalizability of the methodology.
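As a rough illustration of those two regimes, the sketch below builds a toy clip list and separates an in-distribution test set (held-out clips from seen subjects and exercises) from an out-of-distribution test set (whole held-out subjects and exercises). The subject IDs, exercise names, and split ratios here are invented for the example; the actual MIA splits are defined in the paper.

# Hypothetical evaluation-split sketch; subjects, exercises, and ratios are made up.
clips = [
    {"subject": s, "exercise": e, "clip_id": f"{s}_{e}_{i}"}
    for s in ["S01", "S02", "S03", "S04"]
    for e in ["squat", "lunge", "jumping_jack"]
    for i in range(5)
]

held_out_subjects = {"S04"}            # unseen people
held_out_exercises = {"jumping_jack"}  # unseen workouts

# Out-of-distribution test set: any clip from a held-out subject or exercise.
ood_test = [c for c in clips
            if c["subject"] in held_out_subjects or c["exercise"] in held_out_exercises]
# Remaining clips come from seen subjects and exercises.
in_dist = [c for c in clips
           if c["subject"] not in held_out_subjects and c["exercise"] not in held_out_exercises]
# Within those, reserve every 5th clip as the in-distribution test set.
id_test = [c for i, c in enumerate(in_dist) if i % 5 == 0]
train = [c for i, c in enumerate(in_dist) if i % 5 != 0]

print(len(train), len(id_test), len(ood_test))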

In conclusion, incorporating muscles into computer vision systems has numerous potential applications. Richer virtual human models can be produced by understanding and simulating internal muscle activity, and these models can be put to work in a variety of real-world settings, including sports, fitness, and augmented and virtual reality.

Check out the Paper and Project Page. All credit for this research goes to the researchers on this project. Also, don’t forget to join our 26k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

The post Can Computer Vision Systems Infer Your Muscle Activity from Video? Meet Muscles in Action (MIA): A New Dataset to Learn to Incorporate Muscle Activity into Human Motion Representations appeared first on MarkTechPost.
