Categories

  • update

After two weeks of research, we created a 4-minute video showcasing the project's progress and shared it with the client and TA. We also analysed the MotionInput v3.2 codebase, tested the MediaPipe functions extensively, and studied the OpenCV documentation.

Editing Literature Review Video

After two weeks of research, we have developed a clear plan for our project implementation. We created a 4-minute video showcasing our progress so far and shared it with our client (Prof. Dean and John McNamara) and our TA, Aneilia.

The video is structured into three parts. In Part 1, we present our virtual dance mat, which is not limited to dance games. We give a biological review highlighting how this technology can assist people with upper-body disabilities or postural disorders, explain the purpose of the virtual dance mat, and present potential technical solutions, including an API.
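To make the dance-mat idea concrete, here is a minimal sketch of how a detected foot position could be mapped to a virtual mat zone. The 3x3 grid, zone names, and function name are our illustrative assumptions, not the project's actual API:

```python
# Hedged sketch: mapping a normalized foot position (x, y in 0..1,
# as a pose tracker would report it) onto a 3x3 virtual dance mat.
# Zone names and layout are assumptions for illustration only.

def foot_to_zone(x: float, y: float) -> str:
    """Return the name of the mat zone the foot currently occupies."""
    col = min(int(x * 3), 2)  # 0 = left, 1 = centre, 2 = right
    row = min(int(y * 3), 2)  # 0 = top,  1 = middle, 2 = bottom
    names = [
        ["up-left", "up", "up-right"],
        ["left", "centre", "right"],
        ["down-left", "down", "down-right"],
    ]
    return names[row][col]
```

Each zone could then be bound to a key press, so stepping on a zone behaves like pressing the corresponding arrow on a physical dance mat.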

In Part 2, we focus on jitter handling. We provide a biological review and discuss how our project can help. We show a demo of the game “2048” and explain how we plan to map the keys and operations. We also detail how to implement and calculate tremor compensation, along with potential technical solutions such as median-based and average filtering for gestures. Finally, we present an API.
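As a rough illustration of the median-based filtering mentioned above, the sketch below smooths one landmark coordinate over a sliding window, so a brief tremor spike does not register as movement. The class name, window size, and interface are our assumptions, not MotionInput's actual implementation:

```python
from collections import deque
from statistics import median

class MedianSmoother:
    """Sliding-window median filter for one landmark coordinate.

    A sketch of median-based tremor compensation: each incoming
    sample is buffered, and the median of the last `window` samples
    is reported instead of the raw value, suppressing brief spikes.
    """

    def __init__(self, window: int = 5):
        self.buf = deque(maxlen=window)

    def update(self, value: float) -> float:
        self.buf.append(value)
        return median(self.buf)
```

For example, feeding the sequence 0.5, 0.5, 0.9, 0.5 through a 5-sample smoother reports 0.5 throughout, because the single 0.9 spike never becomes the window's median. Average filtering would work the same way with `mean` in place of `median`, trading spike rejection for smoother output.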

Part 3 is optional and focuses on the user interface. We discuss Visual C++ and MFC GUI development, and conclude our presentation.

We are developing these projects to create accessible gaming technology that can assist people with disabilities or postural disorders. By creating a virtual dance mat and developing jitter handling technology, we hope to provide people with new ways to enjoy games and participate in physical activity. The optional user interface component is designed to enhance the user experience and make our technology more accessible to a wider audience.

Code review of MotionInput3.2

During this stage, we analysed the MotionInput v3.2 codebase. Our objective was to gain a better understanding of its structure and operation. As we studied the existing architecture, we tried to discover ways to integrate a new layer of gesture detection by leveraging abstracted classes and objects while remaining consistent with the current design.
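The kind of integration we have in mind can be sketched as a new detector subclassing an abstract gesture base class. The class and method names below are our assumptions for illustration, not MotionInput v3.2's real classes:

```python
from abc import ABC, abstractmethod

class Gesture(ABC):
    """Assumed base class in the style of the abstracted classes we
    found during the code review; a new detector plugs in by
    subclassing it rather than touching the existing pipeline."""

    @abstractmethod
    def update(self, landmarks: dict) -> bool:
        """Return True when the gesture fires for this frame."""

class LeftStepGesture(Gesture):
    """Hypothetical lower-body gesture: fires when the left foot
    crosses a horizontal threshold (x normalized 0..1)."""

    def __init__(self, threshold: float = 0.4):
        self.threshold = threshold

    def update(self, landmarks: dict) -> bool:
        # Smaller x means further left in the camera frame.
        return landmarks.get("left_foot_x", 1.0) < self.threshold
```

The appeal of this shape is that the existing event loop only ever calls `update()` on the base type, so adding lower-body gestures would not require changes to the rest of the architecture.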

In parallel, to familiarise ourselves with the main libraries we’ll be using, we conducted extensive testing of the MediaPipe functions and delved into the OpenCV documentation. Our primary objective is to develop a robust gesture recognition system that can accurately detect lower-body movements. Through this analysis and testing, we aim to lay the foundation for a successful implementation of that system.
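One concrete piece of what this testing covers: MediaPipe's 33-landmark pose model places the left and right ankles at indices 27 and 28, and comparing their heights is enough to tell which foot is raised. The helper below works on plain (x, y) pairs so it can be tested without a camera; the function name and margin value are our assumptions:

```python
# MediaPipe Pose landmark indices for the ankles (33-landmark model).
LEFT_ANKLE, RIGHT_ANKLE = 27, 28

def raised_foot(landmarks, margin: float = 0.05):
    """Return 'left', 'right', or None depending on which ankle is
    noticeably higher than the other.

    `landmarks` is a sequence of (x, y) pairs in normalized image
    coordinates, where smaller y is higher in the frame - the shape
    MediaPipe Pose reports per frame.
    """
    ly = landmarks[LEFT_ANKLE][1]
    ry = landmarks[RIGHT_ANKLE][1]
    if ly < ry - margin:
        return "left"
    if ry < ly - margin:
        return "right"
    return None  # both feet at roughly the same height
```

In the real pipeline, the landmark list would come from feeding OpenCV camera frames into MediaPipe's pose solution each frame; the logic above then turns raw coordinates into a lower-body event the rest of the system can consume.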