Research a project

chor-rnn by Luka and Louise Crnkovic-Friis.

In chor-rnn, they collected five hours of motion-capture data from a single contemporary dancer using a Kinect v2. They then trained a neural network built on an LSTM (Long Short-Term Memory), an architecture designed for sequential data as opposed to static data like an image. The model was applied to dance generation, producing a dance sequence one frame at a time.
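As I understand it, the frame-by-frame generation works autoregressively: each predicted pose is fed back in as input for the next one. A minimal sketch in plain JavaScript, where `predictNextFrame` is just a stand-in for the trained LSTM (not their actual model or code):

```javascript
// Each frame is an array of joint coordinates, e.g. 25 Kinect joints x (x, y, z).
// predictNextFrame is a placeholder for the trained LSTM: given the frames
// generated so far, it returns the next pose. Here it just drifts the last
// frame slightly so the loop is runnable.
function predictNextFrame(history) {
  const last = history[history.length - 1];
  return last.map((v) => v + 0.01);
}

// Autoregressive generation: start from a seed pose and repeatedly
// predict the next frame from everything generated so far.
function generateDance(seedFrame, numFrames) {
  const frames = [seedFrame];
  for (let i = 1; i < numFrames; i++) {
    frames.push(predictNextFrame(frames));
  }
  return frames;
}

const seed = new Array(75).fill(0); // 25 joints * 3 coordinates (an assumption)
const dance = generateDance(seed, 120); // ~4 seconds of motion at 30 fps
```

One consequence of this setup is that small prediction errors can accumulate over time, which is a known challenge for frame-by-frame generation.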

What makes this dataset work is that it focuses on one single contemporary dancer rather than a group of contemporary dancers who may have many different movement styles. I think this is an interesting choice, but I wonder what a model trained on a wider dataset would look like.

handpose particles

I didn’t really know what to make, so I wanted to experiment with interacting with the positions of circles on the screen. I created a circle object and gave it a speed in x and y, as well as a velocity decay to mimic natural movement.
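The circle objects are structured roughly like this (a simplified sketch, not my exact editor code; in the actual p5.js sketch, drawing happens with `ellipse()` inside `draw()`):

```javascript
// A circle with its own velocity; decay shrinks the velocity each frame
// so a pushed circle glides to a stop instead of moving forever.
class Circle {
  constructor(x, y, r) {
    this.x = x;
    this.y = y;
    this.r = r;
    this.speedX = 0;
    this.speedY = 0;
    this.decay = 0.95; // 5% of the velocity is lost each frame
  }

  push(vx, vy) {
    this.speedX += vx;
    this.speedY += vy;
  }

  update() {
    this.x += this.speedX;
    this.y += this.speedY;
    this.speedX *= this.decay;
    this.speedY *= this.decay;
  }
  // In p5.js, draw() would then call ellipse(this.x, this.y, this.r * 2)
}
```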

I wanted it to feel like you were physically moving the circles, so I used a collision function to check whether a circle's position intersected with the hand length. However, I found that the hand length didn't work very well and missed many of the circles, so I'm wondering how I might better outline the shape of the hand here to calculate collisions.
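One option I might try: instead of a single hand-length measurement, test each circle against every hand landmark (ml5's handpose model returns 21 keypoints per hand), and count a hit if any landmark falls inside the circle plus a small padding. A rough sketch, assuming the keypoints arrive as `{x, y}` objects:

```javascript
// Returns true if any hand landmark is inside the circle (plus padding).
// Checking all 21 keypoints approximates the hand's outline much better
// than a single point or a single length measurement.
function handHitsCircle(keypoints, circle, padding = 10) {
  return keypoints.some((kp) => {
    const dx = kp.x - circle.x;
    const dy = kp.y - circle.y;
    return Math.hypot(dx, dy) < circle.r + padding;
  });
}
```

With this, a fingertip sweeping past a circle would register even when the palm is far away, which might catch many more of the circles.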

[https://drive.google.com/file/d/175lmVU6V8WH3CIW21RmMGcvE8xjwFiNl/view?usp=sharing](https://drive.google.com/file/d/175lmVU6V8WH3CIW21RmMGcvE8xjwFiNl/view?usp=sharing)

handpose move particles by az2788 – p5.js Web Editor