Sentient

Moving digital interactions beyond mundane, passive actions towards a more active and physically engaging experience.

Timeline: September - December, 2019
University Project: Individual Project
Mentored by: Sayjel V. Patel
Process: Conceptualization - Research - Coding - Working Prototype

Project Brief

User interfaces are an essential part of the experience a user has with a product, service, or app. This project was an exploration of what the next age of user interfaces could look and feel like.


The Challenge

User interfaces are constantly evolving and adapting. Where we once relied on peripherals such as a mouse or keyboard, today we can control interfaces at the tips of our fingers, quite literally.

But a key trait of today's technology interfaces is that they are all passive: they encourage us to sit still and have a very monotonous experience with technology. The challenge with Sentient was to develop a means of interfacing with screens in a more active and physical sense.



Research

The evolution and many possibilities of user interfaces have been well explored by sci-fi films such as Iron Man, Oblivion, Tron, and Minority Report. These examples were a great resource for analyzing what user interfaces could become and for evaluating the strengths and shortcomings each approach may have.

Minority Report served as the biggest reference point, as Spielberg himself researched the idea of a more intuitive and direct method of interfacing with technology. This is where the idea of interfacing with screens physically, taking advantage of different hand and arm movements, stemmed from.


Sentient

Sentient was made possible by integrating PoseNet, a machine learning model that allows for real-time human pose estimation, with Three.js, a JavaScript library used to render and animate 3D objects in web browsers. The PoseNet model and the Three.js library were combined using p5.js, so that hand movements serve as inputs to manipulate the form, position, and various other attributes of the objects projected on the screen.
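The wiring between the pose model and the 3D scene can be sketched roughly as below. This is a minimal illustration rather than the project's actual code: it assumes ml5.js's PoseNet wrapper and a Three.js mesh, and the helper name keypointToScene is made up for this example.

```javascript
// A minimal sketch of the wiring (assumed, not the project's code):
// ml5.js's PoseNet wrapper emits keypoints in video pixel space, and
// this helper converts a wrist keypoint into Three.js scene coordinates
// so a mesh can follow the hand.

// Map a keypoint (pixel coords, origin top-left) into a centered
// scene coordinate system spanning [-range, +range] on each axis.
function keypointToScene(kp, videoWidth, videoHeight, range) {
  return {
    x: ((kp.x / videoWidth) * 2 - 1) * range,
    // Flip y: video y grows downward, scene y grows upward.
    y: (1 - (kp.y / videoHeight) * 2) * range,
  };
}

// Browser-side hookup (assumed API shape):
//   const poseNet = ml5.poseNet(video, () => console.log('model ready'));
//   poseNet.on('pose', (poses) => {
//     const wrist = poses[0].pose.rightWrist;      // {x, y, confidence}
//     const p = keypointToScene(wrist, 640, 480, 5);
//     mesh.position.set(p.x, p.y, 0);              // mesh follows the hand
//   });
```

The normalization step is what decouples the camera resolution from the scene: any keypoint source with pixel coordinates could drive the same mesh.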

As a proof of concept, a simulation was created in which users could control the position of a virtual totem and travel through a virtual environment without any peripheral devices, using just the movement of their hands.
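Raw pose estimates tend to jitter from frame to frame, so a simulation like this typically smooths positions before applying them to an on-screen object. A minimal exponential-smoothing helper, assumed here for illustration (the project's actual filtering, if any, is not shown):

```javascript
// Exponential smoothing: blend the previous value towards the new
// estimate by a factor alpha in (0, 1]. Smaller alpha gives steadier
// but laggier motion; alpha = 1 passes the raw estimate through.
function smooth(prev, next, alpha) {
  return prev + (next - prev) * alpha;
}

// Per frame (assumed usage): keep the totem steady despite jitter.
//   totem.x = smooth(totem.x, wrist.x, 0.2);
//   totem.y = smooth(totem.y, wrist.y, 0.2);
```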

Taking this idea one step further, an attempt was made to link the same physical inputs to real-world devices and explore the idea of controlling the technology around us with a simple flick of the finger.

A clip from the first test conducted

To quickly prototype the idea, the same PoseNet model was used to control an LED cube, with the LEDs responding to the position of the hand. One of the key features of interfacing through PoseNet is that each movement or input can be adapted and customized to fit any individual. This means Sentient could be coded to suit the needs of a user with limited motor skills, allowing for a more universal means of connecting with technology.
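One possible shape of that hand-to-LED mapping, sketched as an assumption rather than the project's code: quantize the hand's horizontal position into one of n LED columns and send the index to the microcontroller driving the cube. The function name and the serial hookup in the comments are hypothetical.

```javascript
// Hypothetical mapping (names are assumptions, not the project's code):
// choose which of `ledCount` LED columns to light based on where the
// hand sits horizontally in the video frame.
function handToLedIndex(handX, videoWidth, ledCount) {
  const t = Math.min(Math.max(handX / videoWidth, 0), 1); // clamp to [0, 1]
  return Math.min(Math.floor(t * ledCount), ledCount - 1);
}

// In the browser sketch, the index could then be forwarded to the
// cube's microcontroller, e.g. over a serial bridge:
//   serial.write(handToLedIndex(wrist.x, 640, 8));
```

Replacing the fixed 0-to-videoWidth range with a per-user calibrated min/max is one concrete way the customization described above could work: a user with a limited range of motion would still sweep the full set of LEDs.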



Working Prototype

This image depicts a user directly interfacing with a computer using their hands, without the need for any peripherals.

