2019 Startup Internship Reflection: Neuroview, Tiger Wu

Internship Location: Neuroview

My name is Tiger Wu, and I am a fourth-year computer science student in the UVA School of Engineering. This semester, I am interning part-time with Neuroview, a startup in Charlottesville that develops machine learning algorithms for detecting stroke, multiple sclerosis, and other neurological diseases. People with neurological diseases can show symptoms of their disease through their facial expressions. Neuroview aims to build an application that records videos of users’ facial expressions and accurately determines which users have neurological diseases. A machine learning algorithm for this problem would need to train on many videos of people making facial expressions.

In machine learning, data collection is a crucial step, and some would argue that good-quality data matters more than a superior algorithm. I am tasked with designing, implementing, and maintaining the front-end components of the Neuroview site used for recording videos. For the first couple of weeks of my internship, I used Node.js, JavaScript, HTML, CSS, and Handlebars to format the web pages for the data-collection site. I coordinate with my supervisor via Google Hangouts, and we use GitHub to share code. Every week or so, we meet over Google Hangouts to discuss how the web pages should look, and I implement our ideas afterwards.
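To give a rough sense of how one of these pages might be wired up, the sketch below shows an Express server rendering a Handlebars template. The route, template, and variable names here are my own illustrative placeholders, not Neuroview's actual code.

```javascript
// Minimal sketch of serving a Handlebars-templated page with Express.
// Route, template, and variable names are illustrative only.
const express = require('express');
const { engine } = require('express-handlebars');

const app = express();

// Register Handlebars as the view engine; templates live in ./views
app.engine('handlebars', engine());
app.set('view engine', 'handlebars');
app.set('views', './views');

// Landing page: renders views/terms.handlebars with a page title
app.get('/', (req, res) => {
  res.render('terms', { title: 'Neuroview' });
});

app.listen(3000, () => console.log('Listening on port 3000'));
```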

When a user first visits the website, they are taken to a page describing Neuroview’s mission and its terms and conditions. Once the user accepts the conditions, they are taken to the page that records the videos. The website uses the camera and recorder on the user’s machine to capture the user’s facial expressions. The recording page contains two video frames: one that plays a video demonstrating which facial expressions to make, and one that shows the user what the camera sees. The same page also lists detailed instructions for using the recorder.
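A browser can show this kind of live camera preview through the standard getUserMedia API. The snippet below is a minimal sketch of that idea; the element ID and media constraints are assumptions for illustration, not the site's actual code.

```javascript
// Minimal sketch of showing the user's camera feed in a <video> element.
// The element ID ('camera-preview') and constraints are illustrative only.
async function startPreview() {
  // Ask the browser for camera and microphone access
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  // Attach the live stream to the preview frame so the user sees
  // what the camera sees
  const preview = document.getElementById('camera-preview');
  preview.srcObject = stream;
  await preview.play();

  return stream;
}
```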

Once the user is ready to record, they press the record button, which starts a three-second countdown. The countdown is displayed in the frame that shows what the camera sees. When the countdown finishes, the user makes the facial expressions demonstrated in the instructional video. The page has a timer, implemented in the JavaScript code, that automatically ends the recording session after a specified amount of time. A unique ID is assigned to the recording session, and the recorded video is sent to the back-end database.
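The flow of countdown, timed recording, and upload could look roughly like the sketch below, which uses the browser's MediaRecorder API. The time limit, element ID, session-ID scheme, and upload endpoint are all assumptions I am making for the example, not the values used in the real application.

```javascript
// Rough sketch of the record flow: a three-second countdown, a timed
// recording with MediaRecorder, and an upload tagged with a session ID.
// The time limit, element ID, and '/upload' endpoint are assumptions.
const RECORDING_LIMIT_MS = 30000; // example limit, not the real value

async function recordSession(stream) {
  // Show a 3-2-1 countdown in the preview overlay before recording starts
  const overlay = document.getElementById('countdown-overlay');
  for (let n = 3; n > 0; n--) {
    overlay.textContent = n;
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
  overlay.textContent = '';

  // Record the camera stream into chunks of video data
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (event) => chunks.push(event.data);

  const finished = new Promise((resolve) => {
    recorder.onstop = resolve;
  });

  recorder.start();
  // Automatically end the session after the time limit
  setTimeout(() => recorder.stop(), RECORDING_LIMIT_MS);
  await finished;

  // Tag the session with a unique ID and send the video to the back end
  const sessionId = crypto.randomUUID();
  const form = new FormData();
  form.append('sessionId', sessionId);
  form.append('video', new Blob(chunks, { type: 'video/webm' }));
  await fetch('/upload', { method: 'POST', body: form });
}
```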

Moving forward, we may add security features to the site to keep out malicious users. We are considering a CAPTCHA on the first page to validate that users are human, as well as a more robust method for accepting the terms and conditions. We may also redesign the recorder page to make it more foolproof and user-friendly. For example, with a fixed time limit, a user might not finish making the facial expressions before the recording ends; however, if we let users stop and start the recorder manually, some videos may take up more storage than necessary.