Our team uses accelerometers and machine learning to help detect vocal disorders. We capture data about the motion of patients' vocal folds to determine whether their vocal behavior is normal or abnormal.
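A minimal sketch of the classification step, assuming hand-picked signal features (RMS amplitude, zero-crossing rate) and a tuned threshold; the feature choices and threshold here are illustrative, not the team's actual model.

```python
# Hypothetical sketch: labeling a window of vocal-fold accelerometer
# samples as normal vs. abnormal from simple signal features.
import math

def rms(window):
    """Root-mean-square amplitude of one accelerometer window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def zero_crossing_rate(window):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(window, window[1:]) if (a < 0) != (b < 0))
    return crossings / (len(window) - 1)

def classify_window(window, rms_threshold=0.5):
    """Label a window 'abnormal' if its RMS exceeds a tuned threshold."""
    return "abnormal" if rms(window) > rms_threshold else "normal"

quiet = [0.01, -0.02, 0.015, -0.01]
loud = [0.9, -1.1, 1.0, -0.8]
print(classify_window(quiet))  # normal
print(classify_window(loud))   # abnormal
```

A deployed system would learn the decision boundary from labeled patient data rather than hard-coding a threshold.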
Using medical knowledge to improve learned representations of patient health trajectories.
In this project, we aim to build a vision-and-language system that learns and understands the world the way a child does.
EQ-Radio can infer a person’s emotions using wireless signals. It transmits an RF signal and analyzes its reflections off a person’s body to recognize their emotional state (happy, sad, etc.).
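The final recognition step can be pictured as mapping physiological signals recovered from the RF reflections onto a two-dimensional arousal/valence plane; this is only a hedged sketch of that idea, with illustrative quadrant labels and no real signal processing.

```python
# Toy sketch: emotion labels as quadrants of an arousal/valence plane.
# Inputs would come from physiological features (e.g., heartbeat and
# breathing patterns) extracted from RF reflections; here they are
# assumed to already be normalized around zero.
def classify_emotion(arousal, valence):
    """Map a point on the arousal/valence plane to a coarse emotion label."""
    if arousal >= 0 and valence >= 0:
        return "happy"
    if arousal >= 0 and valence < 0:
        return "angry"
    if arousal < 0 and valence >= 0:
        return "calm"
    return "sad"

print(classify_emotion(arousal=0.8, valence=0.6))  # happy
```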
Adding domain knowledge to word embeddings.
One of the challenges of processing real-world spoken content, for tasks such as automatic speech recognition, is the potential presence of multiple languages and dialects. Language and dialect identification is a useful capability for determining which language is being spoken in a recording.
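A common pattern in identification systems is to score an utterance against a profile per language and pick the best match. The toy sketch below does this with character bigrams over text; real dialect identification operates on acoustic features, so this only illustrates the score-and-argmax structure, and the tiny training strings are invented.

```python
# Toy language identification: score a query against per-language
# character-bigram profiles and return the best-matching language.
from collections import Counter

def bigram_profile(text):
    """Character-bigram counts of a lowercased string."""
    text = text.lower()
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def similarity(query, profile):
    """Overlap of bigram counts between a query and a language profile."""
    return sum(min(query[g], profile[g]) for g in query)

TRAINING = {
    "english": "the quick brown fox jumps over the lazy dog",
    "spanish": "el rapido zorro marron salta sobre el perro perezoso",
}

def identify(utterance):
    profiles = {lang: bigram_profile(s) for lang, s in TRAINING.items()}
    query = bigram_profile(utterance)
    return max(profiles, key=lambda lang: similarity(query, profiles[lang]))

print(identify("the dog is lazy"))       # english
print(identify("el perro es perezoso"))  # spanish
```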
We focus on using computational tools to summarize health-related data, helping clinicians spend their time on decision-making rather than on keeping up with the data.
Linking probability with geometry to improve the theory and practice of machine learning
To understand human intelligence, we must be able to model how humans understand stories from multiple characters' perspectives.
We propose a novel aspect-augmented adversarial network for cross-aspect and cross-domain adaptation tasks. The effectiveness of our approach suggests the potential application of adversarial networks to a broader range of NLP tasks for improved representation learning, such as machine translation and language generation.
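One common mechanism behind adversarial domain adaptation is a gradient-reversal layer: the feature encoder is trained to *maximize* the domain classifier's loss while the classifier minimizes it, pushing features toward domain invariance. The sketch below shows only that reversal step with toy numbers; it is a generic illustration of the adversarial objective, not the aspect-augmented architecture itself.

```python
# Minimal sketch of gradient reversal for adversarial domain adaptation.
import numpy as np

def grad_reversal_backward(upstream_grad, lam=1.0):
    """Identity on the forward pass; on the backward pass, multiply the
    domain classifier's gradient by -lam before it reaches the encoder,
    so the encoder ascends the domain loss the classifier descends."""
    return -lam * upstream_grad

# Toy backward pass: the reversed gradient pushes the encoder away from
# domain-discriminative features.
domain_grad = np.array([0.2, -0.5, 0.1])
encoder_grad = grad_reversal_backward(domain_grad, lam=0.7)
print(encoder_grad)
```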