Recurrent neural networks (RNNs) are a class of neural networks that can process sequential data, such as text. RNNs have been successfully applied to many natural language processing tasks, including text generation, classification, and translation.
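The defining feature of an RNN is a hidden state that is updated once per sequence element, so earlier inputs influence how later ones are processed. A minimal sketch (scalar weights and the specific values here are illustrative assumptions; real RNNs use weight matrices learned from data):

```python
import math

def rnn_step(x_t, h_prev, w_xh=0.5, w_hh=0.8, b_h=0.0):
    # One Elman-style update: h_t = tanh(w_xh * x_t + w_hh * h_prev + b_h).
    # Scalar weights keep the sketch minimal; real RNNs use matrices.
    return math.tanh(w_xh * x_t + w_hh * h_prev + b_h)

def encode_sequence(xs):
    # Fold a sequence into a single hidden state, one step per element.
    h = 0.0
    for x in xs:
        h = rnn_step(x, h)
    return h
```

Because the state is threaded through every step, the same multiset of inputs in a different order yields a different encoding, which is what lets RNNs model word order in text.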
Hosted by Ricardo Henao and Michael Pencina.
Hosted by David Page, Svati Shah, and Michael Pencina.
This seminar will discuss one of the most popular techniques for automatically identifying latent or hidden themes in text: topic models.
Hosted by Matt Engelhard, Ben Goldstein, and Michael Pencina.
This workshop presents an ongoing body of work in computational 3D modeling, centered on the simulation and rendering of geographic forms: landmasses, landforms, and large-scale geologic features such as mountain ranges.
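The abstract does not name the workshop's techniques, but a classic method for synthesizing such landforms is fractal midpoint displacement, sketched here in 1D (the parameters are illustrative assumptions):

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, seed=0):
    # Generate a 1D height profile (a terrain silhouette) by recursively
    # inserting a randomly displaced midpoint between each pair of points.
    # The displacement range shrinks by `roughness` at each level, which
    # produces the self-similar jaggedness characteristic of mountains.
    rng = random.Random(seed)
    heights = [left, right]
    spread = 1.0
    for _ in range(depth):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-spread, spread)
            refined.extend([a, mid])
        refined.append(heights[-1])
        heights = refined
        spread *= roughness
    return heights  # 2**depth + 1 points
```

The same idea generalizes to 2D as the diamond-square algorithm, a common starting point for procedural mountain ranges.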
Recommendation systems, such as the algorithms that power Netflix suggestions, recommend jobs to apply for, and curate Facebook feeds, are powerful tools that can help users navigate an overwhelming array of choices. However, these systems can have unintended negative effects if left unchecked.
This seminar will provide an introduction to text analysis. Text-based data abounds on social media platforms, digital archives, and elsewhere, but its highly unstructured nature poses numerous challenges for modeling. We will discuss basic concepts in text analysis.
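The usual first step in taming unstructured text is to tokenize each document and count terms, turning a corpus into a structured document-term matrix. A minimal sketch (the tokenizer here is a deliberately simple baseline, not the seminar's specific method):

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphabetic characters: a simple baseline
    # tokenizer; real pipelines often also remove stopwords and stem.
    return [t for t in re.split(r"[^a-z]+", text.lower()) if t]

def document_term_matrix(docs):
    # One row per document, one column per vocabulary word; each cell
    # holds the count of that word in that document.
    vocab = sorted({t for d in docs for t in tokenize(d)})
    matrix = []
    for d in docs:
        counts = Counter(tokenize(d))
        matrix.append([counts.get(w, 0) for w in vocab])
    return vocab, matrix
```

For example, `document_term_matrix(["the cat sat", "the cat ran"])` yields the vocabulary `["cat", "ran", "sat", "the"]` with rows `[1, 0, 1, 1]` and `[1, 1, 0, 1]`, a representation that downstream models (including topic models) can work with.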
In this seminar, we will learn how to collect data using Application Programming Interfaces (APIs). This lecture will introduce key concepts such as credentialing and rate limiting, and provide an example of how to collect data from Twitter.
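Rate limiting means the provider caps how many requests you may send per time window; exceed it and the API typically rejects requests (often with HTTP 429). One way to stay under a cap is to enforce a minimum interval between calls client-side, sketched below (the interval and the loop around it are illustrative assumptions; each API publishes its own limits and credentialing requirements):

```python
import time

class RateLimiter:
    # Enforce a minimum interval between successive calls, e.g. to stay
    # under an API's request quota.
    def __init__(self, min_interval):
        self.min_interval = min_interval
        self._last = None

    def wait(self):
        # Sleep just long enough that at least `min_interval` seconds
        # separate this call from the previous one, then record the time.
        now = time.monotonic()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()
```

In a collection loop you would call `limiter.wait()` before each request (e.g. `limiter = RateLimiter(1.0)` for at most one request per second), with your API credentials attached to the request itself.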