The goal of computer vision is to enable computers to understand visual content (e.g., images, videos, 3D, stereo), usually for the purpose of making predictions (classification, detection, captioning, generation, etc.).
Artificial intelligence (AI) can reduce costs, improve efficiency, and potentially improve accuracy in many critical areas of life that impact humans. And yet, many of the tools of AI lack transparency, have inherent biases, and are difficult to govern.
There is a big data revolution happening in astrophysics as the next generation of telescopes comes online, with a single telescope producing 20 terabytes of data per night.
Like many fields, neuroscience is experiencing a data deluge. Machine learning techniques are being used to learn better biomarkers, make sense of the brain, and automate tasks.
There is widespread concern that social media platforms have created filter bubbles that reinforce people's pre-existing views and prevent them from being exposed to people who do not share those views.
A central objective in synthetic biology is to control the dynamics of engineered cells or cell populations in a predictable manner. Achieving this objective requires quantitative descriptions of biological systems that are both reliable and fast enough to solve to guide experiments.
Natural language processing (NLP) is a field focused on developing automated methods for analyzing text and for generating it, for example in machine translation. Neural networks have recently become the state-of-the-art approach for NLP.
PyTorch is an open source machine learning framework popular for building neural networks. In this hands-on session, we'll walk through building and training a neural network, introducing the basic mechanics of PyTorch. Bring a laptop and be ready to code!
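The basic mechanics the session covers might look something like the sketch below: define a network, a loss, and an optimizer, then run a training loop. The toy data, architecture, and hyperparameters are illustrative assumptions, not material from the session itself.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data: y = 3x + 1 plus a little noise
x = torch.linspace(-1, 1, 64).unsqueeze(1)
y = 3 * x + 1 + 0.05 * torch.randn_like(x)

# A small network, a loss function, and an optimizer
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# The training loop: forward pass, backpropagation, parameter update
for epoch in range(200):
    optimizer.zero_grad()        # clear gradients from the previous step
    loss = loss_fn(model(x), y)  # forward pass and loss computation
    loss.backward()              # backpropagate gradients
    optimizer.step()             # update parameters

print(f"final loss: {loss.item():.4f}")
```

Every PyTorch training loop follows this same four-step rhythm (zero gradients, forward, backward, step), regardless of how large the model is.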
Recent technological advancements make it possible to closely and continuously monitor patients on multiple scales, both inside and outside of the clinic.
In machine learning, models are developed to represent and make predictions from data. A model starts with random parameters and must "learn" good values for those parameters from historical data.
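A minimal illustration of that idea, using a hypothetical one-parameter model fit by gradient descent (the data and learning rate are invented for the example):

```python
# Historical data generated by the "true" rule y = 2x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0    # start from an arbitrary (here zero) parameter
lr = 0.01  # learning rate

for _ in range(500):
    # Gradient of mean squared error: mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step the parameter downhill

print(round(w, 3))  # w converges toward 2.0, the rule behind the data
```

The model never sees the rule y = 2x directly; it recovers it purely by adjusting its parameter to reduce prediction error on the historical examples.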