1 - 5 of 5 results for: CS229

CS 229: Machine Learning (STATS 229)

Topics: statistical pattern recognition, linear and non-linear regression, non-parametric methods, exponential family, GLMs, support vector machines, kernel methods, deep learning, model/feature selection, learning theory, ML advice, clustering, density estimation, EM, dimensionality reduction, ICA, PCA, reinforcement learning and adaptive control, Markov decision processes, approximate dynamic programming, and policy search. Prerequisites: knowledge of basic computer science principles and skills sufficient to write a reasonably non-trivial program in Python/numpy; familiarity with probability theory at the level of CS109 or STATS116; and familiarity with multivariable calculus and linear algebra at the level of MATH51.
Terms: Aut, Spr | Units: 3-4
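
As a rough illustration of the Python/numpy fluency the prerequisites assume (a minimal sketch, not course material; the data shapes and variable names here are invented for the example), a closed-form least-squares linear regression fit might look like this:

import numpy as np

# Toy data: 100 examples, 3 features (shapes chosen arbitrarily for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

# Normal equations: solve (X^T X) w = X^T y with a linear solver
# rather than forming an explicit matrix inverse.
w_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(w_hat)  # should be close to true_w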

CS 229T: Statistical Learning Theory (STATS 231)

How do we formalize what it means for an algorithm to learn from data? How do we use mathematical thinking to design better machine learning methods? This course focuses on developing mathematical tools for answering these questions. We will present various learning algorithms and prove theoretical guarantees about them. Topics include generalization bounds, implicit regularization, the theory of deep learning, spectral methods, and online learning and bandit problems. Prerequisites: a solid background in linear algebra (Math 104, Math 113, or CS205), probability theory (CS109 or STAT 116), and statistics and machine learning (STATS 315A, CS 229, or STATS 216).
Last offered: Autumn 2018
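
As one standard example of the kind of generalization bound such a course covers (a textbook Hoeffding-plus-union-bound result, not material specific to this offering): for a finite hypothesis class \(\mathcal{H}\), population error \(\operatorname{err}(h)\), and empirical error \(\widehat{\operatorname{err}}(h)\) on \(m\) i.i.d. training examples, with probability at least \(1-\delta\),
\[
\bigl|\operatorname{err}(h) - \widehat{\operatorname{err}}(h)\bigr| \;\le\; \sqrt{\frac{\ln\!\bigl(2|\mathcal{H}|/\delta\bigr)}{2m}} \qquad \text{for all } h \in \mathcal{H}.
\]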

CS 335: Fair, Accountable, and Transparent (FAT) Deep Learning

Deep learning-based AI systems have demonstrated remarkable learning capabilities. A growing field in deep learning research focuses on improving the Fairness, Accountability, and Transparency (FAT) of a model in addition to its performance. Although FAT is difficult to achieve, emerging technical approaches in this area show promise for building better FAT AI systems. In this course, we will study the rigorous computer science necessary for FAT deep learning and dive into the technical underpinnings of topics including fairness, robustness, interpretability, common sense, AI deception, and privacy. These topics reflect state-of-the-art research in FAT, are socially important, and attract strong industrial interest driven by government and other policy regulation. The course focuses on the algorithmic and statistical methods needed to approach FAT AI from a deep learning perspective, and we will also discuss several application areas where these techniques can be applied. Prerequisites: intermediate knowledge of statistics, machine learning, and AI. Qualified students will have taken any one of the following, or their advanced equivalents: CS224N, CS230, CS231N, CS236, CS273B. Alternatively, students who have taken CS229 or have equivalent knowledge can be admitted with the permission of the instructors.
Terms: Spr | Units: 3
Instructors: Landay, J. (PI)
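
As a small illustration of the kind of statistical fairness measure a course like this might cover (a minimal sketch of a demographic parity gap; the arrays and values below are invented for the example, not drawn from the course):

import numpy as np

# y_pred: binary model predictions; group: binary protected attribute (illustrative data).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Demographic parity gap: difference in positive-prediction rates between the two groups.
rate_g0 = y_pred[group == 0].mean()
rate_g1 = y_pred[group == 1].mean()
print("demographic parity gap:", abs(rate_g0 - rate_g1))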

CS 375: Large-Scale Neural Network Modeling for Neuroscience (PSYCH 249)

Introduction to designing, building, and training large-scale neural networks for modeling brain and behavioral data, including: deep convolutional neural network models of sensory systems (vision, audition, somatosensation); variational and generative methods for neural interpretation; recurrent neural networks for dynamics, memory, and attention; interactive agent-based deep reinforcement learning for cognitive modeling; and methods and metrics for comparing such models to real-world neural data. Attention will be given to established methods as well as cutting-edge techniques. Students will learn the conceptual bases for deep neural network models and will also learn to implement and train large-scale models in TensorFlow using GPUs. Requirements: fluency in Unix shell and Python programming; familiarity with differential equations, linear algebra, and probability theory; prior experience with modern machine learning concepts (e.g., CS229) and basic neural network training tools (e.g., CS230 and/or CS231n). Prior knowledge of basic cognitive science or neuroscience is not required but helpful.
Terms: Aut | Units: 1-3
Instructors: Yamins, D. (PI)
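
As a rough sketch of the TensorFlow model-building and training workflow the course presupposes familiarity with (a generic small convolutional network on made-up data; none of the names, shapes, or hyperparameters come from the course):

import numpy as np
import tensorflow as tf

# Toy batch: 64 fake 32x32 RGB images with 10 fake class labels.
x = np.random.rand(64, 32, 32, 3).astype("float32")
y = np.random.randint(0, 10, size=64)

# A small convolutional network built with the Keras Sequential API.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# On a GPU machine, TensorFlow places these operations on the GPU automatically.
model.fit(x, y, epochs=1, batch_size=32)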

PSYCH 249: Large-Scale Neural Network Modeling for Neuroscience (CS 375)

Introduction to designing, building, and training large-scale neural networks for modeling brain and behavioral data, including: deep convolutional neural network models of sensory systems (vision, audition, somatosensation); variational and generative methods for neural interpretation; recurrent neural networks for dynamics, memory, and attention; interactive agent-based deep reinforcement learning for cognitive modeling; and methods and metrics for comparing such models to real-world neural data. Attention will be given to established methods as well as cutting-edge techniques. Students will learn the conceptual bases for deep neural network models and will also learn to implement and train large-scale models in TensorFlow using GPUs. Requirements: fluency in Unix shell and Python programming; familiarity with differential equations, linear algebra, and probability theory; prior experience with modern machine learning concepts (e.g., CS229) and basic neural network training tools (e.g., CS230 and/or CS231n). Prior knowledge of basic cognitive science or neuroscience is not required but helpful.
Terms: Aut | Units: 1-3
Instructors: Yamins, D. (PI)