CS 229: Machine Learning (STATS 229)

Topics: statistical pattern recognition, linear and non-linear regression, non-parametric methods, exponential family, GLMs, support vector machines, kernel methods, deep learning, model/feature selection, learning theory, ML advice, clustering, density estimation, EM, dimensionality reduction, ICA, PCA, reinforcement learning and adaptive control, Markov decision processes, approximate dynamic programming, and policy search. Prerequisites: linear algebra, and basic probability and statistics.
Terms: Aut, Spr | Units: 3-4 | Grading: Letter or Credit/No Credit

CS 229T: Statistical Learning Theory (STATS 231)

How do we formalize what it means for an algorithm to learn from data? How do we use mathematical thinking to design better machine learning methods? This course focuses on developing mathematical tools for answering these questions. We will present various learning algorithms and prove theoretical guarantees about them. Topics include generalization bounds, implicit regularization, the theory of deep learning, spectral methods, and online learning and bandit problems. Prerequisites: a solid background in linear algebra (MATH 104, MATH 113, or CS 205), probability theory (CS 109 or STATS 116), and statistics and machine learning (STATS 315A, CS 229, or STATS 216).
Terms: not given this year | Units: 3 | Grading: Letter or Credit/No Credit

CS 375: Large-Scale Neural Network Modeling for Neuroscience (PSYCH 249)

Introduction to designing, building, and training large-scale neural networks for modeling brain and behavioral data, including: deep convolutional neural network models of sensory systems (vision, audition, somatosensation); variational and generative methods for neural interpretation; recurrent neural networks for dynamics, memory, and attention; interactive agent-based deep reinforcement learning for cognitive modeling; and methods and metrics for comparing such models to real-world neural data. Attention will be given to both established methods and cutting-edge techniques. Students will learn the conceptual bases for deep neural network models and will also learn to implement and train large-scale models in TensorFlow using GPUs. Requirements: fluency in Unix shell and Python programming; familiarity with differential equations, linear algebra, and probability theory; prior experience with modern machine learning concepts (e.g., CS 229) and basic neural network training tools (e.g., CS 230 and/or CS 231n). Prior knowledge of basic cognitive science or neuroscience is not required but helpful.
Terms: Aut | Units: 1-3 | Grading: Letter or Credit/No Credit
Instructors: Yamins, D. (PI)

PSYCH 249: Large-Scale Neural Network Modeling for Neuroscience (CS 375)

Introduction to designing, building, and training large-scale neural networks for modeling brain and behavioral data, including: deep convolutional neural network models of sensory systems (vision, audition, somatosensation); variational and generative methods for neural interpretation; recurrent neural networks for dynamics, memory, and attention; interactive agent-based deep reinforcement learning for cognitive modeling; and methods and metrics for comparing such models to real-world neural data. Attention will be given to both established methods and cutting-edge techniques. Students will learn the conceptual bases for deep neural network models and will also learn to implement and train large-scale models in TensorFlow using GPUs. Requirements: fluency in Unix shell and Python programming; familiarity with differential equations, linear algebra, and probability theory; prior experience with modern machine learning concepts (e.g., CS 229) and basic neural network training tools (e.g., CS 230 and/or CS 231n). Prior knowledge of basic cognitive science or neuroscience is not required but helpful.
Terms: Aut | Units: 1-3 | Grading: Letter or Credit/No Credit
Instructors: Yamins, D. (PI)