BIODS 237: Deep Learning in Genomics and Biomedicine (BIOMEDIN 273B, CS 273B, GENE 236)

Recent breakthroughs in high-throughput genomic and biomedical data are transforming biological sciences into "big data" disciplines. In parallel, progress in deep neural networks is revolutionizing fields such as image recognition, natural language processing and, more broadly, AI. This course explores the exciting intersection between these two advances. The course will start with an introduction to deep learning and an overview of the relevant background in genomics and high-throughput biotechnology, focusing on the available data and their relevance. It will then cover ongoing developments in deep learning (supervised, unsupervised and generative models), with a focus on applications of these methods to biomedical data, which are beginning to produce dramatic results. In addition to predictive modeling, the course emphasizes how to visualize and extract interpretable biological insights from such models. Recent papers from the literature will be presented and discussed. Students will be introduced to and work with popular deep learning software frameworks. Students will work in groups on a final class project using real-world datasets. Prerequisites: College calculus, linear algebra, basic probability and statistics such as CS109, and basic machine learning such as CS229. No prior knowledge of genomics is necessary.
Terms: Aut | Units: 3 | Grading: Medical Option (Med-Ltr-CR/NC)
Instructors: Kundaje, A. (PI); Zou, J. (PI)

BIOMEDIN 273B: Deep Learning in Genomics and Biomedicine (BIODS 237, CS 273B, GENE 236)

Recent breakthroughs in high-throughput genomic and biomedical data are transforming biological sciences into "big data" disciplines. In parallel, progress in deep neural networks is revolutionizing fields such as image recognition, natural language processing and, more broadly, AI. This course explores the exciting intersection between these two advances. The course will start with an introduction to deep learning and an overview of the relevant background in genomics and high-throughput biotechnology, focusing on the available data and their relevance. It will then cover ongoing developments in deep learning (supervised, unsupervised and generative models), with a focus on applications of these methods to biomedical data, which are beginning to produce dramatic results. In addition to predictive modeling, the course emphasizes how to visualize and extract interpretable biological insights from such models. Recent papers from the literature will be presented and discussed. Students will be introduced to and work with popular deep learning software frameworks. Students will work in groups on a final class project using real-world datasets. Prerequisites: College calculus, linear algebra, basic probability and statistics such as CS109, and basic machine learning such as CS229. No prior knowledge of genomics is necessary.
Terms: Aut | Units: 3 | Grading: Medical Option (Med-Ltr-CR/NC)
Instructors: Kundaje, A. (PI); Zou, J. (PI)

CS 109: Introduction to Probability for Computer Scientists

Topics include: counting and combinatorics, random variables, conditional probability, independence, distributions, expectation, point estimation, and limit theorems. Applications of probability in computer science, including machine learning and the use of probability in the analysis of algorithms. Prerequisites: CS 103, CS 106B or 106X, and multivariate calculus at the level of MATH 51 or CME 100 or equivalent.
Terms: Aut, Spr, Sum | Units: 3-5 | UG Reqs: GER:DB-EngrAppSci, WAY-AQR, WAY-FR | Grading: Letter or Credit/No Credit
Instructors: Piech, C. (PI)

CS 124: From Languages to Information (LINGUIST 180, LINGUIST 280)

Extracting meaning, information, and structure from human language text, speech, web pages, social networks. Methods include: string algorithms, edit distance, language modeling, the noisy channel, machine learning classifiers, inverted indices, collaborative filtering, neural embeddings, PageRank. Applications such as question answering, sentiment analysis, information retrieval, text classification, social network models, spell checking, recommender systems, chatbots. Prerequisites: CS103, CS107, CS109.
Terms: Win | Units: 3-4 | Grading: Letter or Credit/No Credit
Instructors: Jurafsky, D. (PI)

CS 250: Algebraic Error Correcting Codes (EE 387)

Introduction to the theory of error-correcting codes, emphasizing algebraic constructions and their diverse applications throughout computer science and engineering. Topics include basic bounds on error-correcting codes; Reed-Solomon and Reed-Muller codes; and list-decoding, list-recovery, and locality. Applications may include communication, storage, complexity theory, pseudorandomness, cryptography, streaming algorithms, group testing, and compressed sensing. Prerequisites: Linear algebra, basic probability (at the level of, say, CS109, CME106 or EE178), and "mathematical maturity" (students will be asked to write proofs). Familiarity with finite fields will be helpful but not required.
Terms: Win | Units: 3 | Grading: Letter or Credit/No Credit
Instructors: Wootters, M. (PI)

CS 273B: Deep Learning in Genomics and Biomedicine (BIODS 237, BIOMEDIN 273B, GENE 236)

Recent breakthroughs in high-throughput genomic and biomedical data are transforming biological sciences into "big data" disciplines. In parallel, progress in deep neural networks is revolutionizing fields such as image recognition, natural language processing and, more broadly, AI. This course explores the exciting intersection between these two advances. The course will start with an introduction to deep learning and an overview of the relevant background in genomics and high-throughput biotechnology, focusing on the available data and their relevance. It will then cover ongoing developments in deep learning (supervised, unsupervised and generative models), with a focus on applications of these methods to biomedical data, which are beginning to produce dramatic results. In addition to predictive modeling, the course emphasizes how to visualize and extract interpretable biological insights from such models. Recent papers from the literature will be presented and discussed. Students will be introduced to and work with popular deep learning software frameworks. Students will work in groups on a final class project using real-world datasets. Prerequisites: College calculus, linear algebra, basic probability and statistics such as CS109, and basic machine learning such as CS229. No prior knowledge of genomics is necessary.
Terms: Aut | Units: 3 | Grading: Medical Option (Med-Ltr-CR/NC)
Instructors: Kundaje, A. (PI); Zou, J. (PI)

EE 387: Algebraic Error Correcting Codes (CS 250)

Introduction to the theory of error-correcting codes, emphasizing algebraic constructions and their diverse applications throughout computer science and engineering. Topics include basic bounds on error-correcting codes; Reed-Solomon and Reed-Muller codes; and list-decoding, list-recovery, and locality. Applications may include communication, storage, complexity theory, pseudorandomness, cryptography, streaming algorithms, group testing, and compressed sensing. Prerequisites: Linear algebra, basic probability (at the level of, say, CS109, CME106 or EE178), and "mathematical maturity" (students will be asked to write proofs). Familiarity with finite fields will be helpful but not required.
Terms: Win | Units: 3 | Grading: Letter or Credit/No Credit
Instructors: Wootters, M. (PI)

GENE 236: Deep Learning in Genomics and Biomedicine (BIODS 237, BIOMEDIN 273B, CS 273B)

Recent breakthroughs in high-throughput genomic and biomedical data are transforming biological sciences into "big data" disciplines. In parallel, progress in deep neural networks is revolutionizing fields such as image recognition, natural language processing and, more broadly, AI. This course explores the exciting intersection between these two advances. The course will start with an introduction to deep learning and an overview of the relevant background in genomics and high-throughput biotechnology, focusing on the available data and their relevance. It will then cover ongoing developments in deep learning (supervised, unsupervised and generative models), with a focus on applications of these methods to biomedical data, which are beginning to produce dramatic results. In addition to predictive modeling, the course emphasizes how to visualize and extract interpretable biological insights from such models. Recent papers from the literature will be presented and discussed. Students will be introduced to and work with popular deep learning software frameworks. Students will work in groups on a final class project using real-world datasets. Prerequisites: College calculus, linear algebra, basic probability and statistics such as CS109, and basic machine learning such as CS229. No prior knowledge of genomics is necessary.
Terms: Aut | Units: 3 | Grading: Medical Option (Med-Ltr-CR/NC)
Instructors: Kundaje, A. (PI); Zou, J. (PI)

LINGUIST 180: From Languages to Information (CS 124, LINGUIST 280)

Extracting meaning, information, and structure from human language text, speech, web pages, social networks. Methods include: string algorithms, edit distance, language modeling, the noisy channel, machine learning classifiers, inverted indices, collaborative filtering, neural embeddings, PageRank. Applications such as question answering, sentiment analysis, information retrieval, text classification, social network models, spell checking, recommender systems, chatbots. Prerequisites: CS103, CS107, CS109.
Terms: Win | Units: 3-4 | Grading: Letter or Credit/No Credit
Instructors: Jurafsky, D. (PI)

LINGUIST 280: From Languages to Information (CS 124, LINGUIST 180)

Extracting meaning, information, and structure from human language text, speech, web pages, social networks. Methods include: string algorithms, edit distance, language modeling, the noisy channel, machine learning classifiers, inverted indices, collaborative filtering, neural embeddings, PageRank. Applications such as question answering, sentiment analysis, information retrieval, text classification, social network models, spell checking, recommender systems, chatbots. Prerequisites: CS103, CS107, CS109.
Terms: Win | Units: 3-4 | Grading: Letter or Credit/No Credit
Instructors: Jurafsky, D. (PI)