{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"source": [
"# Lecture 1: Introduction to machine learning"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "skip"
},
"tags": []
},
"source": [
"![](https://www.tensorflow.org/images/colab_logo_32px.png)\n",
"[Run in colab](https://colab.research.google.com/drive/1zNonj4k0gGhz8Q9kg-5kMk2y9Rq-yjJQ)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:13:11.425191Z",
"iopub.status.busy": "2024-01-10T00:13:11.424955Z",
"iopub.status.idle": "2024-01-10T00:13:11.435722Z",
"shell.execute_reply": "2024-01-10T00:13:11.435205Z"
},
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Last executed: 2024-01-10 00:13:11\n"
]
}
],
"source": [
"import datetime\n",
"now = datetime.datetime.now()\n",
"print(\"Last executed: \" + now.strftime(\"%Y-%m-%d %H:%M:%S\"))"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Course overview"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"### Description and objectives\n",
"\n",
"This module covers how to apply machine learning techniques to large data-sets, so-called *big-data*. \n",
"\n",
"An introduction to machine learning (ML) is presented to provide a general understanding of the concepts of machine learning, common machine learning techniques, and how to apply these methods to data-sets of moderate sizes. \n",
"\n",
"Deep learning and computing frameworks to scale machine learning techniques to big-data are then presented. \n",
"\n",
"Scientific data formats and data curation methods are also discussed."
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"source": [
"### Syllabus\n",
"\n",
"Foundations of ML (e.g. overview of ML, training, data wrangling, scikit-learn, performance analysis, gradient descent), data formats and curation (e.g. data pipelines, data version control, databases, big-data), ML methods (e.g. logistic regression, SVMs, ANNs, decision trees, ensemble learning and random forests, dimensionality reduction), deep learning and scaling to big-data (e.g. TensorFlow, \n",
"Deep ANNs, CNNs, RNNs, Autoencoders) and applications of ML in astrophysics, high-energy physics and industry."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Prerequisites\n",
"\n",
"Students should have a reasonable working knowledge of Python, some familiarity with working in the command line environment in Linux/Unix based operating systems, and a general understanding of elementary mathematics, including linear algebra and calculus. \n",
"\n",
"No previous familiarity with machine learning is required."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Resources"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Textbooks \n",
"\n",
"- VanderPlas, [\"*Python data science handbook*\"](https://jakevdp.github.io/PythonDataScienceHandbook/), O'Reilly, 2017, ISBN 9781491912058\n",
" ([Example code](https://github.com/jakevdp/PythonDataScienceHandbook))\n",
"\n",
"- Geron (1st Edition), [\"*Hands-on machine learning with Scikit-Learn and TensorFlow*\"](https://www.oreilly.com/library/view/hands-on-machine-learning/9781491962282/), O'Reilly, 2017, ISBN 9781491962299\n",
" ([Example code](https://github.com/ageron/handson-ml))\n",
"\n",
"- Geron (2nd Edition), [\"*Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow*\"](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/), O'Reilly, 2019, ISBN 9781492032649 ([Example code](https://github.com/ageron/handson-ml2))\n",
"\n",
"- Goodfellow, Bengio, Courville (GBC), [\"*Deep learning*\"](http://www.deeplearningbook.org), MIT Press, 2016, ISBN 9780262035613"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Tutorials \n",
" \n",
"- [Scikit-Learn tutorial](https://github.com/jakevdp/sklearn_tutorial), VanderPlas"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Main code frameworks and libraries\n",
"\n",
"- [Scikit-Learn](http://scikit-learn.org/stable/)\n",
" \n",
"- [TensorFlow](https://www.tensorflow.org/)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Schedule\n",
"\n",
"Lectures will run on Friday's from 10am-1pm. \n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Jupyter notebooks\n",
"\n",
"Each lecture has an accompaning Jupyter notebook, with executable code.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"These slides are a Jupyter notebook.\n",
"\n",
"Notebooks can be viewed in slide mode using [RISE](https://rise.readthedocs.io/en/stable/)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"The supporting Jupyter notebooks thus serve as the course *slides*, *lecture notes*, and *examples*.\n",
"\n",
"A book version is also made available."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Course philosophy\n",
"\n",
"This is a practical, hands-on course. While we will cover basic concepts and background theory (but not in great mathematical depth or rigor), a large component of the course will focus on implementing and running machine learning algorithms. Many code examples and exercises will be considered."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The course Jupyter notebooks will be made available weekly, in advance of lectures. Students can then follow examples in the lectures by running code live (and inspecting variables and making modifications). "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Exercises\n",
"\n",
"A number of lectures are accompanies by an additional Jupyter notebook with related examples for you to complete. The solutions to these exercises will be made available as the module progresses. These exercises will not be graded but are intended to help improve your understanding of the lecture material."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Assessment\n",
"\n",
"\n",
"- Courseworks: 2 x 20% = 40%\n",
"- Exam: 60%\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Coursework\n",
"\n",
"Courseworks will involve downloading a Jupyter notebook, which you will need to complete. \n",
"\n",
"Throughout the notebook you will need to complete code, analytic exercises and descriptive answers. Much of the grading of the coursework will be performed automatically.\n",
"\n",
"There will be two courseworks. The first coursework will be issued after the first 9 lectures, when all the material required to complete the first coursework will be covered. The second coursework will be issued after the first 15 lectures, when all the material required to complete the second coursework will be covered. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Exam\n",
"\n",
"*Answer THREE questions* of the FOUR questions provided.\n",
"\n",
"Each question has equal mark (15 marks per question).\n",
"\n",
"Markers place importance on clarity and a portion of the marks are awarded for clear descriptions, answers, drawings, and diagrams, and attention to precision in quantitative answers."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Computing setup\n",
"\n",
"Students can bring their own laptops to class in order to run notebooks and complete examples.\n",
" \n",
"All examples are implemented in Python 3. \n",
"\n",
"The main Python libraries that are required include the following:\n",
"```\n",
"- numpy \n",
"- scipy\n",
"- matplotlib\n",
"- scikit-learn\n",
"- ipython/jupyter\n",
"- seaborn\n",
"- tensorflow\n",
"- astroML\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"An environment to run the notebooks can be set up with the versions of the libraries in `requirements.txt` (details below), following the steps below in terminal (MacOS, Linux) or anaconda prompt (Windows): \n",
"\n",
"1. Create an environment named mlbd with Python 3.11.\n",
"\n",
" ```\n",
" conda create --name mlbd python=3.11\n",
" ```\n",
"\n",
"2. Activate the `mlbd` environment and then install the libraries in the requirements.txt file. \n",
"\n",
" ```\n",
" conda activate mlbd \n",
" pip install -r requirements.txt \n",
" ```\n",
"3. Finally, start Jupyter, which will open the explorer and let you run the notebooks. \n",
"\n",
" ```\n",
" jupyter lab\n",
" ```"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Content of `requirements.txt`:\n",
"\n",
"```\n",
"numpy==1.24.3\n",
"matplotlib==3.7.4\n",
"pandas==2.0.3\n",
"scikit-learn==1.3.2\n",
"seaborn==0.13.1\n",
"tensorflow==2.13.1\n",
"tensorflow_datasets==4.9.2\n",
"jupyterlab==4.0.10\n",
"jupyter-book==0.15.1\n",
"jupyterlab_rise== 0.42.0\n",
"astroML==1.0.2.post1\n",
"nbdime==4.0.1\n",
"boto3==1.34.15\n",
"pyarrow==14.0.2\n",
"pyspark==3.5.0\n",
"pyppeteer==1.0.2\n",
"dvc==3.38.1\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Content of `requirements_macosx.txt` for Mac:\n",
"\n",
"```\n",
"numpy==1.24.3\n",
"matplotlib==3.7.4\n",
"pandas==2.0.3\n",
"scikit-learn==1.3.2\n",
"seaborn==0.13.1\n",
"tensorflow==2.13.1\n",
"tensorflow-metal==1.1.0\n",
"tensorflow_datasets==4.9.2\n",
"jupyterlab==4.0.10\n",
"jupyter-book==0.15.1\n",
"jupyterlab_rise== 0.42.0\n",
"astroML==1.0.2.post1\n",
"nbdime==4.0.1\n",
"boto3==1.34.15\n",
"pyarrow==14.0.2\n",
"pyspark==3.5.0\n",
"pyppeteer==1.0.2\n",
"dvc==3.38.1\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"source": [
"## What is machine learning?"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"source": [
"### Artifical intelligence (AI)\n",
"\n",
"Ironically...\n",
"\n",
"- Solving \"computational problems\" that are difficult for humans is straightforward for machines (i.e. problems described by list of formal mathematical rules).\n",
"\n",
"- Solving \"intuitive problems\" that are easy for humans is difficult for machines (i.e. problems difficult to describe formally).\n",
"\n",
"This is often known as [Moravec's paradox](https://en.wikipedia.org/wiki/Moravec%27s_paradox) (although formal definition is a little more specific)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"Solution is to allow computers to learn from experience and to build an understanding of the world through a hierarchy of concepts."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Knowledge base approach\n",
"\n",
"Hard-code knowledge about world in formal set of rules and use logical inference.\n",
"\n",
"Very difficult to capture complexity of intuitive problems in this manner.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Machine learning (ML)\n",
"\n",
"Arthur Samuel (1959):\n",
"> \"[Machine learning is the] field of study that gives computers the ability to learn without being explicitly programmed.\"\n",
"\n",
"<br>\n",
"\n",
"Tom Mitchell (1997):\n",
"> \"A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.\"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Uses of machine learning\n",
"\n",
"1. **Prediction:** Predict outcome given data.\n",
"2. **Inference:** Better understand data (and their distribution)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Data representations\n",
"\n",
"\n",
"Performance of machine learning depends on representation of data given.\n",
"\n",
"Data presented to learning algorithm as *features*.\n",
"\n",
"Traditional approach to machine learning involved *\"feature engineering\"*, where a practitioner with domain expertise would develop techniques to extract informative features from raw data. \n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Examples of features\n",
"\n",
"- Computer visions: edges and corners\n",
"- Spam: frequency of words\n",
"- Character recognition: histograms of black pixels along rows/columns, number of holes, number of strokes"
]
},
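{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The cells below give a minimal sketch (not part of the original lecture material) of a hand-engineered word-frequency feature of the kind used for spam filtering. The toy messages are made up, and scikit-learn's `CountVectorizer` builds the feature matrix."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Hand-engineered features: represent each message by its word counts\n",
"# (the \"frequency of words\" feature mentioned above for spam filtering).\n",
"from sklearn.feature_extraction.text import CountVectorizer\n",
"\n",
"messages = [\n",
"    \"win a free prize now\",\n",
"    \"meeting moved to friday\",\n",
"    \"free free entry in a prize draw\",\n",
"]\n",
"\n",
"vectorizer = CountVectorizer()\n",
"features = vectorizer.fit_transform(messages)  # sparse matrix: messages x vocabulary\n",
"\n",
"print(vectorizer.get_feature_names_out())\n",
"print(features.toarray())"
]
},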
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Learning representations\n",
"\n",
"Alternative is to learn features.\n",
"\n",
"- Can discover informative features from data.\n",
"- Minimal human intervention.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Approaches to representation learning\n",
"\n",
"\n",
"\n",
"- Dedicated feature learning, e.g. autoencoder combining encoder and decoder.\n",
"\n",
"- Representation learning integral to overall machine learning technique, e.g. deep learning."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Approaches to artifical intelligence\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/ai_venn_diagram.png\" width=\"500\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[Image credit: [GBC](http://www.deeplearningbook.org/)]"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### AI pipelines\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/ai_approaches.png\" width=\"400\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[Image credit: [GBC](http://www.deeplearningbook.org/)]"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"source": [
"## The unreasonable effectiveness of data"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"As society becomes increasing digitised, the volume of available data is exploding. \n",
"\n",
"A significant increase in the volume of data can lead to dramatic increases in the performance of machine learning techniques.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"(Term coined in Halevy, Norbig & Pereira, 2009, [*The unreasonable effectiveness of data*](http://goo.gl/q6LaZ8).)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Size of benchmark data-sets\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/data_sizes.png\" width=\"800\" style=\"display:block; margin:auto\"/>\n",
" \n",
"[Image credit: [GBC](http://www.deeplearningbook.org/)]"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"source": [
"### Size of data can have a larger impact than algorihm\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/importance_of_data.png\" width=\"500\" style=\"display:block; margin:auto\"/>\n",
"\n",
"Source: Banko & Brill, 2001, [*Scaling to very very large corpora for natural language disambiguation*](http://goo.gl/R5enIE)"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"source": [
"\n",
"\n",
"> As a rule of thumb, a supervised deep learning algorithm will perform reasonably well with around 5,000 labelled samples. \n",
"\n",
"> With 10 million samples, it will match or exceed human performance. \n",
"\n",
"[Source: [GBC](http://www.deeplearningbook.org/)]\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"source": [
"However, in many cases very large datasets are not available and in some cases not possible. \n",
"\n",
"Hence, developing effective algorithms remains critical."
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"source": [
"## A brief history of deep learning"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"### AlexNet: an inflection point in machine learning\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/alexnet_performance.png\" width=\"800\" style=\"display:block; margin:auto\"/>\n",
"\n",
"Source: [*Ten Years of AI in Review*](https://towardsdatascience.com/ten-years-of-ai-in-review-85decdb2a540), Towards Data Science"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"source": [
"### Deep learning timeline\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/deeplearning_timeline.png\" width=\"800\" style=\"display:block; margin:auto\"/>\n",
"\n",
"Source: [*Ten Years of AI in Review*](https://towardsdatascience.com/ten-years-of-ai-in-review-85decdb2a540), Towards Data Science"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"source": [
"### A fourth industrial revolution?\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/industrial_revolution_4.jpg\" width=\"600\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[[Image Source](https://rw-rw.facebook.com/195228108045971/photos/a.195229821379133/195229781379137/)]"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"- First industrial revolution (1760-1840): mechanisation through steam and water power.\n",
"- Second industrial revolution (1871-1914): electrification, railroad and telegraph networks.\n",
"- Third industrial revolution (late 20th century): digital revolution.\n",
"- Fourth industrial revolution (21st century): AI revolution."
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"source": [
"## Classes of machine learning\n",
"\n",
"1. **Supervised:** Learn to predict output given input (given labelled training data).\n",
"2. **Unsupervised:** Discover internal representation of input.\n",
"3. **Reinforcement:** Learn action to maximise payoff.\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/supervised_unsupervised_learning.png\" width=\"800\" style=\"display:block; margin:auto\"/>\n",
" \n",
"[[Image source](http://beta.cambridgespark.com/courses/jpm/01-module.html)]"
]
},
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"source": [
"### Supervised learning\n",
"\n",
"Learn to predict output given input (given labelled training data).\n",
"\n",
"1. **Regression:** Target output is a (real) number, <br>\n",
" e.g. estimate flux intensity.\n",
"\n",
"2. **Classification:** Target output is a class label,<br>\n",
" e.g. classify galaxy morphology."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### How supervised learning works\n",
"\n",
"- Select model defined by function $f$, and model target $y$ from inputs $x$ by\n",
"$y = f(x, \\theta),$\n",
"where $\\theta$ are the parameters of the model that are learnt during training.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Learning typically involves minimising the difference between the inputs and outputs for the model, given a training data-set (more on training, validation and test data-sets later)."
]
},
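{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The cells below give a minimal sketch (synthetic data, illustrative only) of learning $\\theta$ in $y = f(x, \\theta)$: a linear model $f(x, \\theta) = \\theta_0 + \\theta_1 x$ is fit with scikit-learn and the learnt parameters inspected."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Minimal supervised learning example: learn theta in y = f(x, theta).\n",
"import numpy as np\n",
"from sklearn.linear_model import LinearRegression\n",
"\n",
"rng = np.random.default_rng(42)\n",
"x = rng.uniform(0, 10, size=(100, 1))                 # inputs\n",
"y = 2.0 * x[:, 0] + 1.0 + rng.normal(0, 1, size=100)  # targets: y = 2x + 1 + noise\n",
"\n",
"model = LinearRegression()\n",
"model.fit(x, y)  # training: find theta minimising the squared error on the training set\n",
"\n",
"print(\"Learnt theta:\", model.intercept_, model.coef_)  # close to (1.0, [2.0])"
]
},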
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Unsupervised learning\n",
"\n",
"Discover internal representation of input.\n",
"\n",
"1. **Cluster finding:** Learn cluster of similar structure in data.\n",
"2. **Density estimation:** Learn representations of data (probability distributions).\n",
"3. **Dimensionality reduction:** Provides compact, low-dimensional representation of data."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"#### Unsupervised learning examples\n",
"\n",
"\n",
"Anomaly detection, clustering groups of similar objects, visualising high-dimensional data in 2D or 3D plots are examples of unsupervised learning."
]
},
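{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The cells below give a small sketch of the clustering case (made-up synthetic data, illustrative only): k-means receives unlabelled points drawn from two blobs and discovers the two groups itself."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Unsupervised learning sketch: k-means finds clusters without any labels.\n",
"import numpy as np\n",
"from sklearn.cluster import KMeans\n",
"\n",
"rng = np.random.default_rng(0)\n",
"# Two synthetic blobs of 2D points (no labels are given to the algorithm).\n",
"X = np.vstack([\n",
"    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),\n",
"    rng.normal(loc=(4, 4), scale=0.5, size=(50, 2)),\n",
"])\n",
"\n",
"kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)\n",
"print(\"Cluster centres:\\n\", kmeans.cluster_centers_)\n",
"print(\"First few assignments:\", kmeans.labels_[:5])"
]
},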
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Reinforcement learning\n",
"\n",
"Learn action to maximise payoff.\n",
"\n",
"- Output is an action or sequence of actions and the only supervisory signal is an occasional numerical (scalar) reward.\n",
"- Difficult since rewards are delayed.\n",
"- Not covered in this course.\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/rl_interaction.png\" width=\"500\" style=\"display:block; margin:auto\"/>\n",
" \n",
"[[Image credit](https://www.analyticsvidhya.com/blog/2016/12/getting-ready-for-ai-based-gaming-agents-overview-of-open-source-reinforcement-learning-platforms/)]"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Reinforcement learning examples\n",
"\n",
"Go, playing computer games, driverless cars, self navigating vaccum cleaners, scheduling of elevators are all applications of reinforcement learning.\n",
"\n",
"E.g. [Google [DeepMind] machine learns to master video games](http://www.bbc.co.uk/news/science-environment-31623427)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Training\n",
"\n",
"Machine *learning* often involves solving an *optimization* problem, i.e. finding the parameters $\\theta$ of the model $f$ to best represent the training data (for supervised learning).\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Objective function\n",
"\n",
"Typically maximise/minimise some goodness-of-fit/cost function."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Example of convex objective function\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/optimization_convex.png\" width=\"500\" style=\"display:block; margin:auto\"/>\n",
" \n",
"[Image credit: Kirkby, UC Irvine, LSST Dark Energy Summer School 2017]"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Example of non-convex objective function"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "-"
}
},
"source": [
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/optimization_nonconvex.jpg\" width=\"500\" style=\"display:block; margin:auto\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "-"
}
},
"source": [
"[[Image source](https://cs.hse.ru/data/2016/08/26/1121363361/moml.jpg)]"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Using gradients to optimize objective function (i.e. perform training)\n",
"\n",
"- **(Batch) Gradient descent:** Use all data at each iteration (full dimension).\n",
"- **Stochastic gradient descent:** Use a random data-point at each iteration (1 dimension).\n",
"- **Backpropagation:** propagate errors backwards through networks.\n",
"\n",
"<!--\n",
"<table>\n",
" <tr>\n",
" <td><img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/optimization_gd.png\" width=\"80%\"/></td>\n",
" <td><img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/optimization_sgd.png\" width=\"80%\"/></td>\n",
" </tr>\n",
" <tr>\n",
" <td><center>Batch gradient descent</center></td>\n",
" <td><center>Stochastic gradient descent</center></td>\n",
" </tr>\n",
"</table>\n",
"-->"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Batch gradient descent \n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/optimization_gd.png\" width=\"400\" style=\"display:block; margin:auto\"/>\n",
"\n",
"#### Stochastic gradient descent \n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/optimization_sgd.png\" width=\"400\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[[Image source](http://www.holehouse.org/mlclass)]"
]
},
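{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The cell below is a small NumPy sketch (not from the original slides) contrasting batch and stochastic gradient descent on a one-parameter least-squares problem; the data, learning rate and iteration counts are made-up choices for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Batch vs stochastic gradient descent for the 1-parameter least-squares loss\n",
"# L(theta) = mean((y - theta * x)^2), with gradient dL/dtheta = -2 * mean(x * (y - theta * x)).\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(1)\n",
"x = rng.uniform(0, 1, 200)\n",
"y = 3.0 * x + rng.normal(0, 0.1, 200)  # true theta = 3\n",
"\n",
"lr = 0.1  # learning rate\n",
"\n",
"# Batch gradient descent: use all the data at every iteration.\n",
"theta = 0.0\n",
"for _ in range(100):\n",
"    grad = -2 * np.mean(x * (y - theta * x))\n",
"    theta -= lr * grad\n",
"print(\"Batch GD estimate:\", theta)\n",
"\n",
"# Stochastic gradient descent: use one random data point per iteration.\n",
"theta = 0.0\n",
"for _ in range(2000):\n",
"    i = rng.integers(len(x))\n",
"    grad = -2 * x[i] * (y[i] - theta * x[i])\n",
"    theta -= lr * grad\n",
"print(\"SGD estimate:\", theta)"
]
},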
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Batch and online learning\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Batch learning\n",
"\n",
"Algorithm is trained using all available training data at once.\n",
"\n",
"Also called *offline learning*."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Requires substantial resources (CPU, memory space, disk space).\n",
"- If want to add new training data, must re-train from scratch on new full set of data (i.e. not just the new data but also the old data)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Online learning\n",
"\n",
"Algorithm is trained using a sub-set of the training data.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"- Each learning step does *not* require substantial resources. \n",
"- Can integate new training data on the fly.\n",
"- May be able to throw away data once used it (although might not want to).\n",
"- If fed bad data, performance will decline.\n",
"- Noisy training."
]
},
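{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The cell below is a minimal sketch of online learning with scikit-learn (synthetic streaming data, illustrative only): `SGDClassifier.partial_fit` updates the model one mini-batch at a time, so each batch can be discarded after use."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Online learning sketch: feed the model mini-batches via partial_fit.\n",
"import numpy as np\n",
"from sklearn.linear_model import SGDClassifier\n",
"\n",
"rng = np.random.default_rng(0)\n",
"model = SGDClassifier(loss=\"log_loss\", random_state=0)\n",
"\n",
"classes = np.array([0, 1])  # all classes must be declared on the first partial_fit call\n",
"for _ in range(20):  # pretend each mini-batch arrives from a data stream\n",
"    X = rng.normal(size=(32, 2))\n",
"    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # simple synthetic labels\n",
"    model.partial_fit(X, y, classes=classes)\n",
"    # X, y could now be thrown away: the model keeps only its parameters.\n",
"\n",
"X_new = rng.normal(size=(5, 2))\n",
"print(model.predict(X_new))"
]
},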
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Overfitting and underfitting\n",
"\n",
"- **Problem:** The learned model may fit the training set extremely well but fail to generalise to new examples."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### 1D example\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/overfitting_1d.png\" width=\"900\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[[Image source](http://scikit-learn.org/stable/_images/sphx_glr_plot_underfitting_overfitting_001.png)]"
]
},
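{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The cell below reproduces the spirit of this 1D example as a small sketch (synthetic data, illustrative only): polynomial fits of increasing degree, comparing the training error with the error on held-out points to expose under- and overfitting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Overfitting sketch: training error falls with model complexity,\n",
"# but the error on held-out data rises once the model starts to overfit.\n",
"import numpy as np\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.preprocessing import PolynomialFeatures\n",
"from sklearn.linear_model import LinearRegression\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"rng = np.random.default_rng(3)\n",
"x = rng.uniform(0, 1, size=(40, 1))\n",
"y = np.cos(2 * np.pi * x[:, 0]) + rng.normal(0, 0.2, size=40)\n",
"\n",
"x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.5, random_state=0)\n",
"\n",
"for degree in (1, 4, 15):  # underfit, reasonable fit, overfit\n",
"    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())\n",
"    model.fit(x_train, y_train)\n",
"    print(f\"degree {degree:2d}: \"\n",
"          f\"train MSE = {mean_squared_error(y_train, model.predict(x_train)):.3f}, \"\n",
"          f\"held-out MSE = {mean_squared_error(y_test, model.predict(x_test)):.3f}\")"
]
},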
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### 2D example\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/overfitting_2d.png\" width=\"900\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[[Image source](https://www.safaribooksonline.com/library/view/deep-learning/9781491924570/assets/dpln_0107.png)]"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Techniques to avoid overfitting\n",
"\n",
"- Reduce complexity of model.\n",
"- Regularization:\n",
" - Place additional constraints (priors) on features/parameters.\n",
" - E.g. smoothness of parameters, sparsity of model (i.e. limit complexity).\n",
"- Split data into training, validation and test sets (e.g. cross-validation). \n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Testing and validation"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### No free lunch theorem\n",
"\n",
"Essentially, all algorithms are equivalent when performance is averaged over all possible problems.\n",
"\n",
"Consequently, there is no a priori model that is guaranteed to work best on all problems.\n",
"\n",
"(Wolpert, 1996, [*The lack of a priori distinctions between learning algorithms*](http://goo.gl/q6LaZ8))\n",
"\n",
"It is therefore a matter of validating models empirically."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Training and test datasets\n",
"\n",
"Split data into training and test sets (e.g. 80% for training and 20% for testing).\n",
"\n",
"The model is trained on the *training set* and then tested on the *test set*. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"**No data used in training the method is then used to evaluate it.**"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"Error rate on the test set is called the *generalization error* or *out of sample error*.\n",
"\n",
"If the training error is low but the generalization error is high, it suggests the model is overfitted."
]
},
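{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The cell below is a minimal sketch (synthetic data, illustrative only): hold out a test set with scikit-learn's `train_test_split` and compare training accuracy against test accuracy; a large gap signals overfitting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Estimate the generalization error on a held-out test set.\n",
"import numpy as np\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.tree import DecisionTreeClassifier\n",
"\n",
"rng = np.random.default_rng(0)\n",
"X = rng.normal(size=(500, 5))\n",
"y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)  # noisy labels\n",
"\n",
"# 80% of the data for training, 20% held out for testing.\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\n",
"\n",
"model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)  # unconstrained tree\n",
"\n",
"print(\"Training accuracy:\", model.score(X_train, y_train))  # ~1.0: fits training set\n",
"print(\"Test accuracy:    \", model.score(X_test, y_test))    # lower: generalization gap"
]
},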
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"source": [
"### Hyperparameters\n",
"\n",
"Many machine learning algorithms contain hyperparameters to control the model. \n",
"\n",
"One (**bad**) approach is to evaluate alternative models defined by different hyperparameters on test set and select the model that performs best.\n",
"\n",
"However, this optimizes the model for the test set and may not generalise to other data well."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Validation\n",
"\n",
"\n",
"A better approach is to split the data into three sets: \n",
"1. Training set\n",
"2. Validation set\n",
"3. Test set\n",
"\n",
"Train models on the training set and evaluate different models (with different hyperparameters) on the validation set.\n",
"\n",
"Only once the final model to be used is fully specified should it be applied to the test set to estimate its generalization performance."
]
},
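{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The cell below sketches this train/validation/test protocol on synthetic data (illustrative only): candidate hyperparameters are compared on the validation set, and only the final chosen model is scored on the test set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Select a hyperparameter on the validation set; use the test set only once.\n",
"import numpy as np\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.tree import DecisionTreeClassifier\n",
"\n",
"rng = np.random.default_rng(0)\n",
"X = rng.normal(size=(600, 5))\n",
"y = (X[:, 0] + 0.5 * rng.normal(size=600) > 0).astype(int)\n",
"\n",
"# Split into 60% training, 20% validation, 20% test.\n",
"X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)\n",
"X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)\n",
"\n",
"best_depth, best_score = None, -np.inf\n",
"for depth in (1, 2, 4, 8, None):  # candidate hyperparameter: maximum tree depth\n",
"    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)\n",
"    score = model.score(X_val, y_val)  # compare models on the validation set\n",
"    if score > best_score:\n",
"        best_depth, best_score = depth, score\n",
"\n",
"final = DecisionTreeClassifier(max_depth=best_depth, random_state=0).fit(X_train, y_train)\n",
"print(\"Chosen max_depth:\", best_depth, \"| test accuracy:\", final.score(X_test, y_test))"
]
},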
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Cross-validation\n",
"\n",
"A disadvantage of the previous approach is that less data are available for training.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"*Cross-validation* addresses this issue by performing a sequence of fits where each subset of the data is used both as a training set and a validation set.\n",
"\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/2-fold-CV.png\" width=\"600\" style=\"display:block; margin:auto\"/>\n",
"\n",
"\n",
"[Image credit: [VanderPlas](https://github.com/jakevdp/PythonDataScienceHandbook)]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get validation accuracy scores for each trial, which could be combined."
]
},
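{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The cell below is a minimal sketch of cross-validation with scikit-learn (synthetic data, illustrative only): `cross_val_score` runs the train/validate cycle over the folds and returns one validation score per fold."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Cross-validation: one validation score per fold, combined by averaging.\n",
"import numpy as np\n",
"from sklearn.model_selection import cross_val_score\n",
"from sklearn.tree import DecisionTreeClassifier\n",
"\n",
"rng = np.random.default_rng(0)\n",
"X = rng.normal(size=(500, 5))\n",
"y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)\n",
"\n",
"model = DecisionTreeClassifier(max_depth=3, random_state=0)\n",
"scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation\n",
"\n",
"print(\"Per-fold scores:\", scores)\n",
"print(\"Mean +/- std:   \", scores.mean(), \"+/-\", scores.std())"
]
},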
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"source": [
"#### Extension to n-fold cross-validation\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture01_Images/5-fold-CV.png\" width=\"700\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[Image credit: [VanderPlas](https://github.com/jakevdp/PythonDataScienceHandbook)]"
]
}
],
"metadata": {
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}