spce0038-machine-learning-w.../week4/slides/Lecture11_IntroToTensorFlow.ipynb
2025-02-28 11:02:07 +00:00


{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Lecture 11: Introduction to TensorFlow"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"![](https://www.tensorflow.org/images/colab_logo_32px.png)\n",
"[Run in colab](https://colab.research.google.com/drive/1H8iqFsQn9FuoNregKha7MJAQVp_eT77-)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:45.682771Z",
"iopub.status.busy": "2024-01-10T00:20:45.682383Z",
"iopub.status.idle": "2024-01-10T00:20:45.691614Z",
"shell.execute_reply": "2024-01-10T00:20:45.691076Z"
},
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Last executed: 2024-01-10 00:20:45\n"
]
}
],
"source": [
"import datetime\n",
"now = datetime.datetime.now()\n",
"print(\"Last executed: \" + now.strftime(\"%Y-%m-%d %H:%M:%S\"))"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Overview of TensorFlow"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[TensorFlow](https://www.tensorflow.org/) is an open source library developed by Google for numerical computation. It is particularly well suited for large-scale machine learning. \n",
"\n",
"TensorFlow is based on the construction of *computational graphs*. It has evolved considerably since it's open source release in 2015. We will use TF2, which offers many additional features built on top of core features (the most important is `tf.keras` discussed in later lectures)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:45.730161Z",
"iopub.status.busy": "2024-01-10T00:20:45.729719Z",
"iopub.status.idle": "2024-01-10T00:20:49.187358Z",
"shell.execute_reply": "2024-01-10T00:20:49.186619Z"
},
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-01-10 00:20:46.295156: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
"2024-01-10 00:20:46.707194: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
"2024-01-10 00:20:46.709789: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-01-10 00:20:47.698046: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n"
]
}
],
"source": [
"import numpy as np\n",
"import tensorflow as tf\n",
"from tensorflow import keras"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Features \n",
"\n",
"- Similar to [`numpy`](https://numpy.org/doc/stable/) but with GPU support.\n",
"- Supports distributed computing.\n",
"- Includes a kind of just-in-time (JIT) compiler to optimise speed and memory usage.\n",
"- Computational graphs can be saved and exported.\n",
"- Supports autodiff and provides numerous advanced optimisers."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### TensorFlow's Python API\n",
"\n",
"<br>\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture11_Images/tensorflow-Python-API.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[Credit: Geron]"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### TensorFlow's Architecture\n",
"\n",
"<br>\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture11_Images/tensorflow-Architecture.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[Credit: Geron]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At lowest level TensorFlow is implemented in C++ so that it is highly efficient."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will focus solely on the python TensorFlow interfaces (typical approach). Most of the time you will simple need to interact with the Keras interface but sometimes you might want to use the low-level python API for greater flexibility."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Hardware"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One of the factors responsible for the dramatic recent growth of machine learning is advances in computing power. \n",
"\n",
"In particular, hardware that supports high levels of parallelism."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<br>\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture11_Images/cpu_gpu_tpu.png\" width=\"750px\" style=\"display:block; margin:auto\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"- Central Processing Unit (CPU):\n",
" - General purpose \n",
" - Low latency \n",
" - Low throughput\n",
" - Sequential\n",
" \n",
"- Graphics Processing Unit (GPU)\n",
" - Specialised (for graphics initially)\n",
" - High latency \n",
" - High throughput\n",
" - Parallel execution\n",
" \n",
"- Tensor Processing Unit (TPU)\n",
" - Specialised for matrix operations\n",
" - High latency\n",
" - Very high throughput\n",
" - Extreme parallel execution"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"In TensorFlow many operations are implemented in low-level kernels, optimised for specific hardware, e.g. CPUs, GPUS, or TPUs."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"TensorFlow's execution engine will ensure operations are run efficiently (across multiple machines and devices if set up accordingly)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture11_Images/tensorflow-Architecture.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[Credit: Geron]"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Aside: chips optimised for machine learning are an active area of development\n",
"\n",
"Google developed TPU.\n",
"\n",
"[Graphcore](https://www.graphcore.ai/) have developed the Intelligence Processing Unit (IPU)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Computational graphs\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture11_Images/computational_graph_simple.png\" width=\"750px\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[Credit: Geron]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"User code constructs the computational graph (can be constructed in Python). With TensorFlow 2, graph construction is less explicit and much simpler.\n",
"\n",
"TensorFlow takes computational graph and runs it efficiently via optimized C++ code."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Parallel and distributed computation\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture11_Images/computational_graph_hpc.png\" width=\"750px\" style=\"display:block; margin:auto\"/>\n",
"\n",
"[Credit: Geron]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Computational graphs can be broken up into different chunks, which are then run in parallel across many CPUs/GPUs/TPUs (or highly distributed systems).\n",
"\n",
"This approach allows TensorFlow to scale to big-data."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Scaling to big-data\n",
"\n",
"For example, TensorFlow can be used to train neural networks with millions of parameters and training sets with billions of training instances.\n",
"\n",
"Provides the infrastructure behind many of Google's large-scale machine learning products, e.g. Google Search, Google Photos, ..."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Tensors and operations"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"TensorFlow API centers around \"Tensors\" (essentially multi-dimensional arrays of matrices), hence its name."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similar to numpy [`ndarray`](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Tensors"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Can construct constant tensors with `tf.constant`."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.192536Z",
"iopub.status.busy": "2024-01-10T00:20:49.191777Z",
"iopub.status.idle": "2024-01-10T00:20:49.221646Z",
"shell.execute_reply": "2024-01-10T00:20:49.221051Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(2, 3), dtype=float32, numpy=\n",
"array([[1., 2., 3.],\n",
" [4., 5., 6.]], dtype=float32)>"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tf.constant([[1., 2., 3.], [4., 5., 6.]]) # 2x3 matrix"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.224596Z",
"iopub.status.busy": "2024-01-10T00:20:49.224220Z",
"iopub.status.idle": "2024-01-10T00:20:49.229626Z",
"shell.execute_reply": "2024-01-10T00:20:49.229096Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(), dtype=int32, numpy=42>"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tf.constant(42) # scalar"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tensors have a shape and data type (dtype)."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.232492Z",
"iopub.status.busy": "2024-01-10T00:20:49.232147Z",
"iopub.status.idle": "2024-01-10T00:20:49.236577Z",
"shell.execute_reply": "2024-01-10T00:20:49.236045Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"TensorShape([2, 3])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"t = tf.constant([[1., 2., 3.], [4., 5., 6.]])\n",
"t.shape"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.239238Z",
"iopub.status.busy": "2024-01-10T00:20:49.238896Z",
"iopub.status.idle": "2024-01-10T00:20:49.242669Z",
"shell.execute_reply": "2024-01-10T00:20:49.242147Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"tf.float32"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"t.dtype"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Indexing"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tensor indexing is very similar to numpy."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.245639Z",
"iopub.status.busy": "2024-01-10T00:20:49.245273Z",
"iopub.status.idle": "2024-01-10T00:20:49.253018Z",
"shell.execute_reply": "2024-01-10T00:20:49.252523Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(2, 2), dtype=float32, numpy=\n",
"array([[2., 3.],\n",
" [5., 6.]], dtype=float32)>"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"t[:, 1:]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.255766Z",
"iopub.status.busy": "2024-01-10T00:20:49.255421Z",
"iopub.status.idle": "2024-01-10T00:20:49.261818Z",
"shell.execute_reply": "2024-01-10T00:20:49.261305Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(2, 1), dtype=float32, numpy=\n",
"array([[2.],\n",
" [5.]], dtype=float32)>"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"t[..., 1, tf.newaxis]"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Operations"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Variety of tensor operations are possible."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.264747Z",
"iopub.status.busy": "2024-01-10T00:20:49.264389Z",
"iopub.status.idle": "2024-01-10T00:20:49.269613Z",
"shell.execute_reply": "2024-01-10T00:20:49.269093Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(2, 3), dtype=float32, numpy=\n",
"array([[11., 12., 13.],\n",
" [14., 15., 16.]], dtype=float32)>"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"t + 10"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.272306Z",
"iopub.status.busy": "2024-01-10T00:20:49.271964Z",
"iopub.status.idle": "2024-01-10T00:20:49.277408Z",
"shell.execute_reply": "2024-01-10T00:20:49.276893Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(2, 3), dtype=float32, numpy=\n",
"array([[ 1., 4., 9.],\n",
" [16., 25., 36.]], dtype=float32)>"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tf.square(t)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.280108Z",
"iopub.status.busy": "2024-01-10T00:20:49.279770Z",
"iopub.status.idle": "2024-01-10T00:20:49.337574Z",
"shell.execute_reply": "2024-01-10T00:20:49.336941Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(2, 2), dtype=float32, numpy=\n",
"array([[14., 32.],\n",
" [32., 77.]], dtype=float32)>"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"t @ tf.transpose(t) # matrix multiplication"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Using `keras.backend`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Keras API also includes its own low-level API with similar functionality, which is basically a wrapper for the corresponding TensorFlow operations (more on Keras in next lecture)."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.340932Z",
"iopub.status.busy": "2024-01-10T00:20:49.340368Z",
"iopub.status.idle": "2024-01-10T00:20:49.375908Z",
"shell.execute_reply": "2024-01-10T00:20:49.375217Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(3, 2), dtype=float32, numpy=\n",
"array([[11., 26.],\n",
" [14., 35.],\n",
" [19., 46.]], dtype=float32)>"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from tensorflow import keras\n",
"K = keras.backend\n",
"K.square(K.transpose(t)) + 10"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Tensors and Numpy"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"**Note:** From `tf.__version__ == 2.4.0` tensorflow.numpy functionality will be added: https://www.tensorflow.org/api_docs/python/tf/experimental/numpy"
]
},
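{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"As a brief sketch (assuming a TensorFlow version in which `tf.experimental.numpy` is available), the NumPy-style API can be used as follows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"import tensorflow.experimental.numpy as tnp\n",
"tnp.sum(tnp.ones((2, 3)))  # NumPy-style call returning a tf.Tensor"
]
},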
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Can create a tensor from ndarray."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.379245Z",
"iopub.status.busy": "2024-01-10T00:20:49.378802Z",
"iopub.status.idle": "2024-01-10T00:20:49.386986Z",
"shell.execute_reply": "2024-01-10T00:20:49.386343Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(3,), dtype=float64, numpy=array([2., 4., 5.])>"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"a = np.array([2., 4., 5.])\n",
"tf.constant(a)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Can convert ndarray to tensor."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.390159Z",
"iopub.status.busy": "2024-01-10T00:20:49.389613Z",
"iopub.status.idle": "2024-01-10T00:20:49.395820Z",
"shell.execute_reply": "2024-01-10T00:20:49.395186Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"array([[1., 2., 3.],\n",
" [4., 5., 6.]], dtype=float32)"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"t.numpy()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Can apply numpy operations to tensors and vice versa."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.398836Z",
"iopub.status.busy": "2024-01-10T00:20:49.398388Z",
"iopub.status.idle": "2024-01-10T00:20:49.402669Z",
"shell.execute_reply": "2024-01-10T00:20:49.402129Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"array([[1., 2., 3.],\n",
" [4., 5., 6.]], dtype=float32)"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.array(t)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.405317Z",
"iopub.status.busy": "2024-01-10T00:20:49.404889Z",
"iopub.status.idle": "2024-01-10T00:20:49.410243Z",
"shell.execute_reply": "2024-01-10T00:20:49.409597Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(3,), dtype=float64, numpy=array([ 4., 16., 25.])>"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tf.square(a)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.413047Z",
"iopub.status.busy": "2024-01-10T00:20:49.412621Z",
"iopub.status.idle": "2024-01-10T00:20:49.416846Z",
"shell.execute_reply": "2024-01-10T00:20:49.416325Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"array([[ 1., 4., 9.],\n",
" [16., 25., 36.]], dtype=float32)"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.square(t)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Conflicting Types"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"TensorFlow does not perform type conversions automatically since they can significantly degrade performance and can easily go unnoticed."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Therefore you cannot add a float to an integer."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.419883Z",
"iopub.status.busy": "2024-01-10T00:20:49.419332Z",
"iopub.status.idle": "2024-01-10T00:20:49.423300Z",
"shell.execute_reply": "2024-01-10T00:20:49.422795Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"cannot compute AddV2 as input #1(zero-based) was expected to be a float tensor but is a int32 tensor [Op:AddV2] name: \n"
]
}
],
"source": [
"try:\n",
" tf.constant(2.0) + tf.constant(40)\n",
"except tf.errors.InvalidArgumentError as ex:\n",
" print(ex)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarly, you cannot add a float (32 bit) and a double (64 bit)."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.426756Z",
"iopub.status.busy": "2024-01-10T00:20:49.426032Z",
"iopub.status.idle": "2024-01-10T00:20:49.432204Z",
"shell.execute_reply": "2024-01-10T00:20:49.431684Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"cannot compute AddV2 as input #1(zero-based) was expected to be a float tensor but is a double tensor [Op:AddV2] name: \n"
]
}
],
"source": [
"try:\n",
" tf.constant(2.0) + tf.constant(40., dtype=tf.float64)\n",
"except tf.errors.InvalidArgumentError as ex:\n",
" print(ex)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"If you want to consider operations with different types you need to explicitly cast them first."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.435076Z",
"iopub.status.busy": "2024-01-10T00:20:49.434539Z",
"iopub.status.idle": "2024-01-10T00:20:49.440496Z",
"shell.execute_reply": "2024-01-10T00:20:49.439809Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(), dtype=float32, numpy=42.0>"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"t2 = tf.constant(40., dtype=tf.float64)\n",
"tf.constant(2.0) + tf.cast(t2, tf.float32)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Variables"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Previous tensors we've considered are constant and immutable so they cannot be changed."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We also need tensors that can act as variables that can change over time, for example for weights of a neural network that are regularly updated during training."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.443728Z",
"iopub.status.busy": "2024-01-10T00:20:49.443202Z",
"iopub.status.idle": "2024-01-10T00:20:49.452540Z",
"shell.execute_reply": "2024-01-10T00:20:49.451892Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Variable 'Variable:0' shape=(2, 3) dtype=float32, numpy=\n",
"array([[1., 2., 3.],\n",
" [4., 5., 6.]], dtype=float32)>"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"v = tf.Variable([[1., 2., 3.], [4., 5., 6.]])\n",
"v"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Can be modified in place using the `assign` method."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.455522Z",
"iopub.status.busy": "2024-01-10T00:20:49.455092Z",
"iopub.status.idle": "2024-01-10T00:20:49.462111Z",
"shell.execute_reply": "2024-01-10T00:20:49.461451Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Variable 'UnreadVariable' shape=(2, 3) dtype=float32, numpy=\n",
"array([[ 2., 4., 6.],\n",
" [ 8., 10., 12.]], dtype=float32)>"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"v.assign(2 * v)"
]
},
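{
"cell_type": "markdown",
"metadata": {},
"source": [
"Individual elements or slices can also be updated in place (a brief sketch; note that direct item assignment, e.g. `v[0, 1] = 42.`, is not supported)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"v[0, 1].assign(42.)  # update a single element in place\n",
"v"
]
},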
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## TensorFlow Functions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once TensorFlow has constructed a computational graph, it optimises it (e.g. simplying expressions, pruning unused nodes, etc.).\n",
"\n",
"Consequently, a TensorFlow function will typically run a lot faster than an equivalent numpy function.\n",
"\n",
"`tf.function` can be used to turn a python function into a TensorFlow function."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.465056Z",
"iopub.status.busy": "2024-01-10T00:20:49.464624Z",
"iopub.status.idle": "2024-01-10T00:20:49.467713Z",
"shell.execute_reply": "2024-01-10T00:20:49.467060Z"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"def cube(x):\n",
" return x ** 3"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.470876Z",
"iopub.status.busy": "2024-01-10T00:20:49.470259Z",
"iopub.status.idle": "2024-01-10T00:20:49.474488Z",
"shell.execute_reply": "2024-01-10T00:20:49.473826Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"8"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"cube(2)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.477379Z",
"iopub.status.busy": "2024-01-10T00:20:49.476854Z",
"iopub.status.idle": "2024-01-10T00:20:49.483425Z",
"shell.execute_reply": "2024-01-10T00:20:49.482846Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tensorflow.python.eager.polymorphic_function.polymorphic_function.Function at 0x7f00e437e1c0>"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tf_cube = tf.function(cube)\n",
"tf_cube"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.486486Z",
"iopub.status.busy": "2024-01-10T00:20:49.485922Z",
"iopub.status.idle": "2024-01-10T00:20:49.528929Z",
"shell.execute_reply": "2024-01-10T00:20:49.528289Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(), dtype=int32, numpy=8>"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tf_cube(2)"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.532023Z",
"iopub.status.busy": "2024-01-10T00:20:49.531477Z",
"iopub.status.idle": "2024-01-10T00:20:49.548844Z",
"shell.execute_reply": "2024-01-10T00:20:49.548233Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"<tf.Tensor: shape=(), dtype=float32, numpy=8.0>"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tf_cube(tf.constant(2.0))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you write custom functionality with a Keras model, Keras will automatically convert your function to a TensorFlow function so typically you will not need to worry about this."
]
},
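{
"cell_type": "markdown",
"metadata": {},
"source": [
"`tf.function` is also commonly applied as a decorator (a minimal sketch; `tf_cube2` is a name introduced here for illustration)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@tf.function\n",
"def tf_cube2(x):\n",
"    return x ** 3\n",
"\n",
"tf_cube2(tf.constant(2.0))  # <tf.Tensor: shape=(), dtype=float32, numpy=8.0>"
]
},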
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
},
"tags": [
"exercise_pointer"
]
},
"source": [
"**Exercises:** *You can now complete Exercise 1 in the exercises associated with this lecture.*"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Reuse"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A TensorFlow function generates a new graph for each unique set of input shapes and data types. The graph is then cached for subsequent use.\n",
"\n",
"This is only the case for tensor arguments.\n",
"\n",
"If you pass numerical python values a new graph will be created for each execution. This could considerably slow down your code and may use up a lot of RAM (for the storage of many computational graphs)."
]
},
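{
"cell_type": "markdown",
"metadata": {},
"source": [
"The retracing behaviour can be observed directly: Python-side code such as `print` runs only while a new graph is being traced (a minimal sketch; `traced` is a name introduced here for illustration)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@tf.function\n",
"def traced(x):\n",
"    print(\"Tracing for:\", x)  # executes only during tracing\n",
"    return x + 1\n",
"\n",
"traced(tf.constant(1.0))  # traces a graph for float32 scalars\n",
"traced(tf.constant(2.0))  # same shape/dtype: graph reused, no print\n",
"traced(1)  # Python value: a new graph is traced\n",
"traced(2)  # another Python value: traced again"
]
},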
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Gradients"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we have seen when considering training, we often need to compute the gradients to train models, e.g. for gradient descent based approaches. Typically we need to compute the gradient of the cost function with respect to the model weights. \n",
"\n",
"TensorFlow supports automatical differentiation, which allows gradients to be computed automatically."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Consider the following function."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.552388Z",
"iopub.status.busy": "2024-01-10T00:20:49.551862Z",
"iopub.status.idle": "2024-01-10T00:20:49.555240Z",
"shell.execute_reply": "2024-01-10T00:20:49.554602Z"
}
},
"outputs": [],
"source": [
"def f(w1, w2):\n",
" return 3 * w1 ** 2 + 2 * w1 * w2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will compute gradients analytically, numerically and using TensorFlow's Autodiff functionality at the following point."
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.558283Z",
"iopub.status.busy": "2024-01-10T00:20:49.557845Z",
"iopub.status.idle": "2024-01-10T00:20:49.560882Z",
"shell.execute_reply": "2024-01-10T00:20:49.560244Z"
}
},
"outputs": [],
"source": [
"w1, w2 = 5.0, 3.0"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Computing gradients analytically"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.563711Z",
"iopub.status.busy": "2024-01-10T00:20:49.563282Z",
"iopub.status.idle": "2024-01-10T00:20:49.566598Z",
"shell.execute_reply": "2024-01-10T00:20:49.565962Z"
}
},
"outputs": [],
"source": [
"def df_dw1(w1, w2):\n",
" return 6 * w1 + 2 * w2\n",
"def df_dw2(w1, w2):\n",
" return 2 * w1"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.569286Z",
"iopub.status.busy": "2024-01-10T00:20:49.568925Z",
"iopub.status.idle": "2024-01-10T00:20:49.572928Z",
"shell.execute_reply": "2024-01-10T00:20:49.572409Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"36.0"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df_dw1(w1, w2)"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.575718Z",
"iopub.status.busy": "2024-01-10T00:20:49.575285Z",
"iopub.status.idle": "2024-01-10T00:20:49.579249Z",
"shell.execute_reply": "2024-01-10T00:20:49.578727Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"10.0"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df_dw2(w1, w2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Computing gradients numerically\n",
"\n",
"Compute the gradient by finite differences."
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.582134Z",
"iopub.status.busy": "2024-01-10T00:20:49.581584Z",
"iopub.status.idle": "2024-01-10T00:20:49.587829Z",
"shell.execute_reply": "2024-01-10T00:20:49.587195Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"36.000003007075065"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"eps = 1e-6\n",
"(f(w1 + eps, w2) - f(w1, w2)) / eps"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.590728Z",
"iopub.status.busy": "2024-01-10T00:20:49.590286Z",
"iopub.status.idle": "2024-01-10T00:20:49.594499Z",
"shell.execute_reply": "2024-01-10T00:20:49.593955Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"10.000000003174137"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"(f(w1, w2 + eps) - f(w1, w2)) / eps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Gradients computed are approximate.\n",
"\n",
"Required an extra function evaluation for every gradient. Computationally infeasible for many cases, e.g. large neural networks with hundreds of thousands or millions of parameters (or more!)."
]
},
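  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Accuracy can be improved somewhat with central differences, at the cost of a further extra evaluation per parameter. A minimal pure-Python sketch (the `f` below restates the example function; it is illustrative, not part of TensorFlow):\n",
    "\n",
    "```python\n",
    "def f(w1, w2):\n",
    "    return 3 * w1 ** 2 + 2 * w1 * w2\n",
    "\n",
    "def central_diff(f, w1, w2, eps=1e-6):\n",
    "    # O(eps**2) truncation error, versus O(eps) for the forward difference\n",
    "    df_dw1 = (f(w1 + eps, w2) - f(w1 - eps, w2)) / (2 * eps)\n",
    "    df_dw2 = (f(w1, w2 + eps) - f(w1, w2 - eps)) / (2 * eps)\n",
    "    return df_dw1, df_dw2\n",
    "\n",
    "central_diff(f, 5.0, 3.0)  # close to the exact (36.0, 10.0)\n",
    "```"
   ]
  },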
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Computing gradients with Autodiff"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Autodiff builds derivatives of each stage of the computational graph so that gradients can be computed automatically and efficiently."
]
},
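  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see the idea behind the tape, here is a toy reverse-mode autodiff sketch in pure Python (illustrative only; TensorFlow's implementation is far more general). Each operation records how to pass gradients back to its inputs, and the backward pass replays those records from the output towards the inputs. The naive recursion below suits simple expressions like ours; real systems traverse the graph in topological order:\n",
    "\n",
    "```python\n",
    "class Var:\n",
    "    # A node in the computational graph, tracking its value and gradient.\n",
    "    def __init__(self, value):\n",
    "        self.value, self.grad = value, 0.0\n",
    "        self._backward = lambda: None  # leaves have nothing to propagate\n",
    "\n",
    "    def __add__(self, other):\n",
    "        out = Var(self.value + other.value)\n",
    "        def backward():  # d(a + b)/da = d(a + b)/db = 1\n",
    "            self.grad += out.grad\n",
    "            other.grad += out.grad\n",
    "            self._backward(); other._backward()\n",
    "        out._backward = backward\n",
    "        return out\n",
    "\n",
    "    def __mul__(self, other):\n",
    "        out = Var(self.value * other.value)\n",
    "        def backward():  # product rule\n",
    "            self.grad += other.value * out.grad\n",
    "            other.grad += self.value * out.grad\n",
    "            self._backward(); other._backward()\n",
    "        out._backward = backward\n",
    "        return out\n",
    "\n",
    "w1, w2 = Var(5.0), Var(3.0)\n",
    "z = Var(3.0) * w1 * w1 + Var(2.0) * w1 * w2  # f(w1, w2)\n",
    "z.grad = 1.0  # seed dz/dz = 1, then propagate backwards\n",
    "z._backward()\n",
    "w1.grad, w2.grad  # (36.0, 10.0)\n",
    "```"
   ]
  },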
{
"cell_type": "code",
"execution_count": 35,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.597673Z",
"iopub.status.busy": "2024-01-10T00:20:49.597157Z",
"iopub.status.idle": "2024-01-10T00:20:49.611136Z",
"shell.execute_reply": "2024-01-10T00:20:49.610524Z"
}
},
"outputs": [],
"source": [
"w1, w2 = tf.Variable(5.), tf.Variable(3.)\n",
"with tf.GradientTape() as tape:\n",
" z = f(w1, w2)\n",
"\n",
"gradients = tape.gradient(z, [w1, w2])"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.614366Z",
"iopub.status.busy": "2024-01-10T00:20:49.613940Z",
"iopub.status.idle": "2024-01-10T00:20:49.619847Z",
"shell.execute_reply": "2024-01-10T00:20:49.619202Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"[<tf.Tensor: shape=(), dtype=float32, numpy=36.0>,\n",
" <tf.Tensor: shape=(), dtype=float32, numpy=10.0>]"
]
},
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gradients"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "Autodiff requires only a single forward and backward pass, regardless of how many derivatives need to be computed, and the result does not suffer from any numerical approximation (it is limited only by machine-precision arithmetic)."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Persistence\n",
"\n",
    "The tape is erased immediately after its `gradient` method is called, so a second call will fail."
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.622949Z",
"iopub.status.busy": "2024-01-10T00:20:49.622494Z",
"iopub.status.idle": "2024-01-10T00:20:49.629996Z",
"shell.execute_reply": "2024-01-10T00:20:49.629380Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"A non-persistent GradientTape can only be used to compute one set of gradients (or jacobians)\n"
]
}
],
"source": [
"with tf.GradientTape() as tape:\n",
" z = f(w1, w2)\n",
"\n",
"dz_dw1 = tape.gradient(z, w1)\n",
"try:\n",
" dz_dw2 = tape.gradient(z, w2)\n",
"except RuntimeError as ex:\n",
" print(ex)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
    "You can make the tape persistent if you need to call `gradient` more than once. Be sure to delete the tape once you are done with it, to free resources."
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.633025Z",
"iopub.status.busy": "2024-01-10T00:20:49.632451Z",
"iopub.status.idle": "2024-01-10T00:20:49.638824Z",
"shell.execute_reply": "2024-01-10T00:20:49.638206Z"
}
},
"outputs": [],
"source": [
"with tf.GradientTape(persistent=True) as tape:\n",
" z = f(w1, w2)\n",
"\n",
"dz_dw1 = tape.gradient(z, w1)\n",
"dz_dw2 = tape.gradient(z, w2) # works now!\n",
"del tape"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.642038Z",
"iopub.status.busy": "2024-01-10T00:20:49.641485Z",
"iopub.status.idle": "2024-01-10T00:20:49.647865Z",
"shell.execute_reply": "2024-01-10T00:20:49.647205Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"(<tf.Tensor: shape=(), dtype=float32, numpy=36.0>,\n",
" <tf.Tensor: shape=(), dtype=float32, numpy=10.0>)"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dz_dw1, dz_dw2"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Computing gradients wrt variables and watched tensors\n",
"\n",
"The tape only tracks variables (recall constants are immutable so it does not make sense to compute a gradient with respect to a constant).\n",
"\n",
    "If you try to compute the gradient with respect to (wrt) anything other than a variable you will get a `None` result."
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.650849Z",
"iopub.status.busy": "2024-01-10T00:20:49.650289Z",
"iopub.status.idle": "2024-01-10T00:20:49.656640Z",
"shell.execute_reply": "2024-01-10T00:20:49.655986Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"[None, None]"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"c1, c2 = tf.constant(5.), tf.constant(3.)\n",
"with tf.GradientTape() as tape:\n",
" z = f(c1, c2)\n",
"\n",
"gradients = tape.gradient(z, [c1, c2])\n",
"gradients"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"But you can `watch` tensors and then compute gradients with respect to watched tensors as if they were variables."
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.659751Z",
"iopub.status.busy": "2024-01-10T00:20:49.659219Z",
"iopub.status.idle": "2024-01-10T00:20:49.665560Z",
"shell.execute_reply": "2024-01-10T00:20:49.664859Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"[<tf.Tensor: shape=(), dtype=float32, numpy=36.0>,\n",
" <tf.Tensor: shape=(), dtype=float32, numpy=10.0>]"
]
},
"execution_count": 41,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with tf.GradientTape() as tape:\n",
" tape.watch(c1)\n",
" tape.watch(c2)\n",
" z = f(c1, c2)\n",
"\n",
"gradients = tape.gradient(z, [c1, c2])\n",
"gradients"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Stopping gradients propagating"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Sometimes you may want to stop gradients propagating through the computational graph.\n",
"\n",
    "This can be achieved with `tf.stop_gradient`, which returns its input unchanged in the forward pass but prevents gradients from flowing through it in the reverse (gradient) pass."
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"execution": {
"iopub.execute_input": "2024-01-10T00:20:49.668653Z",
"iopub.status.busy": "2024-01-10T00:20:49.668088Z",
"iopub.status.idle": "2024-01-10T00:20:49.677969Z",
"shell.execute_reply": "2024-01-10T00:20:49.677364Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"[<tf.Tensor: shape=(), dtype=float32, numpy=30.0>, None]"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def f(w1, w2):\n",
" return 3 * w1 ** 2 + tf.stop_gradient(2 * w1 * w2)\n",
"\n",
"with tf.GradientTape() as tape:\n",
" z = f(w1, w2)\n",
"\n",
"tape.gradient(z, [w1, w2])"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
},
"tags": [
"exercise_pointer"
]
},
"source": [
"**Exercises:** *You can now complete Exercise 2 in the exercises associated with this lecture.*"
]
}
],
"metadata": {
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.18"
}
},
"nbformat": 4,
"nbformat_minor": 4
}