{
"cells": [
{
"cell_type": "markdown",
"id": "33d0dc33-df81-463f-84a7-3ff36b8ac6ad",
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"source": [
"# Lecture 15: Deep CNN architectures"
]
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "5fdcc63b-2ef4-4a6f-8087-1c2407255169",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "skip"
|
|
}
|
|
},
|
|
"source": [
|
|
"\n",
|
|
"[Run in colab](https://colab.research.google.com/drive/1kD3_vXFwesra2AhY-_SribsZ2a1bI82A)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 1,
|
|
"id": "4fd69b7e-c369-4fca-a5b8-a7812573279b",
|
|
"metadata": {
|
|
"execution": {
|
|
"iopub.execute_input": "2024-01-10T00:30:27.675329Z",
|
|
"iopub.status.busy": "2024-01-10T00:30:27.674913Z",
|
|
"iopub.status.idle": "2024-01-10T00:30:27.683754Z",
|
|
"shell.execute_reply": "2024-01-10T00:30:27.683067Z"
|
|
},
|
|
"slideshow": {
|
|
"slide_type": "skip"
|
|
},
|
|
"tags": []
|
|
},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"Version: 2024-01-10 00:30:27\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"import datetime\n",
|
|
"now = datetime.datetime.now()\n",
|
|
"print(\"Version: \" + now.strftime(\"%Y-%m-%d %H:%M:%S\"))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "f002948c-4d58-4ff5-9f61-16abf416cd2d",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "slide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"## Classical CNN architecture"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "a869adea-e0db-4c0a-946d-ffd1e560225c",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### General CNN architecture\n",
|
|
"\n",
|
|
"- (convolution, activation, pooling) $\\times N_1$\n",
|
|
"- (fully connected layer) $\\times N_2$\n",
|
|
"\n",
|
|
"\n"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "562f0737-4d4d-4219-af48-d6aaf575a132",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/aCNN.jpeg\" alt=\"Drawing\" width=\"1100px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[Credit: Geron]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "828af899-d3c8-4256-9eb5-c6a998a52284",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Decreasing resolution and increasing number of channels\n",
"\n",
"Typical architectures decrease the image resolution and increase the number of channels as we progress deeper in the network.\n",
"\n",
"Decreasing the image resolution (while keeping the same convolutional kernel size) acts to increase the size of the receptive field of neurons deeper in the network.\n",
"\n",
"Increasing the number of channels (i.e. filters) provides a larger feature set (and is computationally feasible since the image resolution has decreased).\n",
"\n",
"\n"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "3f644b11-666a-4be7-bf44-987e89fb94f4",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"#### For example VGG-16"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "4ba511fb-5851-4d16-b377-d5bc8f87e463",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/vgg16.png\" width=\"900px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "1fb4dfd5-748f-4f37-8032-e172542bf1ef",
|
|
"metadata": {},
|
|
"source": [
|
|
"Networks are becoming very deep, e.g. VGG-16 has 138 million parameters.\n",
"\n",
"Even the techniques we have discussed for training deep networks can begin to struggle."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "d84a09ba-d67b-402f-9066-599ed92d2b24",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "slide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"## ResNet"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "f39632d2-477e-4c1d-b027-7a3624a052b5",
|
|
"metadata": {},
|
|
"source": [
|
|
"ResNets (residual networks) were introduced to mitigate the problems of training deep networks.\n",
"\n",
"They introduce skip connections, which are a common feature of many of the cutting-edge deep learning architectures being developed today."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "1c962393-580d-4ff3-9da2-47211e05e66d",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Standard neural network block"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "4ca246f8-2f4e-43a5-9b09-913f9731c986",
|
|
"metadata": {},
|
|
"source": [
|
|
"Recall the classical neural network block."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "5f02dbcf-1bb5-491d-ad84-d72ea19a7fd7",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/resnet_skip_connection_skip_removed.png\" width=\"500px\" style=\"display:block; margin:auto\"/>"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "4c5be1db-df15-4d91-9375-69ccad7916d1",
|
|
"metadata": {},
|
|
"source": [
|
|
"[[Credit (modified)](https://medium.com/machine-learning-bites/deeplearning-series-convolutional-neural-networks-a9c2f2ee1524)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "00e2b0d4-8b7b-4609-b1da-9fa19a9d87a9",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Residual block"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "7674f512-9f75-4122-901e-98c13d4102e6",
|
|
"metadata": {},
|
|
"source": [
|
|
"ResNets introduce a residual block, with a connection that skips a layer and connects the activations of one layer to another layer deeper in the network."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "e2e49475-ae0c-4f6c-a9af-7c0c56ead443",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/resnet_skip_connection.png\" width=\"500px\" style=\"display:block; margin:auto\"/>"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "0ba40032-d9d2-46a3-a068-3c55125983cc",
|
|
"metadata": {},
|
|
"source": [
|
|
"[[Credit](https://medium.com/machine-learning-bites/deeplearning-series-convolutional-neural-networks-a9c2f2ee1524)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "89b0362f-71fd-43b2-9f41-27d3f54559db",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "fragment"
|
|
}
|
|
},
|
|
"source": [
|
|
"Information can flow directly from $a^{[l]}$ to $a^{[l+2]}$, and so can more easily\n",
"flow deeper into the network.\n",
"\n",
"The connection is drawn as connecting *into* the subsequent layer (rather than after it) since the connection is typically made *before* the non-linear activation function, e.g. ReLU."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "fc1a236f-8968-4518-9051-7ca9a42d352a",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Residual connection\n",
"\n",
"The residual connection involves *adding* the earlier activation, typically just before the non-linear activation function."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "d8253b47-0768-48ef-8048-48912b6e7bea",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/resnet_skip_connection_internal.png\" width=\"900px\" style=\"display:block; margin:auto\"/>"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "caeb0c83-6709-47c7-b8ba-9602e4b808e2",
|
|
"metadata": {},
|
|
"source": [
|
|
"[[Credit](https://medium.com/machine-learning-bites/deeplearning-series-convolutional-neural-networks-a9c2f2ee1524)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "77d70814-686b-4ac9-b166-1fd93a372808",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Residual network architecture"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "7679a29a-e691-4539-8b28-5dc3fdbcf373",
|
|
"metadata": {},
|
|
"source": [
|
|
"ResNet architectures are constructed by stacking residual blocks."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "40da97c4-cd05-4d8b-8efc-bd8a1e41cc84",
|
|
"metadata": {},
|
|
"source": [
|
|
"#### Standard architecture\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/resnet_plain.jpg\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit: He et al.](https://arxiv.org/abs/1512.03385)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "f46b8fbf-aa55-42bc-8dd2-a085b0460b31",
|
|
"metadata": {},
|
|
"source": [
|
|
"#### ResNet architecture\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/resnet_residual.jpg\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit: He et al.](https://arxiv.org/abs/1512.03385)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "b6b7786e-bbe5-45e9-a32c-3461477c0254",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "fragment"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"In order to add activations from different levels, they must have compatible shapes.\n",
"\n",
"ResNet architectures often include operations that preserve the shape of activations. When shapes are not preserved, a suitable adjustment is made in the skip connection, e.g. downsampling."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "2a890575-c40b-4ab4-b03a-d7fe110212f3",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Why are ResNets effective?"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "221ecc6f-c14f-48a4-a301-abfc4d8cf5d7",
|
|
"metadata": {},
|
|
"source": [
|
|
"ResNets revise the computation of the next activation as follows:\n",
|
|
"\\begin{align*}\n",
|
|
"a^{[l+2]} = g(z^{[l+2]}) \\rightarrow\n",
|
|
"a^{[l+2]} = g(z^{[l+2]} + a^{[l]}) .\n",
|
|
"\\end{align*}"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "a4beaa68-7ab9-45be-8822-b824d79b9b37",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "fragment"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"Expanding this out in terms of the intermediate activation:\n",
|
|
"\\begin{align*}\n",
|
|
"a^{[l+2]} &= g(z^{[l+2]} + a^{[l]}) \\\\\n",
|
|
"&= g(W^{[l+2]} a^{[l+1]} + b^{[l+2]} + a^{[l]}) .\n",
|
|
"\\end{align*}"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "b65e46fd-3236-42d1-8597-1c5deb20753f",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "fragment"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"It is relatively easy for the network to learn $W^{[l+2]}=0$ and $b^{[l+2]}=0$ (particularly with small weight initialisation and regularisation).\n",
"\n",
"Then, for ReLU (noting $a^{[l]} \\geq 0$ since it is itself the output of a ReLU), $a^{[l+2]} = g(a^{[l]}) = a^{[l]}$."
|
|
]
|
|
},
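{
"cell_type": "markdown",
"id": "a1b2c3d4-1111-4aaa-8bbb-000000000001",
"metadata": {},
"source": [
"To make the identity argument concrete, here is a minimal NumPy sketch of a residual block (the `residual_block` helper is illustrative, not from any particular library): with $W^{[l+2]}=0$, $b^{[l+2]}=0$ and ReLU activations, the block reduces to the identity."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a1b2c3d4-1111-4aaa-8bbb-000000000002",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def relu(x):\n",
"    return np.maximum(x, 0.0)\n",
"\n",
"def residual_block(a_l, W1, b1, W2, b2):\n",
"    # a^{[l+1]} = g(W^{[l+1]} a^{[l]} + b^{[l+1]})\n",
"    a_l1 = relu(W1 @ a_l + b1)\n",
"    # a^{[l+2]} = g(W^{[l+2]} a^{[l+1]} + b^{[l+2]} + a^{[l]})  (skip connection)\n",
"    return relu(W2 @ a_l1 + b2 + a_l)\n",
"\n",
"rng = np.random.default_rng(0)\n",
"a = relu(rng.standard_normal(4))  # non-negative, as if output by an earlier ReLU\n",
"W1, b1 = rng.standard_normal((4, 4)), rng.standard_normal(4)\n",
"\n",
"# With W^{[l+2]} = 0 and b^{[l+2]} = 0 the block acts as the identity\n",
"out = residual_block(a, W1, b1, np.zeros((4, 4)), np.zeros(4))\n",
"print(np.allclose(out, a))  # True"
]
},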
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "9129aaf2-14ec-435b-81fd-a5e537a67bfe",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "fragment"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"So adding additional blocks generally shouldn't hinder performance and has the potential to further improve performance (each block can learn a residual to improve performance or leave essentially unchanged)."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "4d204271-4b5d-424f-a5b8-79cbbb033f6c",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Performance for deep networks\n",
|
|
"\n",
|
|
"Due to issues with training deep standard networks, performance generally starts to decrease if the network gets too deep.\n",
"\n",
"For ResNets, increasing depth generally continues to improve performance."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "40678721-62f3-4504-801d-e2737562b47e",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/resnet_training_loss.png\" width=\"700px\" style=\"display:block; margin:auto\"/>"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "88bcaccb-75c2-47fd-8aff-e4a9e95ff9b7",
|
|
"metadata": {},
|
|
"source": [
|
|
"[[Credit](https://medium.com/machine-learning-bites/deeplearning-series-convolutional-neural-networks-a9c2f2ee1524)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "af2faa20-4b75-41f3-9adb-7081e1df9a21",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Skip connections\n",
|
|
"\n",
|
|
"The residual connection is an example of a skip connection.\n",
|
|
"\n",
|
|
"In ResNets, the connection is made by *adding* activations. Alternatively, one could also concatenate layers to allow information to flow deeper into the network more easily.\n",
|
|
"\n",
|
|
"Skip connections are a useful concept used widely in many cutting-edge architectures.\n",
|
|
"\n",
|
|
"We will see this concept again later in this lecture when we look at UNets."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "91615aea-da5e-466d-8732-fb935dfc02d8",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": [
|
|
"exercise_pointer"
|
|
]
|
|
},
|
|
"source": [
|
|
"**Exercises:** *You can now complete Exercise 1 in the exercises associated with this lecture.*"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "7b5a1fba-5110-4ddd-bcd3-7f52845f31f2",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "slide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"## Inception "
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "da5e461d-514e-4b6f-aab5-f8207a5d8450",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Motivation for Inception\n",
|
|
"\n",
|
|
"What size convolutional kernel should we use?\n",
|
|
"\n",
|
|
"We can decide empirically by cross-validation, but we could also use many kernel sizes at once and let the network decide how to combine them.\n",
|
|
"\n",
|
|
"This is the general idea behind the Inception module. \n",
|
|
"\n",
|
|
"Leverages 1x1 convolutions."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "c121758f-f671-4321-bb8d-fefc2bec510e",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### 1x1 convolution"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "875ffd3e-7cad-481b-937e-9e587ef8a5a9",
|
|
"metadata": {},
|
|
"source": [
|
|
"1x1 convolution is a powerful layer used in many cutting-edge deep learning architectures."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "3a9dda28-eb3b-4ea3-b479-3963d198e119",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "fragment"
|
|
}
|
|
},
|
|
"source": [
|
|
"But isn't a 1x1 convolution just multiplying by a number?\n",
|
|
"\n",
|
|
"It is for a single channel but not when considering multiple input and output channels."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "08f29b3a-7e2e-4530-bd4b-4c6a972a2e0d",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"#### Graphical illustration of 1x1 convolution"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "2a83c1a2-eea5-423f-874f-b9ca1b5427c7",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/1x1-convolution1.png\" width=\"700px\" style=\"display:block; margin:auto\"/>"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "655e4780-b2b9-4afa-804f-11423a9cbbc6",
|
|
"metadata": {},
|
|
"source": [
|
|
"When there are multiple input channels, the weighting, summation, and activation are applied across channels.\n",
"\n",
"This acts like a fully-connected neural network across channels, and is sometimes called a *network in a network*.\n",
"\n",
"Repeating with multiple filters creates multiple output channels."
|
|
]
|
|
},
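{
"cell_type": "markdown",
"id": "a1b2c3d4-2222-4aaa-8bbb-000000000001",
"metadata": {},
"source": [
"As an illustration, a 1x1 convolution is just a per-pixel linear map across channels. A minimal NumPy sketch (the `conv_1x1` helper is illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a1b2c3d4-2222-4aaa-8bbb-000000000002",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def conv_1x1(x, W):\n",
"    # x: feature map of shape (H, W, c_in); W: filters of shape (c_out, c_in)\n",
"    # Each output pixel is a weighted sum over the input channels at that pixel\n",
"    return np.einsum('oc,hwc->hwo', W, x)\n",
"\n",
"x = np.random.default_rng(1).standard_normal((28, 28, 192))\n",
"W = np.random.default_rng(2).standard_normal((16, 192))  # 16 filters: 192 -> 16 channel bottleneck\n",
"y = conv_1x1(x, W)\n",
"print(y.shape)  # (28, 28, 16)"
]
},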
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "4057896e-b577-45c9-bf33-501fcbf7a767",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"#### 1x1 convolution to control channel size"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "40c30b53-4815-45dc-bc84-4586660096ad",
|
|
"metadata": {},
|
|
"source": [
|
|
"1x1 convolutions are often used to control the number of channels at intermediate points in a network, e.g. as a channel bottleneck (as we'll see shortly in the Inception module).\n",
|
|
"\n",
|
|
"\n"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "5d26ee5f-f5bf-4de3-9b12-34a7a47e44ea",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Inception module"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "2434c5af-0cfc-4abe-989a-e34791e924ff",
|
|
"metadata": {},
|
|
"source": [
|
|
"We saw how 1x1 convolutions can be considered as a *network in a network*.\n",
|
|
"\n",
|
|
"Need to go deeper!"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "9bad5892-9b75-49d9-9069-3d92bbac26dd",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/we_need_to_go_deeper.jpeg\" width=\"700px\" style=\"display:block; margin:auto\"/>"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "961d2656-9b11-4163-88e6-fa758d44b692",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"#### General Inception module\n",
|
|
"\n",
|
|
"Consider multiple kernel sizes at once and a pooling layer. Then concatenate outputs.\n",
|
|
"\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/inception_module_szegedy_1.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit: Szegedy et al.](https://arxiv.org/abs/1409.4842)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "025d6206-65e0-454a-b96e-837b153d2455",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "fragment"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"This architecture can quickly become computationally demanding."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "1ccca0ea-4d39-4b15-b981-0f0324e280ac",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"#### Inception module with 1x1 convolutions\n",
|
|
"\n",
|
|
"Include 1x1 convolutions as a channel bottleneck to reduce computational cost.\n",
|
|
"\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/inception_module_szegedy_2.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit: Szegedy et al.](https://arxiv.org/abs/1409.4842)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "48703c50-3c2b-4b02-9bdd-d2f2ddad4861",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"#### Example of Inception module computational costs\n",
|
|
"\n",
|
|
"Consider a 28x28 input feature map, with 192 channels. Require an output map with resolution 28x28 and 32 channels (\"same\" convolution so input and output resolutions the same)."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "b45ae21a-1a9b-452c-9484-dc44fe62d12e",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "fragment"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"##### Standard convolutional layer\n",
|
|
"\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/inception_no_bottleneck.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://medium.com/machine-learning-bites/deeplearning-series-convolutional-neural-networks-a9c2f2ee1524)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "4361812b-7be8-4038-ac37-a3c550300e92",
|
|
"metadata": {},
|
|
"source": [
|
|
"Number of flops = (28 x 28 x 32) x (5 x 5 x 192) = 120,422,400 ≈ 120 million"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "99e8c2f5-d817-4eab-8c48-6f97d94ff887",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"##### 1x1 convolution bottleneck\n",
|
|
"\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/inception_bottleneck.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://medium.com/machine-learning-bites/deeplearning-series-convolutional-neural-networks-a9c2f2ee1524)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "842a51a4-6b63-4bac-b127-e42af152b948",
|
|
"metadata": {},
|
|
"source": [
|
|
"Number of flops = (28 x 28 x 16) x (1 x 1 x 192) + (28 x 28 x 32) x (5 x 5 x 16) = 2,408,448 + 10,035,200 = 12,443,648 ≈ 12 million"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "123d9d02-ff58-4d2b-bd5a-c55442d87129",
|
|
"metadata": {},
|
|
"source": [
|
|
"Generally, performance is not significantly degraded (within reason)."
|
|
]
|
|
},
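{
"cell_type": "markdown",
"id": "a1b2c3d4-3333-4aaa-8bbb-000000000001",
"metadata": {},
"source": [
"The flop counts above can be checked with a short calculation (the `conv_flops` helper is illustrative, counting multiply-accumulates for a \"same\" convolution):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a1b2c3d4-3333-4aaa-8bbb-000000000002",
"metadata": {},
"outputs": [],
"source": [
"def conv_flops(h, w, c_out, kh, kw, c_in):\n",
"    # (number of output values) x (multiplications per output value)\n",
"    return (h * w * c_out) * (kh * kw * c_in)\n",
"\n",
"direct = conv_flops(28, 28, 32, 5, 5, 192)\n",
"bottleneck = conv_flops(28, 28, 16, 1, 1, 192) + conv_flops(28, 28, 32, 5, 5, 16)\n",
"print(direct)      # 120422400\n",
"print(bottleneck)  # 12443648\n",
"print(round(direct / bottleneck, 1))  # 9.7: roughly an order of magnitude cheaper"
]
},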
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "dbffb188-a78f-412b-81c7-cd090ca69c61",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Inception network / GoogLeNet architecture\n",
|
|
"\n",
|
|
"Overall Inception network architecture (also called GoogLeNet, cf. LeNet) involves combining multiple Inception modules."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "05646249-ee28-4efa-800b-b1db4df1dbaf",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"#### Inception module (from above but drawn sideways)\n",
|
|
"\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/inception-module1.png\" width=\"500px\" style=\"display:block; margin:auto\"/>"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "044f8059-e213-4aa2-96a9-e53963c1f0f7",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "fragment"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"#### Inception network\n",
|
|
"\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/googlenet_diagram1.png\" width=\"1000px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit: Szegedy et al.](https://arxiv.org/abs/1409.4842)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "1b15aed8-d766-447c-8a38-5df68b6f8d21",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "slide"
|
|
}
|
|
},
|
|
"source": [
|
|
"## MobileNet"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "d14bbda8-45b2-4efb-949a-1410aa8a620c",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"### Motivation for MobileNet\n",
|
|
"\n",
|
|
"The architectures we've seen above are computationally demanding (even when leveraging 1x1 convolution channel bottlenecks).\n",
|
|
"\n",
|
|
"The MobileNet architecture is more computationally efficient, enabling, for example, low-cost deployment on mobile devices (hence the name).\n",
|
|
"\n",
|
|
"Based on *depthwise separable convolution*, which includes a *depthwise convolution*, followed by a *pointwise convolution*."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "62c86cf9-d6a4-4849-b849-cdee84df150b",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"### Recap standard convolution"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "5f349d12-a284-43a3-b0f7-62ec2d476b98",
|
|
"metadata": {},
|
|
"source": [
|
|
"#### Multiple input channels, single output channel"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "6c2bd748-728e-4d43-8e33-eba17ee03914",
|
|
"metadata": {},
|
|
"source": [
|
|
"Consider 5x5 kernel with no padding and stride of one.\n",
|
|
"\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/dsc_normal_conv_1_channel.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "bdeb5ae9-7d08-4a6b-99c6-5d5c71645be3",
|
|
"metadata": {},
|
|
"source": [
|
|
"Number of flops = (8 x 8) x (5 x 5 x 3) = 4,800 (no padding)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "83fb19c6-c133-4c5f-a645-98f22ac83eb6",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"#### Multiple input channels, multiple output channels\n",
|
|
"\n",
|
|
"Repeat the above for each output channel."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "6b876323-6d84-459c-b3d3-ba9a544bad2d",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/dsc_normal_conv_256_channels_annotated.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "c45b0f97-e134-4dbf-ad5d-96656b8a70e9",
|
|
"metadata": {},
|
|
"source": [
|
|
"Number of flops = (8 x 8 x 256) x (5 x 5 x 3) = 1,228,800 = 1.2 million"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "c53318fd-6018-4f1b-9f00-b19b8d829079",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"### Depthwise convolution\n",
|
|
"\n",
|
|
"One filter for each channel (no summation over channels). Number of input and output channels are the same."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "aee7ebd7-165c-476a-8cad-846a0789d613",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/dsc_depthwise_conv.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "3978109d-3ca0-4187-ac91-90ce07c0ea86",
|
|
"metadata": {},
|
|
"source": [
|
|
"Number of flops = (8 x 8) x (5 x 5) x 3 = 4,800"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "262e0b27-49a6-428b-8103-704f303097b9",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "fragment"
|
|
}
|
|
},
|
|
"source": [
|
|
"Computational cost is reduced to that of computing a single output channel.\n",
"\n",
"But there is no mixing across channels, and the number of output channels must equal the number of input channels."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "5d6c7644-2da7-400f-831c-9a11f3b96625",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"### Pointwise convolution\n",
|
|
"\n",
|
|
"Introduce mixing of output channels using pointwise convolutions (1x1 convolutions). "
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "c2c22710-1313-4f43-b74d-fcd98c586c16",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/dsc_pointwise_conv_1_channel.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "29be5fe8-b72d-47a3-bec6-83afd02bc138",
|
|
"metadata": {},
|
|
"source": [
|
|
"Number of flops = (8 x 8) x (1 x 1 x 3) = 192"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "cfae4e5a-bcc8-404e-893e-93d8357ee20f",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"Can also control the number of output channels."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "2341097d-e426-46e4-b28a-3b90c60a4caa",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "-"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/dsc_pointwise_conv_256_channels_annotated.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "84081514-fd4d-4635-858e-a401bc54409a",
|
|
"metadata": {},
|
|
"source": [
|
|
"Number of flops = (8 x 8 x 256) x (1 x 1 x 3) = 49,152"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "45a52d94-69d6-4acf-bb75-1149d06e5049",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Depthwise separable convolutions\n",
|
|
"\n",
|
|
"Depthwise separable convolutions include depthwise convolution, followed by pointwise convolution. Separable since we separate the spatial and channel mixing."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "52966a51-7360-44eb-919f-2ce7ed800068",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/Depthwise-separable-convolution-block.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://www.researchgate.net/figure/Depthwise-separable-convolution-block_fig1_343943234)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "f1d8c939-3cb3-416a-87b5-2f1e5e6e2227",
|
|
"metadata": {},
|
|
"source": [
|
|
"In the example considered above, 1.2 million flops (standard convolution) $\\rightarrow$ 4,800 (depthwise convolution) + 49,152 (pointwise convolution) = 53,952 flops, for the same output resolution and number of channels (a ~23x reduction)."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "abafc35a-3c9c-4be3-9d54-b0cd7e38b415",
|
|
"metadata": {},
|
|
"source": [
|
|
"Generally, performance is not significantly degraded (within reason)."
|
|
]
|
|
},
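{
"cell_type": "markdown",
"id": "a1b2c3d4-4444-4aaa-8bbb-000000000001",
"metadata": {},
"source": [
"The savings for the example above can be checked directly (illustrative arithmetic only, counting multiply-accumulates):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a1b2c3d4-4444-4aaa-8bbb-000000000002",
"metadata": {},
"outputs": [],
"source": [
"# 8x8 output, 3 input channels, 256 output channels, 5x5 kernel (example above)\n",
"h, w, c_in, c_out, k = 8, 8, 3, 256, 5\n",
"\n",
"standard = (h * w * c_out) * (k * k * c_in)   # one standard convolution\n",
"depthwise = (h * w) * (k * k) * c_in          # one 5x5 filter per input channel\n",
"pointwise = (h * w * c_out) * (1 * 1 * c_in)  # 1x1 convolution to mix channels\n",
"separable = depthwise + pointwise\n",
"\n",
"print(standard)   # 1228800\n",
"print(separable)  # 53952\n",
"print(round(standard / separable, 1))  # 22.8: over 20x fewer flops"
]
},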
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "08db1943-7820-4f7a-a1cf-b45b6e027264",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### MobileNet architectures"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "27b82377-db73-49cc-88c0-762c631c2bea",
|
|
"metadata": {},
|
|
"source": [
|
|
"#### MobileNet v1\n",
|
|
"\n",
|
|
"Use depthwise separable convolutions as building block for architecture."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "d7f190c7-6cba-41c1-921b-c5ef82926485",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/mobilenet_v1.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://www.coursera.org/learn/convolutional-neural-networks/)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "8584a3ac-7a07-45d2-98a4-f9ed14ac5eee",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"#### MobileNet v2\n",
|
|
"\n",
|
|
"Adds a pointwise 1x1 (expansion) convolution before the depthwise convolution to increase the number of channels in the intermediate stage, resulting in an inverted bottleneck (wide in the middle, narrow at the ends).\n",
|
|
"\n",
|
|
"Also includes a residual connection (as in ResNet) when the input and output shapes match."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "e615c9f9-3760-43c6-811b-a9f98f1d6e84",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/mobilenet_v2.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://www.coursera.org/learn/convolutional-neural-networks/)]"
|
|
]
|
|
},
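As a rough sketch of why the inverted bottleneck stays cheap, we can count multiplications per layer of a v2-style block. The expansion factor of 6 is the MobileNet v2 paper's default; the 14x14 feature map and 32 channels here are illustrative, not from the lecture:

```python
def inverted_residual_flops(h, w, c_in, c_out, k=3, expansion=6):
    """Multiplication counts for the three layers of an inverted residual block."""
    c_mid = expansion * c_in               # 1x1 conv expands channels first
    expand = h * w * c_mid * c_in          # pointwise expansion
    depthwise = h * w * c_mid * k * k      # spatial conv, one filter per channel
    project = h * w * c_out * c_mid       # pointwise projection back down
    return expand, depthwise, project

e, d, p = inverted_residual_flops(14, 14, 32, 32)
print(e, d, p)  # the two pointwise convs dominate; the depthwise conv stays cheap
# the residual connection only applies when input/output shapes match (stride 1, c_in == c_out)
```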
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "67d0124e-461b-4c7f-b60d-45f67519f545",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "slide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"## UNet"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "12ca143a-77ff-473d-975f-2b85cdb601ed",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Motivation for UNet\n",
|
|
"\n",
|
|
"So far we've considered problems where the outputs of the machine learning model are very low-dimensional, e.g. classification or low-dimensional regression.\n",
|
|
"\n",
|
|
"For many problems we require high-resolution outputs for dense predictions.\n",
|
|
"\n",
|
|
"We need to modify architectures to support dense predictions."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "225652bb-bd9e-4c42-94d9-cf0be66674c9",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"#### Semantic segmentation\n",
|
|
"\n",
|
|
"Semantic segmentation is a common type of problem where we require a high-resolution output with dense predictions.\n",
|
|
"\n",
|
|
"The goal is to predict a class for every single pixel in an image.\n",
|
|
"\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/semantic_segmentation.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://www.jeremyjordan.me/semantic-segmentation/)]"
|
|
]
|
|
},
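A dense prediction is just one classification per pixel. A minimal NumPy sketch (the shapes and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, n_classes = 4, 4, 3
# the network outputs one score per class per pixel
logits = rng.standard_normal((h, w, n_classes))

# the per-pixel class prediction is an argmax over the class axis
pred = logits.argmax(axis=-1)   # shape (4, 4): one label per pixel
print(pred.shape)               # (4, 4)

# training uses per-pixel cross-entropy against an integer label mask
labels = rng.integers(0, n_classes, size=(h, w))
logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
loss = -logp[np.arange(h)[:, None], np.arange(w)[None, :], labels].mean()
print(loss)
```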
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "56aff601-9467-4db8-a4fa-dd8b38fa90e7",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"#### Segmentation of medical images\n",
|
|
"\n",
|
|
"The UNet architecture was initially proposed for the segmentation of medical images but has since proven widely useful.\n",
|
|
"\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/semantic_segmentation_unet.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://arxiv.org/abs/1701.08816)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "24b4b574-5607-4311-86c3-8b5ca6830daa",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"### General approach"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "1c5eded1-f376-4f32-8e4d-c23ff7329e9e",
|
|
"metadata": {},
|
|
"source": [
|
|
"A naive approach is to adopt standard architectures but keep feature maps at high resolution throughout the network."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "41b60d4f-8d7b-4e67-8042-8fc40623a0be",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/cnn_high_res_naive.png\" width=\"900px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](http://cs231n.stanford.edu/)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "b4c62c4c-80f9-4b87-9ab7-0e5314d38853",
|
|
"metadata": {},
|
|
"source": [
|
|
"If we don't reduce the image resolution through the network, then we need very large kernels deeper in the network to obtain large receptive fields. It is also difficult to increase the number of channels due to the computational cost.\n",
|
|
"\n",
|
|
"Becomes extremely computationally demanding."
|
|
]
|
|
},
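The receptive-field point can be made concrete: without striding or pooling, the receptive field of stacked 3x3 convolutions grows only linearly with depth, while downsampling makes it grow geometrically. A small sketch (our own helper, not from the lecture):

```python
def receptive_field(layers, k=3, strides=None):
    """Receptive field (in input pixels) of a stack of k x k convolutions."""
    strides = strides or [1] * layers
    rf, jump = 1, 1
    for s in strides:
        rf += (k - 1) * jump   # each layer extends the field by (k-1) * current step
        jump *= s              # striding enlarges the step between samples
    return rf

print(receptive_field(6))                 # all stride 1: 13 pixels after 6 layers
print(receptive_field(6, strides=[2]*6))  # stride-2 downsampling: 127 pixels
```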
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "ef4c2875-f9f5-472b-ade9-fdc2a21f0de4",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"An alternative approach is to reduce the image resolution through the network as usual, but then include subsequent layers that increase the resolution back up."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "3c801ee4-4211-4592-b49e-095421ccd003",
|
|
"metadata": {},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/cnn_high_res_down_up.png\" width=\"900px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](http://cs231n.stanford.edu/)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "8baf1cf8-c4b8-47f6-bb8a-b4591a63142d",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "fragment"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"This requires an upsampling layer."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "2bdd926a-4b4f-441e-8e8d-c9bae8dbf282",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### Transpose convolution\n",
|
|
"\n",
|
|
"A transpose convolution layer provides a learnable way to upsample images."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "982289af-4b05-495a-b95e-b36c79297a92",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
}
|
|
},
|
|
"source": [
|
|
"#### Mathematical representation"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "cf12c0b7-dcd0-4607-9a36-322f4610dec9",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "-"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"Recall the convolution output is given by\n",
|
|
"\\begin{align*}\n",
|
|
"z_{i,j} = \\sum_{u,v} w_{u-i,v-j} x_{u,v} ,\n",
|
|
"\\end{align*}\n",
|
|
"where $x$ is the input image, $w$ is the filter (kernel), $(i,j)$ indexes the rows and columns of the output, and $(u,v)$ indexes the rows and columns of the input."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "74eec71d-606a-402c-b002-39df67ac0034",
|
|
"metadata": {},
|
|
"source": [
|
|
"Can represent convolution in matrix form:\n",
|
|
"\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/conv_eq.png\" width=\"900px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"Convolution is applied by matrix multiplication with $\\mathsf{W}$."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "bcc922d6-111f-4b0f-9525-a34d908268b2",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"Consider multiplication by the transpose of $\\mathsf{W}$:\n",
|
|
"\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/transpose_conv_eq.png\" width=\"900px\" style=\"display:block; margin:auto\"/>"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "e3ee179c-0732-4230-91d0-1ba51e9d58d5",
|
|
"metadata": {},
|
|
"source": [
|
|
"Can see that transpose convolution involves placing shifted copies of the kernel on the *output*, weighting each copy by the input value at the corresponding position, and summing the overlaps.\n",
|
|
"\n",
|
|
"Contrast with convolution, which involves placing shifted kernels on the *input*, weighting by the overlapping input values and summing."
|
|
]
|
|
},
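This shift-and-sum picture can be checked against the matrix form directly in 1-D (a small self-contained sketch; the function names are ours):

```python
import numpy as np

def conv_matrix(w, n_in):
    """Matrix form of a 'valid' 1-D convolution (cross-correlation) with kernel w."""
    k = len(w)
    n_out = n_in - k + 1
    W = np.zeros((n_out, n_in))
    for i in range(n_out):
        W[i, i:i + k] = w   # each row is the kernel shifted by one position
    return W

def transpose_conv(w, x):
    """Place a copy of the kernel on the output at each input position,
    weighted by that input value, and sum the overlaps."""
    k = len(w)
    out = np.zeros(len(x) + k - 1)
    for i, xi in enumerate(x):
        out[i:i + k] += xi * w
    return out

w = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0])        # low-resolution input
W = conv_matrix(w, n_in=4)      # maps length-4 signals to length-2
up = transpose_conv(w, x)       # length-4 upsampled output
assert np.allclose(up, W.T @ x) # shift-and-sum == multiplication by W transpose
print(up)                       # [ 4. 13. 22. 15.]
```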
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "d2921570-173c-46ae-9c9c-71965552533f",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"#### Graphical representation"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "099f6660-0c44-4353-9a96-898e8729678f",
|
|
"metadata": {},
|
|
"source": [
|
|
"Consider the input, kernel and output shapes.\n",
|
|
"\n",
|
|
"<!--\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/transpose_conv_input.png\" width=\"150px\" style=\"display:block; margin:auto\"/>\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/transpose_conv_kernel.png\" width=\"150px\" style=\"display:block; margin:auto\"/>\n",
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/transpose_conv_output.png\" width=\"200px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://towardsdatascience.com/transposed-convolution-demystified-84ca81b4baba#)]\n",
|
|
"-->"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "01b2d4e3-566d-431e-a537-83b77602537c",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "-"
|
|
}
|
|
},
|
|
"source": [
|
|
"Transpose convolution is given by placing shifted copies of the kernel on the *output*, weighting each copy by the input value at the corresponding position, and summing the overlaps."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "5be952ea-eeac-4cea-aa30-1f05b6648acf",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "-"
|
|
}
|
|
},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/transpose_conv_result.png\" width=\"1000px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://towardsdatascience.com/transposed-convolution-demystified-84ca81b4baba#)]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "42beef37-3d8d-48e4-b40a-6e3f362e2050",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "subslide"
|
|
},
|
|
"tags": []
|
|
},
|
|
"source": [
|
|
"### UNet architecture"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "67f59b19-e004-4736-b4bc-29a583f2b93f",
|
|
"metadata": {},
|
|
"source": [
|
|
"The UNet architecture uses standard convolutions and pooling in the downsampling (contracting) path.\n",
|
|
"\n",
|
|
"It then adopts transpose convolutions in the upsampling (expanding) path.\n",
|
|
"\n",
|
|
"It also includes skip connections that copy higher-resolution feature maps from the downsampling path and concatenate them onto the upsampling path."
|
|
]
|
|
},
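The halving/doubling pattern and the skip concatenations can be traced with a small sketch (sizes illustrative, following the original paper's 64-channel first level; the helper is our own):

```python
def unet_trace(size, base_channels=64, depth=4):
    """Trace (resolution, channels) down the contracting path and back up,
    recording the channels after each skip concatenation on the way up."""
    c = base_channels
    encoder = []
    for _ in range(depth):
        encoder.append((size, c))      # feature map saved for the skip connection
        size, c = size // 2, c * 2     # pooling halves resolution; channels double
    trace = [("bottleneck", size, c)]
    for skip_size, skip_c in reversed(encoder):
        size, c = size * 2, c // 2     # transpose conv upsamples and halves channels
        trace.append(("up + concat skip", size, c + skip_c))
        c = skip_c                     # convs after the concat reduce channels again
    return trace

for row in unet_trace(256):
    print(row)
```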
|
|
{
|
|
"cell_type": "markdown",
|
|
"id": "86f59840-d73b-4f72-aa1d-6553377a630e",
|
|
"metadata": {
|
|
"slideshow": {
|
|
"slide_type": "-"
|
|
}
|
|
},
|
|
"source": [
|
|
"<img src=\"https://raw.githubusercontent.com/astro-informatics/course_mlbd_images/master/Lecture15_Images/unet_paper.png\" width=\"700px\" style=\"display:block; margin:auto\"/>\n",
|
|
"\n",
|
|
"[[Credit](https://arxiv.org/abs/1505.04597)]"
|
|
]
|
|
}
|
|
],
|
|
"metadata": {
|
|
"celltoolbar": "Slideshow",
|
|
"kernelspec": {
|
|
"display_name": "Python 3 (ipykernel)",
|
|
"language": "python",
|
|
"name": "python3"
|
|
},
|
|
"language_info": {
|
|
"codemirror_mode": {
|
|
"name": "ipython",
|
|
"version": 3
|
|
},
|
|
"file_extension": ".py",
|
|
"mimetype": "text/x-python",
|
|
"name": "python",
|
|
"nbconvert_exporter": "python",
|
|
"pygments_lexer": "ipython3",
|
|
"version": "3.8.18"
|
|
}
|
|
},
|
|
"nbformat": 4,
|
|
"nbformat_minor": 5
|
|
}
|