{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# TensorFlow basics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "TensorFlow was developed by google (https://www.tensorflow.org/), it is nowadays one of the most popular deep learning frameworks.\n", "\n", "Alternatives:\n", "\n", "Torch\n", "\n", "Caffe\n", "\n", "Theano\n", "\n", "...\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Computational Graph\n", "\n", "In tensorflow we use computational graphs, the nodes and edges of the graph indicates how individual operations are composed together, but not prescribing how they should be used. The graph structure is like assembly code: inspecting it can convey some useful information, but it does not contain all of the useful context that source code conveys.\n", "Graph collections provides a general mechanism for storing collections of metadata in a tf.Graph.\n", "\n", "![comp_grapf](https://cdn-images-1.medium.com/max/800/1*mvhm5_r6LY-eHsin21RJTg.png)\n", "\n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "new_graph = tf.Graph()\n", "\n", "with new_graph.as_default(): # use default graph\n", " new_g_const = tf.constant([1., 2.])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Session\n", "\n", "TensorFlow uses the tf.Session class to represent a connection between the client program---typically a Python program, although a similar interface is available in other languages---and the C++ runtime. A tf.Session object provides access to devices in the local machine, and remote devices using the distributed TensorFlow runtime. It also caches information about your tf.Graph so that you can efficiently run the same computation multiple times." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "3\n" ] } ], "source": [ "sess = tf.Session()\n", "a = tf.constant(1) # default graph\n", "b = tf.constant(2)\n", "c = a + b\n", "# instead of sess.run(c)\n", "print(sess.run(c))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Variables\n" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "tensorflow_var = tf.Variable(1, name=\"my_variable\")" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# 4x4 matrix normally distributed mean 0 std 1\n", "normal_var = tf.Variable(tf.truncated_normal([4,4] , mean=0.0, stddev=1.0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To initialize normal_val we need a session!" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 1.18177807 -0.60717821 -0.17077304 -0.67607886]\n", " [-0.87775654 -1.59927022 -1.14966702 -0.16366777]\n", " [-1.76555407 -1.53875613 0.90461886 0.36684439]\n", " [-0.21893002 -0.20088263 -1.58203256 -0.86652845]]\n" ] } ], "source": [ "init = tf.global_variables_initializer()\n", "sess = tf.Session()\n", "sess.run(init)\n", "print(sess.run(normal_var))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Scope" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"Scope1\"):\n", " with tf.name_scope(\"Scope_nested\"):\n", " nested_var = tf.multiply(5, 5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises\n", "\n", "1. 
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exercises\n",
    "\n",
    "1. Let's implement a simple neuron/perceptron!\n",
    "2. Train it on some data!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0.91664338]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "def step(x):\n",
    "    # step activation: +1 if x > 0, otherwise -1\n",
    "    is_greater = tf.greater(x, 0)\n",
    "    as_float = tf.to_float(is_greater)\n",
    "    doubled = tf.multiply(as_float, 2)\n",
    "    return tf.subtract(doubled, 1)\n",
    "\n",
    "Data = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])\n",
    "Label = tf.constant([[0.0], [1.0], [1.0], [1.0]])  # logical OR targets, shaped [4, 1] to match the neuron output\n",
    "\n",
    "X = tf.placeholder(tf.float32, shape=[4, 2])  # placeholder for the data\n",
    "\n",
    "W = tf.Variable(tf.truncated_normal([2, 1], mean=0.0, stddev=1.0))  # weight vector\n",
    "B = tf.Variable(0.0, name=\"bias\")\n",
    "Neuron = tf.add(tf.matmul(X, W), B)  # the neuron: weighted sum plus bias\n",
    "# Preds = tf.reshape(step(Neuron), [4])\n",
    "Preds = tf.sigmoid(Neuron)  # activation\n",
    "\n",
    "# mean squared error; Neuron and Label are both [4, 1], so no accidental broadcasting\n",
    "loss = tf.reduce_mean(tf.squared_difference(Neuron, Label))\n",
    "\n",
    "# check the loss of the untrained neuron\n",
    "init = tf.global_variables_initializer()\n",
    "sess = tf.Session()\n",
    "sess.run(init)\n",
    "\n",
    "MSE = sess.run([loss], feed_dict={X: Data})\n",
    "print(MSE)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0.79427695]\n"
     ]
    }
   ],
   "source": [
    "optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)\n",
    "train_op = optimizer.minimize(loss)\n",
    "sess.run(train_op, feed_dict={X: Data})  # a single gradient-descent step\n",
    "MSE = sess.run([loss], feed_dict={X: Data})\n",
    "print(MSE)  # the loss drops a little; a full training run repeats this step many times (see below)"
   ]
  },
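  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A single optimizer step only nudges the weights, so here is a minimal training-loop sketch. It reuses sess, train_op, loss, Neuron and Data from the cells above; the number of steps (500) and the print interval are arbitrary choices."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Repeat the gradient-descent step many times and watch the MSE decrease.\n",
    "for i in range(500):\n",
    "    _, mse = sess.run([train_op, loss], feed_dict={X: Data})\n",
    "    if i % 100 == 0:\n",
    "        print(\"step %d, MSE %.4f\" % (i, mse))\n",
    "\n",
    "# After training, the raw neuron output should have moved toward the OR targets 0, 1, 1, 1.\n",
    "print(sess.run(Neuron, feed_dict={X: Data}))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}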