- {
- "cells": [
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "import torch"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "'1.0.0'"
- ]
- },
- "execution_count": 2,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "torch.__version__"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "x = torch.tensor([[1, 2, 3], [4, 5, 6]])\n",
- "y = torch.tensor([[7, 8, 9], [10, 11, 12]])"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "<torch._C.Generator at 0x1b55d9d1db0>"
- ]
- },
- "execution_count": 4,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "torch.manual_seed(42)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "tensor([[0.8823, 0.9150, 0.3829],\n",
- " [0.9593, 0.3904, 0.6009]])\n"
- ]
- }
- ],
- "source": [
- "print(torch.rand([2, 3]))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "import numpy as np"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "ename": "TypeError",
- "evalue": "add(): argument 'other' (position 1) must be Tensor, not numpy.ndarray",
- "output_type": "error",
- "traceback": [
- "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
- "\u001b[1;31mTypeError\u001b[0m Traceback (most recent call last)",
- "\u001b[1;32m<ipython-input-7-2e651424c109>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m()\u001b[0m\n\u001b[0;32m 1\u001b[0m \u001b[0mxnp\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0marray\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;36m1\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;36m2\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;36m3\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m[\u001b[0m\u001b[1;36m4\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;36m5\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;36m6\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 2\u001b[1;33m \u001b[0mf2\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mxnp\u001b[0m \u001b[1;33m+\u001b[0m \u001b[0my\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m",
- "\u001b[1;31mTypeError\u001b[0m: add(): argument 'other' (position 1) must be Tensor, not numpy.ndarray"
- ]
- }
- ],
- "source": [
- "xnp = np.array([[1, 2, 3], [4, 5, 6]])\n",
- "f2 = xnp + y"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Huh? Can a numpy array and a Tensor really be combined in arithmetic?\n",
- "\n",
- "-> With torch version 1.0.0 this apparently raises an error. \n",
- "With earlier versions it reportedly goes through."
- ]
- },
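One way to make the failing cell above work (my addition, not from the book) is to convert the array to a Tensor first, for example with `torch.from_numpy`:

```python
import numpy as np
import torch

xnp = np.array([[1, 2, 3], [4, 5, 6]])
y = torch.tensor([[7, 8, 9], [10, 11, 12]])

# from_numpy wraps the array's memory in a Tensor (no copy), after which + works
f2 = torch.from_numpy(xnp) + y
print(f2)
```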
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "1. p.10, up to installing-pytorch\n",
- "2. to the end of p.13\n",
- "3. p.14-18\n",
- "4. p.19-22 (up to Loading data) (Ito)\n",
- "5. p.22 (from Loading data) - p.26 (up to DataLoader)\n",
- "6. p.26- (from DataLoader to the end)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Reading group: Chapter 1\n",
- "## Slicing, indexing, and reshaping (p.19)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "tensor([[1, 2, 3],\n",
- " [4, 5, 6]])"
- ]
- },
- "execution_count": 8,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "x"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "* Print the first element (row 0)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 12,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "tensor([1, 2, 3])\n"
- ]
- }
- ],
- "source": [
- "print(x[0])"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "* Print elements 0-1 of the second row. \n",
- "Note that slicing is zero-based and excludes the end index. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 10,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "tensor([4, 5])\n"
- ]
- }
- ],
- "source": [
- "print(x[1][0:2])"
- ]
- },
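As a small aside (not in the book): the chained form `x[1][0:2]` can be written as a single multi-dimensional index, which is the more idiomatic PyTorch style:

```python
import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])
# Same slice as x[1][0:2], but indexing both dimensions in one expression
print(x[1, 0:2])
```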
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The `view()` method produces a reshaped view of an existing tensor \n",
- "(it shares the underlying storage rather than copying the data). \n",
- "Here are three examples. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 11,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "tensor([1, 2, 3, 4, 5, 6])\n"
- ]
- }
- ],
- "source": [
- "print(x.view(-1))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 13,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "tensor([[1, 2],\n",
- " [3, 4],\n",
- " [5, 6]])\n"
- ]
- }
- ],
- "source": [
- "print(x.view(3, 2))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 14,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "tensor([[1],\n",
- " [2],\n",
- " [3],\n",
- " [4],\n",
- " [5],\n",
- " [6]])\n"
- ]
- }
- ],
- "source": [
- "print(x.view(6, 1))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "(3, 2) and (6, 1) are easy to understand, but what is -1 doing? \n",
- "It is handy when you know how many columns you need but not how many rows of data will arrive. \n",
- "With -1, PyTorch infers the appropriate number of rows (or columns) from the total number of elements. "
- ]
- },
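To make the "unknown number of rows" case concrete, here is a small sketch (my own example, not from the book) where only the column count is fixed:

```python
import torch

flat = torch.arange(24)         # pretend the number of rows is not known up front
cols = 4                        # but we know we want 4 columns
reshaped = flat.view(-1, cols)  # PyTorch infers 24 / 4 = 6 rows
print(reshaped.shape)
```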
- {
- "cell_type": "code",
- "execution_count": 15,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "tensor([[1, 2],\n",
- " [3, 4],\n",
- " [5, 6]])\n"
- ]
- }
- ],
- "source": [
- "print(x.view(3, -1))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Another important operation is swapping axes. \n",
- "For swapping two axes, this is done with the `tensor.transpose()` method. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 16,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "tensor([[1, 2, 3],\n",
- " [4, 5, 6]])"
- ]
- },
- "execution_count": 16,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "x"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 24,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "tensor([[1, 4],\n",
- " [2, 5],\n",
- " [3, 6]])"
- ]
- },
- "execution_count": 24,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "x.transpose(0, 1)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Note that `transpose()` can only swap two axes at a time. \n",
- "To rearrange more than two axes at once, use the `permute()` method, \n",
- "which takes the new ordering of the axis indices. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 31,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "a = torch.ones(1, 2, 3, 4)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 32,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "tensor([[[[1., 1., 1., 1.],\n",
- " [1., 1., 1., 1.],\n",
- " [1., 1., 1., 1.]],\n",
- "\n",
- " [[1., 1., 1., 1.],\n",
- " [1., 1., 1., 1.],\n",
- " [1., 1., 1., 1.]]]])"
- ]
- },
- "execution_count": 32,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "a"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 33,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "torch.Size([4, 3, 2, 1])"
- ]
- },
- "execution_count": 33,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "# Reversing all the axes using two successive transposes\n",
- "a.transpose(0, 3).transpose(1, 2).size()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 34,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "torch.Size([4, 3, 2, 1])"
- ]
- },
- "execution_count": 34,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "# With permute, a single call suffices\n",
- "a.permute(3, 2, 1, 0).size()"
- ]
- },
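A common real-world use of `permute()` (my illustration, assuming a hypothetical image batch) is converting channels-last image data to the channels-first layout PyTorch convolution layers expect:

```python
import torch

# A hypothetical batch of 8 RGB images stored channels-last: (N, H, W, C)
imgs = torch.rand(8, 32, 32, 3)
# Reorder to channels-first (N, C, H, W)
imgs_chw = imgs.permute(0, 3, 1, 2)
print(imgs_chw.size())
```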
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Tensors of up to two dimensions can be displayed as a flat table, \n",
- "but higher-dimensional tensors cannot. \n",
- "That is not a problem for the magic of deep learning: \n",
- "real-world features are encoded in the dimensions of the data structure, \n",
- "so operations that reshape and reorder those dimensions matter. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## In-place operations (p.21-22)\n",
- "It is important to understand the difference between functions that overwrite their input and those that do not. \n",
- "For example, `transpose()` returned the transformed tensor as its result, but the original `x` was left unchanged. \n",
- "All the examples so far used non-overwriting operations. \n",
- "You could simply write `x = x.transpose(0, 1)`, but a more convenient option is to use the overwriting (in-place) version. \n",
- "As a rule, appending an underscore to the method name gives you the in-place variant. "
- ]
- },
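A minimal sketch of the underscore convention (my own example): `add()` returns a new tensor, while `add_()` modifies the tensor it is called on.

```python
import torch

t = torch.tensor([1.0, 2.0])
out = t.add(3.0)   # out-of-place: returns a new tensor, t is untouched
t.add_(3.0)        # in-place: modifies t itself (note the trailing underscore)
print(out, t)
```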
- {
- "cell_type": "code",
- "execution_count": 35,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "tensor([[1, 2, 3],\n",
- " [4, 5, 6]])"
- ]
- },
- "execution_count": 35,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "x"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 36,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "tensor([[1, 4],\n",
- " [2, 5],\n",
- " [3, 6]])"
- ]
- },
- "execution_count": 36,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "x.transpose_(1, 0)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 37,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "tensor([[1, 4],\n",
- " [2, 5],\n",
- " [3, 6]])"
- ]
- },
- "execution_count": 37,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "x"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 38,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "data": {
- "text/plain": [
- "tensor([[ 7, 8, 9],\n",
- " [10, 11, 12]])"
- ]
- },
- "execution_count": 38,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "y"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 39,
- "metadata": {
- "collapsed": false
- },
- "outputs": [
- {
- "ename": "RuntimeError",
- "evalue": "The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 1",
- "output_type": "error",
- "traceback": [
- "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
- "\u001b[1;31mRuntimeError\u001b[0m Traceback (most recent call last)",
- "\u001b[1;32m<ipython-input-39-e46502eb7b4e>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m()\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[0my\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0madd_\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mx\u001b[0m\u001b[1;33m*\u001b[0m\u001b[1;36m2\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m",
- "\u001b[1;31mRuntimeError\u001b[0m: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 1"
- ]
- }
- ],
- "source": [
- "y.add_(x*2)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Come on: the example above had us overwrite `x` in place (so it is now (3, 2)), \n",
- "and then this one fails precisely because of that shape change. \n",
- "The book should at least have told us to keep a copy first. "
- ]
- },
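A minimal sketch of the safer pattern hinted at above (my addition): transpose a `clone()` so the original tensor keeps its shape and the in-place add still matches.

```python
import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])
y = torch.tensor([[7, 8, 9], [10, 11, 12]])

x_t = x.clone().transpose_(1, 0)  # transpose a copy in place; x stays (2, 3)
y.add_(x * 2)                     # shapes still match, so this now succeeds
print(y)
```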
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernelspec": {
- "display_name": "Python [conda env:Anaconda3]",
- "language": "python",
- "name": "conda-env-Anaconda3-py"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.5.5"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 1
- }