# coding: utf-8

# # Deep Dreams (with Caffe)
#
# This notebook demonstrates how to use the [Caffe](http://caffe.berkeleyvision.org/) neural network framework to produce "dream" visuals shown in the [Google Research blog post](http://googleresearch.blogspot.ch/2015/06/inceptionism-going-deeper-into-neural.html).
#
# It'll be interesting to see what imagery people are able to generate using the described technique. If you post images to Google+, Facebook, or Twitter, be sure to tag them with **#deepdream** so other researchers can check them out too.
#
# ## Dependencies
# This notebook is designed to have as few dependencies as possible:
# * Standard Python scientific stack: [NumPy](http://www.numpy.org/), [SciPy](http://www.scipy.org/), [PIL](http://www.pythonware.com/products/pil/), [IPython](http://ipython.org/). These libraries can also be installed as part of a scientific Python distribution such as [Anaconda](http://continuum.io/downloads) or [Canopy](https://store.enthought.com/). A quick import check is sketched below.
# * [Caffe](http://caffe.berkeleyvision.org/) deep learning framework ([installation instructions](http://caffe.berkeleyvision.org/installation.html)).
# * The Google [protobuf](https://developers.google.com/protocol-buffers/) library, which is used for Caffe model manipulation.

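# In[ ]:

# Optional sanity check (a minimal sketch, not part of the original notebook):
# verify that the Python-side dependencies listed above are importable.
import importlib
for mod in ('numpy', 'scipy', 'PIL', 'IPython', 'google.protobuf', 'caffe'):
    try:
        importlib.import_module(mod)
    except ImportError as err:
        print 'Missing dependency %s: %s' % (mod, err)
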
# In[12]:

# imports and basic notebook setup
from cStringIO import StringIO
import numpy as np
import scipy.ndimage as nd
import PIL.Image
from IPython.display import clear_output, Image, display
from google.protobuf import text_format

import caffe

# If your GPU supports CUDA and Caffe was built with CUDA support, the two
# lines below run Caffe operations on the GPU; comment them out to run on the CPU.
caffe.set_mode_gpu()
caffe.set_device(0) # select GPU device if multiple devices exist

def showarray(a, fmt='jpeg'):
    # clip to the displayable range, encode, and show the image inline
    a = np.uint8(np.clip(a, 0, 255))
    f = StringIO()
    PIL.Image.fromarray(a).save(f, fmt)
    display(Image(data=f.getvalue()))

# ## Loading the DNN model
# In this notebook we are going to use a [GoogLeNet](https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet) model trained on the [ImageNet](http://www.image-net.org/) dataset.
# Feel free to experiment with other models from the Caffe [Model Zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo). One particularly interesting [model](http://places.csail.mit.edu/downloadCNN.html) was trained on the [MIT Places](http://places.csail.mit.edu/) dataset; it produced many of the visuals in the [original blog post](http://googleresearch.blogspot.ch/2015/06/inceptionism-going-deeper-into-neural.html).

# In[21]:

model_path = '/home/nolan/caffe/models/bvlc_googlenet/' # substitute your path here
net_fn = model_path + 'deploy.prototxt'
param_fn = model_path + 'bvlc_googlenet.caffemodel'

# Patching the model to be able to compute gradients.
# Note that you can also manually add a "force_backward: true" line to "deploy.prototxt".
model = caffe.io.caffe_pb2.NetParameter()
text_format.Merge(open(net_fn).read(), model)
model.force_backward = True
open('tmp.prototxt', 'w').write(str(model))

net = caffe.Classifier('tmp.prototxt', param_fn,
                       mean=np.float32([104.0, 116.0, 122.0]), # ImageNet mean, training set dependent
                       channel_swap=(2, 1, 0)) # the reference model has channels in BGR order instead of RGB

# a couple of utility functions for converting to and from Caffe's input image layout
def preprocess(net, img):
    # HWC RGB -> CHW BGR, minus the per-channel mean
    return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data']

def deprocess(net, img):
    # add the mean back and convert CHW BGR -> HWC RGB
    return np.dstack((img + net.transformer.mean['data'])[::-1])

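# In[ ]:

# Quick sanity check (a sketch, not part of the original notebook): preprocess
# and deprocess should invert each other up to floating-point error.
_demo = np.float32(np.random.rand(8, 8, 3) * 255) # hypothetical RGB image
assert np.allclose(deprocess(net, preprocess(net, _demo)), _demo, atol=1e-3)
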
# ## Producing dreams

# Making the "dream" images is very simple. Essentially it is just a gradient ascent process that tries to maximize the L2 norm of activations of a particular DNN layer. Here are a few simple tricks that we found useful for getting good images:
# * offset the image by a random jitter
# * normalize the magnitude of the gradient ascent steps
# * apply the ascent across multiple scales (octaves)
#
# First we implement a basic gradient ascent step function, applying the first two tricks:

# In[3]:

def objective_L2(dst):
    # the gradient of the L2 objective 0.5*||dst.data||^2 is dst.data itself
    dst.diff[:] = dst.data

def make_step(net, step_size=1.5, end='inception_4a/output',
              jitter=32, clip=True, objective=objective_L2):
    '''Basic gradient ascent step.'''

    src = net.blobs['data'] # input image is stored in Net's 'data' blob
    dst = net.blobs[end]

    ox, oy = np.random.randint(-jitter, jitter+1, 2)
    src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift

    net.forward(end=end)
    objective(dst) # specify the optimization objective
    net.backward(start=end)
    g = src.diff[0]
    # apply normalized ascent step to the input image
    src.data[:] += step_size/np.abs(g).mean() * g

    src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image

    if clip:
        bias = net.transformer.mean['data']
        src.data[:] = np.clip(src.data, -bias, 255-bias)

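# In[ ]:

# Minimal usage sketch for make_step (assumes `net` from above and the local
# 'fractal.jpg' that is loaded again later in this notebook): run a single
# ascent step on a small 224x224 version of the image and display the result.
_small = np.float32(PIL.Image.open('fractal.jpg').resize((224, 224)))
net.blobs['data'].reshape(1, 3, 224, 224)
net.blobs['data'].data[0] = preprocess(net, _small)
make_step(net)
showarray(deprocess(net, net.blobs['data'].data[0]))
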
# Next we implement an ascent through different scales. We call these scales "octaves".

# In[4]:

def deepdream(net, base_img, iter_n=16, octave_n=4, octave_scale=1.4,
              end='inception_3a/3x3', clip=True, **step_params):
    # prepare base images for all octaves
    octaves = [preprocess(net, base_img)]
    for i in xrange(octave_n-1):
        octaves.append(nd.zoom(octaves[-1], (1, 1.0/octave_scale, 1.0/octave_scale), order=1))

    src = net.blobs['data']
    detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details
    for octave, octave_base in enumerate(octaves[::-1]):
        h, w = octave_base.shape[-2:]
        if octave > 0:
            # upscale details from the previous octave
            h1, w1 = detail.shape[-2:]
            detail = nd.zoom(detail, (1, 1.0*h/h1, 1.0*w/w1), order=1)

        src.reshape(1, 3, h, w) # resize the network's input image size
        src.data[0] = octave_base+detail
        for i in xrange(iter_n):
            make_step(net, end=end, clip=clip, **step_params)

            # visualization
            vis = deprocess(net, src.data[0])
            if not clip: # adjust image contrast if clipping is disabled
                vis = vis*(255.0/np.percentile(vis, 99.98))
            showarray(vis)
            print octave, i, end, vis.shape
            clear_output(wait=True)

        # extract details produced on the current octave
        detail = src.data[0]-octave_base
    # returning the resulting image
    return deprocess(net, src.data[0])

# Now we are ready to let the neural network reveal its dreams! Let's take an image as a starting point (the [cloud image](https://commons.wikimedia.org/wiki/File:Appearance_of_sky_for_weather_forecast,_Dhaka,_Bangladesh.JPG) from the original post works well; here we load a local file):

# In[5]:

img = np.float32(PIL.Image.open('fractal.jpg'))

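# In[ ]:

# Produce a single dream from the base image (a minimal invocation; the
# keyword defaults defined above pick the layer, step size, and octaves).
# Passing a different `end` layer changes the character of the patterns;
# the commented-out line shows one variation to try.
showarray(np.uint8(img))
_ = deepdream(net, img)
# _ = deepdream(net, img, end='inception_3b/5x5_reduce')
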
# to see which layer names can be passed as the `end` argument, list the network's blobs:
net.blobs.keys()

# What if we feed the `deepdream` function its own output, after applying a little zoom to it? It turns out that this leads to an endless stream of impressions of the things that the network saw during training. Some patterns fire more often than others, suggestive of basins of attraction.
#
# We will start the process from the same image as above, but after some iterations the original image becomes irrelevant; even random noise can be used as the starting point.

# In[7]:

frame = img
frame_i = 0


# In[22]:

h, w = frame.shape[:2]
s = 0.02 # scale coefficient
for i in xrange(500):
    frame = deepdream(net, frame)
    # save each frame (assumes a "frames/" subdirectory already exists)
    PIL.Image.fromarray(np.uint8(frame)).save("frames/%04d.jpg"%frame_i)
    # zoom in slightly toward the center before dreaming again
    frame = nd.affine_transform(frame, [1-s, 1-s, 1], [h*s/2, w*s/2, 0], order=1)
    frame_i += 1


# Be careful running the code above; it can bring you into very strange realms!