# Introduction to Computational Analysis


# Theano parallelizes symbolic mathematics on multi-dimensional arrays

Research in artificial intelligence involves experimenting with different ways to maximize learning and minimize error. Theano accelerates this experimentation by making it easy to define, evaluate, and differentiate functions on multi-dimensional arrays, with the optional benefit of using GPUs for speedups of up to 140x.

In [1]:
import theano
import theano.tensor as T

Using gpu device 0: ION


## Compile symbolic mathematics

Add two scalars after defining and compiling a function.

In [12]:
x = T.dscalar('x') # Double-precision scalar (float64)
y = T.dscalar('y')
f = theano.function([x, y], x + y)
f(2, 3)

Out[12]:
array(5.0)
In [13]:
f

Out[13]:
<theano.compile.function_module.Function at 0xb8bf60c>

Add two matrices after defining and compiling a function.

In [26]:
x = T.dmatrix('x')
y = T.dmatrix('y')
f = theano.function([x, y], x + y)
f([[1, 2], [3, 4]], [[-1, -2], [-3, -4]])

Out[26]:
array([[ 0.,  0.],
       [ 0.,  0.]])

Accumulate a running total by updating a shared variable each time the function is called.

In [10]:
state = theano.shared(0)
x = T.iscalar('x') # Integer scalar (int32)
accumulate = theano.function([x], state, updates=[(state, state + x)])
accumulate(10); print state.get_value()
accumulate(20); print state.get_value()

10
30
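
For intuition, the shared-variable cell above behaves like a stateful accumulator whose call returns the pre-update value. A minimal plain-Python sketch of the same semantics (the `Accumulator` class here is illustrative, not part of Theano):

```python
class Accumulator(object):
    """Mimics theano.shared(0) plus updates=[(state, state + x)]."""

    def __init__(self, initial=0):
        self.state = initial  # analogous to the shared variable

    def __call__(self, x):
        # Theano returns the requested output (the old state) first,
        # then applies the update rule, so we do the same here.
        old = self.state
        self.state += x
        return old

accumulate = Accumulator()
accumulate(10)
print(accumulate.state)  # 10
accumulate(20)
print(accumulate.state)  # 30
```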


## Differentiate functions on multi-dimensional arrays

In [28]:
x = T.dscalar('x')
y = x ** 2
gy = T.grad(y, x) # Symbolic derivative of y with respect to x
f = theano.function([x], gy)
print 'The derivative of %s is %s.' % (
    theano.pp(y),
    theano.pp(f.maker.env.outputs[0]))
print 'Evaluating the derivative at x = 4 gives %s.' % f(4)

The derivative of (x ** TensorConstant{2}) is (TensorConstant{2.0} * x).
Evaluating the derivative at x = 4 gives 8.0.
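
As a sanity check, the symbolic derivative agrees with a numerical one. A quick sketch using a central finite difference (plain Python, independent of Theano):

```python
def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

print(numeric_derivative(lambda x: x ** 2, 4.0))  # ~8.0, matching 2 * x at x = 4
```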

In [2]:
x = T.dmatrix('x')
s = T.sum(1 / (1 + T.exp(-x))) # Sum of the elementwise logistic function
gs = T.grad(s, x) # Gradient of s with respect to x
dlogistic = theano.function([x], gs)
dlogistic([[0, 1], [-1, -2]])

Out[2]:
array([[ 0.25      ,  0.19661193],
       [ 0.19661193,  0.10499359]])
In [8]:
f = theano.function([x], T.jacobian(s, x))
f([[0, 1], [-1, -2]])

Out[8]:
array([[ 0.25      ,  0.19661193],
       [ 0.19661193,  0.10499359]])
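
The gradient above matches the closed form d/dx logistic(x) = logistic(x) * (1 - logistic(x)). A NumPy sketch evaluating that identity on the same inputs:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([[0, 1], [-1, -2]], dtype=float)
s = logistic(x)
# Matches the Theano gradient: 0.25, 0.19661193, 0.19661193, 0.10499359
print(s * (1 - s))
```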

## Use GPUs for speedups of up to 140x

In [29]:
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
t0 = time.time()
for i in xrange(iters):
    r = f()
print 'Looping %d times took' % iters, time.time() - t0, 'seconds'
print 'Result is', r
print 'Numpy result is', numpy.asarray(r)
print 'Used the', 'cpu' if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.env.toposort()]) else 'gpu'

Looping 1000 times took 1.57673311234 seconds
Result is <CudaNdarray object at 0xcc0f2f0>
Numpy result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
  1.62323296]
Used the gpu
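
For a CPU baseline, here is the same elementwise exponential in plain NumPy. Timings are hardware-dependent, so this is only a rough comparison sketch:

```python
import time
import numpy as np

vlen = 10 * 30 * 768  # same problem size as above
iters = 1000

rng = np.random.RandomState(22)
x = rng.rand(vlen).astype(np.float32)

t0 = time.time()
for _ in range(iters):
    r = np.exp(x)  # elementwise exp on the CPU
print('Looping %d times took %.4f seconds' % (iters, time.time() - t0))
```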

In [ ]:
from theano import function, config, shared, sandbox, Out
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([],
    Out(sandbox.cuda.basic_ops.gpu_from_host(T.exp(x)),
        borrow=True))
t0 = time.time()
for i in xrange(iters):
    r = f()
print 'Looping %d times took' % iters, time.time() - t0, 'seconds'
print 'Result is', r
print 'Numpy result is', numpy.asarray(r)
print 'Used the', 'cpu' if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.env.toposort()]) else 'gpu'