brain is an open-source neural network framework for R that is easy to use yet provides a lot of depth. This document focuses on the former: ease of use.

## Install

brain depends on Chrome V8 which thus needs to be installed.

The package can be installed from GitHub using the remotes package:

# install.packages("remotes")
remotes::install_github("brain-r/brain")

Installation from source on Linux requires libv8 3.14 or 3.15 (no newer!). On Debian or Ubuntu use libv8-3.14-dev:

sudo apt-get install -y libv8-3.14-dev

On Fedora we need v8-314-devel:

sudo yum install v8-314-devel

On CentOS / RHEL we install v8-devel via EPEL:

sudo yum install epel-release
sudo yum install v8-devel

On OS-X use v8@3.15 (not regular v8) from Homebrew:

brew install v8@3.15

On other systems you might need to install libv8 from source. A compatible version of V8 3.14 is available from https://github.com/v8-314/v8. Build instructions are in the build directory.
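
Before installing the package from source, it can help to confirm the V8 development headers are actually present. A minimal check on Debian or Ubuntu might look like the following (package names differ on other distributions, as listed above):

```shell
# Report the installed version of the V8 development headers (Debian/Ubuntu),
# falling back to a message when the package is absent.
dpkg -s libv8-3.14-dev 2>/dev/null | grep '^Version' \
  || echo "libv8-3.14-dev is not installed"
```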

## Sequential API

brain follows the usual steps involved in building any model: define the model, train it, then run it. With brain you can pipe these steps together. For sequential models, the package provides four distinct network types:

- Perceptron
- Long Short Term Memory
- Liquid State Machine
- Hopfield

## XOR

To demonstrate the basic building blocks of the package we shall build a simple perceptron to solve XOR. First, all networks are initialised with the brain function.

library(brain)

brain()
## -- Brain -----------------------------------------------------------------------
##  x Architecture: undefined
##  ( ) Untrained

This initialises an empty brain with no architecture; we want to build a perceptron. We can do so with the perceptron function, which takes a layers argument. This argument can be a single integer or a vector of integers, one per layer, where each integer gives the size of that layer. In order to use the latter form we need to know the number of inputs our network is going to take.

# training data
train <- dplyr::tibble(
  input1 = c(0, 0, 1, 1),
  input2 = c(0, 1, 0, 1),
  output = c(0, 1, 1, 0)
)
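
As a sanity check, the output column above is exactly the XOR of the two input columns; in base R:

```r
# XOR truth table: output is 1 exactly when the two inputs differ
input1 <- c(0, 0, 1, 1)
input2 <- c(0, 1, 0, 1)
output <- as.integer(xor(input1 == 1, input2 == 1))
output # 0 1 1 0
```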

Our network will take two inputs, input1 and input2, so we need two neurons in the input layer. The output of the network (train$output) is a single number (0 or 1), so we need one neuron in the output layer.

# 2 input neurons
# 3 hidden neurons
# 1 output neuron
brain() %>%
  perceptron(c(2, 3, 1))
## -- Brain -----------------------------------------------------------------------
##  v Architecture: perceptron
##  = Layers:
##  > input: logistic - 2
##  = hidden: logistic - [3]
##  < output: logistic - 1
##  ( ) Untrained

Since the output is either 0 or 1 we can use either a tanh or a sigmoid (logistic) activation function. The latter is actually the default in brain, but we'll state tanh explicitly to demonstrate how it works.

brain() %>%
  perceptron(c(2, 3, 1)) %>%
  squash_input(squash_tanh()) %>%
  squash_hidden(squash_tanh()) %>%
  squash_output(squash_tanh())
## -- Brain -----------------------------------------------------------------------
##  v Architecture: perceptron
##  = Layers:
##  > input: tanh - 2
##  = hidden: tanh - [3]
##  < output: tanh - 1
##  ( ) Untrained

Now we need to pass our training dataset (train) to the model.

brain() %>%
  perceptron(c(2, 3, 1)) %>%
  squash_input(squash_tanh()) %>%
  squash_hidden(squash_tanh()) %>%
  squash_output(squash_tanh()) %>%
  train_data(train) %>%
  train_input(input1, input2) %>%
  train_output(output)
## -- Brain -----------------------------------------------------------------------
##  v Architecture: perceptron
##  = Layers:
##  > input: tanh - 2
##  = hidden: tanh - [3]
##  < output: tanh - 1
##  ( ) Untrained

Finally we can pass training options with train_opts; it'd be good to increase the number of iterations.
brain() %>%
  perceptron(c(2, 3, 1)) %>%
  squash_input(squash_tanh()) %>%
  squash_hidden(squash_tanh()) %>%
  squash_output(squash_tanh()) %>%
  train_data(train) %>%
  train_input(input1, input2) %>%
  train_output(output) %>%
  train_opts(iterations = 2000)
## -- Brain -----------------------------------------------------------------------
##  v Architecture: perceptron
##  = Layers:
##  > input: tanh - 2
##  = hidden: tanh - [3]
##  < output: tanh - 1
##  ( ) Untrained

Note that the above merely sets the options but does not actually train the model. Once we have specified inputs and outputs we can train the model with the train function. The latter takes one required argument, cost: the cost function to use, as returned by cost_function. In our case we want to use cross-entropy.

br <- brain() %>%
  perceptron(c(2, 3, 1)) %>%
  squash_input(squash_tanh()) %>%
  squash_hidden(squash_tanh()) %>%
  squash_output(squash_tanh()) %>%
  train_data(train) %>%
  train_input(input1, input2) %>%
  train_output(output) %>%
  train_opts(iterations = 2000) %>%
  train(
    cost = cost_function("cross_entropy")
  )
## -- Training --------------------------------------------------------------------
##  x Error:
##  i Iterations: 1
##  > Time: 6 ms

That is our model built. Let's define a test set and see how it fares, using the activate function.

test <- dplyr::tibble(
  input1 = c(0, 0, 1),
  input2 = c(0, 1, 0),
  expected = c(0, 1, 1)
)

br <- br %>%
  activate_data(test) %>%
  activate(input1, input2)

We can then get the activations with get_activations.

get_activations(br)
##           [,1]
## [1,] 0.2796627
## [2,] 0.2768219
## [3,] 0.2914401

Our activation function (tanh) returns values between -1 and 1, so we need to round them in order to get our "real" output.

get_activations(br) %>% round()
##      [,1]
## [1,]    0
## [2,]    0
## [3,]    0

To clarify the output, let's compare it to the expected output.
output <- get_activations(br) %>%
  round() %>%
  unlist()

cbind(test, output)
##   input1 input2 expected output
## 1      0      0        0      0
## 2      0      1        1      0
## 3      1      0        1      0

Not quite correct: only the first prediction matches the expected output. The training report above stopped after a single iteration, so this network has not actually learned XOR and would need further training.

## Multiclass

Let's use the iris dataset and predict the species of a flower given measurements of its petals. This is really just for demonstration purposes, as we do not have enough data to make an accurate prediction with brain.

brain, like all neural networks, expects inputs to range between 0 and 1, so we need to rescale the data.

library(dplyr)

df <- iris %>%
  mutate_if(is.factor, as.integer) %>%
  mutate_if(is.numeric, balance) %>%
  mutate(id = 1:n())

train <- df %>%
  group_by(Species) %>%
  sample_frac(.95) %>%
  ungroup()

test <- df %>%
  anti_join(train, by = "id")

br <- brain() %>%
  perceptron(c(2, 6, 1)) %>%
  train_data(train) %>%
  train_input(Petal.Length, Petal.Width) %>%
  train_output(Species) %>%
  train_opts(iterations = 10000) %>%
  train(
    cost_function("cross_entropy")
  )
## -- Training --------------------------------------------------------------------
##  x Error: 0.2565802
##  i Iterations: 10000
##  > Time: 11461 ms

br <- br %>%
  activate_data(test) %>%
  activate(
    Petal.Length,
    Petal.Width
  )

act <- get_activations(br)

tibble(
  predicted = round(act[,1], 1),
  actual = test$Species
)
## # A tibble: 6 x 2
##   predicted actual
##       <dbl>  <dbl>
## 1       0      0
## 2       0      0
## 3       0.9    0.5
## 4       0.9    0.5
## 5       1      1
## 6       1      1
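
The balance helper rescales inputs to the 0–1 range the network expects. Assuming it is a plain min–max rescaling (brain's internals are not shown here), the equivalent base R would be:

```r
# Min-max rescaling to [0, 1]; presumed equivalent of brain's balance()
rescale01 <- function(x) (x - min(x)) / (max(x) - min(x))

rescale01(c(1, 2, 3)) # 0.0 0.5 1.0
```

This is how the integer-coded Species values 1, 2, 3 become the 0, 0.5, 1 values seen in the actual column above.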

## Continuous

brain can also handle a continuous output: let's predict eruption durations from waiting times in the faithful dataset.

data("faithful")

geyser <- faithful %>%
  mutate(
    id = 1:n(),
    eruptions = balance(eruptions),
    waiting = balance(waiting)
  )

train <- sample_frac(geyser, .98)

test <- anti_join(geyser, train, by = "id")

br <- brain() %>%
  perceptron(c(1, 3, 1)) %>%
  train_data(train) %>%
  train_input(waiting) %>%
  train_output(eruptions) %>%
  train_opts(iterations = 200) %>%
  train(
    cost = cost_function("cross_entropy")
  )
## -- Training --------------------------------------------------------------------
##  x Error: 0.4857029
##  i Iterations: 200
##  > Time: 580 ms
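
The cross-entropy cost passed to train can be sketched in base R for a binary target. This is the standard formula, not brain's internal implementation:

```r
# Binary cross-entropy: heavily penalises confident wrong predictions
cross_entropy <- function(y, p) {
  -mean(y * log(p) + (1 - y) * log(1 - p))
}

cross_entropy(c(1, 0), c(0.9, 0.1)) # ~0.105
```
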
br <- br %>%
  activate_data(test) %>%
  activate(waiting)

act <- get_activations(br)

library(echarts4r)

tibble(
  id = 1:5,
  predicted = act[,1],
  actual = test$eruptions,
  residuals = actual - predicted
) %>%
  e_charts(actual) %>%
  e_scatter(predicted, color = "yellow") %>%
  e_lm(predicted ~ actual, color = "white") %>%
  e_x_axis(splitLine = FALSE) %>%
  e_title("Residuals vs Predicted") %>%
  e_theme("dark")
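
To quantify the fit beyond the residuals plot, a root mean squared error can be computed from the same predicted/actual pairs. A generic base R sketch, using illustrative vectors rather than the model output above:

```r
# Root mean squared error between observed and predicted values
rmse <- function(actual, predicted) sqrt(mean((actual - predicted)^2))

rmse(c(0.2, 0.5, 0.9), c(0.25, 0.45, 0.8)) # ~0.0707
```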