Multiple regression

Task
You are encouraged to solve this task according to the task description, using any language you may know.

Given a set of data vectors in the following format:

<math>y = \{ y_1, y_2, ..., y_n \}\,</math>

<math>X_i = \{ x_{i1}, x_{i2}, ..., x_{in} \}, i \in 1..k\,</math>

Compute the vector <math>\beta = \{ \beta_1, \beta_2, ..., \beta_k \}</math> using ordinary least squares regression using the following equation:

<math>y_j = \sum_i \beta_i \cdot x_{ij} ,\quad j \in 1..n</math>

You can assume y is given to you as an array, and x is given to you as a two-dimensional array.

Note: This is more general than [[Polynomial Fitting]], which only deals with 2 datasets and only deals with polynomial equations. Ordinary least squares can deal with an arbitrary number of datasets (limited by the processing power of the machine) and can have more advanced equations such as:

<math>y = \beta_1\log(x_1) + \beta_2 2^{x_1} + \beta_3\sin(x_2)</math>
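For orientation, the whole task can be sketched in a few lines of NumPy-based Python (an illustrative sketch only, not one of the solutions below); it reuses the 3×3 system that appears in the Haskell and Ursala entries:
<lang python>import numpy as np

# Design matrix and observations taken from the Haskell/Ursala examples below.
X = np.array([[ 7.589183,   1.703609, -4.477162],
              [-4.597851,   9.434889, -6.543450],
              [ 0.4588202, -6.115153,  1.331191]])
y = np.array([1.745005, -4.448092, -4.160842])

# Ordinary least squares: minimise ||X.beta - y||^2.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # approximately [0.9336, 1.1013, 1.6118], matching the solutions below</lang>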

=={{header|Haskell}}==
Using package [http://hackage.haskell.org/package/hmatrix hmatrix] from HackageDB
<lang haskell>import Numeric.LinearAlgebra
import Numeric.LinearAlgebra.LAPACK

m :: Matrix Double
m = (3><3)
  [ 7.589183,  1.703609, -4.477162,
   -4.597851,  9.434889, -6.543450,
    0.4588202, -6.115153,  1.331191]

v :: Matrix Double
v = (3><1)
  [ 1.745005, -4.448092, -4.160842]</lang>
Using lapack::dgels
<lang haskell>*Main> linearSolveLSR m v
(3><1)
[ 0.9335611922087276
, 1.101323491272865
, 1.6117769115824 ]</lang>
Or
<lang haskell>*Main> inv m `multiply` v
(3><1)
[ 0.9335611922087278
, 1.101323491272865
, 1.6117769115824006 ]</lang>
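The first form goes through LAPACK's dgels least-squares driver (as noted above), so it also handles over-determined, non-square systems; the explicit inverse only applies to the square, exactly determined case shown here.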

=={{header|J}}==

<lang j>   NB. Wikipedia data
   x=: 1.47 1.50 1.52 1.55 1.57 1.60 1.63 1.65 1.68 1.70 1.73 1.75 1.78 1.80 1.83
   y=: 52.21 53.12 54.48 55.84 57.20 58.57 59.93 61.29 63.11 64.47 66.28 68.10 69.92 72.19 74.46
   y %. x ^/ i.3   NB. calculate coefficients b1, b2 and b3 for 2nd degree polynomial
128.813 _143.162 61.9603</lang>

Breaking it down:
<lang j>   X=: x ^/ i.3                  NB. form Design matrix
   X=: (x^0) ,. (x^1) ,. (x^2)   NB. equivalent of previous line
   4{.X                          NB. show first 4 rows of X
1 1.47 2.1609
1  1.5   2.25
1 1.52 2.3104
1 1.55 2.4025
   NB. Where y is a set of observations and X is the design matrix
   NB. y %. X does matrix division and gives the regression coefficients
   y %. X
128.813 _143.162 61.9603</lang>
In other words beta=: y %. X is the equivalent of:

<math>\beta = (X^\mathrm{T}X)^{-1}X^\mathrm{T}y</math>

To confirm:
<lang j>   mp=: +/ .*                    NB. matrix product
                                 NB. %.X is matrix inverse of X
                                 NB. |:X is transpose of X
   ((%.(|:X) mp X) mp |:X) mp y
128.814 _143.163 61.9606</lang>

LAPACK routines are also available via the Addon math/lapack.
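For readers unfamiliar with J's notation, the same design-matrix/normal-equations computation can be sketched with NumPy (an illustration of the method above, not part of the J solution):
<lang python>import numpy as np

x = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
              1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
y = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
              63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])

# Design matrix with columns x^0, x^1, x^2 (the J expression  x ^/ i.3).
X = np.vander(x, 3, increasing=True)

# beta = (X^T X)^-1 X^T y, computed by solving the normal equations
# rather than forming the explicit inverse.
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)   # approximately [128.81, -143.16, 61.96]</lang>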

=={{header|Python}}==

Using {{libheader|matplotlib}}, the following {{libheader|IPython}} session gives:

<lang python>In [7]: x = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]

In [8]: y = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]

In [9]: polyfit(x, y, 2)
Out[9]: array([  61.96032544, -143.16202287,  128.81280358])</lang>
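Note that polyfit (available in the pylab namespace via NumPy) returns coefficients with the highest degree first, so the array above corresponds to 61.96*x^2 - 143.16*x + 128.81, the same fit the other solutions report in the opposite order. Outside a pylab session the call just needs an explicit import; a minimal equivalent, assuming NumPy is installed:
<lang python>import numpy as np

x = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
y = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]

# Highest-degree coefficient comes first in the returned array.
print(np.polyfit(x, y, 2))</lang>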

=={{header|R}}==

R provides the lm() function for linear regression.

<lang R>## Wikipedia Data
x <- c(1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83)
y <- c(52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46)

lm( y ~ x + I(x^2))</lang>
Producing output,

Call:
lm(formula = y ~ x + I(x^2))

Coefficients:
(Intercept)            x       I(x^2)  
     128.81      -143.16        61.96  

A simple implementation of multiple regression in native R is useful to illustrate R's model description and linear algebra capabilities.

<lang R>simpleMultipleReg <- function(formula) {

   ## parse and evaluate the model formula
   mf <- model.frame(formula)
   ## create design matrix
   X <- model.matrix(attr(mf, "terms"), mf)
   ## create dependent variable
   Y <- model.response(mf)
   ## solve
   solve(t(X) %*% X) %*% t(X) %*% Y

}

simpleMultipleReg(y ~ x + I(x^2))</lang>

This produces the same coefficients as lm():

                  [,1]
(Intercept)  128.81280
x           -143.16202
I(x^2)        61.96033


A more efficient way to solve the normal equations <math>(X^\mathrm{T}X)\beta = X^\mathrm{T}Y</math> than the method above is to solve the linear system directly, using the crossprod function.

<lang R>solve( crossprod(X), crossprod(X, Y))</lang>
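Here crossprod(X) computes <math>X^\mathrm{T}X</math> and crossprod(X, Y) computes <math>X^\mathrm{T}Y</math>, so solve() works on the normal equations directly rather than forming <math>(X^\mathrm{T}X)^{-1}</math> explicitly, which is both cheaper and numerically better behaved.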

=={{header|Ruby}}==

Using the standard library Matrix class:

<lang ruby>require 'matrix'

def regression_coefficients y, x
  y = Matrix.column_vector y.map { |i| i.to_f }
  x = Matrix.columns x.map { |xi| xi.map { |i| i.to_f }}
  (x.t * x).inverse * x.t * y
end</lang>

Testing:
<lang ruby>puts regression_coefficients([1, 2, 3, 4, 5], [ [2, 1, 3, 4, 5] ])</lang>
Output:

Matrix[[0.981818181818182]]
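As a quick sanity check on that output: with a single predictor column and no intercept term, ordinary least squares reduces to a ratio of sums, which matches the printed value:

<math>\beta = \frac{\sum_i x_i y_i}{\sum_i x_i^2} = \frac{54}{55} \approx 0.9818</math>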

=={{header|Tcl}}==

Uses the {{libheader|tcllib}} linear algebra package.

<lang tcl>package require math::linearalgebra

namespace eval multipleRegression {
    namespace export regressionCoefficients
    namespace import ::math::linearalgebra::*

    # Matrix inversion is defined in terms of Gaussian elimination
    # Note that we assume (correctly) that we have a square matrix
    proc invert {matrix} {
        solveGauss $matrix [mkIdentity [lindex [shape $matrix] 0]]
    }
    # Implement the Ordinary Least Squares method
    proc regressionCoefficients {y x} {
        matmul [matmul [invert [matmul $x [transpose $x]]] $x] $y
    }
}
namespace import multipleRegression::regressionCoefficients</lang>
Using an example from the Wikipedia page on the correlation of height and weight:
<lang tcl># Simple helper just for this example
proc map {n exp list} {
    upvar 1 $n v
    set r {}; foreach v $list {lappend r [uplevel 1 $exp]}; return $r
}

# Data from wikipedia
set x {
    1.47 1.50 1.52 1.55 1.57 1.60 1.63 1.65 1.68 1.70 1.73 1.75 1.78 1.80 1.83
}
set y {
    52.21 53.12 54.48 55.84 57.20 58.57 59.93 61.29 63.11 64.47 66.28 68.10
    69.92 72.19 74.46
}

# Wikipedia states that fitting up to the square of x[i] is worth it
puts [regressionCoefficients $y [map n {map v {expr {$v**$n}} $x} {0 1 2}]]</lang>
Produces this output (a 3-vector of coefficients):

128.81280358170625 -143.16202286630732 61.96032544293041

=={{header|Ursala}}==

This exact problem is solved by the DGELSD function from the Lapack library [1], which is callable in Ursala like this.
<lang Ursala>regression_coefficients = lapack..dgelsd</lang>
test program:
<lang Ursala>x =

<
   <7.589183e+00,1.703609e+00,-4.477162e+00>,
   <-4.597851e+00,9.434889e+00,-6.543450e+00>,
   <4.588202e-01,-6.115153e+00,1.331191e+00>>

y = <1.745005e+00,-4.448092e+00,-4.160842e+00>

#cast %eL

example = regression_coefficients(x,y)</lang>
The matrix x needn't be square, and has one row for each data point. The length of y must equal the number of rows in x, and the number of coefficients returned will be the number of columns in x. It would be more typical in practice to initialize x by evaluating a set of basis functions chosen to model some empirical data, but the regression solver is indifferent to the model.

output:

<9.335612e-01,1.101323e+00,1.611777e+00>

A similar method can be used for regression with complex numbers by substituting zgelsd for dgelsd, above.
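As the paragraph above notes, x needn't be square: with more data points than coefficients the same call performs an over-determined least-squares fit against whatever basis functions were used to build x. A minimal sketch of that situation (in Python with NumPy, using synthetic data generated inside the example; not part of the Ursala solution):
<lang python>import numpy as np

# Synthetic predictor values and known coefficients, used only to
# exercise the solver; no claim is made about real data.
x1 = np.linspace(1.0, 3.0, 20)
x2 = np.linspace(0.0, 6.0, 20)
true_beta = np.array([2.0, -0.5, 3.0])

# One row per data point, one column per basis function
# (the log / 2^x / sin basis from the task note).
X = np.column_stack([np.log(x1), 2.0 ** x1, np.sin(x2)])
y = X @ true_beta

# Over-determined system: 20 rows, 3 columns; lstsq returns the
# least-squares coefficients, here recovering true_beta exactly.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)</lang>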