Multiple regression
Revision as of 13:20, 1 September 2009

Task
Multiple regression
You are encouraged to solve this task according to the task description, using any language you may know.

Given a set of data vectors in the following format:

<math>y = \{y_1, y_2, \ldots, y_n\}</math>

<math>X = \{x_{ij}\},\quad i \in 1 \ldots n,\ j \in 1 \ldots k</math>

Compute the vector of coefficients <math>\hat\beta</math> using ordinary least squares regression, i.e. using the equation:

<math>\hat\beta = (X'X)^{-1}X'y</math>

You can assume y is given to you as an array, and X is given to you as a two-dimensional array (one row per data point, one column per explanatory variable).

Note: This is more general than Polynomial Fitting, which deals with only two datasets and fits only polynomial equations. Ordinary least squares can deal with an arbitrary number of datasets (limited by the processing power of the machine) and with more general model equations, since the columns of X may be any functions of the input data.
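
For orientation before the individual language entries, here is a minimal sketch of the equation above in Python with NumPy (an illustration only, not one of the task solutions below; the use of NumPy and the helper name <tt>ols</tt> are assumptions, and the sample data are the Wikipedia height/weight figures used by several entries below):
<lang python># Minimal OLS sketch: beta = (X'X)^-1 X'y, computed via the normal equations.
# Assumes NumPy; the name `ols` is illustrative only.
import numpy as np

def ols(y, X):
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    # Solving X'X beta = X'y is equivalent to forming the explicit inverse,
    # but numerically better behaved.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Example: fit a 2nd-degree polynomial, i.e. basis columns x^0, x^1, x^2.
x = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
              1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
y = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
              63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])
X = np.vander(x, 3, increasing=True)   # columns: 1, x, x^2
print(ols(y, X))                       # roughly 128.81 -143.16 61.96</lang>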

J

<lang j>
   NB. Wikipedia data
   x=: 1.47 1.50 1.52 1.55 1.57 1.60 1.63 1.65 1.68 1.70 1.73 1.75 1.78 1.80 1.83
   y=: 52.21 53.12 54.48 55.84 57.20 58.57 59.93 61.29 63.11 64.47 66.28 68.10 69.92 72.19 74.46
   y %. x ^/ i.3   NB. calculate coefficients b1, b2 and b3 for a 2nd-degree polynomial
128.813 _143.162 61.9603
</lang>

Breaking it down:
<lang j>
   X=: x ^/ i.3                  NB. form the design matrix
   X=: (x^0) ,. (x^1) ,. (x^2)   NB. equivalent of the previous line
   4{.X                          NB. show first 4 rows of X
1 1.47 2.1609
1  1.5   2.25
1 1.52 2.3104
1 1.55 2.4025
   NB. Where y is the vector of observations and X is the design matrix,
   NB. y %. X does matrix division and gives the regression coefficients
   y %. X
128.813 _143.162 61.9603
</lang>
In other words <tt>beta=: y %. X</tt> is the equivalent of:

<math>\hat\beta = (X'X)^{-1}X'y</math>

To confirm:
<lang j>
   mp=: +/ .*                    NB. matrix product
                                 NB. %.X is matrix inverse of X
                                 NB. |:X is transpose of X
   ((%.(|:X) mp X) mp |:X) mp y
128.814 _143.163 61.9606
</lang>

LAPACK routines are also available via the Addon math/lapack.

Ruby

Using the standard library Matrix class:

<lang ruby>require 'matrix'

def regression_coefficients(y, x)
  y = Matrix.column_vector(y.map { |i| i.to_f })
  x = Matrix.columns(x.map { |xi| xi.map { |i| i.to_f } })
  (x.t * x).inverse * x.t * y
end</lang>

Testing:
<lang ruby>puts regression_coefficients([1, 2, 3, 4, 5], [[2, 1, 3, 4, 5]])</lang>
Output:

Matrix[[0.981818181818182]]
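
As a sanity check: with a single column and no intercept, <math>(X'X)^{-1}X'y</math> reduces to <math>\sum_i x_i y_i / \sum_i x_i^2 = 54/55 \approx 0.98182</math>, which matches the value printed above. A quick illustrative check in Python (standard library only):
<lang python># Single-regressor, no-intercept case: beta = sum(x*y) / sum(x*x).
x = [2, 1, 3, 4, 5]
y = [1, 2, 3, 4, 5]
print(sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x))  # 0.98181818...</lang>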

Tcl

Uses the Tcllib linear algebra package (<tt>math::linearalgebra</tt>).

<lang tcl>package require math::linearalgebra

namespace eval multipleRegression {
    namespace export regressionCoefficients
    namespace import ::math::linearalgebra::*

    # Matrix inversion is defined in terms of Gaussian elimination
    # Note that we assume (correctly) that we have a square matrix
    proc invert {matrix} {
        solveGauss $matrix [mkIdentity [lindex [shape $matrix] 0]]
    }
    # Implement the Ordinary Least Squares method
    proc regressionCoefficients {y x} {
        matmul [matmul [invert [matmul $x [transpose $x]]] $x] $y
    }
}
namespace import multipleRegression::regressionCoefficients</lang>
Using an example from the Wikipedia page on the correlation of height and weight:
<lang tcl># Simple helper just for this example
proc map {n exp list} {
    upvar 1 $n v
    set r {}; foreach v $list {lappend r [uplevel 1 $exp]}; return $r
}

# Data from wikipedia
set x {
    1.47 1.50 1.52 1.55 1.57 1.60 1.63 1.65 1.68 1.70 1.73 1.75 1.78 1.80 1.83
}
set y {
    52.21 53.12 54.48 55.84 57.20 58.57 59.93 61.29 63.11 64.47 66.28 68.10
    69.92 72.19 74.46
}

# Wikipedia states that fitting up to the square of x[i] is worth it
puts [regressionCoefficients $y [map n {map v {expr {$v**$n}} $x} {0 1 2}]]</lang>
Produces this output (a 3-vector of coefficients):

128.81280358170625 -143.16202286630732 61.96032544293041
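
A note on the layout used here: <tt>invert</tt> builds the inverse by Gaussian elimination against an identity matrix, and <tt>regressionCoefficients</tt> computes <math>(xx')^{-1}xy</math> because <tt>x</tt> is passed with one row per basis function rather than one row per observation. A minimal Python sketch of the same inversion idea, assuming NumPy (names are illustrative):
<lang python># Invert by solving A * Ainv = I, as the Tcl helper above does via
# Gaussian elimination. Assumes NumPy; the name `invert` is illustrative.
import numpy as np

def invert(matrix):
    matrix = np.asarray(matrix, dtype=float)
    return np.linalg.solve(matrix, np.eye(matrix.shape[0]))</lang>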

Ursala

This exact problem is solved by the DGELSD function from the LAPACK library [1], which is callable in Ursala like this:
<lang Ursala>regression_coefficients = lapack..dgelsd</lang>
Test program:
<lang Ursala>x =

<
   <7.589183e+00,1.703609e+00,-4.477162e+00>,
   <-4.597851e+00,9.434889e+00,-6.543450e+00>,
   <4.588202e-01,-6.115153e+00,1.331191e+00>>

y = <1.745005e+00,-4.448092e+00,-4.160842e+00>

#cast %eL

example = regression_coefficients(x,y)</lang>
The matrix x needn't be square, and has one row for each data point. The length of y must equal the number of rows in x, and the number of coefficients returned will be the number of columns in x. It would be more typical in practice to initialize x by evaluating a set of basis functions chosen to model some empirical data, but the regression solver is indifferent to the model.

output:

<9.335612e-01,1.101323e+00,1.611777e+00>

A similar method can be used for regression with complex numbers by substituting zgelsd for dgelsd, above.
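
For comparison, the same LAPACK driver is reachable from Python through SciPy's <tt>scipy.linalg.lstsq</tt>, whose <tt>gelsd</tt> option corresponds to DGELSD (or ZGELSD for complex data); a minimal sketch with the same test data, assuming SciPy is installed:
<lang python># Illustrative least-squares solve using the LAPACK gelsd driver.
# Assumes SciPy; variable names mirror the Ursala test program.
import numpy as np
from scipy.linalg import lstsq

x = np.array([[ 7.589183,   1.703609, -4.477162],
              [-4.597851,   9.434889, -6.543450],
              [ 0.4588202, -6.115153,  1.331191]])
y = np.array([1.745005, -4.448092, -4.160842])

coefficients, residues, rank, singular_values = lstsq(x, y, lapack_driver='gelsd')
print(coefficients)   # should reproduce the output shown above</lang>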