Multiple regression: Difference between revisions

From Rosetta Code
m (→‎{{header|Tcl}}: whitespace)
(added Ursala)
Produces this output (a 3-vector of coefficients):
<pre>128.81280358170625 -143.16202286630732 61.96032544293041</pre>

=={{header|Ursala}}==
This exact problem is solved by the DGELSD function from
the LAPACK library [http://www.netlib.org/lapack/lug/node27.html],
which is callable in Ursala like this.
<lang Ursala>
regression_coefficients = lapack..dgelsd
</lang>
Test program:
<lang Ursala>
x =

<
<7.589183e+00,1.703609e+00,-4.477162e+00>,
<-4.597851e+00,9.434889e+00,-6.543450e+00>,
<4.588202e-01,-6.115153e+00,1.331191e+00>>

y = <1.745005e+00,-4.448092e+00,-4.160842e+00>

#cast %eL

example = regression_coefficients(x,y)
</lang>
The matrix x needn't be square, and has one row for each
data point. The length of y must equal the number of rows in
x, and the number of coefficients returned will be the number
of columns in x. It would be more typical in practice to initialize x by
evaluating a set of basis functions chosen to model some empirical data,
but the regression solver is indifferent to the model.

output:
<pre>
<9.335612e-01,1.101323e+00,1.611777e+00>
</pre>
A similar method can be used for regression with complex numbers by substituting
zgelsd for dgelsd, above.
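For comparison, NumPy's least-squares solver calls the same LAPACK driver family (`gelsd`), so the example can be cross-checked with a short Python sketch (an illustration added here, not part of the Ursala entry):

```python
import numpy as np

# Same data as the Ursala example above.
x = np.array([[ 7.589183,   1.703609, -4.477162],
              [-4.597851,   9.434889, -6.543450],
              [ 0.4588202, -6.115153,  1.331191]])
y = np.array([1.745005, -4.448092, -4.160842])

# lstsq minimizes ||x @ beta - y|| via LAPACK's dgelsd (zgelsd for
# complex input), the same routine family the Ursala binding wraps.
beta, residuals, rank, singular_values = np.linalg.lstsq(x, y, rcond=None)
print(beta)  # the article reports ~ (0.9335612, 1.101323, 1.611777)
```

Because `lstsq` also accepts complex arrays, the same call covers the zgelsd case mentioned above.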

Revision as of 16:26, 11 August 2009

Task
Multiple regression
You are encouraged to solve this task according to the task description, using any language you may know.

Given a set of data vectors in the following format:

:<math>y = \{ y_1, y_2, ..., y_n \}</math>
:<math>X_i = \{ x_{i1}, x_{i2}, ..., x_{in} \}, \quad i \in 1 .. k</math>

Compute the vector <math>\beta = \{ \beta_1, \beta_2, ..., \beta_k \}</math> using ordinary least squares regression using the following equation:

:<math>\beta = (X^T X)^{-1} X^T y</math>

You can assume y is given to you as an array, and x is given to you as a two-dimensional array.

Note: This is more general than Polynomial Fitting, which deals with only two datasets and only with polynomial equations. Ordinary least squares can deal with an arbitrary number of datasets (limited only by the processing power of the machine) and can fit more general linear models built from arbitrary basis functions of the input.
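As a concrete sketch of the estimator (Python with NumPy, offered purely as an illustration; the function name is mine, not part of the task):

```python
import numpy as np

def regression_coefficients(y, x):
    """Ordinary least squares: solve (X^T X) beta = X^T y.

    x has one row per data point and one column per predictor.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.linalg.solve(x.T @ x, x.T @ y)

# One predictor and no intercept term:
print(regression_coefficients([1, 2, 3, 4, 5], [[2], [1], [3], [4], [5]]))
# prints approximately [0.98181818], i.e. 54/55
```

Solving the normal equations with `np.linalg.solve` rather than explicitly inverting <math>X^T X</math> is the usual, numerically safer choice.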

=={{header|Ruby}}==

Using the standard library Matrix class:

<lang ruby>require 'matrix'

def regression_coefficients(y, x)
  y = Matrix.column_vector(y.map { |i| i.to_f })
  x = Matrix.columns(x.map { |xi| xi.map { |i| i.to_f } })
  (x.t * x).inverse * x.t * y
end</lang>

Testing:
<lang ruby>puts regression_coefficients([1, 2, 3, 4, 5], [[2, 1, 3, 4, 5]])</lang>
Output:
<pre>Matrix[[0.981818181818182]]</pre>

=={{header|Tcl}}==
{{libheader|tcllib}}
Uses the tcllib linear algebra package.

<lang tcl>package require math::linearalgebra

namespace eval multipleRegression {
    namespace export regressionCoefficients
    namespace import ::math::linearalgebra::*
    # Matrix inversion is defined in terms of Gaussian elimination
    # Note that we assume (correctly) that we have a square matrix
    proc invert {matrix} {
        solveGauss $matrix [mkIdentity [lindex [shape $matrix] 0]]
    }
    # Implement the Ordinary Least Squares method
    proc regressionCoefficients {y x} {
        matmul [matmul [invert [matmul $x [transpose $x]]] $x] $y
    }
}
namespace import multipleRegression::regressionCoefficients</lang>
Using an example from the Wikipedia page on the correlation of height and weight:
<lang tcl># Simple helper just for this example
proc map {n exp list} {
    upvar 1 $n v
    set r {}; foreach v $list {lappend r [uplevel 1 $exp]}; return $r
}

# Data from wikipedia
set x {
    1.47 1.50 1.52 1.55 1.57 1.60 1.63 1.65 1.68 1.70 1.73 1.75 1.78 1.80 1.83
}
set y {
    52.21 53.12 54.48 55.84 57.20 58.57 59.93 61.29 63.11 64.47 66.28 68.10
    69.92 72.19 74.46
}

# Wikipedia states that fitting up to the square of x[i] is worth it
puts [regressionCoefficients $y [map n {map v {expr {$v**$n}} $x} {0 1 2}]]</lang>
Produces this output (a 3-vector of coefficients):
<pre>128.81280358170625 -143.16202286630732 61.96032544293041</pre>
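The same fit can be cross-checked with a short Python/NumPy sketch (an illustration added for comparison, not part of the Tcl entry): build the basis matrix whose columns are <math>x^0, x^1, x^2</math> and solve by least squares.

```python
import numpy as np

# Height/weight data from the Wikipedia example quoted above.
height = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
          1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
weight = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
          63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]

# Columns are x^0, x^1, x^2 -- the same basis the Tcl code builds.
X = np.vander(height, 3, increasing=True)
coeffs, *_ = np.linalg.lstsq(X, weight, rcond=None)
print(coeffs)  # ~ [128.8128, -143.1620, 61.9603], matching the output above
```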
