Particle swarm optimization: Difference between revisions

Revision as of 18:11, 4 August 2015

Particle swarm optimization is a draft programming task. It is not yet considered ready to be promoted as a complete task, for reasons that should be found in its talk page.

Particle Swarm Optimization (PSO) is an optimization method in which multiple candidate solutions ('particles') migrate through the solution space under the influence of local and global best known positions. PSO does not require that the objective function be differentiable and can optimize over very large problem spaces, but is not guaranteed to converge. The method should be demonstrated by application of the functions recommended below, and possibly other standard or well-known optimization test cases.

The goal of parameter selection is to ensure that the global minimum is discriminated from any local minima, that the minimum is accurately determined, and that convergence is achieved with acceptable resource usage. To provide a common basis for comparing implementations, the following test cases and parameter sets are recommended (an illustrative sketch using the McCormick settings follows the list):

  • McCormick function - an easier case, with a single minimum; recommended parameters: omega = 0, phi p = 0.6, phi g = 0.3, number of particles = 100, number of iterations = 40
  • Eggholder function - a challenging case, with multiple similar minima; recommended parameters: TBD ...
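
Using the recommended McCormick settings above (omega = 0, phi p = 0.6, phi g = 0.3, 100 particles, 40 iterations), the basic "global best" PSO update could be sketched roughly as follows. This is only an informal illustration under those assumptions, not a reference implementation; the names pso and mccormick, the random initialisation, and the absence of any bounds handling are choices made here for brevity.

<lang python>import math
import random

def mccormick(p):
    """McCormick function: sin(x+y) + (x-y)^2 - 1.5x + 2.5y + 1."""
    x, y = p
    return math.sin(x + y) + (x - y) ** 2 - 1.5 * x + 2.5 * y + 1

def pso(f, lo, hi, omega=0.0, phi_p=0.6, phi_g=0.3, particles=100, iterations=40):
    dims = len(lo)
    # random start positions inside the search box, zero initial velocities
    pos = [[lo[d] + random.random() * (hi[d] - lo[d]) for d in range(dims)]
           for _ in range(particles)]
    vel = [[0.0] * dims for _ in range(particles)]
    pbest = [q[:] for q in pos]                  # per-particle best positions
    pval = [f(q) for q in pos]                   # per-particle best values
    g = min(range(particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]           # global best position / value
    for _ in range(iterations):
        for i in range(particles):
            for d in range(dims):
                rp, rg = random.random(), random.random()
                vel[i][d] = (omega * vel[i][d]
                             + phi_p * rp * (pbest[i][d] - pos[i][d])
                             + phi_g * rg * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:                      # new personal best
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:                     # new global best
                    gbest, gval = pos[i][:], v
    return gbest, gval

best, val = pso(mccormick, [-1.5, -3.0], [4.0, 4.0])
print(best, val)    # typically lands near f(-0.547, -1.547) ≈ -1.913</lang>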

References:

  • Particle Swarm Optimization: https://en.wikipedia.org/wiki/Particle_swarm_optimization
  • Virtual Library of Optimization Test Functions: http://www.sfu.ca/~ssurjano/optimization.html

J

<lang J>load 'format/printf'

pso_init =: verb define

  'Min Max parameters nParticles' =. y
  'Min: %j\nMax: %j\nomega, phip, phig: %j\nnParticles: %j\n' printf Min;Max;parameters;nParticles
  nDims =. #Min
  pos =. Min +"1 (Max - Min) *"1 (nParticles,nDims) ?@$ 0
  bpos =. pos
  bval =. (#pos) $ _
  vel  =. ($pos) $ 0
  0;_;_;Min;Max;parameters;pos;vel;bpos;bval      NB. initial state

)

pso =: adverb define

  NB. previous state
  'iter gbpos gbval Min Max parameters pos vel bpos0 bval' =. y 
  NB. evaluate
  val    =. u"1 pos
  NB. update
  better =. val < bval
  bpos   =. (better # pos) (I. better)} bpos0
  bval   =. u"1 bpos
  gbval  =. <./ bval
  gbpos  =. bpos {~ (i. <./) bval
  NB. migrate
  'omega phip phig' =. parameters
  rp  =. (#pos) ?@$ 0
  rg  =. ? 0
  vel =. (omega*vel) + (phip * rp * bpos - pos) + (phig * rg * gbpos -"1 pos)
  pos =. pos + vel
  NB. reset out-of-bounds particles
  bad    =. +./"1 (Min >"1 pos) ,. (pos >"1 Max)
  newpos =. Min +"1 (Max-Min) *"1 ((+/bad),#Min) ?@$ 0
  pos    =. newpos (I. bad)} pos
  iter   =. >: iter
  NB. new state
  iter;gbpos;gbval;Min;Max;parameters;pos;vel;bpos;bval

)

reportState=: 'Iteration: %j\nGlobalBestPosition: %j\nGlobalBestValue: %j\n' printf 3&{.</lang>

Apply to McCormick Function:

<lang J>   require 'trig'

  mccormick =: sin@(+/) + *:@(-/) + 1 + _1.5 2.5 +/@:* ]
  state =: pso_init _1.5 _3 ; 4 4 ; 0 0.6 0.3; 100

Min: _1.5 _3
Max: 4 4
omega, phip, phig: 0 0.6 0.3
nParticles: 100

  state =: (mccormick pso)^:40 state
  reportState state

Iteration: 40
GlobalBestPosition: _0.547399 _1.54698
GlobalBestValue: _1.91322</lang>

ooRexx

<lang oorexx>/* REXX ---------------------------------------------------------------
* Test for McCormick function
*--------------------------------------------------------------------*/
Numeric Digits 16
Parse Value '-.5 -1.5 1' With x y d
fmin=1e9
Call refine x,y
Do r=1 To 10
 d=d/5
 Call refine xmin,ymin
 End

Say 'which is better (less) than'
Say '        f(-.54719,-1.54719)='f(-.54719,-1.54719)
Say 'and differs from published  -1.9133'
Exit

refine: Parse Arg xx,yy
Do x=xx-d To xx+d By d/2

 Do y=yy-d To yy+d By d/2
   f=f(x,y)
   If f<fmin Then Do
     Say x y f
     fmin=f
     xmin=x
     ymin=y
     End
   End
 End

Return

f: Parse Arg x,y
res=rxcalcsin(x+y,16,'R')+(x-y)**2-1.5*x+2.5*y+1
Return res

::requires rxmath library</lang>
Output:
-1.5 -2.5 -1.243197504692072
-1.0 -2.0 -1.641120008059867
-0.5 -1.5 -1.909297426825682
-0.54 -1.54 -1.913132979507516
-0.548 -1.548 -1.913221840016527
-0.5480 -1.5472 -1.913222034492829
-0.5472 -1.5472 -1.913222954970650
-0.54720000 -1.54719872 -1.913222954973731
-0.54719872 -1.54719872 -1.913222954978670
-0.54719872 -1.54719744 -1.913222954978914
-0.54719744 -1.54719744 -1.913222954981015
-0.5471975424 -1.5471975424 -1.913222954981036
which is better (less) than
        f(-.54719,-1.54719)=-1.913222954882273
and differs from published  -1.9133
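
Note that the listing above does not itself use PSO: it homes in on the minimum with a repeated grid search, scanning a 5x5 grid around the best point found so far and then shrinking the grid spacing. A loose Python paraphrase of that strategy, offered only as a sketch (the function name refine and the round count echo the listing above):

<lang python>import math

def f(x, y):                                    # the McCormick function
    return math.sin(x + y) + (x - y) ** 2 - 1.5 * x + 2.5 * y + 1

def refine(x0=-0.5, y0=-1.5, d=1.0, rounds=11):
    """Scan a 5x5 grid with spacing d/2 around the best point so far,
    then shrink d by a factor of 5 and repeat."""
    best = (x0, y0, f(x0, y0))
    for _ in range(rounds):
        xc, yc = best[0], best[1]
        for i in range(-2, 3):
            for j in range(-2, 3):
                x, y = xc + i * d / 2, yc + j * d / 2
                v = f(x, y)
                if v < best[2]:
                    best = (x, y, v)
        d /= 5
    return best

print(refine())   # converges toward x ≈ -0.54719, y ≈ -1.54719, f ≈ -1.9132</lang>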

REXX

Translation of: ooRexx

This REXX version uses a large   numeric digits   setting, but only displays 16 digits.

The numeric precision is limited only by the number of decimal digits in the   pi   variable   (in this case, 77).

<lang rexx>/*REXX pgm calc. Particle Swarm Optimization as it migrates through a solution*/
numeric digits length(pi());  sDig=16  /*SDIG:  the number of displayed digits*/
parse arg x y d p .                    /*obtain optional arguments from the CL*/
if x=='' | x==','  then x= -0.5        /*X not defined?  Then use the default.*/
if y=='' | y==','  then y= -1.5        /*Y  "     "        "   "   "     "    */
if d=='' | d==','  then d=  1          /*D  "     "        "   "   "     "    */
if p=='' | p==','  then p=  1e12       /*P  "     "        "   "   "     "    */
minF=p                                 /*start with a very large minimum.     */
say center('X', sDig+3, '═') center('Y', sDig+3, '═') center('F', sDig+3, '═')
call refine x,y

               do r=1  for 10; d=d*.5
               call refine minX, minY
               end   /*r*/

say
say 'Which is better (less) than the global minimum at:'
say '            f(-.54719, -1.54719)  ───► '  fmt(f(-.54719, -1.54719))
say 'The published global minimum is:        -1.9133'
exit
/*────────────────────────────────────────────────────────────────────────────*/
refine: parse arg xx,yy;  dh=d * 0.5

         do   x=xx-d  to xx+d  by dh
           do y=yy-d  to yy+d  by dh;  f=f(x,y);   if f>=minF  then iterate
           say fmt(x) fmt(y) fmt(f);   minF=f;   minX=x;  minY=y
           end  /*y*/
         end    /*x*/

return
/*────────────────────────────────────────────────────────────────────────────*/
fmt: parse arg ?;  ?=format(?,,sDig)   /*format number with Sdig decimal digs.*/
     L=length(?);  if pos(.,?)\==0  then ?=strip(strip(?,'T',0),'T',.)
     return left(?,L)
/*────────────────────────────────────────────────────────────────────────────*/
sin: procedure;  parse arg x;  x=r2r(x);  numeric fuzz 5;  z=x;  _=x;  q=x*x

      do k=2  by 2  until p=z; p=z; _=-_*q/(k*(k+1)); z=z+_; end;      return z

/*──────────────────────────────────one─liner subroutines──────────────────────*/
f:   procedure;  parse arg a,b;  return sin(a+b) + (a-b)**2 - 1.5*a + 2.5*b + 1
pi:  pi=3.1415926535897932384626433832795028841971693993751058209749445923078164062862;  return pi
r2r: return arg(1) // (pi()*2)         /*normalize radians ───► a unit circle.*/</lang>

Output when using the default inputs:

═════════X═════════ ═════════Y═════════ ═════════F═════════
-1.5                -2.5                -1.2431975046920717
-1                  -2                  -1.6411200080598672
-0.5                -1.5                -1.9092974268256817
-0.5625             -1.5625             -1.912819789818452
-0.5625             -1.546875           -1.9128819293954732
-0.546875           -1.546875           -1.9132227747573614
-0.54736328125      -1.54736328125      -1.9132229074107836

Which is better (less) than the global minimum at:
            f(-.54719, -1.54719)  ───►  -1.9132229548822736
The published global minimum is:        -1.9133
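
The extra precision in the REXX version comes from its hand-rolled Taylor-series sine, evaluated at the precision of the stored pi constant (77 digits). A rough Python analogue using the standard decimal module is sketched below; dec_sin, its defaults, and the test value are illustrative only and not part of the REXX entry.

<lang python>from decimal import Decimal, getcontext

PI = Decimal('3.1415926535897932384626433832795028841971693993751058209749445923078164062862')

def dec_sin(x, digits=77):
    """Taylor-series sine at arbitrary precision; the loop stops once a term
    no longer changes the running sum, like the 'until p=z' loop in sin:."""
    getcontext().prec = digits + 5      # work with a few guard digits
    x = Decimal(str(x)) % (2 * PI)      # reduce to one turn, like r2r:
    term = total = x
    q = x * x
    k = 2
    while True:
        term = -term * q / (k * (k + 1))
        if total + term == total:       # further terms are below the precision
            break
        total += term
        k += 2
    getcontext().prec = digits
    return +total                       # round back to the requested precision

print(dec_sin('-2.09439', 16))          # sin(x+y) near the McCormick minimum</lang>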