HBBOG, a Hybrid Grey Wolf Algorithm (2018 ASOC, SCI Q1 TOP journal): In-Depth Analysis and Performance Tests
Contents
- 1. Abstract
- 2. Algorithm Background
- 3. Improvement Strategies
- 4. Results
- 5. References
- 6. Code
- 7. Reader Exchange
1. Abstract
To improve the generality and performance of biogeography-based optimization (BBO), this post presents a hybrid grey wolf algorithm (HBBOG). HBBOG first removes the traditional mutation operator from BBO and introduces a differential mutation mechanism to strengthen global search; it also adopts a multi-migration operation to enhance local search. On the GWO side, an opposition-based learning strategy is introduced to avoid getting trapped in local optima. Finally, HBBOG fuses the improved BBO and GWO through an alternating single-dimension / all-dimension update strategy, balancing global exploration against local exploitation.
2. Algorithm Background
[Intelligent Algorithms] Grey Wolf Optimizer (GWO): Principles and Implementation
BBO: Biogeography-Based Optimization
3. Improvement Strategies
Differential Mutation Strategy
Differential evolution (DE) is a robust evolutionary algorithm that uses the distance and direction information carried by the current population to guide subsequent search. The differential operation in DE has strong global search ability, so the paper removes BBO's mutation operator and fuses a more efficient differential mutation operation into BBO's migration operator:
$$H_i(SIV_j) \longleftarrow H_i(SIV_j) + \alpha_d \cdot \left(H_{best}(SIV_j) - H_i(SIV_j) + H_{rn1}(SIV_j) - H_{rn2}(SIV_j)\right)$$
By introducing differential information between multiple habitats, differential mutation increases individual diversity and the population's global exploration ability. An individual combines not only its own information but also the difference between the best solution and other habitats, achieving richer information updates through the weighting mechanism.
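As a minimal illustration, the update above can be sketched in Python with NumPy (not the paper's code; `pop` is assumed sorted best-first so `pop[0]` plays the role of `H_best`, and `alpha_d` is passed in):

```python
import numpy as np

def differential_mutation(pop, i, j, alpha_d, rng):
    """Update dimension j (SIV_j) of habitat i with the differential
    mutation H_i <- H_i + alpha_d*(H_best - H_i + H_rn1 - H_rn2).

    pop is assumed sorted best-first, so pop[0] stands in for H_best;
    rn1 and rn2 are two random habitats distinct from i.
    """
    n = pop.shape[0]
    candidates = [k for k in range(n) if k != i]
    rn1, rn2 = rng.choice(candidates, size=2, replace=False)
    pop[i, j] += alpha_d * (pop[0, j] - pop[i, j] + pop[rn1, j] - pop[rn2, j])
    return pop
```

Only one dimension of one habitat changes per call, which is what preserves diversity relative to a full-vector DE step.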
Exemplar Learning Method
To make the migration operation more efficient and better guided, the paper drops BBO's roulette-wheel selection. The exemplar learning method sorts the population by fitness (HSI) and treats the individuals ranked ahead of a given individual, i.e., those with better fitness, as its candidate exemplars:
$$SI = \lceil (i-1) \cdot rand \rceil$$
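In 0-based Python terms, picking an exemplar uniformly from the individuals ranked ahead of individual `i` (population sorted best-first) might look like this sketch; the `max(1, ...)` guard for the measure-zero case `rand == 0` is my addition, not in the formula:

```python
import math
import random

def select_exemplar(i, rng=random):
    """Pick an exemplar for individual i (0-based) in a population sorted
    best-first; mirrors the 1-based MATLAB rule SI = ceil((i-1)*rand).

    Returns an index in [0, i-1]; requires i >= 1 (the best individual
    has no exemplar, and the code never selects one for it).
    """
    si = max(1, math.ceil(i * rng.random()))  # 1-based exemplar rank
    return si - 1                             # back to a 0-based index
```

Because the draw is uniform over all better-ranked individuals, fitter individuals learn from a narrower (elite) pool while poor individuals can learn from almost anyone.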
Multi-Migration Operator
To balance exploration and exploitation while enhancing BBO's global search, its local search ability must also be improved. A multi-migration operation is introduced into BBO's migration operator to replace the original mechanism:
$$H_i(SIV_j) \longleftarrow H_{SI}(SIV_j) + \alpha_h \cdot (0.5 - rand) \cdot \left(H_i(SIV_j) - H_{SI}(SIV_j)\right)$$
$$\alpha_h = 2\sqrt{t/MaxDT}$$
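A Python sketch of this operator, under the assumption that `si` is the exemplar index chosen as above and `t`/`max_dt` are the current and maximum iteration counts:

```python
import math
import random

def multi_migration(pop, i, j, si, t, max_dt, rng=random):
    """Multi-migration update of dimension j of habitat i toward its
    exemplar si: H_i <- H_SI + alpha_h*(0.5 - rand)*(H_i - H_SI),
    with alpha_h = 2*sqrt(t/MaxDT) scaling the perturbation over time."""
    alpha_h = 2.0 * math.sqrt(t / max_dt)
    pop[i][j] = pop[si][j] + alpha_h * (0.5 - rng.random()) * (pop[i][j] - pop[si][j])
    return pop
```

Since `alpha_h * (0.5 - rand)` always lies in (-1, 1], the new value never leaves the interval spanned by the old value mirrored around the exemplar, which is what makes this a local (exploitation) move.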
Alternating Single-Dimension and All-Dimension Strategy
In most evolutionary algorithms, updates are applied across all dimensions at once, which converges quickly and suits local search. The artificial bee colony algorithm and its variants instead update a single dimension at a time, which better preserves population diversity and suits global search. The alternating strategy switches flexibly between the two modes via random dimension selection and a dimension-cycling mechanism.
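The alternation can be sketched as follows (my own schematic, not the paper's code): a generic per-dimension update rule is applied either to one randomly chosen dimension or cycled over all of them.

```python
import random

def alternating_update(x, update_dim, single_dim, rng=random):
    """Apply update_dim(x, j) either to one randomly chosen dimension
    (single-dimension mode, preserving diversity) or to every dimension
    in turn (all-dimension mode, faster convergence)."""
    d = len(x)
    if single_dim:
        j = rng.randrange(d)      # random dimension selection
        x[j] = update_dim(x, j)
    else:
        for j in range(d):        # cycle through all dimensions
            x[j] = update_dim(x, j)
    return x
```

In HBBOG the choice of mode is tied to the search phase, so early iterations lean on single-dimension updates for exploration and later ones on all-dimension updates for exploitation.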
Flowchart
4. Results
5. References
[1] Zhang X, Kang Q, Cheng J, et al. A novel hybrid algorithm based on biogeography-based optimization and grey wolf optimizer[J]. Applied Soft Computing, 2018, 67: 197-214.
6. Code
function [Alpha_score,Cbest,Alpha_pos]=HBBOG(f,a,b,d,M,N)
% Hybrid Biogeography Based Optimization with Grey Wolf Optimization (HBBOG)
% BBO hybridized with differential perturbation and best-guided migration
Alpha_pos=zeros(1,d);
Alpha_score=inf; %change this to -inf for maximization problems
Beta_pos=zeros(1,d);
Beta_score=inf; %change this to -inf for maximization problems
Delta_pos=zeros(1,d);
Delta_score=inf; %change this to -inf for maximization problems
lambdaLower = 0.0; % lower bound for immigration probability per gene
lambdaUpper = 1; % upper bound for immigration probability per gene
Ib = 1; % max immigration rate for each island
Eb = 1; % max emigration rate, for each island
P = N; % max species count, for each island
dn=d+1;
X=rand(N,d+1);
aa=repmat(a,N,1);
bb=repmat(b,N,1);
X(:,1:d)=aa+(bb-aa).*X(:,1:d);
X(:,d+1)=feval(f, X(:,1:d));
fit=X(:,d+1);
X = PopSort(X);
for i=1:N
    if fit(i)<Alpha_score
        Alpha_score=fit(i); % Update alpha
        Alpha_pos=X(i,1:d);
    end
    if fit(i)>Alpha_score && fit(i)<Beta_score
        Beta_score=fit(i); % Update beta
        Beta_pos=X(i,1:d);
    end
    if fit(i)>Alpha_score && fit(i)>Beta_score && fit(i)<Delta_score
        Delta_score=fit(i); % Update delta
        Delta_pos=X(i,1:d);
    end
end
SpeciesCount=zeros(1);
lambda=zeros(1);
mu=zeros(1);
for j = 1 : N
    if X(j,d+1) < inf
        SpeciesCount(j) = P - j;
    else
        SpeciesCount(j) = 0;
    end
    lambda(j) = Ib * (1 - SpeciesCount(j) / P);
    mu(j) = Eb * SpeciesCount(j) / P;
end
lambdaMin = min(lambda);
lambdaMax = max(lambda);
lambdaScale = lambdaLower + (lambdaUpper - lambdaLower) * (lambda - lambdaMin) / (lambdaMax - lambdaMin);
Cbest=zeros(1,M);
Ped=0.5;a1=4;a2=2;
l=0;
while l<M
    p=sqrt(l/M);
    Positions=X(:,1:d);
    as=1-l*((1-0)/M);
    if l<1*M/2
        % First half of the run: improved BBO, single-dimension updates
        for k = 1 : N
            alfa=rand^1;
            j=ceil(rand*d);
            if rand < lambdaScale(k)
                % Pick a habitat from which to obtain a feature
                SelectIndex=ceil((k-1)*rand);
                if rand>p
                    if rand<0.5
                        Positions(k,j)=X(SelectIndex,j);
                    else
                        cnum=ceil(rand*d);
                        Positions(k,j)=X(SelectIndex,cnum);
                    end
                else
                    Positions(k,j)=X(SelectIndex,j)+2*p*(0.5-rand)*(X(k,j)-X(SelectIndex,j));
                end
            else
                % Differential mutation with two distinct random habitats
                rnum=ceil(N*rand);
                while rnum==k, rnum=ceil(N*rand); end
                rnum1=ceil(N*rand);
                while rnum1==k || rnum1==rnum, rnum1=ceil(N*rand); end
                Positions(k,j)=X(k,j)+alfa*(X(1,j)-X(k,j)+X(rnum1,j)-X(rnum,j));
            end
        end
    elseif rand<Ped
        % Second half, branch 1: improved GWO update
        Num=ceil(N*rand);
        for i=1:N
            if i==Num
                Positions(i,:)=a+(b-Alpha_pos); % opposition-based learning
            elseif rand<=0.5
                k=ceil(rand*d);
                X1=Alpha_pos(k)-a1*as*(2*rand-1)*abs(2*rand*Alpha_pos(k)-Positions(i,k));
                X2=Beta_pos(k)-a1*as*(2*rand-1)*abs(2*rand*Beta_pos(k)-Positions(i,k));
                X3=Delta_pos(k)-a1*as*(2*rand-1)*abs(2*rand*Delta_pos(k)-Positions(i,k));
                Positions(i,k)=(X1+X2+X3)/3; % Equation (3.7)
            else
                X1=Alpha_pos-a2*as*(2*rand(1,d)-1).*abs(2*rand(1,d).*Alpha_pos-Positions(i,:));
                X2=Beta_pos-a2*as*(2*rand(1,d)-1).*abs(2*rand(1,d).*Beta_pos-Positions(i,:));
                X3=Delta_pos-a2*as*(2*rand(1,d)-1).*abs(2*rand(1,d).*Delta_pos-Positions(i,:));
                Positions(i,:)=(X1+X2+X3)/3;
            end
        end
    else
        % Second half, branch 2: improved BBO over all dimensions
        for k = 1 : N
            alfa=rand^1;
            for j = 1 : d
                if rand < lambdaScale(k)
                    % Pick a habitat from which to obtain a feature
                    SelectIndex=ceil((k-1)*rand);
                    if rand>p
                        if rand<0.5
                            cnum=ceil(rand*d);
                            Positions(k,j)=X(SelectIndex,cnum);
                        else
                            Positions(k,j)=X(SelectIndex,j);
                        end
                    else
                        Positions(k,j)=X(SelectIndex,j)+2*as*(0.5-rand)*(X(k,j)-X(SelectIndex,j));
                    end
                else
                    rnum=ceil(N*rand);
                    while rnum==k, rnum=ceil(N*rand); end
                    rnum1=ceil(N*rand);
                    while rnum1==k || rnum1==rnum, rnum1=ceil(N*rand); end
                    Positions(k,j)=X(k,j)+alfa*(X(1,j)-X(k,j)+X(rnum1,j)-X(rnum,j));
                end
            end
        end
    end
    Positions=simplebounds(Positions,aa,bb);
    fit=feval(f,Positions);
    for i=1:N
        fitness=fit(i);
        if X(i,d+1)>fitness
            X(i,d+1)=fitness;
            X(i,1:d)=Positions(i,:);
        end
        % Update Alpha, Beta, and Delta
        if fitness<Alpha_score
            Alpha_score=fitness; % Update alpha
            Alpha_pos=Positions(i,:);
        end
        if fitness>Alpha_score && fitness<Beta_score
            Beta_score=fitness; % Update beta
            Beta_pos=Positions(i,:);
        end
        if fitness>Alpha_score && fitness>Beta_score && fitness<Delta_score
            Delta_score=fitness; % Update delta
            Delta_pos=Positions(i,:);
        end
    end
    X = PopSort(X);
    l=l+1;
    Cbest(l)=X(1,dn);
end
Alpha_score=X(1,dn);
Alpha_pos=X(1,1:d);

function s=simplebounds(s,Lb,Ub)
% Apply the lower bound
ns_tmp=s;
I=ns_tmp<Lb;
ns_tmp(I)=Lb(I);
% Apply the upper bounds
J=ns_tmp>Ub;
ns_tmp(J)=Ub(J);
% Update this new move
s=ns_tmp;