Abstract


Optimization now underpins almost every domain, from routing trucks on road networks to saving energy in power grids and training AI models. Classic metaheuristics such as genetic algorithms (GA), particle swarm optimization (PSO), and differential evolution (DE) perform well on many problems, but as problems grow very large or complex they begin to struggle: they often become trapped in local optima and lose the balance between exploration, which weakens, and exploitation, which slows. To improve performance, researchers began combining methods: one algorithm supplies global jumps while another refines solutions locally, and together they are stronger. This complementarity is the core idea of hybrid metaheuristics. Early work saw simple pairings such as GA with simulated annealing (SA); later, PSO was combined with DE to balance speed and accuracy. In recent years, many fusions have appeared, such as Whale Optimization with Teaching-Learning-Based Optimization, Honey Badger with Beluga Whale, and modern hybrids in which reinforcement learning or large language models act as a controller guiding the search. This survey reviews these recent hybrid-fusion methods. It presents a taxonomy of hybridization: operator-level mixing, phase-level switching, and population-level fusion. It also covers the main synergy mechanisms: reinforcement-learning switchers, adaptive weight sharing, ensemble voting, and surrogate models. Benchmarks and fair-comparison methodology are also discussed. Overall, hybrids prove more robust and converge faster than single algorithms. However, several challenges remain, including excessive parameter counts, missing convergence proofs, heavy computational cost, and poor reproducibility. Hybrids are nonetheless the road forward; the next step is to make them lean, adaptive, and truly useful across many types of problems.
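To make the taxonomy concrete, the sketch below illustrates the simplest of the three categories, phase-level switching: a global exploration phase runs first, then hands its incumbent solution to a local refinement phase. All function names are hypothetical, and the two phases use deliberately minimal stand-ins (random sampling for a GA/PSO-style explorer, hill climbing for an SA/DE-style exploiter) rather than any specific algorithm from the survey.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def global_explore(f, dim, iters, bounds=(-5.0, 5.0)):
    """Phase 1: broad random sampling stands in for a GA/PSO-style explorer."""
    best, best_val = None, float("inf")
    for _ in range(iters):
        x = [random.uniform(*bounds) for _ in range(dim)]
        v = f(x)
        if v < best_val:
            best, best_val = x, v
    return best, best_val

def local_refine(f, x, iters, step=0.5):
    """Phase 2: greedy hill climbing stands in for an SA/DE-style exploiter."""
    best, best_val = list(x), f(x)
    for _ in range(iters):
        cand = [c + random.gauss(0.0, step) for c in best]
        v = f(cand)
        if v < best_val:          # accept only improvements
            best, best_val = cand, v
        step *= 0.99              # shrink the step to intensify the search
    return best, best_val

def phase_level_hybrid(f, dim=5, explore_iters=200, refine_iters=500):
    """Explore globally, then pass the incumbent to the refinement phase."""
    x, v_explore = global_explore(f, dim, explore_iters)
    x, v_refine = local_refine(f, x, refine_iters)
    return x, v_explore, v_refine

random.seed(0)
_, v_before, v_after = phase_level_hybrid(sphere)
print(v_after <= v_before)  # True: refinement never worsens the incumbent
```

Operator-level mixing would instead blend the two methods inside a single iteration (e.g., a DE mutation step inside a PSO update), and population-level fusion would run both populations in parallel with periodic migration of individuals.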




Keywords


Hybrid Metaheuristics, Fusion Algorithms, Global Optimization, Exploration-Exploitation Balance, Reinforcement Learning