MetaMuse tackles LLM bias toward generic algorithms – The study finds that large language models tend to propose well‑known, generic designs, limiting the creative leaps needed to explore discontinuous solution spaces. To overcome this, the authors propose MetaMuse, a framework that guides ideation through structured self‑reflection. The approach is detailed in a paper presented at ICLR [0].
Three self‑reflection principles guide MetaMuse’s ideation – First, solution diversity and usefulness are measured by concrete performance metrics rather than in an abstract idea space. Second, external stimuli steer the generation process instead of relying on the model’s internal randomness. Third, waypoint reasoning builds executable solutions step by step rather than through free‑form chain‑of‑thought. Together, these principles aim to produce more practical algorithms [0].
MetaMuse improves cache replacement at a global cloud provider – In experiments, the framework generated a cache‑replacement policy that reduced cache misses by up to 35.76 % compared with baseline heuristics. This demonstrates the system’s ability to create high‑performing solutions for real‑world infrastructure challenges [0].
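The paper’s generated policy is not reproduced in this summary; for context, here is a minimal sketch of a classic baseline heuristic (least‑recently‑used eviction) of the kind such generated policies are typically compared against. The class name and structure are illustrative assumptions, not the MetaMuse‑generated algorithm:

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used replacement: a standard baseline heuristic.
    (Illustrative only; not the MetaMuse-generated policy.)"""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order tracks recency
        self.hits = 0
        self.misses = 0

    def access(self, key):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)  # mark as most recently used
        else:
            self.misses += 1
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict least recently used
            self.store[key] = True

cache = LRUCache(capacity=2)
for k in [1, 2, 1, 3, 1]:
    cache.access(k)
print(cache.hits, cache.misses)  # → 2 3
```

A generated policy would plug into the same access interface, and improvements like the reported miss reduction would be measured against baselines of this sort on real workload traces.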
MetaMuse also optimizes online bin packing, cutting bin usage – The generated algorithm lowered bin usage by up to 30.93 % in online bin‑packing tasks, another critical problem for large‑scale cloud operations. The results highlight MetaMuse’s versatility across distinct optimization problems [0].
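In online bin packing, items arrive one at a time and must be placed immediately without knowledge of future arrivals. As a point of reference (the MetaMuse‑generated algorithm itself is not given in this summary), a minimal sketch of the standard first‑fit heuristic often used as a baseline, assuming unit‑capacity bins:

```python
def first_fit(items, bin_capacity=1.0):
    """First-fit online bin packing: place each arriving item into the
    first open bin with enough remaining space, opening a new bin if
    none fits. (Illustrative baseline; not the MetaMuse-generated
    algorithm.)"""
    bins = []  # remaining capacity of each open bin
    for size in items:
        for i, remaining in enumerate(bins):
            if size <= remaining:
                bins[i] -= size
                break
        else:
            bins.append(bin_capacity - size)  # open a new bin
    return len(bins)

print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2]))  # → 3
```

Reducing bin usage by the reported margin means the generated heuristic packs the same item stream into substantially fewer bins than simple rules like this one.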
The research was published in ICLR 2026 by Liang and Gao – The paper, titled “Algorithm Generation via Creative Ideation,” appears in the International Conference on Learning Representations proceedings. It was posted online on 2026‑04‑23 and is accessible through Microsoft Research [0].