Graduate Lishuang Wang and collaborators won the GECCO 2024 Best Paper Award

On July 18, 2024, at the Genetic and Evolutionary Computation Conference (GECCO 2024) held in Melbourne, Australia, Prof. Ran Cheng’s research group was honored with the Best Paper Award. The paper, titled “Tensorized NeuroEvolution of Augmenting Topologies for GPU Acceleration,” lists graduate Lishuang Wang as the first author, undergraduate students Mengfei Zhao and Enyu Liu as the second and third authors, graduate Kebin Sun as the fourth author, and Prof. Ran Cheng as the corresponding author.

Award Certificate

The Genetic and Evolutionary Computation Conference (GECCO), organized by ACM SIGEVO, is one of the most prestigious and influential international conferences in the field of computational intelligence. GECCO brings together the world’s leading researchers and scholars each year to exchange and showcase the latest research achievements in evolutionary computation. Since its inception in 1999, GECCO has become a flagship conference in the field of evolutionary computation, recognized internationally for its high academic standards and significant impact.

The NeuroEvolution of Augmenting Topologies (NEAT) algorithm, proposed by Kenneth Stanley and Risto Miikkulainen in 2002, has had a significant impact on fields such as artificial intelligence, robotic control, and autonomous driving. However, the traditional NEAT algorithm’s limited computational efficiency becomes apparent on large-scale problems. To address this challenge, Prof. Ran Cheng’s team developed the TensorNEAT algorithm library, which uses tensorization techniques to give NEAT and its derivative algorithms (including CPPN and HyperNEAT) full GPU-acceleration support.

Tensorization Method for NEAT Algorithm

Tensorization, a technique that converts data structures and operators into tensor forms, is particularly well suited to efficient parallel computation on GPUs. TensorNEAT converts the diverse network topologies in the NEAT algorithm into tensor forms, enabling key operations to run in parallel across the entire population and significantly improving computational efficiency. Experimental results show that, compared to the traditional NEAT algorithm, TensorNEAT achieves speedups of over 500 times across different tasks and hardware. TensorNEAT is open-sourced on GitHub: https://github.com/EMI-Group/tensorneat.
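The core idea can be illustrated with a minimal sketch. This is not TensorNEAT’s actual API; it only assumes the general approach described above: each genome’s variable-length connection list is padded to a fixed size (here with NaN markers), so the whole population becomes one regular tensor on which operations can be applied in a single vectorized step.

```python
import numpy as np

MAX_CONNS = 4  # assumed upper bound on connections per genome (illustrative)

def pad_genome(conns):
    """Pad a list of (src, dst, weight) tuples to a fixed-size array.

    Unused slots are filled with NaN so every genome has the same shape.
    """
    arr = np.full((MAX_CONNS, 3), np.nan)
    arr[: len(conns)] = conns
    return arr

# Two genomes with different topologies (different numbers of connections).
g1 = [(0, 2, 0.5), (1, 2, -0.3)]
g2 = [(0, 2, 1.0), (1, 2, 0.2), (0, 1, 0.7)]

# The population becomes a single 3-D tensor of shape (pop_size, MAX_CONNS, 3).
population = np.stack([pad_genome(g1), pad_genome(g2)])

# A "key operation" applied to the whole population at once: perturb every
# real weight by a constant in one vectorized step; padded slots stay NaN.
delta = np.where(np.isnan(population[:, :, 2]), 0.0, 0.01)
population[:, :, 2] += delta

print(population.shape)  # (2, 4, 3)
```

On a GPU-backed array library, the same layout lets mutation, crossover, and network evaluation run over thousands of genomes in parallel, which is the source of the reported speedups.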
