Finding graph chromatic number through deep learning

Authors: Kholkin Sergei Denisovich, Filimonov Andrey Viktorovich

Journal: Problemy informatiki (Problems of Informatics) @problem-info

Section: Theoretical and systems informatics

Issue: 1 (54), 2022.


In recent years deep learning has become the state-of-the-art family of algorithms in various fields such as computer vision and natural language processing. The number of deep learning architectures is growing, and so is the field of processing graphs with them. The Graph Neural Network (GNN) is a deep learning architecture able to reflect graph structure and learn patterns that are effectively used in graph-related domains such as drug discovery or traffic forecasting. A GNN embeds vertices as vectors of vertex-related features, updated by a function that depends on each vertex's neighbors, which can be interpreted as message passing between vertices. The update function is permutation invariant, which gives the GNN the unique ability not to depend on vertex order or vertex count: a graph of any size can be fed into a particular GNN. Many studies on applying deep learning techniques to combinatorial optimization are underway, and since many combinatorial problems are either originally stated in terms of graph theory or can be converted to such, GNNs are a natural choice for finding their approximate solutions. Recent studies confirm this with strong results on the Travelling Salesman Problem and the Boolean Satisfiability Problem. In this paper we train a GNN to find the chromatic number of a graph, i.e., to solve the decision version of the Graph Coloring Problem. The neural network is trained in a supervised manner as a binary classifier: given a graph and a number of colors, the model returns the probability that the graph can or cannot be colored with that many colors. It should be noted that the GNN finds the chromatic number but does not produce a vertex coloring itself; hence the predicted chromatic number may be lower than the true one. The model architecture reflects the color-vertex relationship by assigning a vector embedding to each color and passing these embeddings into the embedding-update function, so they are treated like vertex embeddings.
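The color-vertex message passing described above can be sketched as a toy in NumPy. This is our own illustration, not the paper's implementation: the tanh update stands in for the actual learned recurrent unit, and all function names are assumptions. It shows only the key structural property, that sum-based aggregation makes the update permutation equivariant, so graphs of any size and vertex order can be processed.

```python
import numpy as np

def message_passing_step(V, C, A):
    """One simplified embedding update on a graph with adjacency matrix A
    (n x n). V (n x d) are vertex embeddings, C (k x d) color embeddings.
    Each vertex aggregates its neighbours plus all colors; each color
    aggregates all vertices. The real model feeds these aggregates through
    MLP layers and an LSTM update; a tanh stands in here."""
    v_in = A @ V + C.sum(axis=0)            # messages into each vertex
    c_in = np.repeat(V.sum(axis=0, keepdims=True), C.shape[0], axis=0)
    return np.tanh(V + v_in), np.tanh(C + c_in)

def colorability_probability(V, w):
    """Toy read-out standing in for the classifier MLP: averaged
    per-vertex logits squashed to a probability in (0, 1)."""
    return float(1.0 / (1.0 + np.exp(-(V @ w).mean())))
```

Because aggregation is a sum, relabeling the vertices (and permuting A accordingly) simply permutes the output embeddings, which is the invariance property mentioned above.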
The GNN is built in a recurrent manner with an adjustable number of timesteps that controls the depth of message passing between vertices. At every timestep the model updates vertex and color embeddings by aggregating information from neighbors and/or colors through a recurrent unit that serves as the update function (a Long Short-Term Memory network with LayerNorm in our case). For additional expressivity, MLP layers were added before the recurrent unit at each timestep. After the embedding updates, vertex embeddings are fed into a classifier MLP to obtain the probability that the graph can be colored with the given number of colors. Training a GNN in a supervised manner requires labeled data. Natural datasets related to graph coloring are unsuitable for training and validation due to their heterogeneity and small size. Data was therefore generated following the phase-transition paradigm: each graph instance in the dataset is paired with one that differs by only one edge but has a higher chromatic number, and that pair constitutes a hard-to-color pair of graphs. Because of this data-generation paradigm the GNN's optimization objective becomes harder to reach, which hopefully makes the GNN more knowledgeable. Two datasets were generated, CHROM_40_60 and CHROM_35_45, with 40 to 60 and 35 to 45 vertices per graph instance respectively, and with chromatic numbers from 3 to 8 in both. Instances of the CHROM_35_45 dataset are expected to be easier to color than CHROM_40_60 ones. The GNN was trained on this data and compared to two heuristics: tabucol and greedy. The percentage of correctly predicted chromatic numbers was taken as the comparison metric. Testing on the CHROM_40_60 and CHROM_35_45 datasets showed that the GNN has slightly better accuracy than tabucol: the GNN correctly predicts 69 % and 77 % of graph chromatic numbers, while tabucol correctly predicts 66 % and 76 %; greedy is far behind.
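The phase-transition pair generation described above can be sketched for small graphs. This is an illustrative reconstruction under our own assumptions (function names, random edge order, brute-force chromatic number), not the authors' generator: random edges are added one by one, and the first edge that pushes the chromatic number past a target yields the hard pair differing by a single edge.

```python
import random

def is_colorable(n, edges, k):
    """Backtracking check: can the n-vertex graph be properly k-colored?"""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    color = [-1] * n
    def assign(v):
        if v == n:
            return True
        for c in range(k):
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if assign(v + 1):
                    return True
        color[v] = -1
        return False
    return assign(0)

def chromatic_number(n, edges):
    """Smallest k for which the graph is k-colorable (exact, exponential)."""
    return next(k for k in range(1, n + 1) if is_colorable(n, edges, k))

def hard_pair(n, target_chi, seed=0):
    """Add random edges until the chromatic number first exceeds target_chi;
    return the two edge lists differing by that single critical edge
    (chromatic numbers target_chi and target_chi + 1)."""
    rng = random.Random(seed)
    candidates = [(u, v) for u in range(n) for v in range(u + 1, n)]
    rng.shuffle(candidates)
    edges = []
    for e in candidates:
        if chromatic_number(n, edges + [e]) > target_chi:
            return list(edges), edges + [e]
        edges.append(e)
    raise ValueError("target chromatic number never exceeded")
```

Since a single edge raises the chromatic number by at most one, the returned pair is guaranteed to straddle the target exactly; the paper's real generator must of course work at 35-60 vertices, where exact coloring checks are far more expensive than this brute-force sketch.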
To test the GNN's generalization ability, cross-dataset experiments were performed: a GNN trained on the CHROM_40_60 dataset is validated on the CHROM_35_45 dataset and vice versa. These experiments show that the model trained on the harder CHROM_40_60 instances achieves decent accuracy on the easier CHROM_35_45 instances, while the model trained on the easier CHROM_35_45 instances performs weaker on the hard CHROM_40_60 instances but still retains some accuracy; hence the model is able to generalize to graphs with a different number of vertices. Testing was also performed on natural instances from the COLOR02/03/04 dataset, which differ substantially from the generated train/test data. The GNN trained on CHROM_40_60 shows accuracy similar to tabucol and much better than greedy. The GNN behaves unstably on instances with chromatic numbers higher than those seen in the training dataset.
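The greedy baseline mentioned above is presumably first-fit coloring; a minimal sketch (our own illustration, not the authors' code) of how it produces an upper bound on the chromatic number:

```python
def greedy_colors(n, edges, order=None):
    """First-fit greedy coloring: visit vertices in the given order and
    assign each the smallest color unused by its already-colored
    neighbours. Returns the number of colors used, an upper bound on
    the chromatic number."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    color = {}
    for v in (order or range(n)):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return max(color.values()) + 1
```

Greedy is fast but order-sensitive and often wasteful, which is consistent with it trailing both tabucol and the GNN in the reported accuracy comparisons.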


Neural network, deep learning, chromatic number, combinatorial optimization

Short address: https://sciup.org/143179068

IDR: 143179068   |   DOI: 10.24412/2073-0667-2022-1-42-54

Scientific article