Optimizing placement of control devices on channels in monitoring networks
Author: Kalney Artyom Maksimovich
Journal: Проблемы информатики (Problems of Informatics) @problem-info
Section: Theoretical and system informatics
Issue: 4 (57), 2022.
Open access
The main task of any information control in a network is to analyze the network's behavior and to simulate options for its development based on real information. The more complex the network topology and configuration, the more information is required for an adequate analysis. When analyzing or designing large monitoring networks, the problem of choosing control devices (b-nodes) for gathering information often arises. After some pre-processing, or directly, the b-nodes transfer information to a central node (c-node) over reliable channels. One of the main quality indices for such networks is the size of the area under reliable monitoring, which may be estimated by MENC, the mathematical expectation of the number of nodes connected with one special node. The problem of optimal placement of control devices was described earlier in [1], but the model was limited to an ordinary graph and no optimization algorithm was offered. In the present paper, a random hypernet with unreliable branches is considered, and a simulated annealing algorithm is applied to optimize the placement of b-nodes with respect to cost.

Since the need for an adequate representation of multi-layered networks is pressing, many researchers propose their own models, mainly while solving special tasks. Yet there are some papers that consider the problem in general terms. The best-known models are sandwiching graphs [2], graphs with different kinds of edges [3], and layered complex networks (LCN) [4]. All these models lack a fundamental and systematic mathematical description. At the same time, more than 30 years ago the Russian scientist Vladimir Popkov introduced the concept of hypernets, which allows multi-layered networks of different kinds to be described on a common basis. Unfortunately, the first mention of this model in English occurred only in 2005 [12].
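To make the MENC index concrete, below is a minimal sketch of estimating it by Monte Carlo simulation on an ordinary graph with unreliable edges (the paper works with hypernets; the reduction to a plain graph, the function name, and the toy network are illustrative assumptions):

```python
import random

def menc_estimate(nodes, edges, probs, root, trials=20000, seed=1):
    """Monte Carlo estimate of MENC: the expected number of nodes
    reachable from `root` when edge i survives independently with
    probability probs[i]. Illustrative sketch, not the paper's code."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # sample a random subgraph: keep each edge independently
        alive = [e for e, p in zip(edges, probs) if rng.random() < p]
        adj = {v: [] for v in nodes}
        for u, v in alive:
            adj[u].append(v)
            adj[v].append(u)
        # depth-first search from the special node over surviving edges
        seen, stack = {root}, [root]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        total += len(seen)
    return total / trials

# toy example: a 4-cycle with edge reliability 0.9
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(menc_estimate(nodes, edges, [0.9] * 4, root=0))
```

For this cycle the exact value is 1 + 2 · 0.9729 + 0.9639 ≈ 3.91 (each neighbour of the root is reached via two edge-disjoint paths), so the estimate should land close to that.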
Later, several papers were published that solve special problems based on the hypernet model: optimal placement of monitoring devices on the channels of a communication network [6]; design of utility networks [7-8]; optimization of a transport network [9]. The last two use random and fuzzy hypernets, respectively.

The task of exactly calculating the connectivity probability of a random network with unreliable elements belongs to the class of NP-hard problems; therefore, various approximate methods were previously used in practice, while exact calculation methods were mostly of purely academic interest. However, the development of computer technology has revived interest in exact methods, since it became possible to calculate, in reasonable time, the reliability of small and medium-sized networks (up to tens of nodes). Among the exact methods for determining the connectivity probability of a network with unreliable elements, the factorization (Moore-Shannon) method is the most widely known. This method recursively splits the hypernet into two simpler subproblems: one in which the chosen element is made "reliable" (its presence probability is set to one) and one in which it is removed. The recursion continues until a reliable path connecting the selected vertices is obtained, or until a disconnected secondary network is obtained; the recursion also ends when a two-vertex hypernet is reached. Because the number of recursive calls grows exponentially with the number of elements, additional techniques are required to accelerate the method.
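The factoring recursion can be sketched for the simplest case of two-terminal reliability in an ordinary graph with unreliable edges (the paper factors hypernet elements; restricting to graph edges and the function names here are illustrative assumptions):

```python
def connected(edges, s, t):
    """Check s-t connectivity over the given (perfectly reliable) edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return s == t

def factoring(edges, s, t):
    """Exact s-t connection probability by Moore-Shannon factoring.
    edges: list of (u, v, p) with p the edge's presence probability."""
    for i, (u, v, p) in enumerate(edges):
        if p < 1.0:
            rest = edges[:i] + edges[i + 1:]
            # branch 1: the element works (prob p) -> make it reliable
            works = factoring(rest + [(u, v, 1.0)], s, t)
            # branch 2: the element fails (prob 1-p) -> remove it
            fails = factoring(rest, s, t)
            return p * works + (1 - p) * fails
    # base case: every remaining edge is reliable, so just check connectivity
    return 1.0 if connected([(u, v) for u, v, _ in edges], s, t) else 0.0

# toy example: opposite corners of a 4-cycle with edge reliability 0.9;
# two edge-disjoint 2-edge paths give 1 - (1 - 0.81)^2 = 0.9639
cycle = [(0, 1, 0.9), (1, 2, 0.9), (2, 3, 0.9), (3, 0, 0.9)]
print(factoring(cycle, 0, 2))
```

Each call eliminates one unreliable element, which is exactly why the number of recursive calls grows exponentially without the reduction techniques described next.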
The following reductions are used. Before each recursive call, edges lying in hanging trees, and edges in connected components of the secondary network that contain no selected vertices, are deleted. When a vertex is deleted, the connectivity of the network is checked: if both selected vertices lie in the same connected component, the reliability is calculated within that component; if they lie in different components, the network is disconnected. Both reductions decrease the number of hypernet elements without changing the reliability index.

The classic cost-reliability model is considered for optimization. Each placement of a b-node on a branch changes the hypernet elements: the b-node splits the branch and also creates new edges in the secondary network. It is required to design a hypernet taking channel throughputs and route lengths into account. To solve this problem, the shortest route over the branches of the primary network is found for each pair of nodes from X, using a breadth-first search algorithm. Solving this problem yields the embedding of the secondary network into the primary one, and therefore the hypernet. Various heuristic algorithms have been considered; the simulated annealing (SA) algorithm seems well suited because it does not require too many evaluations of MENC in condition (3).
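A generic SA loop for selecting which branches receive b-nodes might look as follows; the `score` callback stands in for evaluating MENC under the cost constraint, and the neighbourhood (swapping one chosen branch), cooling schedule, and all parameter names are illustrative assumptions rather than the paper's exact algorithm:

```python
import math
import random

def simulated_annealing(candidates, k, score,
                        t0=1.0, t_min=1e-3, alpha=0.95, steps=50, seed=7):
    """Maximize `score` over k-subsets of `candidates` (branches that may
    host b-nodes) by simulated annealing. Illustrative sketch only."""
    rng = random.Random(seed)
    current = rng.sample(candidates, k)
    cur_val = score(current)
    best, best_val = list(current), cur_val
    t = t0
    while t > t_min:
        for _ in range(steps):
            # neighbour: swap one chosen branch for an unused one
            nxt = list(current)
            unused = [c for c in candidates if c not in nxt]
            if not unused:
                break
            nxt[rng.randrange(k)] = rng.choice(unused)
            val = score(nxt)
            # accept improvements always, worsenings with Boltzmann probability
            if val >= cur_val or rng.random() < math.exp((val - cur_val) / t):
                current, cur_val = nxt, val
                if val > best_val:
                    best, best_val = list(nxt), val
        t *= alpha  # geometric cooling
    return best, best_val

# toy demo: pick 3 numbers from 0..9 maximizing their sum
best, val = simulated_annealing(list(range(10)), 3, sum)
```

The appeal for this problem is exactly the point made above: each proposal requires only one evaluation of the objective (MENC for the perturbed placement), so the costly reliability computation is invoked sparingly.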
Network reliability, hypernets, optimizing placement of control devices
Short address: https://sciup.org/143179898
IDR: 143179898 | DOI: 10.24412/2073-0667-2022-4-28-38