Journal Articles - International Journal of Information Technology and Computer Science

All articles: 1278

Managing Data Diversity on the Internet of Medical Things (IoMT)

Iram Mehmood, Sidra Anwar, Aneeza Dilawar, Isma Zulfiqar, Raja Manzar Abbas

Research article

In the healthcare industry, the Internet of Medical Things (IoMT) plays a vital role in increasing the performance, reliability, and efficiency of electronic devices. Healthcare is also characterized as complicated because of its large and highly diverse set of stakeholders. Data diversity refers to the continuum of various types of elements in the data. Data integration is difficult where different sources can adopt different identifiers for the same entity with no explicit connection between them. Researchers are contributing to a digitized healthcare system by interconnecting available medical resources and healthcare services. This research presents the contribution of IoT to people in the field of healthcare, highlighting the issues in integrating diverse data, analyses of existing algorithms and models, applications, and future challenges of IoT in terms of healthcare medical services. Big data analytics that incorporates millions of fragmented, structured, and unstructured data sources will play a key role in how healthcare is delivered in the future.

Free

Markov Models Applications in Natural Language Processing: A Survey

Talal Almutiri, Farrukh Nadeem

Research article

Markov models are one of the widely used techniques in machine learning for processing natural language. Markov chains and hidden Markov models are stochastic techniques employed for modeling dynamic systems in which the future state relies on the current state. The Markov chain, which generates a sequence of words to create a complete sentence, is frequently used in natural language generation. The hidden Markov model is employed in named-entity recognition and part-of-speech tagging, where it tries to predict hidden tags from the observed words. This paper reviews the use of Markov models in three applications of natural language processing (NLP): natural language generation, named-entity recognition, and part-of-speech tagging. Nowadays, researchers try to reduce the dependence of NLP on lexicons and annotation tasks. In this paper, we have focused on Markov models as a stochastic approach to NLP. A literature review was conducted to summarize research attempts, focusing on the methods and techniques that use Markov models in NLP, together with their advantages and disadvantages. Most NLP research studies apply supervised models, improved by Markov models to decrease the dependency on annotation tasks; others employ unsupervised solutions to reduce the dependence on a lexicon or labeled datasets.
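The sentence-generating use of Markov chains described above can be illustrated with a minimal first-order chain; the function names and toy corpus below are illustrative sketches, not taken from the paper:

```python
import random

def build_chain(text):
    """First-order Markov chain: map each word to its observed successors."""
    words = text.split()
    chain = {}
    for cur, nxt in zip(words, words[1:]):
        chain.setdefault(cur, []).append(nxt)
    return chain

def generate(chain, start, max_words):
    """Random walk over the chain, sampling a successor at each step."""
    out = [start]
    while len(out) < max_words:
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word was never followed by anything
        out.append(random.choice(successors))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the dog sat on the rug")
print(generate(chain, "the", 8))
```

A hidden Markov model adds an unobserved tag sequence on top of such a chain, which is what makes it suitable for part-of-speech tagging and named-entity recognition.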

Free

Mask R-CNN for Geospatial Object Detection

Dalal AL-Alimi, Yuxiang Shao, Ahamed Alalimi, Ahmed Abdu

Research article

Geospatial imaging techniques have opened a door for researchers to implement beneficial applications in many fields, including military investigation, disaster relief, and urban traffic control. As the resolution of geospatial images has increased in recent years, the detection of geospatial objects has attracted many researchers. Mask R-CNN was designed to identify object outlines at the pixel level (instance segmentation) and to detect objects in natural images. This study describes the Mask R-CNN model and uses it to detect objects in geospatial images. For this experiment, an existing dataset was prepared to make it suitable for object segmentation; the results show that Mask R-CNN can also be used for geospatial object detection and produces good results in extracting the ten classes of the Seg-VHR-10 dataset.

Free

Mathematics and Software for Coordinated Planning Using Aggregated Linear Volume-time Models of Discrete Manufacturing Systems

Alexander Pavlov, Kateryna Lishchuk, Oleg Melnikov, Mykyta Kyselov, Cennuo Hu

Research article

The problems of managing modern complex organizational and manufacturing systems, such as international production corporations, regional economies, and sectoral ministries, in conditions of fierce competition are primarily related to the need to take into account the activity of the organizational and manufacturing objects that make up a multi-level manufacturing system, that is, to the ability to efficiently solve the problem of coordinating interests. This problem cannot be solved efficiently without modern scientific achievements and appropriate software. As an example, we can cite the theory of active systems pioneered by Prof. V. M. Burkov and his students, which successfully claims to be a constructive implementation of the idea of coordinated planning. This paper proposes new models and methods for the coordinated planning of two-level organizational and manufacturing systems. Our models and methods use original compromise criteria and corresponding constructive algorithms. Original aggregated volume-time models are used as models of the organizational and manufacturing objects. We present a well-founded software structure for the proposed methods of coordinated planning; it contains an intelligent interface for applying the presented results to applied problems.

Free

Measurement Based Admission Control Methods in IP Networks

Erik Chromy, Tomas Behul

Research article

Trends in telecommunications show that customers require more and more bandwidth. If telecommunication operators want to be successful, they must invest heavily in their infrastructure and must ensure the required quality of service; they should therefore devote attention to development in this area. The article deals with quality of service in IP networks. Quality-of-service problems can be solved through admission control methods based on measurements. These admission control methods control the incoming traffic load: a new flow can be accepted only if the needed quality of service can be ensured for it without breaching the quality of service of the already accepted flows. The article describes simulations and their results for Voice over IP, constant-bit-rate, and video sources. The simulations were carried out in the Network Simulator 2 environment and were evaluated on the basis of parameters such as estimated bandwidth, utilization, and loss rate.
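The accept/reject rule described above can be sketched as a simple measurement-based check. The function names, the mean-rate load estimator, and the 90% utilization target below are illustrative assumptions, not the paper's algorithm:

```python
def mean_rate(samples, window=5):
    """Estimate the current aggregate load (e.g. in Mbit/s) as the mean of
    the most recent rate measurements."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def admit(flow_rate, measured_load, capacity, target_utilization=0.9):
    """Accept a new flow only if the measured load plus the flow's declared
    rate stays within a target fraction of link capacity, so the QoS of the
    already accepted flows is not breached."""
    return measured_load + flow_rate <= target_utilization * capacity

load = mean_rate([40.0, 42.0, 41.0, 43.0, 44.0])  # recent link measurements
print(admit(10.0, load, capacity=100.0))           # small flow fits
print(admit(50.0, load, capacity=100.0))           # would overload the link
```

Real measurement-based admission control schemes differ mainly in how the load estimator is built (time windows, peak-rate envelopes, equivalent bandwidth); the decision step stays this simple comparison.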

Free

Measurement of Usability of Office Application Using a Fuzzy Multi-Criteria Technique

Sanjay Kumar Dubey, Sumit Pandey

Research article

Software quality is a very important aspect for any software development company, and software quality measurement is a major concern for improving software applications during development. Quantifying the various quality factors and integrating them into software quality models is essential for analyzing the quality of a software system. Software usability is one of the important quality factors nowadays due to the increasing demand for interactive and user-friendly software systems. In this paper, an attempt has been made to quantify the usability of the MS-Excel 2007 and MS-Excel 2010 application software using the ISO/IEC 9126 model and to compare the numeric usability values of both versions. Due to the random nature of the usability attributes, the fuzzy multi-criteria decision technique has been used to evaluate the usability of the office application. The present method will be helpful for analyzing and enhancing the quality of interactive software systems.

Free

Measuring Cognitive Distortions: A KPI-based Approach to Understanding Faulty Information Processing

Laxmi Jayannavar, T.N.R. Kumar, Shreekant Jere

Research article

Cognitive distortion refers to patterns of negative thinking that can distort a person’s perception of reality. These distorted thoughts lead to unhealthy behaviors, emotional distress, and mental health issues such as depression and anxiety. Deep Learning (DL) techniques have been employed to detect cognitive distortion; however, existing approaches suffer from high error rates and poor performance, mainly because they fail to understand the hierarchical semantics, subtle emotional tones, and long-range dependencies within the text. Hence, a new model termed Hierarchical Attention Neural Harmonic Fusion Network (HAN-HFNet) is proposed for detecting cognitive distortion in text. Initially, the input sentence is passed to Bidirectional Encoder Representations from Transformers (BERT) tokenization, which generates context-aware embeddings capable of capturing the subtle emotional nuances, long-range dependencies, and hierarchical semantics critical for identifying cognitive distortions in text. Next, various Key Performance Indicators (KPIs) are considered, such as Severity of Cognitive Distortions (SCD), Frequency of Cognitive Distortion (FCD), correlation between cognitive distortions and depression severity, Cognitive Behavioral Therapy (CBT), self-reports of cognitive distortions from individuals, Long-Term Monitoring of Cognitive Distortions (LT-MCD), and impact on daily functioning. Lastly, cognitive distortion is detected using HAN-HFNet, which is obtained by integrating Hierarchical Deep Learning for Text classification (HDLTex) and the Deep High-order Attention neural Network (DHA-Net) through harmonic analysis. This fusion enables the model to learn both coarse- and fine-grained features, enhancing contextual understanding and reducing error.
Moreover, the performance of HAN-HFNet is evaluated on the Faulty Information Processing Dataset (FIPD), where it achieved a minimum classification error of 0.072 and maximum recall, accuracy, precision, and F1-score of 94.756%, 92.754%, 91.866%, and 93.289%, respectively. Furthermore, the model is suitable for integration into real-world mental health support systems, offering scalability and potential deployment in online therapy platforms, clinical decision-making tools, and cognitive behavioral assessment frameworks.

Free

Measuring Complexity, Development Time and Understandability of a Program: A Cognitive Approach

Amit Kumar Jakhar, Kumar Rajnish

Research article

One of the central problems in software engineering is inherent complexity. Software is the result of human creative activity, and cognitive informatics plays an important role in understanding its fundamental characteristics. This paper models one of the fundamental characteristics of software complexity by examining the cognitive weights of basic software control structures. Cognitive weights are the degree of difficulty, or the relative time and effort, required to comprehend a given piece of software, which satisfies the definition of complexity. Based on this approach, a new concept called New Weighted Method Complexity (NWMC) is developed. Twenty programs were distributed among 5 postgraduate students; the development time of each program was recorded and the mean was taken as the actual time needed to develop it, and the understandability (UA) of all the programs, that is, the time needed to understand the code, was also measured. For comparison, this paper considers the Cognitive Functional Size (CFS) of Jingqiu Shao et al. In order to validate the new complexity metric, we calculated the correlation of the proposed metric and of CFS with the actual development time and analyzed NWMC against CFS using the Mean Relative Error (MRE) and Standard Deviation (Std.). Finally, the authors found that the proposed measure estimates development time far more accurately than CFS.

Free

Measuring the information security maturity of enterprises under uncertainty using fuzzy AHP

Adel A. Nasser, Abdualmajed A. Al-Khulaidi, Mijahed N. Aljober

Research article

Generally, measuring Information Security Maturity (ISM) is the first step in building a new knowledge-based information security management system in an organization. Knowing the ISM level helps organizations decide which protection strategies and policies to adopt and in what priority, strengthening their competitive ability. One possible way to solve the problem is to use a multiple-criteria decision-making (MCDM) methodology. The Analytic Hierarchy Process (AHP) is one of the most commonly used MCDM methods; it combines subjective and personal preferences in the information security assessment process. However, AHP involves human subjectivity, which introduces a vagueness type of uncertainty and requires decision-making under that uncertainty. In this paper, IS maturity is based on a hierarchical multilevel information security gap analysis model for the ISO 27001:2013 security standard. The concept of fuzzy sets is applied to AHP to propose a model for measuring an organization's IS maturity in an uncertain environment. The fuzzy AHP approach helps determine the importance weights of factors and indicators more efficiently, and in particular deals with imprecise and uncertain expert comparison judgments. A case study illustrates the new method for IS evaluation.
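One common way to carry out the fuzzy AHP weighting step is Buckley's geometric-mean method over triangular fuzzy numbers. The sketch below, with centroid defuzzification, is a generic illustration under that assumption and not necessarily the exact procedure used in the paper:

```python
def fuzzy_geometric_mean(row):
    """Geometric mean of a row of triangular fuzzy numbers (l, m, u)."""
    n = len(row)
    prod = [1.0, 1.0, 1.0]
    for l, m, u in row:
        prod[0] *= l
        prod[1] *= m
        prod[2] *= u
    return tuple(p ** (1.0 / n) for p in prod)

def fuzzy_ahp_weights(matrix):
    """Fuzzy weights by Buckley's method, then centroid defuzzification
    and normalization to crisp importance weights."""
    r = [fuzzy_geometric_mean(row) for row in matrix]
    total = tuple(sum(x[i] for x in r) for i in range(3))
    # dividing (l, m, u) by the reversed total implements r_i * (sum r)^-1
    fuzzy_w = [(l / total[2], m / total[1], u / total[0]) for l, m, u in r]
    crisp = [(l + m + u) / 3.0 for l, m, u in fuzzy_w]
    s = sum(crisp)
    return [c / s for c in crisp]

# Criterion 1 judged "moderately more important" (2, 3, 4) than criterion 2.
weights = fuzzy_ahp_weights([[(1, 1, 1), (2, 3, 4)],
                             [(0.25, 1 / 3, 0.5), (1, 1, 1)]])
print(weights)
```

The triangular numbers let an expert express a judgment like "between 2 and 4, most likely 3" instead of a single crisp ratio, which is exactly how the vagueness of expert comparisons is absorbed.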

Free

Medevice: a mobile-based diagnosis of common human illnesses using a neuro-fuzzy expert system

Johaira U. Lidasan, Martina P. Tagacay

Research article

Fever is a sign that the body is trying to fight infection. It is usually accompanied by various symptoms that signal another illness or disease, and diagnosing it ahead of time is essential because it has to do with human life and with determining what to do to get well. MeDevice is a mobile-based application that runs on Android devices; it allows the user to enter the levels of his/her symptoms and diagnoses the disease as influenza, dengue, chicken pox, malaria, typhoid fever, measles, hepatitis A, or pneumonia, together with its details and first aid treatment. It aims to provide an efficient decision support platform to aid people with fever in diagnosing their disease and deciding whether or not to seek medical attention, especially in developing countries like the Philippines. The application is engineered with a knowledge base and the inference method of a fuzzy-logic expert system, using the gradient descent optimization algorithm and a backpropagation neural network to achieve the optimum error rate. This provides the application with a high accuracy rate, as shown during testing of the application.

Free

Medical image encryption using chaotic map improved advanced encryption standard

Ranvir Singh Bhogal, Baihua Li, Alastair Gale, Yan Chen

Research article

Under the Digital Imaging and Communications in Medicine (DICOM) standard, the Advanced Encryption Standard (AES) is used to encrypt medical image pixel data. This highly sensitive data needs to be transmitted securely over networks to prevent data modification. Therefore, there is ongoing research into how well encryption algorithms perform on medical images and whether they can be improved. In this paper, we developed an algorithm combining a chaotic map with AES and tested it against AES in its standard form; this comparison allowed us to analyse how the chaotic map affects encryption quality. The developed algorithm, CAT-AES, iterates through Arnold's cat map a certain number of times before encryption, whereas standard AES encryption does not. Both algorithms were tested on two sets of 16-bit DICOM images, 20 brain MRI and 26 breast cancer MRI scans, using the correlation coefficient and histogram uniformity for evaluation. The results showed improvements in encryption quality: when encrypting the images with CAT-AES, the histograms were more uniform and the absolute correlation coefficient was closer to zero for the majority of the images tested.
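Arnold's cat map used in CAT-AES is a simple modular pixel permutation. A sketch of the scrambling stage (the AES stage itself is omitted) might look like this:

```python
import numpy as np

def cat_map(img, iterations=1):
    """Scramble a square image with Arnold's cat map:
    (x, y) -> ((x + y) mod n, (x + 2y) mod n), applied `iterations` times."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "the cat map needs a square image"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

scrambled = cat_map(np.arange(16).reshape(4, 4), iterations=2)
```

Because the map is a bijection, it only rearranges pixels and leaves the histogram unchanged; it is the subsequent AES encryption that alters pixel values, which is why the combination can flatten histograms and decorrelate neighbouring pixels.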

Free

Meta-Population Modelling and Simulation of the Dynamic of Malaria Transmission with Influence of Climatic Factors

Justin-Herve NOUBISSI, Jean Claude Kamgang, Eric Ramat, Januarius Asongu, Christophe Cambier

Research article

We model the dynamics of malaria transmission, taking into account climatic factors and the migration between Douala and Yaoundé and between Yaoundé and Ngaoundéré, three cities of Cameroon. We show how variations in climatic factors such as temperature and relative humidity affect the spread of malaria. We propose a meta-population model of malaria transmission dynamics that evolves in space and time and that takes into account temperature, relative humidity, and the migration between these cities. Moreover, we integrate the variation of environmental factors as events, also called mathematical impulses, that can disrupt the model evolution at any time. Our modelling has been done using the Discrete EVent System Specification (DEVS) formalism, and our implementation on the Virtual Laboratory Environment (VLE), which uses the DEVS formalism and abstract simulators to couple models.

Free

Metal Artifact Reduction from Computed Tomography (CT) Images using Directional Restoration Filter

Mithun Kumar PK, Mohammad Motiur Rahman

Research article

Computed tomography angiography (CTA) is an established tool for vessel imaging in the medical image processing field. High-intensity structures in the contrast image can seriously hamper luminal visualization, and metal artifacts are a widespread problem in computed tomography (CT) images. We propose a directional restoration filtering process with fuzzy logic to reduce metal artifacts in CT images. We create two sets by an iteration process and sort them in ascending order; after sorting, we take one element from each set, selecting both from the second position of the sorted arrays. A fuzzy-logic intersection is executed between the two selected elements, and a Gaussian convolution operation is performed over the entire image to enhance the artifact-affected CT images. In this paper, we investigate a fully automated intensity-based filter that depends on the gray-level variation rating. This results in a better visualization of the vessel lumen, including the smaller vessels, allowing a faster and more accurate inspection of the whole vascular structure.

Free

Method for Object Motion Characteristic Estimation Based on Wavelet Multi-Resolution Analysis: MRA

Kohei Arai

Research article

A method for object motion characteristic estimation based on wavelet Multi-Resolution Analysis (MRA) is proposed. For moving pictures, the motion characteristics, namely the direction of translation and roll/pitch/yaw rotations, can be estimated by MRA with an appropriate support length of the wavelet base function. Through a simulation study, a method for determining the appropriate support length of the Daubechies base function is clarified, and the proposed method for object motion characteristic estimation is validated.

Free

Methods of Increasing the Efficiency of Data Consistency in Information Systems

Nikitin Valerii, Krylov Ievgen, Anikin Volodymyr

Research article

The article is devoted to special methods for distributed databases that accelerate data reconciliation in information systems such as IoT, heterogeneous multi-computer systems, analytical administrative management systems, financial systems, and scientific management systems. A method for ensuring data consistency using a transaction clock is proposed, and the results of experimental research on a developed prototype of a financial system are demonstrated. The transaction clock receives transactions from client applications and stores them in appropriate queues, which are processed according to transaction priority: the highest-priority queue is processed before the lowest-priority one. This makes it possible to determine which important data (such as financial transactions) should be processed first. The article justifies replacing the Merkle tree with a hashing algorithm and using a spectral Bloom filter to improve the Active Anti-Entropy method and accelerate eventual consistency. For its effective use, the filter generation algorithm is modified, which increases the speed of its generation while maintaining a sufficient level of collision resistance.
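A plain (non-spectral) Bloom filter, which underlies the spectral variant mentioned above, can be sketched as follows; the salted-SHA-256 hashing scheme and class name are illustrative choices, not the paper's construction:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions derived from salted SHA-256.
    Membership queries can yield false positives but never false negatives."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _positions(self, item):
        # Derive k independent positions by salting the hash input.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("tx-1001")
print("tx-1001" in bf)
```

A spectral Bloom filter replaces the bit array with small counters so that approximate multiplicities, not just membership, can be queried, which is what makes it usable for comparing replica contents during anti-entropy.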

Free

Mimicking Nature: Analysis of Dragonfly Pursuit Strategies Using LSTM and Kalman Filter

Mehedi Hassan Zidan, Rayhan Ahmed, Khandakar Anim Hassan Adnan, Tajkurun Zannat Mumu, Md. Mahmudur Rahman, Debajyoti Karmaker

Research article

Pursuit of prey by a predator is a natural phenomenon: an event in which a predator targets and chases prey in order to consume it. The motive of the predator is to catch its prey, whereas the motive of the prey is to escape. Earth has many predator species with different pursuit strategies; some are sneaky and some are bold, but not every chase succeeds, and a successful hunt depends on the pursuit strategy. Among all predators, dragonflies, also known as natural drones, are considered the best because of their high rate of successful hunting. If their strategy for pursuing prey can be extracted, analyzed, and turned into an algorithm applied to unmanned aerial vehicles, the success rate will increase and may even exceed that of a dragonfly. We examine the pursuit strategy of a dragonfly using an LSTM to predict the speed of, and the distance between, predator and prey. A Kalman filter has also been used to trace the trajectories of both predator and prey. We found that dragonflies follow a distance-maintenance strategy when pursuing prey, keeping their velocity roughly constant to maintain the safe (mean) distance. This study can lead researchers to new and exciting algorithms applicable to unmanned aerial vehicles (UAVs).
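The trajectory-tracing role of the Kalman filter mentioned above can be sketched with a one-dimensional constant-velocity model; the function name and noise parameters are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=0.5):
    """1-D constant-velocity Kalman filter: smooths noisy position readings
    and estimates velocity. Returns a filtered [position, velocity] per step."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[zs[0]], [0.0]])          # initial state: first reading, v=0
    P = np.eye(2)
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x.ravel().copy())
    return np.array(out)

track = kalman_track([2.0 * t for t in range(20)])  # prey moving 2 units/step
```

A filter of this kind, run per coordinate, smooths the observed predator and prey positions and exposes the velocity estimates needed to test a distance-maintenance hypothesis.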

Free

Minimax Estimation of the Parameter of Exponential Distribution based on Record Values

Lanping Li

Research article

Bayes estimators of the parameter of the exponential distribution are obtained under a non-informative quasi-prior, based on record values, for three loss functions: weighted squared error loss, squared log error loss, and entropy loss. The minimax estimators of the parameter are then obtained using Lehmann's theorem. Comparisons of the risks of the estimators under the three loss functions are also studied.
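To make the setting concrete: for an exponential sample observed through its upper record values, the likelihood collapses onto the last record, which is what makes the Bayes step tractable. The sketch below uses ordinary squared error loss purely for illustration; the paper's three loss functions lead to different final estimators.

```latex
% Joint density of the first n upper records R_1 < \dots < R_n from
% f(x \mid \theta) = \theta e^{-\theta x} (the hazard rate is the constant \theta):
f(r_1, \dots, r_n \mid \theta) = \theta^{n} e^{-\theta r_n}.
% With the quasi-prior g(\theta) \propto \theta^{-d}, the posterior is
% \mathrm{Gamma}(n - d + 1,\; r_n), so under squared error loss the Bayes
% estimator is the posterior mean:
\hat{\theta} = \frac{n - d + 1}{r_n}.
```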

Free

Minimizing Power Consumption by Personal Computers: A Technical Survey

P. K. Gupta, G. Singh

Research article

Recently, the demand for "Green Computing", which represents an environmentally responsible way of reducing power consumption and involves various environmental issues such as waste management and greenhouse gases, has been increasing explosively. We lay great emphasis on the need to minimize power consumption and heat dissipation by computer systems, as well as on the requirement to change the current power scheme options in their operating systems (OS). In this paper, we provide a comprehensive technical review of the existing, though challenging, work on minimizing the power consumption of computer systems through various approaches, with emphasis on the software approach using dynamic power management, as it is employed by most OSs in their power scheme configurations; we seek a better understanding of power management schemes, current issues, and future directions in this field. Herein, we review the various approaches and techniques, including hardware, software, central processing unit (CPU) usage, and algorithmic approaches to power economy. On the basis of our analysis and observations, we find that this area still requires a lot of work and needs to focus on new intelligent approaches so that the inactivity periods of computer systems caused by human users can be reduced intelligently.

Free

Minimizing Separability: A Comparative Analysis of Illumination Compensation Techniques in Face Recognition

Chollette C. Olisah

Research article

Feature extraction tasks are primarily about making sense of the discriminative features/patterns of facial information and extracting them. However, most real-world face images are almost always intertwined with imaging modality problems, of which illumination is a strong factor. The compensation of illumination using various techniques has been of interest in the literature, with little emphasis on the adverse effect these techniques have on the task of extracting the actual discriminative features of a sample image for recognition. In this paper, comparative analyses of illumination compensation techniques for extracting meaningful features for recognition, using a single feature extraction method, are presented. Moreover, enhancing red, green, blue gamma encoding (rgbGE) in the log domain is proposed in order to address the within-class separability problem that most techniques incur. From experiments using plastic surgery sample faces, it is evident that the effect illumination compensation techniques have on face images after pre-processing is highly significant for recognition accuracy.

Free

Mining Frequent Itemsets with Weights over Data Stream Using Inverted Matrix

Long Nguyen Hung, Thuy Nguyen Thi Thu

Research article

In recent years, mining research over data streams has become prominent, as it can be applied in many areas in the real world. In this paper, we propose an algorithm called MFIWDSIM for mining frequent itemsets with weights over a data stream using an inverted matrix [10]. The main idea is to move the data stream into an inverted matrix saved on disk so that algorithms can mine it many times with different support thresholds as well as alternative minimum weights. Moreover, this inverted matrix can be accessed for mining at different times according to users' requirements without recalculation. Analysis and evaluation show that MFIWDSIM outperforms the WSWFP-stream algorithm [9] for mining frequent itemsets with weights over data streams.
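To fix ideas on "frequent itemsets with weights": one common definition scores an itemset by its average item weight times its relative support. The naive enumeration below illustrates only that scoring, not the MFIWDSIM algorithm or its inverted-matrix storage:

```python
from itertools import combinations

def weighted_support(itemset, transactions, weights):
    """Weighted support: average item weight times relative support count
    (a common definition; the paper's exact formulation may differ)."""
    count = sum(1 for t in transactions if set(itemset) <= set(t))
    avg_w = sum(weights[i] for i in itemset) / len(itemset)
    return avg_w * count / len(transactions)

def mine(transactions, weights, min_wsup):
    """Naive enumeration of all itemsets meeting the weighted-support
    threshold; exponential, so usable only for tiny examples."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            w = weighted_support(cand, transactions, weights)
            if w >= min_wsup:
                result[cand] = w
    return result

txs = [["a", "b"], ["a", "c"], ["a", "b"]]
weights = {"a": 1.0, "b": 0.5, "c": 0.2}
print(mine(txs, weights, min_wsup=0.5))
```

The point of an inverted-matrix layout is to store, per item, the transactions containing it, so that such scores can be recomputed for new thresholds without rescanning the original stream.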

Free
