Journal articles - International Journal of Information Technology and Computer Science

Total articles: 1227

Efficient Algorithm for Destabilization of Terrorist Networks

Nisha Chaurasia, Akhilesh Tiwari

Research article

The demonstrated feasibility of Social Network Analysis (SNA) for studying social networks has encouraged law enforcement and security agencies to investigate terrorist networks, their behavior, and the key players hidden within them. The study of terrorist networks using the SNA approach and graph theory, where the network is visualized as a graph, is termed Investigative Data Mining or, more generally, Terrorist Network Mining. Centrality measures defined in SNA have been successfully applied to the destabilization of terrorist networks by removing the dominating role(s) from the network. Destabilizing a terrorist group involves uncovering the network's behavior through a defined hierarchy of algorithms. This paper proposes a new algorithm intended to replace the existing hierarchy of algorithms for destabilizing terrorist networks. It also suggests the use of two influential centralities, PageRank centrality and Katz centrality, for effectively neutralizing the network.
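As an illustration of the two centrality measures mentioned above, the following minimal Python sketch ranks the nodes of a toy network by PageRank and Katz centrality using networkx; the edge list is hypothetical and the paper's own destabilization algorithm is not reproduced here.

```python
# Rank nodes of a (toy) covert network by PageRank and Katz centrality to
# suggest which actors to remove first. The edge list is hypothetical.
import networkx as nx

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e"), ("c", "e")]
G = nx.Graph(edges)

pagerank = nx.pagerank(G, alpha=0.85)              # importance via random-walk model
katz = nx.katz_centrality(G, alpha=0.1, beta=1.0)  # influence via attenuated walk counts

# Candidate nodes for removal: highest combined centrality first.
ranking = sorted(G.nodes, key=lambda n: pagerank[n] + katz[n], reverse=True)
print(ranking)
```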

Free

Efficient Analysis of Pattern and Association Rule Mining Approaches

Thabet Slimani, Amor Lazzez

Research article

The process of data mining produces various patterns from a given data source. The most recognized data mining tasks are the discovery of frequent itemsets, frequent sequential patterns, frequent sequential rules, and frequent association rules, and numerous efficient algorithms have been proposed for these tasks. Frequent pattern mining has been a focal topic in data mining research with a substantial body of literature, and significant progress has been made, ranging from efficient algorithms for frequent itemset mining in transaction databases to more complex algorithms for sequential pattern mining, structured pattern mining, and correlation mining. Association Rule Mining (ARM) is one of the most widely used data mining techniques, designed to group objects from large databases in order to extract interesting correlations and relations among huge amounts of data. In this article, we provide a brief review and analysis of the current status of frequent pattern mining and discuss some promising research directions. Additionally, the paper includes a comparative study of the performance of the described approaches.
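For concreteness, the sketch below computes the two measures underlying association rules, support and confidence, for a rule X -> Y over a toy transaction database; the transactions are hypothetical.

```python
# Support and confidence of a rule X -> Y over a toy transaction database.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent)."""
    return support(antecedent | consequent) / support(antecedent)

x, y = {"bread"}, {"milk"}
print(support(x | y))      # support of the rule {bread} -> {milk}: 0.5
print(confidence(x, y))    # confidence of the same rule: ~0.67
```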

Free

Efficient Networks Communication Routing Using Swarm Intelligence

Koushal Kumar

Research article

As demonstrated by natural biological swarms, collective intelligence has an abundance of desirable properties for problem-solving tasks such as network routing. The focus of this paper is on the application of swarm-based intelligence to information routing in communication networks. Networks keep growing and adopting new platforms as new technologies emerge, and with new demands and requirements their topologies and complexity increase over time. It is therefore becoming very difficult to maintain quality of service and network reliability using current routing algorithms. Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial, and the concept is employed in work on artificial intelligence. A new class of algorithms inspired by swarm intelligence is currently being developed that can potentially solve numerous problems of modern communication networks. These algorithms rely on the interaction of a multitude of simultaneously acting agents. In this paper we discuss the disadvantages of previously used network routing algorithms and show how swarm intelligence can be applied to overcome these problems.
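A toy ant-colony-style sketch of the swarm routing idea follows: ants pick next hops in proportion to pheromone, and pheromone on used links is reinforced and evaporated. The four-node topology and constants are hypothetical; this illustrates the general SI principle rather than any specific protocol from the paper.

```python
# Ants walk from source to destination choosing links by pheromone level;
# pheromone evaporates everywhere and is deposited on links that were used.
import random

neighbors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
pheromone = {(u, v): 1.0 for u in neighbors for v in neighbors[u]}
EVAPORATION, DEPOSIT = 0.1, 1.0

def ant_walk(src, dst):
    """One ant walks from src to dst, choosing hops proportionally to pheromone."""
    path, node = [], src
    while node != dst:
        choices = neighbors[node]
        weights = [pheromone[(node, n)] for n in choices]
        nxt = random.choices(choices, weights=weights, k=1)[0]
        path.append((node, nxt))
        node = nxt
    return path

for _ in range(100):                      # many ants gradually bias toward good links
    path = ant_walk("A", "D")
    for link in pheromone:                # evaporation on every link
        pheromone[link] *= (1 - EVAPORATION)
    for link in path:                     # reinforcement on the links actually used
        pheromone[link] += DEPOSIT / len(path)

print(max(pheromone, key=pheromone.get))  # strongest link learned by the colony
```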

Free

Efficient Sensor-Cloud Communication using Data Classification and Compression

Md. Tanvir Rahman, Md. Sifat Ar Salan, Taslima Ferdaus Shuva, Risala Tasin Khan

Research article

A Wireless Sensor Network, a group of specialized sensors with a communication infrastructure for monitoring and controlling conditions at diverse locations, is a recent technology that is gaining popularity day by day. Cloud computing, in turn, is a type of high-performance computing that uses a network of remote servers to store, manage, and process data instead of a local server or personal computer. An architecture called sensor-cloud provides useful services by combining the capabilities of both. In order to provide such services, a large volume of sensor network data needs to be transported to the cloud gateway, which demands considerable bandwidth and time. In this paper, we propose an efficient sensor-cloud communication approach that minimizes this bandwidth and time requirement by using statistical classification based on machine learning together with compression using the deflate algorithm, with minimal loss of information. Experimental results demonstrate the overall efficiency of the proposed method over traditional and related approaches.
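On the compression side, Python's zlib module implements the deflate algorithm, so a batch of sensor readings can be serialized and deflate-compressed before upload to the cloud gateway; the readings below are hypothetical and the paper's machine-learning classifier is not reproduced here.

```python
# Serialize a batch of (toy) sensor readings and deflate-compress them losslessly.
import json
import zlib

readings = [{"node": i, "temp": 25.0 + (i % 3) * 0.1} for i in range(1000)]

payload = json.dumps(readings).encode("utf-8")
compressed = zlib.compress(payload, level=9)        # deflate-compressed payload

print(len(payload), "->", len(compressed), "bytes")
restored = json.loads(zlib.decompress(compressed))  # lossless round trip
assert restored == readings
```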

Free

Electronic police ambush system via vehicles/drivers safety authentication system

Hussam Elbehiery

Research article

The suggested system aims to raise the level of security of Egyptian police ambushes and to reduce overcrowding, since traditional police ambushes have performed poorly over the last decade in Egypt due to acts of terrorism and threats to national security, causing human casualties and harming tourism income and the nation's reputation. Direct interaction with vehicles inside police ambushes can cause unnecessary congestion. The introduced system tries to minimize the human element in the police ambush by utilizing RFID technology: an RFID card is assigned to each vehicle and each driver, acting like an electronic passport at the RFID sensing checkpoint before the ambush. After the information saved on the RFID card inside the vehicle is checked against the records previously registered on the system server, the vehicle is given either a "PASS" signal or an "ARRESTED" signal, the latter triggering a photo capture of the vehicle. The system thus aims to increase the security level and reduce casualties at police ambushes, whether fixed or mobile.
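The checkpoint decision described above could be sketched roughly as follows; the registry contents, tag identifiers and the capture_photo() hook are hypothetical placeholders rather than the paper's implementation.

```python
# Look up a scanned RFID tag in the server registry and map it to PASS/ARRESTED.
registered_vehicles = {
    "TAG-001": {"plate": "ABC 123", "wanted": False},
    "TAG-002": {"plate": "XYZ 789", "wanted": True},
}

def capture_photo(tag_id: str) -> None:
    print(f"photo captured for {tag_id}")   # stand-in for the camera trigger

def check_vehicle(tag_id: str) -> str:
    record = registered_vehicles.get(tag_id)
    if record is None or record["wanted"]:  # unknown or flagged vehicle
        capture_photo(tag_id)
        return "ARRESTED"
    return "PASS"

print(check_vehicle("TAG-001"))  # PASS
print(check_vehicle("TAG-002"))  # ARRESTED
```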

Free

Empirical Estimation of Hybrid Model: A Controlled Case Study

Sadaf Un Nisa, M. Rizwan Jameel Qureshi

Research article

Scrum and Extreme Programming (XP) are frequently used models among the agile models, whereas the Rational Unified Process (RUP) is one of the widely used conventional plan-driven software development models. The agile and plan-driven approaches both have their own strengths and weaknesses. The RUP model has certain drawbacks, such as a tendency to run over budget, slow adaptation to rapidly changing requirements, and a reputation for being impractical for small and fast-paced projects. The XP model also has drawbacks, such as weak documentation and poor performance on medium and large development projects; on the other hand, XP has a concrete set of engineering practices and emphasizes teamwork, where managers, customers, and developers are all equal partners in collaborative teams. Scrum is more concerned with project management. It has seven practices, namely Scrum Master, Scrum teams, Product Backlog, Sprint, Sprint Planning Meeting, Daily Scrum Meeting, and Sprint Review. Keeping this context in view, this paper proposes a hybrid model named SPRUP that combines the strengths of Scrum, XP, and RUP while eliminating their weaknesses in order to produce high-quality software. The proposed SPRUP model is validated through a controlled case study.

Free

Empirical Study of an Improved Component Based Software Development Model using Expert Opinion Technique

Asif Irshad Khan, Md. Mottahir Alam, Noor-ul-Qayyum, Usman Ali Khan

Research article

The IT industry in the present market situation faces high demand for performance and burgeoning user expectations, with the pressure manifesting itself in three forms: development cost, time-to-market, and product quality. Researchers have proposed several techniques to deal effectively with these conflicting scenarios and draw optimized output. One of the relevant techniques in this context is Component Based Software Development (CBSD), with a targeted and discriminative approach influencing all phases of development. Although CBSD proposes a multi-faceted approach for complex scenarios, its prime focus lies in a "write once and reuse multiple times" methodology with either no or minor modifications. The model has been markedly successful in large enterprise applications, with companies deriving benefits from shorter development time, increased productivity, and better product quality. This paper presents and discusses an empirical study of an Improved Component Based Software Development (ICBD) Model using the expert opinion technique, covering both the component based software development and the component development phases. The ICBD Model tries to overcome some of the issues in contemporary CBD models. A case study was conducted in which experienced professionals working in the IT industry investigated and evaluated our model. The results show that the improved model registers significant improvement over previous models suggested by other researchers.

Free

Empirical and theoretical validation of a use case diagram complexity metric

Sangeeta Sabharwal, Preeti Kaur, Ritu Sibal

Research article

A key artifact produced during object-oriented requirements analysis is the Use Case Diagram. The functional requirements of the system under development and the relationship between the system and the external world are displayed with the help of the Use Case Diagram. Therefore, the quality of this artifact must be assured in order to build good-quality software. Use Case Diagram quality is assessed by metrics that have been proposed in the past by researchers based on countable features of the diagram, such as the number of actors or the number of scenarios per Use Case, but these metrics do not consider Use Case dependency relations. In our previous paper, we proposed a complexity metric defined over the association and dependency relationships present in the Use Case Diagram. The key objective of this paper is to validate this complexity metric theoretically, using Briand's framework, and empirically, through a controlled experiment. The results show that both the theoretical and the empirical validation are performed successfully.
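A purely illustrative sketch of a complexity score computed from association and dependency counts is shown below; the weights and the example diagram are hypothetical and do not reproduce the metric defined in the authors' previous paper.

```python
# Weighted count of associations and dependencies in a use case diagram.
from dataclasses import dataclass

@dataclass
class UseCaseDiagram:
    actors: int
    associations: int      # actor-to-use-case links
    includes: int          # <<include>> dependencies
    extends: int           # <<extend>> dependencies

def complexity(d: UseCaseDiagram, w_assoc=1.0, w_dep=2.0) -> float:
    # Dependencies are weighted higher than plain associations in this sketch.
    return w_assoc * d.associations + w_dep * (d.includes + d.extends)

print(complexity(UseCaseDiagram(actors=3, associations=7, includes=2, extends=1)))
```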

Free

Energy Detection Performance of Spectrum Sensing in Cognitive Radio

Md. Shamim Hossain, Md. Ibrahim Abdullah, Mohammad Alamgir Hossain

Research article

Spectrum sensing is a challenging task for cognitive radio. Energy detection is one of the most popular spectrum sensing techniques for cognitive radio. In this paper, we analyze the performance of the energy detection technique for detecting the primary user (PU). Simulation results show that the probability of detection increases significantly as the signal-to-noise ratio increases. It is also observed that the detection probability decreases as the bandwidth factor increases.
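As a worked illustration of the energy detection principle, the following sketch estimates the probability of detection by Monte Carlo simulation for a few SNR values; the sample size, false-alarm rate and SNR grid are hypothetical and this is not the paper's simulation setup.

```python
# Energy detection: compare received-signal energy over N samples against a
# threshold chosen for a target false-alarm probability (noise-only statistic
# is chi-square with N degrees of freedom for unit-variance Gaussian noise).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
N, pfa, trials = 100, 0.05, 5000
threshold = chi2.ppf(1 - pfa, df=N)

for snr_db in (-10, -5, 0):
    snr = 10 ** (snr_db / 10)
    detections = 0
    for _ in range(trials):
        signal = np.sqrt(snr) * rng.standard_normal(N)   # PU signal at given SNR
        noise = rng.standard_normal(N)                    # unit-variance noise
        energy = np.sum((signal + noise) ** 2)            # energy test statistic
        detections += energy > threshold
    print(snr_db, "dB ->", detections / trials)           # probability of detection
```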

Free

Energy efficiency analysis by fine grained modification in link state routing protocol

Masuduzzaman, Istiak Ahmed, Allin Arzoo, Tanuka Sharmin

Research article

A major concern in the modern explosive development of information and communication technology, especially in wired networking, is the reduction of unnecessary energy consumption. This is needed because of the expected environmental impact and potential economic benefits; these issues are usually referred to as "green networking". Green networking is about increasing awareness of energy consumption in routing protocols, in devices, and in the structure of the whole networking system. Here, different topically related issues are described along with their possible impact on the green networking field. Special attention is paid to routing protocols and algorithms, especially the link state routing protocol, which is generally used for calculating the shortest path. This research paper tries to find a new or modified routing technique that is more effective for every router in terms of energy efficiency.
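To make the link-state setting concrete, the sketch below runs Dijkstra's shortest-path algorithm with an energy-aware cost per link instead of a pure hop or delay metric; the topology and costs are hypothetical and this is not the paper's proposed modification.

```python
# Each router in a link-state protocol computes shortest paths over the
# advertised topology; here the link weight is an energy cost per packet.
import heapq

graph = {
    "R1": {"R2": 3.0, "R3": 1.0},
    "R2": {"R4": 1.0},
    "R3": {"R2": 1.0, "R4": 4.0},
    "R4": {},
}

def dijkstra(src):
    dist = {node: float("inf") for node in graph}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, cost in graph[u].items():
            if d + cost < dist[v]:
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra("R1"))   # energy-aware distances from R1
```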

Free

Energy efficient routing protocol for maximum lifetime in wireless sensor networks

Ademola P. Abidoye

Research article

Wireless sensor networks (WSNs) have become a popular research area that is attracting attention from both the research and practitioner communities due to their wide range of applications, including real-time sensing for audio delivery, imaging, video streaming, environmental monitoring, industrial applications, and remote monitoring. WSNs are constrained by limited energy due to the physical size of the nodes, so efficient use of the limited energy resources of sensor nodes is important for maximizing network lifetime. An energy efficient routing protocol for maximum lifetime in wireless sensor networks (EERPM) is proposed. Lifetime optimization models for sensor nodes are formulated subject to an energy consumption constraint, a data flow conservation constraint, a maximum data rate constraint, and a link capacity constraint, and these models are solved for the maximum lifetime routing problem. Sensor nodes transmit their data packets based on link capacities that are interference-free among the sets of links. Moreover, algorithms are developed for sensor node coverage and for maximizing sensor node lifetime. Simulation results show that EERPM performs better than the MLCS, MLCAL, and AEEC protocols: it reduces data gathering latency, achieves load balancing, and extends network lifetime compared to the selected related protocols.
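For illustration, a generic maximum-lifetime routing formulation with the four kinds of constraints named above could be written as below; the symbols and the exact form are assumptions, not necessarily the models used in the paper. Here f_ij is the total data node i sends to node j during the lifetime T, r_i the sensing rate, E_i the initial energy, and c_ij the link capacity.

```latex
% Generic maximum-lifetime routing model (illustrative, not the paper's exact model).
\begin{align*}
\max_{T,\; f_{ij}\ge 0} \quad & T \\
\text{s.t.}\quad
& \sum_{j} f_{ij} - \sum_{j} f_{ji} = r_i\,T && \forall i \neq \text{sink} && \text{(flow conservation)}\\
& \sum_{j} e^{\mathrm{tx}}_{ij} f_{ij} + e^{\mathrm{rx}} \sum_{j} f_{ji} \le E_i && \forall i && \text{(energy consumption)}\\
& f_{ij} \le c_{ij} && \forall (i,j) && \text{(link capacity)}\\
& r_i \le r_i^{\max} && \forall i && \text{(maximum data rate)}
\end{align*}
```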

Free

Energy-Efficient PSO and Latency Based Group Discovery Algorithm in Cloud Scheduling

Nandhini A., Saravana Balaji B.

Research article

Cloud computing represents a major paradigm shift in computing systems. It provides high scalability and flexibility across an assortment of on-demand services. Improving the performance of a distributed application in a multi-cloud environment requires low energy consumption and minimal inter-node latency. A key problem is that the energy efficiency of a cloud data center is low when the number of servers is small and improves as it grows. To address the energy efficiency and network latency problems, a novel energy-efficient particle swarm optimization model for multi-job scheduling and a latency model for grouping nodes with respect to network latency are proposed. Scheduling is carried out on the basis of network latency and energy efficiency. The scheduling schema is the main part of the Cloud Scheduler component, which helps the scheduler make scheduling decisions based on different criteria. It also works well with incomplete latency information and performs intelligent grouping on the basis of both network latency and energy efficiency. A realistic particle swarm optimization algorithm is designed for the cloud servers, an overall energy-competence objective is constructed from the purpose of the servers, and a fitness value is calculated for each cloud server. In addition, a local search operator is introduced to speed up convergence and improve the exploration ability of the algorithm. Finally, experiments demonstrate that the proposed algorithm is effective and efficient.
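A generic particle swarm optimization sketch for multi-job scheduling is shown below; particles encode job-to-server assignments and the fitness mixes makespan with an energy term. Job sizes, server power values and PSO constants are hypothetical and this is not the paper's algorithm.

```python
# Standard PSO over continuous positions that are decoded to server indices.
import random

JOBS = [4, 2, 7, 3, 5]            # job lengths
POWER = [1.0, 1.5, 2.0]           # per-unit-time energy cost of each server
N_SERVERS, N_PARTICLES, ITERS = len(POWER), 20, 100
W, C1, C2 = 0.7, 1.5, 1.5         # inertia and acceleration coefficients

def fitness(position):
    assignment = [int(x) % N_SERVERS for x in position]   # decode to servers
    loads = [0.0] * N_SERVERS
    energy = 0.0
    for job, srv in zip(JOBS, assignment):
        loads[srv] += job
        energy += job * POWER[srv]
    return max(loads) + 0.1 * energy                       # makespan + energy term

swarm = [[random.uniform(0, N_SERVERS) for _ in JOBS] for _ in range(N_PARTICLES)]
vel = [[0.0] * len(JOBS) for _ in range(N_PARTICLES)]
pbest = [p[:] for p in swarm]
gbest = min(swarm, key=fitness)[:]

for _ in range(ITERS):
    for i, p in enumerate(swarm):
        for d in range(len(JOBS)):                         # velocity/position update
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - p[d])
                         + C2 * r2 * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if fitness(p) < fitness(pbest[i]):
            pbest[i] = p[:]
        if fitness(p) < fitness(gbest):
            gbest = p[:]

print([int(x) % N_SERVERS for x in gbest], fitness(gbest))
```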

Free

Energy-efficient Secure Directed Diffusion Protocol for Wireless Sensor Networks

Malika BELKADI, Rachida AOUDJIT, Mehammed DAOUI, Mustapha LALAM

Research article

In wireless sensor networks, it is crucial to design and employ energy-efficient communication protocols, since nodes are battery-powered and their lifetimes are therefore limited. These constraints, combined with the great number of applications running on such networks, pose many challenges (limited energy, low security, and so on) to the design and management of wireless sensor networks and demand careful attention. In this paper, we present a new version of the Directed Diffusion routing protocol that provides both security and energy efficiency in wireless sensor networks.

Free

Enhanced Dynamic Algorithm of Genome Sequence Alignments

Arabi E. keshk

Research article

The merging of biology and computer science has created a new field called computational biology, which explores the capacity of computers to gain knowledge from biological data (bioinformatics). Computational biology is rooted in the life sciences as well as in computer and information sciences and technologies. A central problem in computational biology is sequence alignment, a way of arranging sequences of DNA, RNA, or protein to identify regions of similarity and the relationships between sequences. This paper introduces an enhancement of the dynamic programming algorithm for genome sequence alignment, called EDAGSA. It fills only the three main diagonals of the scoring matrix instead of filling the entire matrix with unused data, obtaining the optimal solution while decreasing the execution time and therefore increasing performance. To illustrate its effectiveness, the proposed algorithm is compared with traditional methods such as the Needleman-Wunsch, Smith-Waterman, and longest common subsequence algorithms. In addition, a database is implemented so that the algorithm can be used for multi-sequence alignment, searching for the optimal sequence that matches a given query sequence.
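A minimal sketch of the banded dynamic-programming idea follows: only the main diagonal and its two neighbours are filled instead of the whole Needleman-Wunsch matrix. The scoring values are hypothetical and this illustrates the general technique, not the EDAGSA algorithm itself.

```python
# Global alignment score restricted to cells with |i - j| <= BAND.
MATCH, MISMATCH, GAP = 1, -1, -2
BAND = 1   # three diagonals in total

def banded_align_score(a, b):
    neg = float("-inf")
    dp = [[neg] * (len(b) + 1) for _ in range(len(a) + 1)]
    dp[0][0] = 0
    for i in range(len(a) + 1):
        lo, hi = max(0, i - BAND), min(len(b), i + BAND)
        for j in range(lo, hi + 1):
            if i == 0 and j == 0:
                continue
            best = neg
            if i > 0 and j > 0 and dp[i - 1][j - 1] > neg:
                s = MATCH if a[i - 1] == b[j - 1] else MISMATCH
                best = max(best, dp[i - 1][j - 1] + s)
            if i > 0 and dp[i - 1][j] > neg:
                best = max(best, dp[i - 1][j] + GAP)
            if j > 0 and dp[i][j - 1] > neg:
                best = max(best, dp[i][j - 1] + GAP)
            dp[i][j] = best
    return dp[len(a)][len(b)]

print(banded_align_score("GATTACA", "GATTGCA"))   # optimal score within the band
```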

Free

Enhanced Initial Centroids for K-means Algorithm

Aleta C. Fabregas, Bobby D. Gerardo, Bartolome T. Tanguilig III

Research article

This paper focuses on enhanced initial centroids for the K-means algorithm. The original K-means uses a random choice of initial seeds, which is a major limitation of the algorithm because it produces less reliable clustering results. The enhanced K-means algorithm computes a weighted mean to improve centroid initialization. This paper compares K-means with the enhanced K-means algorithm and shows that the new method of selecting initial seeds is better in terms of mathematical computation and reliability.
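Weighted-mean seeding could be sketched along the following lines; the particular weighting scheme (weights decaying with distance from the overall mean) and the two-cluster toy data are assumptions, since the paper's exact computation is not reproduced here.

```python
# Seed K-means from a weighted mean of the data instead of random points.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# Weighted mean of the data set (weights decay with distance from the mean).
d = np.linalg.norm(X - X.mean(axis=0), axis=1)
w = 1.0 / (1.0 + d)
weighted_mean = (w[:, None] * X).sum(axis=0) / w.sum()

# Use the points nearest to and farthest from that weighted mean as two seeds.
order = np.argsort(np.linalg.norm(X - weighted_mean, axis=1))
seeds = X[[order[0], order[-1]]]

km = KMeans(n_clusters=2, init=seeds, n_init=1).fit(X)
print(km.cluster_centers_)
```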

Free

Enhanced PROBCONS for multiple sequence alignment in cloud computing

Eman M. Mohamed, Hamdy M. Mousa, Arabi E. keshk

Research article

Multiple protein sequence alignment (MPSA) aims to reveal the similarity between multiple protein sequences with increased accuracy, but it becomes a critical bottleneck for large-scale protein sequence data sets. It is therefore vital for existing MPSA tools to be run in a parallelized fashion, and combining MPSA tools with cloud computing can improve both speed and accuracy for large data sets. PROBCONS is a probabilistic-consistency tool for progressive MPSA based on hidden Markov models; it achieves the maximum expected accuracy but is time-consuming. In this paper, the proposed approach first clusters the large set of protein sequences into groups of structurally similar sequences. This classification is based on secondary structure, the longest common subsequence (LCS), and amino acid features. The PROBCONS MPSA tool is then run on the clusters in parallel, and finally the per-cluster PROBCONS alignments are merged. The proposed algorithm runs on Amazon Elastic Compute Cloud (EC2) and achieves the highest alignment accuracy. Feature-based classification captures protein sequence, structure, and function; these features strongly affect accuracy and reduce the search time needed to produce the final alignment.
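The cluster-then-align-in-parallel idea could be sketched as below: sequences are grouped (here by a trivial stand-in for the secondary-structure/LCS/amino-acid features), each cluster is written to a FASTA file, and one PROBCONS job per cluster is launched in parallel with multiprocessing. The `probcons <file>` invocation and the toy sequences are assumptions about the local setup.

```python
# Group sequences, write one FASTA per group, and align the groups in parallel.
import subprocess
from multiprocessing import Pool

sequences = {"s1": "MKTAYIAKQR", "s2": "MKVAYIAKQR", "s3": "GGSLQRTWNP", "s4": "GGTLQRTWNP"}

def cluster_key(seq):
    return seq[0]     # hypothetical stand-in for feature-based clustering

clusters = {}
for name, seq in sequences.items():
    clusters.setdefault(cluster_key(seq), []).append((name, seq))

def align_cluster(item):
    key, members = item
    path = f"cluster_{key}.fasta"
    with open(path, "w") as f:
        for name, seq in members:
            f.write(f">{name}\n{seq}\n")
    # Assumed invocation; adjust to however PROBCONS is installed locally.
    out = subprocess.run(["probcons", path], capture_output=True, text=True)
    return key, out.stdout

if __name__ == "__main__":
    with Pool() as pool:
        aligned = dict(pool.map(align_cluster, clusters.items()))
    # Merging the per-cluster alignments (the paper's final step) is omitted here.
```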

Free

Enhancement of Single Document Text Summarization using Reinforcement Learning with Non-Deterministic Rewards

K. Karpagam, A. Saradha, K. Manikandan, K. Madusudanan

Research article

A text summarization system generates short, concise summaries of an original document for given user queries. Machine-generated summaries use information retrieval techniques to search for relevant answers in a large corpus. This research article proposes a novel framework for generating machine summaries using reinforcement learning with a non-deterministic reward function. Experiments are carried out using ROUGE evaluation metrics on the DUC 2001 and 20 Newsgroups data sets. Evaluation results, testing the hypothesis of automatic summarization on the given data sets, show a statistically significant improvement for answering complex questions (F-test: actual vs. critical values).
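One way to picture learning from a non-deterministic reward is the epsilon-greedy sketch below, where candidate summary sentences are selected repeatedly and value estimates are updated from a noisy ROUGE-like score; the candidates, noise model and update rule are assumptions, not the paper's framework.

```python
# Epsilon-greedy selection among candidate sentences with a noisy reward signal.
import random

true_quality = [0.30, 0.55, 0.45]        # hidden "ROUGE" quality per candidate
q_est = [0.0] * len(true_quality)        # learned value estimates
counts = [0] * len(true_quality)
EPSILON = 0.1

def noisy_reward(arm):
    # Non-deterministic reward: true quality plus evaluation noise.
    return true_quality[arm] + random.gauss(0, 0.1)

for _ in range(2000):
    if random.random() < EPSILON:
        arm = random.randrange(len(q_est))                      # explore
    else:
        arm = max(range(len(q_est)), key=lambda a: q_est[a])    # exploit
    r = noisy_reward(arm)
    counts[arm] += 1
    q_est[arm] += (r - q_est[arm]) / counts[arm]                # incremental mean

print(q_est)   # estimates approach the hidden qualities; best candidate is index 1
```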

Free

Enhancing Big Data Value Using Knowledge Discovery Techniques

Mai Abdrabo, Mohammed Elmogy, Ghada Eltaweel, Sherif Barakat

Research article

The world has been flooded with data due to technological development, and the term Big Data has emerged to describe this gigantic volume; different kinds of fast-arriving data are doubling every second. We have to profit from this enormous surge of data by converting it into knowledge. Knowledge Discovery (KDD) can enhance the extraction of value from Big Data based on techniques and technologies such as Hadoop, MapReduce, and NoSQL, and the use of Big Data value is critical in different fields. This survey discusses the expansion of data that led the world to the Big Data expression. Big Data has distinctive characteristics: volume, variety, velocity, value, veracity, variability, viscosity, virality, ambiguity, and complexity. We describe the connection between Big Data and KDD techniques for reaching data value, discuss Big Data applications deployed by large organizations, and introduce the characteristics of Big Data, which represent a significant challenge for Big Data management. Finally, some important future directions in the Big Data field are presented.

Free

Enhancing Brain Tumor Classification in MRI: Leveraging Deep Convolutional Neural Networks for Improved Accuracy

Shourove Sutradhar Dip, Md. Habibur Rahman, Nazrul Islam, Md. Easin Arafat, Pulak Kanti Bhowmick, Mohammad Abu Yousuf

Research article

Brain tumors are among the deadliest forms of cancer, with a significant death rate among patients. Identifying and classifying brain tumors are critical steps in understanding their behavior, and the best way to treat a brain tumor depends on its type, size, and location. In the modern era, radiologists determine brain tumor locations using magnetic resonance imaging (MRI). However, manual examination of MRI scans is time-consuming and requires skill, and misdiagnosis of tumors can lead to inappropriate medical therapy, reducing patients' chances of survival. As Deep Learning (DL) technology advances, Computer Assisted Diagnosis (CAD) and Machine Learning (ML) techniques have been developed to aid in the detection of brain tumors, allowing radiologists to identify them more accurately. This paper proposes an MRI image classification approach that uses a VGG16 model as the basis of a deep convolutional neural network (DCNN) architecture. The proposed model was evaluated on two brain MRI data sets from Kaggle. Trained on both data sets in Google Colab, the proposed method achieved strong performance, with maximum overall accuracies of 96.67% and 97.67%, respectively. The proposed model worked well during training, was highly accurate, and its performance exceeds that of existing techniques.
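A minimal transfer-learning sketch in the spirit of this abstract: a frozen ImageNet-pretrained VGG16 base with a small binary classification head in tf.keras. The input size, head layout and hyperparameters are assumptions, not the paper's exact architecture or training configuration.

```python
# VGG16 feature extractor with a small tumor / no-tumor classification head.
import tensorflow as tf

base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # keep the pretrained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # tumor vs. no tumor
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then use an image dataset loaded e.g. with
# tf.keras.utils.image_dataset_from_directory("path/to/mri", image_size=(224, 224)).
```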

Free

Enhancing Drug Recommender System for Peptic Ulcer Treatment

Theresa O. Omodunbi, Grace E. Alilu, Kennedy O. Obohwemu, Rhoda N. Ikono

Research article

Drug Recommender Systems (DRS) streamline the prescription process and contribute to better healthcare. This study developed a DRS that recommends appropriate drug(s) for the treatment of an ailment, using Peptic Ulcer Disease (PUD) as a case study. Patient and drug data were elicited from MIMIC-IV and Drugs.com, respectively. These data were analysed and used in the design of the DRS model, which was based on a hybrid recommendation approach combining a clustering algorithm, the Collaborative Filtering approach (CF), and the Knowledge-Based Filtering approach (KBF). The factors considered in recommending appropriate drugs were age, gender, body weight, allergies, and drug interactions. The model was implemented in the Python programming language with the Flask framework for web development and Visual Studio Code as the Integrated Development Environment. The performance of the system was evaluated using precision, recall, accuracy, Root Mean Squared Error (RMSE), and a usability test. The evaluation was carried out in two phases. Firstly, the CF component was evaluated by splitting the MIMIC-IV dataset into a 70% (60,018) training set and a 30% (25,722) test set, which resulted in a precision of 85.48%, a recall of 85.58%, and an RMSE of 0.74. Secondly, the KBF component was evaluated on 30 different cases by manually comparing the system's recommendations with those of an expert, which resulted in a precision of 77%, a recall of 83%, an accuracy of 81%, and an RMSE of 0.24. The usability test showed a high level of system performance. The addition of the KBF reduced the error rate between actual and predicted recommendations, so the system had a high ability to recommend appropriate drug(s) for PUD.
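For reference, the evaluation metrics reported above (precision, recall, accuracy and RMSE) can be computed as in the sketch below; the expert labels, predictions and scores are hypothetical toy values.

```python
# Precision, recall, accuracy from binary labels; RMSE from rating-style scores.
import math

# 1 = drug appropriate, 0 = not appropriate (expert vs. system, per case)
expert =    [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
predicted = [1, 0, 0, 1, 1, 1, 0, 0, 1, 0]

tp = sum(e == p == 1 for e, p in zip(expert, predicted))
fp = sum(e == 0 and p == 1 for e, p in zip(expert, predicted))
fn = sum(e == 1 and p == 0 for e, p in zip(expert, predicted))
tn = sum(e == p == 0 for e, p in zip(expert, predicted))

precision = tp / (tp + fp)
recall = tp / (tp + fn)
accuracy = (tp + tn) / len(expert)

# RMSE over predicted vs. actual suitability scores (hypothetical values)
actual_scores = [4.0, 3.5, 2.0, 5.0]
pred_scores =   [3.5, 3.0, 2.5, 4.5]
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual_scores, pred_scores))
                 / len(actual_scores))

print(precision, recall, accuracy, rmse)
```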

Free
