Journal articles - International Journal of Information Technology and Computer Science

Total articles: 1195

Still Image Compression Algorithm Based on Directional Filter Banks

Chunling Yang, Duanwu Cao, Li Ma

Research article

Hybrid wavelet and directional filter banks (HWD) is an effective multi-scale geometrical analysis method. Compared to the wavelet transform, it can better capture the directional information of images. But the ringing artifact, which is caused by coefficient quantization in the transform domain, is the biggest drawback of image compression algorithms in the HWD domain. In this paper, by researching the relationship between directional decomposition and the ringing artifact, an improved decomposition approach for HWD (IHWD) is proposed to reduce the ringing artifact. In addition, the IHWD algorithm and a directional weighting model are applied to the JPEG2000 coding framework, and a new still image compression algorithm, IJPEG2000, is proposed. The experimental results show that IJPEG2000 performs better than JPEG2000 both in objective evaluation and in subjective visual quality.
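The ringing the abstract refers to arises when transform coefficients are coarsely quantized before entropy coding. A minimal sketch of that cause, using a plain 2-D wavelet transform (PyWavelets) rather than the authors' HWD/IHWD decomposition; the image and quantization step are made up:

```python
# Quantizing transform coefficients degrades the reconstruction; this
# uses a plain 2-D DWT, not the authors' HWD/IHWD decomposition.
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.random((64, 64))            # stand-in for a grayscale image

coeffs = pywt.wavedec2(image, "db4", level=3)        # multi-level 2-D DWT
arr, slices = pywt.coeffs_to_array(coeffs)

step = 0.5
arr_q = np.round(arr / step) * step                  # uniform coefficient quantization

coeffs_q = pywt.array_to_coeffs(arr_q, slices, output_format="wavedec2")
recon = pywt.waverec2(coeffs_q, "db4")[:64, :64]
print(f"MSE after quantization: {np.mean((image - recon) ** 2):.5f}")
```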


A CV parser model using entity extraction process and big data tools

Papiya Das, Manjusha Pandey, Siddharth Swarup Rautaray

Research article

Private organizations like offices, libraries and hospitals made use of computers for computerized databases when the computer became a most cost-effective device. Later, E. F. Codd introduced the relational database model, i.e., the conventional database. A conventional database can be enhanced to a temporal database. Conventional or traditional databases are structured in nature, but we do not always have pre-organized data; we have to deal with different types of data, and that data is huge in volume, i.e., Big Data. Big Data mostly emphasizes internal data sources like transactions, log data, emails etc. From these sources, highly enriched information is extracted by means of text data mining, or text analytics. Entity extraction is a part of text analysis. An entity can be anything like people, companies, places, money, links, phone numbers etc. Text documents, blog posts or any long articles contain a large number of entities in many forms. Extracting those entities to gain valuable information is the main target. Extraction of entities is possible in natural language processing (NLP) with the R language. In this research work we briefly discuss the text analysis process and how to extract entities with different big data tools.
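The paper performs entity extraction in R; as an illustration of the same step, here is a minimal sketch in Python using spaCy's pretrained NER model (the CV snippet is invented):

```python
# Minimal entity-extraction sketch with spaCy's pretrained English model
# (install: pip install spacy && python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
cv_text = ("Jane Doe worked at Acme Corp in London from 2015 to 2019 "
           "and can be reached at +1-555-0100.")

for ent in nlp(cv_text).ents:
    # ent.label_ is the entity type: PERSON, ORG, GPE, DATE, ...
    print(f"{ent.text:30s} -> {ent.label_}")
```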


A Clientless Endpoint Authentication Scheme Based on TNC

Kun Wu, Zhongying Bai

Research article

Trusted Network Connect (TNC) proposes a hierarchical and scalable architecture to securely and efficiently control endpoints' admission to the trusted computing platform, so as to implement message passing and resource sharing. But not all endpoints support or run a functional TNC client performing integrity checking, which represents a security risk in many environments. We have to consider how to allow these "clientless endpoints" to access trusted networks, which is significant for improving the TNC mechanism. To solve this problem, under the framework of TNC, this paper proposes a clientless endpoint authentication scheme named CEAS. CEAS designs five enforcement mechanisms and the related message format to authenticate and authorize clientless endpoints. Furthermore, after the endpoints have connected to the networks, their initial determinations may be dynamically modified according to updated circumstances. The experiment results prove that CEAS can effectively and flexibly enable clientless endpoints to access trusted networks in a controlled and secure manner.


A Cluster Based Job Scheduling Algorithm for Grid Computing

Reza Fotohi, Mehdi Effatparvar

Research article

Grid computing enables sharing, selection and aggregation of computing resources for solving complex and large-scale scientific problems. The resources making up a grid need to be managed to provide a good quality of service. Grid scheduling is a vital component of a computational grid infrastructure. This paper presents a dynamic cluster based job scheduling algorithm for efficient execution of user jobs. It also includes a comparative performance analysis of our proposed job scheduling algorithm along with other well-known job scheduling algorithms, considering parameters like average waiting time, average turnaround time, average response time and average total completion time. The results show that our proposed scheduling algorithm (CHS) achieves the best average waiting time, average turnaround time, average response time and average total completion time compared to other job scheduling approaches.
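A sketch of how the four comparison metrics are typically computed, here for a plain FCFS schedule over hypothetical jobs rather than the paper's CHS algorithm:

```python
# Each job is (arrival_time, burst_time); values are hypothetical.
jobs = [(0, 5), (1, 3), (2, 8), (3, 2)]

clock = 0
waits, turnarounds, responses, completions = [], [], [], []
for arrival, burst in sorted(jobs):        # FCFS order, for illustration
    start = max(clock, arrival)
    finish = start + burst
    waits.append(start - arrival)          # waiting time
    responses.append(start - arrival)      # = waiting time when non-preemptive
    turnarounds.append(finish - arrival)   # turnaround time
    completions.append(finish)             # completion time
    clock = finish

n = len(jobs)
print("avg waiting   :", sum(waits) / n)
print("avg turnaround:", sum(turnarounds) / n)
print("avg response  :", sum(responses) / n)
print("avg completion:", sum(completions) / n)
```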


A Clustering-based Offline Signature Verification System for Managing Lecture Attendance

Laruba Adama, Hamza O. Salami

Research article

Attendance management in the classroom is important because in many educational institutions a sufficient number of class attendances is a requirement for earning a regular grade in a course. Automatic signature verification is an active research area from both scientific and commercial points of view, as signatures are the most legally and socially acceptable means of identification and authorization of an individual. Different approaches have been developed to achieve accurate verification of signatures. This paper proposes a novel automatic lecture attendance verification system based on unsupervised learning. Here, lecture attendance verification is addressed as an offline signature verification problem, since signatures are recorded offline on lecture attendance sheets. The system involves three major phases: preprocessing, feature extraction and verification. In the feature extraction phase, a novel set of features based on the distribution of black pixels along the columns of signature images is proposed. A mean square error of 0.96 was achieved when the system was used to predict the number of times students attended lectures for a given course.
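A minimal sketch of the feature family as the abstract describes it, counting black pixels per column of a binarized signature image; the threshold and the normalization to a distribution are assumptions, not the paper's exact procedure:

```python
# gray: 2-D uint8 array, 0 = black ink, 255 = white paper.
import numpy as np

def column_black_pixel_features(gray, threshold=128):
    black = gray < threshold                   # binarize: True where ink
    counts = black.sum(axis=0)                 # black pixels per column
    total = counts.sum()
    return counts / total if total else counts.astype(float)

# toy 5x8 "signature": a single diagonal stroke
img = np.full((5, 8), 255, dtype=np.uint8)
for i in range(5):
    img[i, i] = 0
print(column_black_pixel_features(img))        # mass in the first 5 columns
```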


A Comparative Analysis of Algorithms for Heart Disease Prediction Using Data Mining

Snigdho Dip Howlader, Tushar Biswas, Aishwarjyo Roy, Golam Mortuja, Dip Nandi

Research article

Heart disease is very common in today's day and age, with death rates climbing every year. Prediction of heart disease cases is a topic that has been around in the world of data and medical science for many years. The study conducted in this paper compares the different algorithms that have been used in pattern analysis and prediction of heart diseases. The algorithms used in the past include a combination of machine learning and data mining concepts that are essentially derived from statistical analysis and related approaches. There are a lot of factors that can be considered when attempting to analytically predict instances of heart disease, such as age, gender, resting blood pressure etc. Eight such factors have been taken into consideration for carrying out this qualitative comparison. As this study extracts its results from a particular dataset, the output may vary when implemented over different datasets. The research includes comparisons of Naive Bayes, Decision Tree, Random Forest and Logistic Regression. After multiple implementations, the training and testing accuracies are obtained and listed. The observations from implementing these algorithms over the same dataset indicate that Random Forest and Decision Tree have the highest accuracy in predicting heart disease based on the dataset that we have provided, while Naive Bayes has the least accurate results for this scenario under the given contexts.
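The comparison the paper carries out has this general shape: one dataset, four classifiers, training and testing accuracy. A sketch using scikit-learn, with synthetic 8-feature data standing in for the heart-disease dataset:

```python
# Same dataset, four classifiers, train/test accuracy; data are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

models = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(f"{name:20s} train={m.score(X_tr, y_tr):.3f} test={m.score(X_te, y_te):.3f}")
```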


A Comparative Analysis of Tools for Testing the Security Protocols

Reham Abdellatif Abouhogail

Research article

In this paper, an analysis and comparison of two popular security verification tools, namely Automated Validation of Internet Security Protocols and Applications (AVISPA) and Burrows-Abadi-Needham (BAN) logic, are presented in terms of the usability, complexity, and other properties of the selected tools. The comparison shows the benefits and drawbacks of the two tools. As a case study, two previously proposed security protocols, which had been tested by BAN logic only, are evaluated and verified using the automated verification tool AVISPA to ensure that these protocols satisfy the other main security measures.


A Comparative Model for Blurred Text Detection in Wild Scene Using Independent Component Analysis (ICA) and Enhanced Genetic Algorithm (Using a Bird Approach) with Classifiers

Nwufoh C.V., Sakpere W.

Research article

The advent of the study of scene text detection and recognition has exposed some significant challenges text recognition faces, such as blurred text detection. This study proposes a comparative model for detecting blurred text in wild scenes using independent component analysis (ICA) and an enhanced genetic algorithm (E-GA) with support vector machine (SVM) and k-nearest neighbors (KNN) as classifiers. The proposed model aims to improve the accuracy of blurred text detection in challenging environments with complex backgrounds, noise, and illumination variations. The proposed model consists of three main stages: preprocessing, feature extraction, and classification. In the preprocessing stage, the input image is first preprocessed to remove noise and enhance edges using a median filter and a Sobel filter, respectively. Then, the blurred text regions are extracted using the Laplacian of Gaussian (LoG) filter. In the feature extraction stage, ICA is used to extract independent components from the blurred text regions. The extracted components are then fed into an E-GA-based feature selection algorithm to select the most discriminative features. The E-GA simply fine-tunes the selection functionality of the traditional GA using a bird approach. The selected features are then normalized and fed into the SVM and KNN classifiers. Experimental results on a benchmark dataset (ICDAR 2019 LSVT) show that the model outperforms state-of-the-art methods in terms of detection accuracy, precision, recall, and F1-score. The proposed model achieves an overall accuracy of 95.13% for SVM and 88.69% for KNN, significantly higher than existing methods, which reach 93% with SVM. In conclusion, the proposed model provides a promising approach for detecting blurred text in wild scenes. The combination of ICA, E-GA, and SVM/KNN classifiers enhances the robustness and accuracy of the detection system, which can be beneficial for a wide range of applications, such as text recognition, document analysis, and security systems.
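A sketch of the pipeline's overall shape, with ICA-derived features feeding SVM and KNN classifiers; the E-GA selection stage is omitted and the data are synthetic stand-ins, not ICDAR 2019 LSVT:

```python
# ICA features feeding SVM and KNN; E-GA selection omitted, data synthetic.
from sklearn.datasets import make_classification
from sklearn.decomposition import FastICA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=30, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for name, clf in [("SVM", SVC()), ("KNN", KNeighborsClassifier())]:
    pipe = make_pipeline(
        FastICA(n_components=10, whiten="unit-variance", random_state=1), clf)
    pipe.fit(X_tr, y_tr)
    print(name, "accuracy:", round(pipe.score(X_te, y_te), 3))
```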


A Comparative Performance Analysis of Low Power Bypassing Array Multipliers

Nirlakalla Ravi, S. Venkateswarlu, T. Jayachandra Prasad, Thota Subba Rao

Research article

Low power design of VLSI circuits has been identified as a vital technology for battery powered portable electronic devices and signal processing applications such as Digital Signal Processors (DSPs). The multiplier has an important role in DSPs. Low power parallel multipliers need to be designed without degrading the performance of the processor. Bypassing is a widely used technique in DSPs when an input operand of the multiplier is zero. A row-based bypassing multiplier with a compressor at the final addition of the ripple carry adder (RCA) is designed, focusing on low power and high speed. The proposed bypassing multiplier with compressor shows higher performance and energy efficiency than the Kuo multiplier with Carry Save Adder (CSA) at the final RCA.
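A behavioral illustration of the bypassing idea (not the hardware design): in a shift-add view of the multiplier, rows whose multiplier bit is zero contribute nothing, so their adder row can be bypassed to save switching activity:

```python
# Shift-add view: rows with a zero multiplier bit are bypassed.
def bypassing_multiply(a, b, width=8):
    product, rows_bypassed = 0, 0
    for i in range(width):
        if (b >> i) & 1:            # active row: add shifted multiplicand
            product += a << i
        else:                       # zero bit: row bypassed, no switching
            rows_bypassed += 1
    return product, rows_bypassed

p, skipped = bypassing_multiply(0b1011, 0b01000001)   # 11 * 65
print(p, "== 715, rows bypassed:", skipped)           # 6 of 8 rows skipped
```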


A Comparative Study on the Performance of Fuzzy Rule Base and Artificial Neural Network towards Classification of Yeast Data

Shrayasi Datta, J. Paulchoudhury

Research article

Classification of yeast data plays an important role in the formulation of medicines and various chemical components. If the type of yeast can be recognized at an early stage based on its initial characteristics, a lot of technical procedures can be avoided in the preparation of chemical and medical products. In this paper, the performance of two classification methodologies, namely artificial neural network and fuzzy rule base, has been compared for the classification of proteins. The objective of this work is to classify proteins, using the selected classification methodology, into their respective cellular localization sites based on their amino acid sequences. The yeast dataset used for this purpose has been taken from the UCI machine learning repository. The results have shown that classification using an artificial neural network gives better prediction than the fuzzy rule base on the basis of average error.


A Comparison Study on Selective Traffic Models with Handoff Management Scheme for Wireless Mobile Network Infrastructure

Biswajit Bhowmik

Research article

Wireless and mobile network infrastructure services generally examine how different infrastructure providers position themselves to compete in wireless services. These services analyze the opportunities and challenges faced by infrastructure providers selling to wireless and mobile network operators worldwide. The opportunities and challenges are reviewed in terms of strategies, market positioning, and future direction in the market for wireless and mobile network infrastructure. The service covers all generations of wireless and mobile network infrastructure (2G, 2.5G, 3G, 4G and so on) and is global in scope. The majority of users are mobile while availing these services. Mobility is the most important feature of any wireless and mobile network. Usually, continuous service is achieved by supporting handoff from one cell to another. Poorly designed handoff schemes tend to generate very heavy signaling traffic and thereby lead to a dramatic decrease in quality of service. In this paper we make a comparative study of some selective traffic models with a priority handoff scheme. The study shows how a handoff procedure has a significant impact on wireless mobile network performance on the basis of two primary criteria: the blocking probability of originating calls and the forced termination probability of ongoing calls. In addition, the study shows the arrival rate of handoff calls and the approximate number of mobile users serviced by the underlying base station.
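A worked example of a classic priority handoff (guard channel) model that yields the two criteria named above; the channel counts, rates, and M/M/C birth-death assumptions are mine for illustration, not necessarily the paper's traffic models:

```python
# M/M/C loss system with C channels, g guard channels reserved for
# handoffs: new calls are admitted only while occupancy < C - g.
C, g = 10, 2
lam_new, lam_ho, mu = 5.0, 2.0, 1.0     # hypothetical arrival/service rates

def arrival(k):                          # state-dependent arrival rate
    return lam_new + lam_ho if k < C - g else lam_ho

# steady-state probabilities from the birth-death balance equations
unnorm = [1.0]
for k in range(C):
    unnorm.append(unnorm[-1] * arrival(k) / ((k + 1) * mu))
Z = sum(unnorm)
p = [u / Z for u in unnorm]

print(f"P(block new call) = {sum(p[C - g:]):.4f}")   # originating-call blocking
print(f"P(drop handoff)   = {p[C]:.4f}")             # forced termination
```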


A Comparison of Missing Value Imputation Techniques on Coupon Acceptance Prediction

Rahin Atiq, Farzana Fariha, Mutasim Mahmud, Sadman S. Yeamin, Kawser I. Rushee, Shamsur Rahim

Research article

The In-Vehicle Coupon Recommendation System is a type of coupon used to present different driving scenarios to users. With the help of these scenarios, people's opinions are collected on whether they will accept the coupon or not. The coupons offered in the survey were for bars, coffee shops, restaurants, and takeaways. The dataset consists of various attributes that capture precise information about the clients to give a coupon recommendation. The dataset is significant for shops to determine whether the coupons they offer are beneficial or not, depending on the different characteristics and scenarios of the users. A major problem with this dataset was that it was imbalanced and contained missing values. Handling the missing values and the imbalanced class problem can affect the prediction results. In this paper, we analysed the impact of four different imputation techniques (frequent value, mean, KNN, MICE) to replace the missing values and used them to create prediction models. As for models, we applied six classifier algorithms (Naive Bayes, Deep Learning, Logistic Regression, Decision Tree, Random Forest, and Gradient Boosted Tree). This paper aims to analyse the impact of the imputation techniques on the dataset alongside the outcomes of the classifiers to find the most accurate model among them, so that shops or stores that offer coupons or vouchers would get a realistic idea of their target customers. From our research, we found that KNN imputation with a Deep Learning classifier gave the most accurate outcome for prediction and false-negative rate.
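The four imputation techniques map directly onto scikit-learn imputers (IterativeImputer is its MICE-style implementation). A minimal sketch on a toy matrix with missing entries, not the coupon dataset:

```python
# The four strategies as scikit-learn imputers; matrix is a toy example.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, KNNImputer, SimpleImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [5.0, 4.0, 9.0],
              [np.nan, 8.0, 12.0]])

imputers = {
    "frequent": SimpleImputer(strategy="most_frequent"),
    "mean": SimpleImputer(strategy="mean"),
    "KNN": KNNImputer(n_neighbors=2),
    "MICE": IterativeImputer(random_state=0),
}
for name, imp in imputers.items():
    print(name, "->")
    print(np.round(imp.fit_transform(X), 2))
```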


A Comprehensive Survey on Cloud Computing

Md. Imran Alam, Manjusha Pandey, Siddharth S Rautaray

Research article

Cloud computing is an innovative idea that helps in reducing the computing cost. Cloud computing offers better computing through improved utilization and reduced administration and infrastructure costs, and it is the long-held dream of computing as a utility. Cloud computing is the combination of Software as a Service (SaaS) and utility computing. It shares characteristics with autonomic computing, peer-to-peer computing, grid computing, the client-server model, mainframe computers and utility computing. It has various open source resources which provide different platforms for better computing utilization. Clouds are managed by cloud management tools, and loaded and tested by various software testing tools. Cloud computing modelling and simulation is done by CloudSim, SPECI, GroundSim or DCSim on the basis of a testing benchmark. The applications of cloud computing are also discussed.


A Compression & Encryption Algorithm on DNA Sequences Using Dynamic Look up Table and Modified Huffman Techniques

Syed Mahamud Hossein, S.Roy

Research article

Storing, transmitting and securing DNA sequences are well known research challenges. The problem has been magnified by the increasing discovery and availability of DNA sequences. We present a DNA sequence compression algorithm based on a Dynamic Look Up Table (DLUT) and a modified Huffman technique. The DLUT consists of 4^3 (64) sub-strings, each sub-string three bases long. Each sub-string is individually coded by a single ASCII code from 33 (!) to 96 (`) and vice versa. Encoding depends on an encryption key chosen by the user from the four bases {a, t, g and c}, and decoding requires the decryption key provided by the encoding user; decoding thus requires authenticated input to recover the data. The sub-strings are combined into a DLUT-based pre-coding routine. The algorithm is tested on the reverse, complement and reverse complement of DNA sequences, and also on artificial DNA sequences of equivalent length. Speed of encryption and security level are two important measurements for evaluating any encryption system. With the proliferation of ubiquitous computing systems, where digital contents are accessible through resource-constrained biological databases, security is a very important concern. A lot of research has been devoted to finding an encryption system which can run effectively on such biological databases. Information security is the most challenging question in protecting data from unauthorized users, and the proposed method may protect the data from hackers. It provides three-tier security: tier one is the ASCII code, tier two is the nucleotide (a, t, g and c) chosen by the user, and tier three is the change of label or of node position in the Huffman tree. Compression of genome sequences will help to increase the efficiency of their use. The greatest advantages of this algorithm are fast execution, small memory occupation and easy implementation. Since the programs implementing the technique were originally written in the C language (Windows XP platform, TC compiler), they can run on other microcomputers with small changes (depending on the platform and compiler used). Execution is quite fast; all operations are carried out in fractions of a second, depending on the required task and the sequence length. The technique can approach an effective compression ratio of 1.98 bits/base and even lower. When a user searches for any sequence of an organism, an encrypted compressed sequence file can be sent from the data source to the user; the file can then be decrypted and decompressed at the client end, resulting in reduced transmission time over the Internet. The algorithm thus provides moderately high compression and encryption rates with minimal decryption and decompression time.
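A minimal sketch of the look-up-table stage as described (my reading of the abstract, not the authors' exact DLUT, and without the modified Huffman stage): the 64 three-base sub-strings map to single ASCII characters 33 (!) through 96 (`), with the user's base ordering standing in for the key:

```python
# 64 three-base sub-strings -> single ASCII characters 33..96; the user's
# base ordering acts as the key. The padding policy is an assumption.
from itertools import product

def build_dlut(key_order="atgc"):
    triplets = ["".join(t) for t in product(key_order, repeat=3)]  # 4^3 = 64
    return {t: chr(33 + i) for i, t in enumerate(triplets)}        # '!' .. '`'

def encode(seq, lut):
    seq += "a" * (-len(seq) % 3)                  # pad to a multiple of 3
    return "".join(lut[seq[i:i + 3]] for i in range(0, len(seq), 3))

def decode(code, lut):
    inv = {v: k for k, v in lut.items()}
    return "".join(inv[c] for c in code)

lut = build_dlut("atgc")              # key: base ordering chosen by the user
enc = encode("atgcgtacatgc", lut)
print(enc, "->", decode(enc, lut))    # 12 bases -> 4 characters and back
```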


A Concave Hull Based Algorithm for Object Shape Reconstruction

Zahrah Yahya, Rahmita W Rahmat, Fatimah Khalid, Amir Rizaan, Ahmad Rizal

Research article

Hull algorithms are the most efficient and closest methods to be redesigned for connecting vertices for geometric shape reconstruction. The vertices are the input points representing the original object shape. Our objective is to reconstruct the shape and edges, but with no information on any pattern it is challenging to reconstruct the lines to resemble the original shape. Comparing our results to recent concave hull based algorithms, two performance measures were used to evaluate the accuracy and time complexity of the proposed method. Besides achieving the most acceptable accuracy, which is 100%, the time complexity of the proposed algorithm is evaluated to be O(wn). All results have shown a competitive and more effective algorithm compared to the most efficient similar ones. The algorithm is shown to be able to solve the problem of vertex connection in an efficient way by devising a new approach.


A Cost-Aware Resource Selection for Data-intensive Applications in Cloud-oriented Data Centers

Wei Liu, Feiyan Shi, Wei Du, Hongfeng Li

Research article

As a kind of large-scale user-oriented data-intensive computing, cloud computing allows users to utilize on-demand computation, storage, data and services from around the world in a pay-as-you-go model. In a cloud environment, applications need access to mass datasets that may each be replicated on different resources (or data centers). Mass data movement influences the execution efficiency of applications to a large extent, while the economic cost of each replica itself can never be overlooked in such a model of business computing. Based on these two considerations, how to select appropriate data centers for accessing replicas and creating a virtual machine (VM for short) to execute applications, so that execution efficiency is high and access cost is low as far as possible simultaneously, is a challenging and urgent problem. In this paper, a cost-aware resource selection model based on the Weighted Set Covering Problem (WSCP) is proposed, according to the principle of spatial locality of data access. For the model, we apply a weighted greedy heuristic to produce an approximately optimal resource set for each task. Finally, we verify the validity of the model in a simulation environment and evaluate the performance of the algorithm presented. The results show that the WSCP-based heuristic can produce an approximately optimal solution in most cases, meeting both execution efficiency and economic demands simultaneously, compared to the other two strategies.
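The underlying primitive is the standard greedy heuristic for Weighted Set Cover: repeatedly pick the set with the lowest cost per newly covered element. A sketch with hypothetical data centers and replica sets:

```python
# Greedy Weighted Set Cover: pick the data center with the lowest
# cost per newly covered replica until all replicas are covered.
def greedy_weighted_set_cover(universe, sets, cost):
    uncovered, chosen = set(universe), []
    while uncovered:
        best = min((s for s in sets if sets[s] & uncovered),
                   key=lambda s: cost[s] / len(sets[s] & uncovered))
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

replicas = {"r1", "r2", "r3", "r4"}                     # hypothetical universe
centers = {"dcA": {"r1", "r2"}, "dcB": {"r2", "r3", "r4"}, "dcC": {"r4"}}
costs = {"dcA": 3.0, "dcB": 4.0, "dcC": 1.0}
print(greedy_weighted_set_cover(replicas, centers, costs))  # ['dcC', 'dcA', 'dcB']
```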


A Data Mining-Based Response Model for Target Selection in Direct Marketing

Eniafe Festus Ayetiran, Adesesan Barnabas Adeyemo

Research article

Identifying customers who are more likely to respond to new product offers is an important issue in direct marketing. In direct marketing, data mining has been used extensively to identify potential customers for a new product (target selection). Using customers' historical purchase and demographic data, a predictive response model was developed with data mining techniques to predict the probability that a customer of Ebedi Microfinance Bank will respond to a promotion or an offer. The data were stored in a data warehouse to serve as a management decision support system. A Bayesian algorithm, namely the Naïve Bayes algorithm, was employed in constructing the classifier system, and both filter and wrapper feature selection techniques were employed in determining the inputs to the model. The results obtained show that Ebedi Microfinance Bank can plan effective marketing of their products and services by obtaining a guiding report on the status of their customers, which will go a long way in assisting management in saving significant amounts of money that could otherwise have been spent on wasteful promotional campaigns.
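A sketch of the modelling choices the abstract names: a Naive Bayes classifier whose inputs are chosen once by a filter method and once by a wrapper method. The scikit-learn selectors and synthetic data here are illustrative, not the paper's setup:

```python
# Naive Bayes with a filter selector (mutual information) and a wrapper
# selector (sequential forward selection); data are synthetic.
from sklearn.datasets import make_classification
from sklearn.feature_selection import (SelectKBest, SequentialFeatureSelector,
                                       mutual_info_classif)
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=12, n_informative=4,
                           random_state=7)
nb = GaussianNB()

selectors = {
    "filter": SelectKBest(mutual_info_classif, k=4).fit(X, y),
    "wrapper": SequentialFeatureSelector(nb, n_features_to_select=4).fit(X, y),
}
for name, sel in selectors.items():
    acc = cross_val_score(nb, sel.transform(X), y, cv=5).mean()
    print(f"{name}: features {sel.get_support(indices=True)}, CV accuracy {acc:.3f}")
```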


A Design of MIMO System Based on Y-Shaped with QSCS for UWB Applications

Nada M. Khalil Al-Ani, Oras A. Shareef Al-Ani, Mahmood F. Mosleh, Read A. Abd-Alhameed

Research article

The multi-path fading environment is a major challenge for UWB devices, so a MIMO system is one of the important techniques exploited to mitigate such problems. In this research, a MIMO system with eight ports, consisting of four antenna elements, is proposed. A Y-shaped microstrip patch has been chosen for the design of each element to enhance the bandwidth of the proposed system. In order to achieve good isolation, the geometry of the ground layer of the proposed antenna element is based on a quasi-self-complementary structure. The proposed model has a compact size because it is provided with dual-polarized ports, which increase the capacity while maintaining an acceptable size. The results show that a bandwidth of 2.06 GHz is obtained with an operating frequency of 8.73 GHz, for single elements and for the integrated MIMO system when excited simultaneously.


A Differentiated QoS Supporting PPS Solution Based on CICQ

Li Xiuqin, Yang Xiliang, Lan Julong

Research article

Based on a study of the existing PPS scheduling mechanisms, we propose a novel PPS scheme for differentiated QoS based on CICQ. The scheduling mechanism adopts in-order queuing to solve the sequence-preserving problem and sets a small amount of buffer in the multiplexer to achieve a differentiated QoS guarantee for different traffic classes, which ensures the switch can provide effective support for different traffic classes at a high level. The simulation results indicate that the throughput of the mechanism can reach up to 99.97% at full load, that it can allocate the bandwidth of output links according to the reserved bandwidth, that packets have an upper delay bound, and that it can balance the load across the switching planes. Compared to current mainstream PPS designs, the mechanism is simple and easily implemented in hardware.
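The bandwidth allocation property (output-link bandwidth shared "according to the reserved bandwidth") can be illustrated with a generic deficit-round-robin scheduler; this is a stand-in for exposition, not the paper's CICQ mechanism:

```python
# Each class's quantum is proportional to its reserved bandwidth; a class
# sends head-of-line packets while its accumulated credit covers them.
from collections import deque

queues = {"gold": deque([300, 400, 200]),      # packet sizes in bytes
          "silver": deque([300, 300]),
          "bronze": deque([500])}
quantum = {"gold": 400, "silver": 200, "bronze": 100}   # per-round credit
deficit = {c: 0 for c in queues}
sent = {c: 0 for c in queues}

while any(queues.values()):
    for cls, q in queues.items():
        if not q:
            continue
        deficit[cls] += quantum[cls]
        while q and q[0] <= deficit[cls]:
            pkt = q.popleft()
            deficit[cls] -= pkt
            sent[cls] += pkt

print(sent)   # under sustained backlog, per-round service follows the quanta
```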


A Distributed Fault Tolerance Global Coordinator Election Algorithm in Unreliable High Traffic Distributed Systems

Danial Rahdari, Amir Masoud Rahmani, Niusha Aboutaleby, Ali Sheidaei Karambasti

Research article

Distributed systems consist of several management sites which have different resource sharing levels. Resources can be shared among inner-site and outer-site processes at the first and second level respectively. A global coordinator should exist in order to coordinate access to multi-site shared resources; moreover, some other coordinators should manage access to inner-site shared resources, so applying appropriate coordinator election algorithms at each level is crucial to achieving the most efficient system. In this paper a hierarchical distributed election algorithm is proposed which eliminates the single point of failure of the election launcher. Meanwhile, traffic is applied to the network at different times and the number of election messages is greatly decreased as well, which yields more efficiency, especially in high-traffic networks. A standby system between coordinators and their first alternative is considered, to impose less wait time on processes which want to communicate with the coordinator.
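A toy model of the coordinator/standby idea: a highest-ID election that also designates a first alternative, so a coordinator crash is absorbed without launching a full re-election. This is a generic illustration, not the paper's hierarchical algorithm or message flow:

```python
# Highest-ID election returning a coordinator plus a first alternative;
# a coordinator crash is absorbed by promoting the standby.
class Node:
    def __init__(self, node_id):
        self.node_id, self.alive = node_id, True

def elect(nodes):
    alive = sorted((n for n in nodes if n.alive),
                   key=lambda n: n.node_id, reverse=True)
    return alive[0], (alive[1] if len(alive) > 1 else None)

nodes = [Node(i) for i in range(1, 6)]
coord, standby = elect(nodes)
print("coordinator:", coord.node_id, "standby:", standby.node_id)   # 5, 4

coord.alive = False                      # coordinator crashes
coord, standby = elect(nodes)            # standby takes over
print("coordinator:", coord.node_id, "standby:", standby.node_id)   # 4, 3
```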

