Journal articles - International Journal of Intelligent Systems and Applications

Total articles: 1126

A Risk-Aware Application Scheduling Model in Cloud Computing Scenarios

Ala Arman

Research article

Cloud users usually have different preferences over the applications that they outsource to the cloud, based on the financial profit of each application's execution. Moreover, a cloud service provider offers various types of virtual machines with distinct characteristics, such as rental prices and availability levels, each with a different probability of occurrence and a penalty that is paid to the user in case the virtual machine is not available. Therefore, the problem of application scheduling in cloud computing environments, considering the risk of financial loss of application-to-VM assignment, becomes a challenging issue. In this paper, we propose a risk-aware scheduling model that uses risk analysis to allocate the applications to the virtual machines so that the expected total pay-off of an application is maximized, taking into account the priority of applications. A running example is used throughout the paper to better illustrate the model and its application to improving the efficiency of resource assignment in cloud computing scenarios.
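The core idea of weighing rental price, availability and penalty into one expected pay-off can be sketched as follows. The two-outcome availability model, function names and all numbers are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: expected pay-off of assigning an application to a VM.
# Two outcomes per VM: available (probability p) or unavailable (1 - p),
# in which case the provider pays a penalty back to the user.

def expected_payoff(profit, rent, availability, penalty):
    """E[pay-off] = P(available)*(profit - rent) + P(down)*(penalty - rent)."""
    return availability * (profit - rent) + (1 - availability) * (penalty - rent)

def best_vm(profit, vms):
    """Pick the VM that maximizes the application's expected pay-off."""
    return max(vms, key=lambda v: expected_payoff(profit, v["rent"], v["avail"], v["penalty"]))

vms = [
    {"name": "small", "rent": 10, "avail": 0.90, "penalty": 2},
    {"name": "large", "rent": 25, "avail": 0.99, "penalty": 5},
]
choice = best_vm(profit=100, vms=vms)
```

Here the cheaper, less reliable VM wins for this application; an application with a higher priority (profit) could tip the balance the other way.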

Free

A Rough Sets-based Agent Trust Management Framework

Sadra Abedinzadeh, Samira Sadaoui

Research article

In a virtual society, which consists of several autonomous agents, trust helps agents deal with the openness of the system by identifying the best agents capable of performing a specific task or achieving a special goal. In this paper, we introduce ROSTAM, a new approach for agent trust management based on the theory of Rough Sets. ROSTAM is a generic trust management framework that can be applied to any type of multi-agent system; however, the features of the application domain must be provided to ROSTAM. These features form the trust attributes. By collecting the values for these attributes, ROSTAM is able to generate a set of trust rules by employing the theory of Rough Sets. ROSTAM then uses the trust rules to extract the set of the most trusted agents and forwards the user’s request to those agents only. After getting the results, the user must rate the interaction with each trusted agent. The rating values are subsequently utilized for updating the trust rules. We applied ROSTAM to the domain of cross-language Web search. The resulting Web search system recommends to the user the set of the most trusted pairs of translator and search engine, i.e., the pairs that return the results with the highest precision of retrieval.

Free

A Simplified Efficient Technique for the Design of Combinational Logic Circuits

Vijayakumari C. K, Mythili. P, Rekha K James

Research article

A new Genetic Algorithm based approach to the design of combinational logic circuits which uses only 2-1 multiplexers as the basic design unit has been proposed. To realize a function of n variables, a conventional design needs 2^n − 1 units and n levels. A property of a multiplexer tree is that all the units in a level share the same control signal. In this paper, flexibility has been introduced in selecting the control signals, so that units in the same level need not use the same select signal. Control signals can be any of the variables, or functions derived from the immediately preceding level. Once a 100% fit circuit is evolved, a check for redundancy of units is made and redundant units are eliminated, so that the circuit generated is optimal. It has been observed that the circuits evolved by this approach are superior to circuits by conventional design in terms of area, power and delay. As power dissipation is an important metric in VLSI design, power loss is usually minimized by eliminating unnecessary transitions/switching of idle multiplexers using a specific controller to select appropriate control signals; in the proposed design, however, power loss can be reduced without any additional device, and hence these circuits can be recommended for low-power devices.
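The conventional baseline the paper improves on can be sketched as a Shannon-expansion multiplexer tree: 2^n − 1 units in n levels, one shared select signal per level. The code below is an illustrative evaluation model of that baseline, not the GA-evolved circuits themselves.

```python
# Sketch: realizing a Boolean function with 2-1 multiplexers.
# A 2-1 mux outputs a when the select line s is 0, else b.
# A conventional tree for n variables uses 2**n - 1 units in n levels,
# with every unit in a level sharing the same select signal.

def mux(a, b, s):
    return b if s else a

def mux_tree(truth_table, inputs):
    """Evaluate an n-variable function given as a truth table (length 2**n)
    by folding it level by level through 2-1 multiplexers."""
    level = list(truth_table)
    for s in reversed(inputs):          # one shared select signal per level
        level = [mux(level[i], level[i + 1], s) for i in range(0, len(level), 2)]
    return level[0]

def units_needed(n):
    return 2 ** n - 1                   # units in a conventional tree
```

For example, `mux_tree([0, 1, 1, 0], (x1, x2))` realizes XOR of two variables with 3 units; the proposed approach relaxes the shared-select constraint and then prunes redundant units.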

Free

A Soft Computing Technique for Improving the Fidelity of Thumbprints Based Identification Systems

Kamta Nath Mishra, Anupam Agrawal

Research article

With the advent of new thumbprint identification techniques, accurate personal identification is now easy and cheap, with approximately zero false acceptance rates. This paper focuses on developing an advanced feature for thumbprint based identification systems with the help of soft computing and 2D transformation, which makes the technique more flexible and faithful. The thumbprint images of individuals were scanned with the help of an H3 T&A terminal to collect self-generated datasets. The thumbprints of the self-generated and standard datasets were trained to form a refined set which includes linear and angular displacements of thumbprint images. The newly obtained features of the refined datasets were stored in the database for further identification. In the proposed technique, the minutiae coordinates and orientation angles of the thumbprint of a person to be identified are computed and merged together for comparison. The minutiae coordinates and orientation angles of a person are compared with the minutiae trained set values stored in the database at different linear and angular rotations for identity verification. The proposed technique was tested on a self-generated dataset of fifty persons and the standard FVC2002, FVC2004 and CASIA databases. In the experimentation and result analysis we observed that the proposed technique accurately identifies a person on the basis of minutiae features of a thumbprint with low FNMR (False Non-Match Rate) values.

Free

A Solution for Android Device Misplacement through Bluetooth-enabled Technology

Kaven Raj S/O Manoharan, Siew-Chin Chong, Kuok-Kwee Wee

Research article

The number of smartphone users and mobile application offerings is growing rapidly nowadays. A mobile device is currently considered the most powerful and most needed device of this modern century. Every day new mobile applications are developed, each with its own compatibility, making sure to serve a particular smartphone model and its specifications correctly. The goal of this project is to develop a self-help Android application named “Dont Forget Me”, which is user friendly and well defined to solve the problem of misplaced or lost smartphone devices. This missing-phone prevention alert application pairs with another device using a Bluetooth connection. The Bluetooth connection is established in order to warn the user when the device is left behind or stolen. If the Bluetooth connection between the paired devices is broken, an alarm and a message notification are triggered to notify the user that the device was not brought along. A web application is also developed to serve the user in tracking, locating and locking the missing device.

Free

A Stochastic Prediction Interface for Urdu

Qaiser Abbas

Research article

This work lays down a foundation for text prediction for Urdu, an inflected and under-resourced language. The interface developed is not limited to a T9 (Text on 9 keys) application used in embedded devices, which can only predict a word after its initial characters are typed. It is capable of predicting a word like T9 and also a sequence of words, one word after another, in a continuous manner for fast document typing. It is based on an N-gram language model. This stochastic interface deals with three N-gram levels, from unigram to trigram, independently. The uni-gram mode is used for applications like T9, while the bi-gram and tri-gram modes are used for sentence prediction. The measures include the percentage of keystrokes saved, keystrokes until completion and the percentage of time saved during typing. Two different corpora are merged to build a sufficient amount of data. The test data is divided equally into a test set and a held-out set for experimental purposes. This whole exercise enables the QASKU system to outperform FastType with almost 15% more saved keystrokes.
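The bi-gram mode described above can be sketched as follows: count word pairs in a corpus, then propose the most frequent continuations after the last typed word. The toy corpus and ranking details are illustrative assumptions, not the QASKU internals.

```python
# Sketch: bi-gram next-word prediction from corpus counts.
from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count how often each word follows each other word."""
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word, k=3):
    """Top-k candidate continuations after `word`, most frequent first."""
    return [w for w, _ in model[word].most_common(k)]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigrams(corpus)
```

A tri-gram mode would condition on the two preceding words instead of one, trading data sparsity for sharper predictions.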

Free

A Stroke Shape and Structure Based Approach for Off-line Chinese Handwriting Identification

Jun Tan, Jian-Huang Lai, Chang-Dong Wang, Ming-Shuai Feng

Research article

Handwriting identification is a technique of automatic person identification based on personal handwriting. It is a hot research topic in the field of pattern recognition due to its indispensable role in biometric individual identification. Although many approaches have emerged, recent research has shown that off-line Chinese handwriting identification remains a challenging problem. In this paper, we propose a novel method for off-line Chinese handwriting identification based on stroke shapes and structures. To extract the features embedded in Chinese handwriting characters, two special structures have been explored according to the traits of Chinese handwriting characters: the bounding rectangle and the TBLR quadrilateral. Sixteen features are extracted from the two structures and used to compute the unadjusted similarity, and four other commonly used features are also computed to adjust the similarity adaptively. The final identification is performed on the similarity. Experimental results on the SYSU and HanjaDB1 databases have validated the effectiveness of the proposed method.

Free

A Study on the Formation of the Gravitational Model Based on the Point-mass Method

Jianqiang Wang, Zhiqi Yu

Research article

The virtual point-mass method has been widely used for the approximation of the local gravity field, which currently remains a difficult problem. In this paper, the approximation theory of the point-mass model is briefly introduced, and the characteristics of the elements in the coefficient matrix for the model construction are analyzed by numerical calculation. The observations of gravity anomaly are simulated from EGM2008 with degree and order 720, and the approximated region is 32°–34° N and 103°–105° E. A four-tier point-mass model, which is based on the geopotential model with degree and order 36, is applied from low frequency to high frequency to approximate the local Earth’s gravity field. The results of the experiments show that the truncation error of the gravity disturbance created by using the point-mass model is less than 2 mGal in the radial direction, and that there is an optimal truncation error for a certain spectrum of the gravity field in space.

Free

A Survey Work on Optimization Techniques Utilizing Map Reduce Framework in Hadoop Cluster

Bibhudutta Jena, Mahendra Kumar Gourisaria, Siddharth Swarup Rautaray, Manjusha Pandey

Research article

Data is one of the most important and vital aspects of different activities in today's world, and therefore a vast amount of data is generated every second. The rapid growth of data in recent times in different domains requires intelligent data analysis tools that can satisfy the need to analyze huge amounts of data. The Map Reduce framework is basically designed to process large amounts of data and to support effective decision making. It consists of two important tasks, named map and reduce. Optimization is the act of achieving the best possible result under given circumstances; the goal of Map Reduce optimization is to minimize the execution time and to maximize the performance of the system. This survey paper discusses a comparison between different optimization techniques used in the Map Reduce framework and in big data analytics. Various sources of big data generation have been summarized based on various applications of big data. The wide range of application domains for big data analytics is due to its adaptable characteristics, like volume, velocity, variety, veracity and value. These characteristics of big data arise from the inclusion of structured, semi-structured and unstructured data, for which a new set of tools like NoSQL, MapReduce, Hadoop etc. is required. The presented survey provides an insight into the fundamentals of big data analytics, but aims at an analysis of the various optimization techniques used in the Map Reduce framework and big data analytics.
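The two tasks named above can be sketched on the classic word-count job. The grouping ("shuffle") step between map and reduce is emulated in-process here; in a Hadoop cluster the framework performs it across nodes. The toy input is an illustrative assumption.

```python
# Sketch: the map and reduce tasks of a word-count job.
from collections import defaultdict
from itertools import chain

def map_phase(line):
    return [(word, 1) for word in line.split()]       # emit (key, 1) pairs

def shuffle(pairs):
    """Group intermediate pairs by key (done by the framework in Hadoop)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    return key, sum(values)                           # aggregate per key

lines = ["big data tools", "big data analytics"]
pairs = chain.from_iterable(map_phase(l) for l in lines)
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
```

Many of the surveyed optimizations target exactly these stages, e.g. combiners that pre-aggregate map output before the shuffle.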

Free

A Survey on Cooperative Communication in Wireless Networks

A. F. M. Shahen Shah, Md. Shariful Islam

Research article

Cooperative communication in wireless networks has become more and more attractive recently, since it can mitigate the particularly severe channel impairments arising from multipath propagation; greater benefits are gained by exploiting spatial diversity in the channel. In this paper, an overview of cooperative communication in wireless networks is presented. We describe the benefits of cooperative transmission over traditional non-cooperative communication. Practical issues and challenges in cooperative communication are identified. In particular, we present a study on the advantages, applications and different routing strategies for cooperative mesh networks, ad hoc networks and wireless sensor networks.

Free

A Survey on Speech Enhancement Methodologies

Ravi Kumar. K, P.V. Subbaiah

Research article

Speech enhancement is a technique which processes noisy speech signals. The aim of speech enhancement is to improve the perceived quality of speech and/or to improve its intelligibility. Due to its vast applications in mobile telephony, VoIP, hearing aids, Skype and speaker recognition, the challenges in speech enhancement have grown over the years. It is especially challenging to suppress background noise that affects human communication in noisy environments like airports, road works, traffic and cars. The objective of this survey paper is to outline the single-channel speech enhancement methodologies used for enhancing a speech signal corrupted with additive background noise, and also to discuss the challenges and opportunities of single-channel speech enhancement. This paper mainly focuses on transform-domain techniques and supervised (NMF, HMM) speech enhancement techniques. This paper gives a framework for developments in speech enhancement methodologies.

Free

A Survey on Statistical Based Single Channel Speech Enhancement Techniques

Sunnydayal. V, N. Sivaprasad, T. Kishore Kumar

Research article

Speech enhancement is a long-standing problem with various applications like hearing aids, automatic recognition and coding of speech signals. Single-channel speech enhancement techniques are used for enhancement of speech degraded by additive background noises. Background noise can have an adverse impact on our ability to converse without hindrance in very noisy environments, such as busy streets, a car or the cockpit of an airplane. Such noises can affect the quality and intelligibility of speech. This is a survey paper, and its objective is to provide an overview of speech enhancement algorithms that enhance a noisy speech signal corrupted by additive noise. The algorithms are mainly based on statistical approaches, and different estimators are compared. Challenges and opportunities of speech enhancement are also discussed. This paper helps in choosing the best statistical technique for speech enhancement.
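One of the statistical estimators such surveys compare is the Wiener-type gain, which scales each noisy spectral component by G = snr / (1 + snr). The sketch below uses a fixed noise estimate and a maximum-likelihood a-priori SNR estimate as illustrative simplifications; real systems track noise adaptively.

```python
# Sketch: Wiener-type spectral gain for single-channel enhancement.

def wiener_gain(snr):
    """Wiener gain for an a-priori SNR estimate of one frequency bin."""
    return snr / (1.0 + snr)

def enhance(noisy_mag2, noise_power):
    """Apply the gain bin-by-bin to noisy magnitude-squared spectra,
    returning enhanced magnitudes."""
    out = []
    for mag2 in noisy_mag2:
        snr = max(mag2 / noise_power - 1.0, 0.0)   # ML a-priori SNR estimate
        out.append(wiener_gain(snr) * mag2 ** 0.5)
    return out
```

Bins at or below the noise floor get a gain of zero, while high-SNR bins pass almost unchanged; the competing estimators differ mainly in how this gain curve is derived.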

Free

A Survey on Various Compression Methods for Medical Images

S.Sridevi M.E, V.R.Vijayakuymar, R.Anuja

Research article

Medical image compression plays a key role as hospitals move towards filmless imaging and go completely digital. Image compression allows Picture Archiving and Communication Systems (PACS) to reduce file sizes and storage requirements while maintaining relevant diagnostic information. Lossy compression schemes are generally not used in medical image compression due to the possible loss of useful clinical information, and because operations like enhancement may lead to further degradation of a lossy-compressed image. Medical imaging poses the great challenge of having compression algorithms that reduce the loss of fidelity as much as possible, so as not to contribute to diagnostic errors, and yet have high compression rates for reduced storage and transmission time. This paper outlines a comparison of compression methods such as Shape-Adaptive Wavelet Transform and Scaling Based ROI, JPEG2000 Max-Shift ROI Coding, JPEG2000 Scaling-Based ROI Coding, Discrete Cosine Transform, Discrete Wavelet Transform and Subband Block Hierarchical Partitioning on the basis of compression ratio and compression quality.
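The two figures of merit named at the end can be sketched as follows: compression ratio relates original to compressed size, and quality is commonly reported as PSNR between the original and the reconstructed image. The 8-bit peak value and the toy pixel data are illustrative assumptions.

```python
# Sketch: compression ratio and PSNR, the comparison criteria.
import math

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")             # lossless: identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

ROI-based schemes exploit exactly this trade-off: near-infinite PSNR inside the diagnostically relevant region, higher compression ratio elsewhere.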

Free

A Type-2 Fuzzy Logic Based Framework for Function Points

Anupama Kaushik, A.K. Soni, Rachna Soni

Research article

Software effort estimation is crucial in software project planning, and accurate estimation is critical for project success. There are many software prediction models, and all of them utilize software size as a key factor to estimate effort. The Function Points size metric is a popular method for estimating and measuring the size of application software based on the functionality of the software from the user’s point of view. While there has been great advancement in software development, the weight values assigned to count standard FP remain the same. In this paper, a concept for calibrating the function point weights using a Type-2 fuzzy logic framework is provided, whose aim is to estimate a more accurate software size for various software applications and to improve the effort estimation of software projects. Evaluation experiments have shown the framework to be promising.
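The fixed weights the paper calibrates appear in the standard (crisp) Function Point computation, which can be sketched as follows. The average-complexity weights and counts below are illustrative; the FP formula with the 14 general system characteristics is the standard IFPUG form.

```python
# Sketch: the crisp Function Point computation the fuzzy framework calibrates.
# Average-complexity weights for the five function types (illustrative).
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """Unadjusted FP: weighted sum over function-type counts."""
    return sum(WEIGHTS[t] * n for t, n in counts.items())

def adjusted_fp(counts, gsc_scores):
    """FP = UFP * (0.65 + 0.01 * sum of the 14 GSC scores)."""
    vaf = 0.65 + 0.01 * sum(gsc_scores)
    return unadjusted_fp(counts) * vaf
```

The Type-2 fuzzy framework replaces the fixed entries of `WEIGHTS` with calibrated values, leaving the rest of the computation unchanged.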

Free

A Unified Model of Clustering and Classification to Improve Students’ Employability Prediction

Pooja Thakar, Anil Mehta, Manisha

Research article

Data mining is gaining immense popularity in the field of education due to its predictive capabilities, but most of the prior effort in this area is directed only towards prediction of performance in academic results. Nowadays, education has become employment oriented, yet very little attempt is made to predict students’ employability. Precise prediction of students’ performance in campus placements at an early stage can identify students who are at risk of unemployment, so that proactive actions can be taken to improve their performance. Existing research on students’ employability prediction is either based upon only one type of course or on a single university/institute, and thus is not scalable from one context to another. With this necessity, the conception of a unified model of clustering and classification is proposed in this paper. With the notion of unification, data of professional courses, namely Engineering and Masters in Computer Applications students, are collected from various universities and institutions across India. The data is large, multivariate, incomplete, heterogeneous and unbalanced in nature. To deal with such data, a unified predictive model is built by integrating clustering and classification techniques. Two-level clustering (k-means kernel) with chi-square analysis is applied at the pre-processing stage for the automated selection of relevant attributes, and then an ensemble vote classification technique with a combination of four classifiers, namely k-star, random tree, simple CART and random forest, is applied to predict students’ employability. The proposed framework provides a generalized solution for student employability prediction. Comparative results clearly depict the model’s performance over various classification techniques. Also, when the proposed model is applied at the level of a state, classification accuracy touches 96.78% with a 0.937 kappa value.

Free

A community based reliable trusted framework for collaborative filtering

Satya Keerthi Gorripati, M. Kamala Kumari, Anupama Angadi

Research article

Recommender systems are a primary component of online service providers, processing the plentiful information produced by users’ histories (e.g., their purchases, ratings of products, activities, browsing patterns). Recommendation algorithms use this historical information and its contextual data to offer a list of likely items for each user. Traditional recommender algorithms are built on the similarity between items or users (e.g., a user may purchase the same items as his nearest user). To reduce the limitations of traditional approaches and to improve the quality of recommender systems, a reliability-based community method is introduced. This method comprises three steps. The first step identifies the trusted relations of the current user by allowing trust propagation in the trust network. In the next step, the ratings of the selected trusted neighbourhood are used for predicting an unrated item of the current user; the prediction relies only on items that belong to the candidate item’s community. Finally, the reliability metric is computed to assess the worth of the predicted rating. Experimental results confirmed that the proposed framework attained higher accuracy compared to state-of-the-art recommender system approaches.
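The prediction step described above can be sketched as a trust-weighted average over the ratings of the selected trusted neighbours. The neighbour data, trust weights and item names below are illustrative assumptions, not the paper's dataset.

```python
# Sketch: predicting an unrated item from trusted neighbours' ratings.

def predict_rating(neighbours, item):
    """neighbours: list of (trust_weight, {item: rating}) pairs.
    Returns the trust-weighted average rating, or None if no
    trusted neighbour has rated the item."""
    num = den = 0.0
    for trust, ratings in neighbours:
        if item in ratings:
            num += trust * ratings[item]
            den += trust
    return num / den if den else None

neighbours = [
    (0.9, {"film_a": 5, "film_b": 3}),   # strongly trusted neighbour
    (0.4, {"film_a": 2}),                # weakly trusted neighbour
]
```

The strongly trusted neighbour dominates the prediction; the paper's reliability metric would then score how much this estimate can be trusted.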

Free

A comparative analysis of firefly and Fuzzy-Firefly based kernelized hybrid C-Means algorithms

B.K. Tripathy, Anmol Agrawal, A. Jayaram Reddy

Research article

In most clustering algorithms, the assignment of initial centroids is performed randomly, which affects both the final outcome and the number of iterations required. Another aspect of these approaches is the use of Euclidean distance as the measure of similarity between data points, which is handicapped by the linear separability of input data. The purpose of this paper is to combine suitable techniques so that both of the above problems can be handled suitably, leading to efficient algorithms. For the initial assignment of centroids we use the Firefly and Fuzzy Firefly algorithms. We replace the Euclidean distance with kernels (Gaussian and hyper-tangent), leading to hybridized versions. For experimental analysis we use five different images from different domains as input. Two efficiency measures, the Davies–Bouldin index (DB) and the Dunn index (D), are used for comparison. Tabular values, their graphical representations and output images are generated to support the claims. The analysis proves the superiority of the optimized algorithms over their existing counterparts. We also find that the hyper-tangent kernel with the Rough Intuitionistic Fuzzy C-Means algorithm using the Fuzzy Firefly algorithm produces the best results and has a much faster convergence rate. The analysis of medical, satellite or geographical images can be done more efficiently using the proposed optimized algorithms, which are expected to play an important role in image segmentation and analysis.
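Replacing the Euclidean distance with a kernel-induced distance can be sketched as follows for the Gaussian case: since K(x, x) = 1, the squared distance between two points in the kernel feature space reduces to 2·(1 − K(x, y)). The sigma value is an assumed tuning parameter.

```python
# Sketch: Gaussian-kernel distance used in place of Euclidean distance.
import math

def gaussian_kernel(x, y, sigma=1.0):
    """K(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2.0 * sigma ** 2))

def kernel_distance_sq(x, y, sigma=1.0):
    """Squared distance between x and y in the kernel feature space:
    ||phi(x) - phi(y)||^2 = K(x,x) + K(y,y) - 2*K(x,y) = 2*(1 - K(x,y))."""
    return 2.0 * (1.0 - gaussian_kernel(x, y, sigma))
```

A C-means update using this distance can separate clusters that are not linearly separable in the input space, which is the motivation for the hybridized versions.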

Free

A cyclic attribution technique feature selection method for human activity recognition

Win Win Myo, Wiphada Wettayaprasit, Pattara Aiyarak

Research article

Feature selection is a technique for selecting the most important features for predictive model construction. It is a key component of machine learning in many pattern recognition applications. The primary objective of this paper is to create a more precise system for Human Activity Recognition (HAR) by identifying the most appropriate features. We propose a Cyclic Attribution Technique (CAT) for feature selection for the recognition of human activity, based on group theory and the fundamental properties of the cyclic group. We tested our model on the UCI-HAR dataset, focusing on six activities. With the proposed method, 561 features could be reduced to 63. Using an Artificial Neural Network (ANN), we compared the performance of our new dataset with selected features against the original dataset classifier. Results showed that the model could provide an excellent overall accuracy of 96.7%. The proposed CAT technique can supply high-quality features for the success of HAR with an ANN classifier, and offers two further benefits: reduced classification overfitting and reduced training time.

Free

A formal model for legacy system understanding

A.Sivagnana Ganesan, T.Chithralekha, M. Rajapandian

Research article

Migration of a legacy system is not a single-step activity but a process that comprises several phases, of which Legacy System Understanding (LSU) is the first step. The intent of this work is to carry out a detailed study of Legacy System Understanding in terms of the techniques and tools used, and to identify the potential gaps in them. The understanding of the legacy system has to be at the system level rather than the code level, which has a narrow perspective, because migration at the code level may have a cascading impact on different aspects of the legacy system. These findings have enabled us to formulate a process for building up an artefact repository and an artefact dependency repository. These repositories, along with the legacy system itself, have aided us in understanding the legacy system in a comprehensive manner in terms of migrating artefacts in the context of migration of legacy systems. A formal mathematical model for representing the status of LSU, and an application of the same to a case study, are presented.

Free

A fuzzy-based adaptive energy efficient load distribution scheme in ad-hoc networks

Varun Kumar Sharma, Lal Pratap Verma, Mahesh Kumar

Research article

Traditional energy-aware routing policies are not capable enough to keep up with the dynamic properties of mobile ad-hoc networks (e.g., mobility, quick topology changes, link-layer contention) and do not offer adequate performance in highly congested situations. In past decades, authors have expressed their concerns through smart routing paradigms aiming at lower energy consumption. However, many of these proposals are not able to offer significant performance concerning quality of service. Consequently, interest has shifted towards cross-layer energy optimization schemes. These proposals make use of the lower layers’ special information and provide significant performance enhancements; still, many issues are associated with them. Moreover, many of the proposals consider idle and sleep power consumption, which also causes a considerable amount of energy consumption, yet these methods require complex synchronization and efficient coordination, which is too inefficient for extremely variable networks (MANETs). To address these issues, we propose an effective fuzzy-based energy-efficient load distribution scheme which takes care of energy consumption considering congestion as a parameter. In comparison with some of the existing energy-aware routing strategies, the proposed method offers substantial improvements in terms of total energy consumption, network lifetime, total number of dead nodes, and average throughput.

Free

Journal