Journal articles - International Journal of Intelligent Systems and Applications

All articles: 1159

A Review on Gravitational Search Algorithm and its Applications to Data Clustering & Classification

Yugal kumar, G. Sahoo

Research article

Natural phenomena and swarm behavior are active areas of research, and a large number of algorithms have been developed based on them. These algorithms have been applied to various computational problems and have produced better results than conventional methods, yet no single algorithm works for all computational problems. In 2009, a new algorithm based on the theory of gravity, named the gravitational search algorithm (GSA), was developed for continuous optimization problems. In a short span of time, GSA gained popularity among researchers and has been applied to a large number of problems such as clustering, classification, and parameter identification. This paper presents a compendious survey of the GSA algorithm and its applications, and highlights the applicability of GSA to data clustering and classification.
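As a rough illustration of the algorithm the survey covers, the following is a minimal, self-contained sketch of GSA for continuous minimization. It follows the common formulation (fitness-derived masses, a decaying gravitational constant, randomized velocity updates); the constants and the test function are illustrative assumptions, not the paper's experiments.

```python
import math
import random

def gsa_minimize(f, dim, bounds, n_agents=20, iters=100, g0=100.0, alpha=20.0):
    """Minimal GSA sketch: agents are masses, better fitness means a
    heavier mass, and heavier agents pull the others towards them."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(x) for x in pos]
        f_best, f_worst = min(fit), max(fit)
        if f_best < best_f:
            best_f, best_x = f_best, list(pos[fit.index(f_best)])
        g = g0 * math.exp(-alpha * t / iters)        # gravitational constant decays
        span = (f_worst - f_best) or 1e-12
        raw = [(f_worst - fi) / span for fi in fit]  # better fitness -> larger mass
        total = sum(raw) or 1e-12
        mass = [m / total for m in raw]
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = math.dist(pos[i], pos[j]) + 1e-12
                for d in range(dim):
                    acc[d] += random.random() * g * mass[j] * (pos[j][d] - pos[i][d]) / dist
            for d in range(dim):
                vel[i][d] = random.random() * vel[i][d] + acc[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
    return best_x, best_f

# Minimize the 2-D sphere function as a toy continuous optimization problem
random.seed(0)
best_x, best_f = gsa_minimize(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

The sketch omits refinements such as the shrinking Kbest neighborhood used in the original paper.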

Free

A Review on Large Scale Graph Processing Using Big Data Based Parallel Programming Models

Anuraj Mohan, Remya G

Research article

Processing big graphs has become an increasingly essential activity in various fields like engineering, business intelligence and computer science. Social networks and search engines usually generate large graphs, which demand sophisticated techniques for social network analysis and web structure mining. Latest trends in graph processing tend towards using Big Data platforms for parallel graph analytics. MapReduce has emerged as a Big Data based programming model for processing massively large datasets. Apache Giraph, an open source implementation of Google Pregel based on the Bulk Synchronous Parallel (BSP) model, is used for graph analytics in social networks like Facebook. This work investigates the algorithmic effects of the MapReduce and BSP models on graph problems. The triangle counting problem in graphs is taken as a benchmark, and evaluations are made on the basis of computation time on the same cluster, scalability in relation to graph and cluster size, resource utilization and the structure of the graph.
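Since triangle counting is the benchmark the comparison rests on, a minimal sequential reference implementation may help fix the problem itself; it performs the per-vertex neighbor-pair check that the MapReduce and BSP versions distribute across the cluster.

```python
from itertools import combinations

def count_triangles(edges):
    """Count triangles in an undirected graph: for each node, check which
    pairs of its neighbors are themselves connected. Each triangle is seen
    three times (once per vertex), so divide the total by 3."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0
    for node, neigh in adj.items():
        for a, b in combinations(sorted(neigh), 2):
            if b in adj.get(a, set()):
                total += 1
    return total // 3

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5)]
print(count_triangles(edges))  # two triangles: {1,2,3} and {3,4,5}
```

In a MapReduce formulation the neighbor-pair generation becomes the map phase and the closing-edge check the reduce phase; in Pregel/Giraph, vertices exchange neighbor lists over BSP supersteps instead.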

Free

A Revisit to Probability - Possibility Consistency Principles

Mamoni Dhar

Research article

In this article, our main intention is to highlight the fact that the probable links between probability and possibility, which were established by different authors at different points in time on the basis of some well-known consistency principles, cannot provide the desired result. The paper therefore discusses some prominent works on transformations between probability and possibility, and finally suggests a new principle, because none of the existing ones yields a unique transformation. The new consistency principle suggested here would in turn replace all others that exist in the literature by providing a reliable estimate of consistency between the two. Furthermore, some properties of the entropy of fuzzy numbers are also presented in this article.

Free

A Risk-Aware Application Scheduling Model in Cloud Computing Scenarios

Ala Arman

Research article

Cloud users usually have different preferences over the applications that they outsource to the cloud, based on the financial profit of each application's execution. Moreover, various types of virtual machines are offered by a cloud service provider with distinct characteristics, such as rental prices and availability levels, each with a different probability of occurrence and a penalty, which is paid to the user in case the virtual machine is not available. Therefore, the problem of application scheduling in cloud computing environments, considering the risk of financial loss of application-to-VM assignment, becomes a challenging issue. In this paper, we propose a risk-aware scheduling model that uses risk analysis to allocate applications to virtual machines so that the expected total pay-off of an application is maximized, taking into account the priority of applications. A running example is used throughout the paper to better illustrate the model and its application to improve the efficiency of resource assignment in cloud computing scenarios.
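The paper's exact model is not reproduced here, but the core idea, choosing for each application the VM that maximizes a priority-weighted expected pay-off over profit, penalty refund and rental price, can be sketched as follows. All field names and numbers are illustrative assumptions, not the paper's notation.

```python
def expected_payoff(profit, priority, vm):
    """Expected pay-off of one application-to-VM assignment: weighted
    profit when the VM is available, the penalty refund otherwise,
    minus the rental price."""
    p = vm["availability"]
    return priority * (profit * p + vm["penalty"] * (1 - p)) - vm["price"]

def schedule(apps, vms):
    """Greedy sketch: each application independently picks the VM that
    maximizes its expected pay-off."""
    return {
        app["name"]: max(
            vms, key=lambda vm: expected_payoff(app["profit"], app["priority"], vm)
        )["name"]
        for app in apps
    }

vms = [
    {"name": "small", "price": 2.0, "availability": 0.90, "penalty": 0.5},
    {"name": "large", "price": 6.0, "availability": 0.99, "penalty": 2.0},
]
apps = [
    {"name": "batch", "profit": 5.0, "priority": 1.0},
    {"name": "trading", "profit": 40.0, "priority": 2.0},
]
print(schedule(apps, vms))  # {'batch': 'small', 'trading': 'large'}
```

The high-priority, high-profit application justifies the expensive, highly available VM, while the low-value one does not, which is the trade-off the risk analysis formalizes.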

Free

A Rough Sets-based Agent Trust Management Framework

Sadra Abedinzadeh, Samira Sadaoui

Research article

In a virtual society, which consists of several autonomous agents, trust helps agents deal with the openness of the system by identifying the agents best capable of performing a specific task or achieving a special goal. In this paper, we introduce ROSTAM, a new approach to agent trust management based on the theory of Rough Sets. ROSTAM is a generic trust management framework that can be applied to any type of multi-agent system; however, the features of the application domain must be provided to ROSTAM. These features form the trust attributes. By collecting the values for these attributes, ROSTAM is able to generate a set of trust rules by employing the theory of Rough Sets. ROSTAM then uses the trust rules to extract the set of the most trusted agents and forwards the user's request to those agents only. After getting the results, the user must rate the interaction with each trusted agent. The rating values are subsequently utilized for updating the trust rules. We applied ROSTAM to the domain of cross-language Web search. The resulting Web search system recommends to the user the most trusted pairs of translator and search engine, that is, the pairs that return the results with the highest retrieval precision.

Free

A Self-driving Car Controller Module Based on Rule Base Reasoner

Anik Kumar Saha, Md. Abdur Razzaque

Research article

The rapid improvement of sensing and recognition technology has had an impact on the vehicle sector that has led to the development of self-driving cars. Such vehicles are capable of driving themselves without human interaction, relying mostly on cameras, various sensor technologies, and advanced algorithms for navigation. In this research, a controller module for a self-driving car project is proposed using a rule-based reasoner that is capable of driving a car considering the health condition of the driver, the road lanes, traffic signs, and obstacles created by other vehicles. A number of sensors, including a Global Positioning System module, camera, compass, ultrasonic sensor, and physiological sensors (heartbeat, blood pressure, body temperature, etc.), are involved in the reasoning. The proposed controller consists of several modules: a sensor module, lane detection module, road sign and human detection module, reasoning module, and instruction execution module. According to the experimental results, the proposed system is able to make correct decisions with a success rate of about 90-95%.

Free

A Simplified Efficient Technique for the Design of Combinational Logic Circuits

Vijayakumari C. K, Mythili. P, Rekha K James

Research article

A new Genetic Algorithm based approach to the design of combinational logic circuits which uses only 2-1 multiplexers as the basic design unit is proposed. To realize a function of n variables, a conventional design needs 2^n - 1 units and n levels. A property of a multiplexer tree is that all the units in a level share the same control signal. In this paper, flexibility has been introduced in selecting the control signals, so that units in the same level need not use the same select signal. Control signals can be any of the variables or functions derived from the immediately preceding level. Once a 100% fit circuit is evolved, the units are checked for redundancy and redundant units are eliminated, so that the generated circuit is optimal. It has been observed that the circuits evolved by this approach are superior to those of conventional design in terms of area, power and delay. As power dissipation is an important metric in VLSI design, power loss is usually minimized by eliminating unnecessary transitions/switching of idle multiplexers using a specific controller to select appropriate control signals; in the proposed design, however, power loss can be reduced without any additional device, and hence these circuits can be recommended for low-power devices.
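The 2^n - 1 unit count for the conventional design follows from Shannon expansion: every internal node of the multiplexer tree is one 2-1 mux, with 2^k muxes at level k. A small sketch (hypothetical helpers, not the paper's GA) counts the units and evaluates a function through the tree:

```python
def mux_count(n):
    """A conventional mux tree realizes any n-variable function with
    2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 units over n levels."""
    return sum(2 ** k for k in range(n))

def eval_mux_tree(f, bits):
    """Evaluate f via a chain of 2-1 mux selections, one variable per
    level, mirroring the Shannon-expansion tree structure."""
    def mux(sel, a, b):
        return b if sel else a  # the basic 2-1 multiplexer cell
    def branch(assigned):
        if len(assigned) == len(bits):
            return f(assigned)  # leaf: a constant 0/1, no mux needed
        i = len(assigned)
        return mux(bits[i], branch(assigned + (0,)), branch(assigned + (1,)))
    return branch(())

majority = lambda b: int(sum(b) >= 2)  # 3-input majority function
print(mux_count(3))  # 7 units for n = 3
print([eval_mux_tree(majority, b) for b in [(0, 1, 1), (1, 0, 0)]])  # [1, 0]
```

The paper's contribution is to let the GA pick per-unit select signals and prune redundant units, shrinking the circuit below this 2^n - 1 worst case.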

Free

A Soft Computing Technique for Improving the Fidelity of Thumbprints Based Identification Systems

Kamta Nath Mishra, Anupam Agrawal

Research article

With the advent of new thumbprint identification techniques, accurate personal identification is now easy and cheaper, with approximately zero false acceptance rates. This paper focuses on developing an advanced feature for thumbprint based identification systems with the help of soft computing and 2D transformation, which makes the technique more flexible and faithful. The thumbprint images of individuals were scanned with the help of an H3 T&A terminal for collecting self-generated datasets. The thumbprints of the self-generated and standard datasets were trained to form a refined set which includes linear and angular displacements of the thumbprint images. The newly obtained features of the refined datasets were stored in the database for further identification. In the proposed technique, the minutiae coordinates and orientation angles of the thumbprint of a person to be identified are computed and merged together for comparison. The minutiae coordinates and orientation angles of a person are compared with the minutiae trained-set values stored in the database at different linear and angular rotations for identity verification. The proposed technique was tested on fifty persons' self-generated datasets and on the standard FVC2002, FVC2004 and CASIA databases. In the experimentation and result analysis, we observed that the proposed technique accurately identifies a person on the basis of minutiae features of a thumbprint with low FNMR (False Non-Match Rate) values.

Free

A Solution for Android Device Misplacement through Bluetooth-enabled Technology

Kaven Raj S/O Manoharan, Siew-Chin Chong, Kuok-Kwee Wee

Research article

The number of smartphone users and mobile application offerings is growing rapidly nowadays. A mobile device is currently considered the most powerful and most needed device of this modern century. Every day new mobile applications are developed with their own compatibility, making sure to serve a particular smartphone model and its specifications correctly. The goal of this project is to develop a self-help Android application, namely "Dont Forget Me", which is user friendly and well defined to solve the problem of misplaced or lost smartphone devices. This missing-phone prevention alert application pairs with another device using a Bluetooth connection. The Bluetooth connection is established in order to prevent the device from being left behind or stolen. If the Bluetooth connection between the paired devices is lost, an alarm and a message notification are triggered to notify the user that the device was not brought along with them. A web application is also developed to help the user track, locate and lock the missing device.

Free

A Stochastic Prediction Interface for Urdu

Qaiser Abbas

Research article

This work lays down a foundation for text prediction in Urdu, an inflected and under-resourced language. The interface developed is not limited to a T9 (Text on 9 keys) application as used in embedded devices, which can only predict a word after its initial characters are typed. It is capable of predicting a word like T9 and also a sequence of words, one after another, in a continuous manner for fast document typing. It is based on an N-gram language model. This stochastic interface deals with three N-gram levels, from unigram to trigram, independently. The unigram mode is used for applications like T9, while the bigram and trigram modes are used for sentence prediction. The measures include the percentage of keystrokes saved, keystrokes until completion, and the percentage of time saved during typing. Two different corpora are merged to build a sufficient amount of data. The test data is divided equally into a test set and a held-out set for experimental purposes. This whole exercise enables the QASKU system to outperform FastType with almost 15% more saved keystrokes.
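The bigram mode the abstract describes can be sketched in a few lines: count which word follows which, then offer the most frequent follower as the prediction. This is a generic N-gram sketch on toy English data, not the paper's Urdu corpora or smoothing.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word bigrams so that the most frequent follower of each
    word can be offered as the next-word prediction."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most likely next word, or None if the word is unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "the dog sat on the rug",
]
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" is the most frequent follower of "the"
```

Accepting a correct prediction replaces the whole next word with one keystroke, which is where the keystrokes-saved metric comes from; the trigram mode conditions on the previous two words instead of one.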

Free

A Stroke Shape and Structure Based Approach for Off-line Chinese Handwriting Identification

Jun Tan, Jian-Huang Lai, Chang-Dong Wang, Ming-Shuai Feng

Research article

Handwriting identification is a technique for automatic person identification based on personal handwriting. It is a hot research topic in the field of pattern recognition due to its indispensable role in biometric individual identification. Although many approaches have emerged, recent research has shown that off-line Chinese handwriting identification remains a challenging problem. In this paper, we propose a novel method for off-line Chinese handwriting identification based on stroke shapes and structures. To extract the features embedded in Chinese handwriting characters, two special structures have been explored according to the traits of Chinese handwriting characters: the bounding rectangle and the TBLR quadrilateral. Sixteen features are extracted from the two structures and used to compute the unadjusted similarity, and four other commonly used features are computed to adjust the similarity adaptively. The final identification is performed on the similarity. Experimental results on the SYSU and HanjaDB1 databases have validated the effectiveness of the proposed method.

Free

A Study on the formation of the gravitational Model based on Point-mass Method

Jianqiang Wang, Zhiqi Yu

Research article

The virtual point-mass method has been widely used for the approximation of the local gravity field, which currently remains a difficult problem internationally. In this paper, the approximation theory of the point-mass model is briefly introduced, and the characteristics of the elements in the coefficient matrix for the model construction are analyzed by numerical calculation. The observations of gravity anomaly are simulated from EGM2008 with degree and order 720, and the approximated region is 32°-34°N and 103°-105°E. A four-tier point-mass model, built on the base of the geopotential model with degree and order 36 from low frequency to high frequency, is applied to approximate the local Earth's gravity field. The results of the experiments show that the truncation error of the gravity disturbance produced by the point-mass model is less than 2 mGal in the radial direction, and that there is an optimal truncation error for a certain spectrum of the gravity field in space.

Free

A Survey Work on Optimization Techniques Utilizing Map Reduce Framework in Hadoop Cluster

Bibhudutta Jena, Mahendra Kumar Gourisaria, Siddharth Swarup Rautaray, Manjusha Pandey

Research article

Data is one of the most important and vital aspects of different activities in today's world, and therefore a vast amount of data is generated every second. The rapid growth of data in recent times across different domains requires intelligent data analysis tools that can satisfy the need to analyze huge amounts of data. The MapReduce framework is basically designed to process large amounts of data and to support effective decision making. It consists of two important tasks, named map and reduce. Optimization is the act of achieving the best possible result under given circumstances; the goal of MapReduce optimization is to minimize execution time and to maximize system performance. This survey paper compares different optimization techniques used in the MapReduce framework and in big data analytics. Various sources of big data generation have been summarized based on various applications of big data. The wide range of application domains for big data analytics stems from its characteristic properties: volume, velocity, variety, veracity and value. These characteristics arise from the inclusion of structured, semi-structured and unstructured data, for which new sets of tools like NoSQL, MapReduce and Hadoop are required. The presented survey provides an insight into the fundamentals of big data analytics, but aims at an analysis of the various optimization techniques used in the MapReduce framework and in big data analytics.
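The two tasks the abstract names, map and reduce, compose with an implicit shuffle phase in between. A toy single-process simulation (not Hadoop, just the programming model) makes the three phases explicit on the classic word-count example:

```python
from collections import defaultdict
from itertools import chain

def map_reduce(inputs, mapper, reducer):
    """Toy single-process MapReduce: map each record to (key, value)
    pairs, shuffle by grouping values under their key, then reduce
    each key's value list to a single result."""
    # Map phase: every input record yields zero or more (key, value) pairs
    pairs = chain.from_iterable(mapper(rec) for rec in inputs)
    # Shuffle phase: group values by key (Hadoop does this across the cluster)
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce phase: collapse each group independently
    return {key: reducer(key, values) for key, values in groups.items()}

# Classic word count expressed in the model
counts = map_reduce(
    ["big data big cluster", "big data"],
    mapper=lambda line: [(w, 1) for w in line.split()],
    reducer=lambda key, values: sum(values),
)
print(counts)  # {'big': 3, 'data': 2, 'cluster': 1}
```

The optimization techniques the survey covers target exactly these phases: combiners shrink map output before the shuffle, partitioners balance keys across reducers, and schedulers overlap the phases on the cluster.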

Free

A Survey on Cooperative Communication in Wireless Networks

A. F. M. Shahen Shah, Md. Shariful Islam

Research article

Cooperative communication in wireless networks has become more and more attractive recently, since it can mitigate the particularly severe channel impairments arising from multipath propagation; the greater benefits are gained by exploiting spatial diversity in the channel. In this paper, an overview of cooperative communication in wireless networks is presented. We describe the benefits of cooperative transmission over traditional non-cooperative communication. Practical issues and challenges in cooperative communication are identified. In particular, we present a study of the advantages, applications and different routing strategies for cooperative mesh networks, ad hoc networks and wireless sensor networks.

Free

A Survey on Speech Enhancement Methodologies

Ravi Kumar. K, P.V. Subbaiah

Research article

Speech enhancement is a technique that processes noisy speech signals. The aim of speech enhancement is to improve the perceived quality of speech and/or to improve its intelligibility. Due to its vast applications in mobile telephony, VOIP, hearing aids, Skype and speaker recognition, the challenges in speech enhancement have grown over the years. It is especially challenging to suppress background noise that affects human communication in noisy environments like airports, road works, traffic, and cars. The objective of this survey paper is to outline the single-channel speech enhancement methodologies used for enhancing speech corrupted by additive background noise, and also to discuss the challenges and opportunities of single-channel speech enhancement. This paper mainly focuses on transform-domain techniques and supervised (NMF, HMM) speech enhancement techniques, and gives a framework for developments in speech enhancement methodologies.

Free

A Survey on Statistical Based Single Channel Speech Enhancement Techniques

Sunnydayal. V, N. Sivaprasad, T. Kishore Kumar

Research article

Speech enhancement is a long-standing problem with various applications like hearing aids, automatic recognition and coding of speech signals. Single-channel speech enhancement techniques are used for the enhancement of speech degraded by additive background noise. Background noise can have an adverse impact on our ability to converse without hindrance in very noisy environments, such as busy streets, a car, or the cockpit of an airplane. Such noise can affect the quality and intelligibility of speech. This is a survey paper whose objective is to provide an overview of speech enhancement algorithms that enhance noisy speech corrupted by additive noise. The algorithms are mainly based on statistical approaches, and different estimators are compared. Challenges and opportunities of speech enhancement are also discussed. This paper helps in choosing the best statistically based technique for speech enhancement.

Free

A Survey on Various Compression Methods for Medical Images

S.Sridevi M.E, V.R.Vijayakuymar, R.Anuja

Research article

Medical image compression plays a key role as hospitals move towards filmless imaging and go completely digital. Image compression allows Picture Archiving and Communication Systems (PACS) to reduce file sizes and storage requirements while maintaining relevant diagnostic information. Lossy compression schemes are not used in medical image compression due to the possible loss of useful clinical information, and because operations like enhancement may lead to further degradation of lossy-compressed images. Medical imaging poses the great challenge of finding compression algorithms that reduce the loss of fidelity as much as possible, so as not to contribute to diagnostic errors, yet have high compression rates for reduced storage and transmission time. This paper compares compression methods such as Shape-Adaptive Wavelet Transform with Scaling-Based ROI, JPEG2000 Max-Shift ROI Coding, JPEG2000 Scaling-Based ROI Coding, Discrete Cosine Transform, Discrete Wavelet Transform and Subband Block Hierarchical Partitioning on the basis of compression ratio and compression quality.

Free

A Type-2 Fuzzy Logic Based Framework for Function Points

Anupama Kaushik, A.K. Soni, Rachna Soni

Research article

Software effort estimation is crucial in software project planning, and accurate estimation is critical for a project's success. There are many software prediction models, and all of them utilize software size as a key factor to estimate effort. The Function Points size metric is a popular method for estimating and measuring the size of application software based on the functionality of the software from the user's point of view. While there has been great advancement in software development, the weight values assigned to count standard FP remain the same. In this paper, the concept of calibrating the function point weights using a Type-2 fuzzy logic framework is presented, whose aim is to estimate a more accurate software size for various software applications and to improve the effort estimation of software projects. Evaluation experiments have shown the framework to be promising.

Free

A Unified Model of Clustering and Classification to Improve Students’ Employability Prediction

Pooja Thakar, Anil Mehta, Manisha

Research article

Data mining is gaining immense popularity in the field of education due to its predictive capabilities, but most prior effort in this area is directed only towards the prediction of academic results. Nowadays, education has become employment oriented, yet very little attempt has been made to predict students' employability. Precise prediction of students' performance in campus placements at an early stage can identify students who are at risk of unemployment, so that proactive actions can be taken to improve their performance. Existing research on students' employability prediction is either based on only one type of course or on a single university/institute, and thus is not scalable from one context to another. With this necessity, a unified model of clustering and classification is proposed in this paper. With the notion of unification, data of students of professional courses, namely Engineering and Masters in Computer Applications, are collected from various universities and institutions across India. The data is large, multivariate, incomplete, heterogeneous and unbalanced in nature. To deal with such data, a unified predictive model is built by integrating clustering and classification techniques. Two-level clustering (k-means kernel) with chi-square analysis is applied at the pre-processing stage for the automated selection of relevant attributes, and then an ensemble vote classification technique with a combination of four classifiers, namely k-star, random tree, simple CART and random forest, is applied to predict students' employability. The proposed framework provides a generalized solution for student employability prediction. Comparative results clearly show the model's performance advantage over various classification techniques. Moreover, when the proposed model is applied at the level of a state, classification accuracy reaches 96.78% with a 0.937 kappa value.
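The final stage, ensemble vote classification, simply lets several base classifiers vote and takes the majority label. A minimal sketch of that combination step follows; the stand-in rule functions and feature names are illustrative assumptions, not the paper's trained k-star, random tree, CART and random forest models.

```python
from collections import Counter

def ensemble_vote(classifiers, sample):
    """Combine base classifiers by majority vote: each one predicts a
    label for the sample, and the most common label wins."""
    votes = Counter(clf(sample) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Hypothetical stand-in classifiers, each mapping a feature dict to a label
clf_gpa = lambda s: "employable" if s["gpa"] >= 7.0 else "at_risk"
clf_comm = lambda s: "employable" if s["communication"] >= 6 else "at_risk"
clf_tech = lambda s: "employable" if s["tech_score"] >= 60 else "at_risk"

student = {"gpa": 7.5, "communication": 5, "tech_score": 70}
print(ensemble_vote([clf_gpa, clf_comm, clf_tech], student))  # 2 of 3 vote "employable"
```

Voting tends to smooth over the individual weaknesses of heterogeneous base learners, which is why the paper pairs it with four classifiers of different inductive biases.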

Free

A community based reliable trusted framework for collaborative filtering

Satya Keerthi Gorripati, M. Kamala Kumari, Anupama Angadi

Research article

Recommender systems are a primary component of online service providers, digesting the plentiful information produced by users' histories (e.g., their purchases, ratings of products, activities, browsing patterns). Recommendation algorithms use this historical information and its contextual data to offer a list of likely items to each user. Traditional recommender algorithms are built on the similarity between items or users (e.g., a user may purchase the same items as his nearest user). To reduce the limitations of traditional approaches and to improve the quality of recommender systems, a reliability-based community method is introduced. This method comprises three steps. The first step identifies the trusted relations of the current user by allowing trust propagation in the trust network. In the next step, the ratings of the selected trusted neighborhood are used for predicting an unrated item of the current user; the prediction relies only on items that belong to the candidate item's community. Finally, a reliability metric is computed to assess the worth of the predicted rating. Experimental results confirmed that the proposed framework attained higher accuracy compared to state-of-the-art recommender system approaches.
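The prediction step can be sketched as a trust-weighted average of the ratings given by the user's trusted neighbors, with the share of trust mass that actually rated the item serving as a crude reliability score. The data layout and the reliability formula here are illustrative assumptions, not the paper's exact formulation.

```python
def predict_rating(user, item, ratings, trust):
    """Predict user's rating of item from trusted neighbours' ratings,
    weighted by trust; also return a simple reliability score."""
    neighbours = trust.get(user, {})
    num = den = 0.0
    for other, t in neighbours.items():
        r = ratings.get(other, {}).get(item)
        if r is not None:
            num += t * r
            den += t
    if den == 0.0:
        return None, 0.0  # no trusted neighbour rated the item
    total_trust = sum(neighbours.values()) or 1.0
    reliability = den / total_trust  # share of trust mass behind the prediction
    return num / den, reliability

ratings = {"bob": {"film": 4.0}, "carol": {"film": 5.0}, "dave": {}}
trust = {"alice": {"bob": 0.8, "carol": 0.4, "dave": 0.8}}
pred, rel = predict_rating("alice", "film", ratings, trust)
print(round(pred, 2), round(rel, 2))  # 4.33 0.6
```

A low reliability value signals that few trusted neighbors backed the prediction, which is exactly the case the paper's reliability metric is meant to flag before recommending.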

Free

Journal