
Volume 2, Issue 6, June 2010

Efficient Region-Based Image Querying [ Full-Text ]
S. Sadek, A. Al-Hamadi, B. Michaelis and U. Sayed

Retrieving images from large and varied repositories using visual content has been one of the major research topics, and a challenging task, in the image management community. In this paper we present an efficient approach for region-based image classification and retrieval using a fast multi-level neural network model. The advantages of this neural model in the image classification and retrieval domain are highlighted. The proposed approach accomplishes its goal in three main steps. First, with the help of a mean-shift based segmentation algorithm, significant regions of the image are isolated. Second, color and texture features of each region are extracted using color moments and the 2D wavelet decomposition technique. Third, the multi-level neural classifier is trained to classify each region in a given image into one of five predefined categories, i.e., "Sky", "Building", "SandnRock", "Grass" and "Water". Simulation results show that the proposed method is promising in terms of classification and retrieval accuracy, and compares favorably with the best published results of other state-of-the-art image retrieval techniques.
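A minimal sketch of the three-step pipeline just described, mean-shift segmentation, color-moment plus wavelet features, and a neural classifier, assuming NumPy, SciPy, PyWavelets and scikit-learn; the category labels follow the paper, but every function and parameter choice here is illustrative rather than the authors' implementation (scikit-learn's MLPClassifier stands in for the fast multi-level neural network):

    # Illustrative sketch: segment with mean shift, extract color/texture
    # features per region, classify regions into the five categories.
    import numpy as np
    import pywt
    from scipy.stats import skew
    from sklearn.cluster import MeanShift
    from sklearn.neural_network import MLPClassifier  # stand-in classifier

    CATEGORIES = ["Sky", "Building", "SandnRock", "Grass", "Water"]

    def segment_regions(image):
        """Cluster pixels by color with mean shift; each cluster is a region."""
        h, w, _ = image.shape
        labels = MeanShift(bandwidth=30.0, bin_seeding=True).fit_predict(
            image.reshape(-1, 3).astype(float))
        return labels.reshape(h, w)

    def region_features(image, region_mask):
        """9 color moments (mean/std/skewness per channel) + wavelet energies."""
        pix = image[region_mask].astype(float)
        color = np.concatenate([pix.mean(0), pix.std(0), skew(pix, axis=0)])
        gray = image.mean(axis=2) * region_mask        # isolate the region
        cA, (cH, cV, cD) = pywt.dwt2(gray, "db1")      # one-level 2D wavelet
        texture = [np.abs(band).mean() for band in (cA, cH, cV, cD)]
        return np.concatenate([color, texture])

    # Training (X: features of hand-labelled regions, y: category indices):
    #   clf = MLPClassifier(hidden_layer_sizes=(32, 16)).fit(X, y)
    # Retrieval then indexes images by the predicted labels of their regions.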

——————————————————————————————————————————————————————————————————–

Studies on Relevance, Ranking and Results Display [ Full-Text ]
J. Gelernter, D. Cao and J. Carbonell

This study considers the extent to which users with the same query agree as to what is relevant, and how what is considered relevant may translate into a retrieval algorithm and results display.  To combine user perceptions of relevance with algorithm rank and to present results, we created a prototype digital library of scholarly literature.  We confine studies to one population of scientists (paleontologists), one domain of scholarly scientific articles (paleo-related), and a prototype system (PaleoLit) that we built for the purpose.  Based on the principle that users do not pre-suppose answers to a given query but that they will recognize what they want when they see it, our system uses a rules-based algorithm to cluster results into fuzzy categories with three relevance levels.  Our system matches at least 1/3 of our participants’ relevancy ratings 87% of the time.  Our subsequent usability study found that participants trusted our uncertainty labels but did not value our color-coded horizontal results layout above a standard retrieval list.  We posit that users make such judgments in limited time, and that time optimization per task might help explain some of our findings.

——————————————————————————————————————————————————————————————————–

Analysis of Microprocessor Based Protective Relay's (MBPR) Differential Equation Algorithms [ Full-Text ]
Bruno Osorno

This paper analyses and explains, from a systems point of view, microprocessor based protective relay (MBPR) systems, with emphasis on differential equation algorithms. At present, protective relaying in power systems using MBPR systems based on the differential equation algorithm is valued more highly than protection relaying based on any other type of algorithm, because of its advantages in accuracy and implementation. The MBPR differential equation approach can tolerate some errors caused by power system abnormalities such as DC offset. This paper shows that the algorithm is based on a description of the system and is immune to distortions such as DC offset. Differential equation algorithms implemented in MBPRs are widely used in the protection of transmission and distribution lines, transformers, buses, motors, etc. The parameters used in these algorithms are obtained from the power system current i(t) or voltage v(t), which take abnormal values under fault or distortion conditions. An error study for the algorithm is therefore considered necessary.
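For context, the classical differential equation algorithm (a standard textbook formulation; the paper's exact variant may differ) models the protected line by its series resistance R and inductance L:

    v(t) = R\,i(t) + L\,\frac{di(t)}{dt}

Integrating over a sampling window and applying the trapezoidal rule to the samples removes the need to differentiate noisy signals:

    \int_{t_0}^{t_1} v\,dt = R \int_{t_0}^{t_1} i\,dt + L\,[\,i(t_1) - i(t_0)\,]

Writing this equation over two consecutive windows gives two linear equations in the two unknowns R and L, solvable from current and voltage samples alone. Because the line model itself absorbs the decaying DC component, the estimate is largely immune to DC offset, which is the property the paper emphasizes.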

——————————————————————————————————————————————————————————————————–

Examining Web Application by Clumping and Orienting User Session Data [ Full-Text ]
T. Deenadayalan, V. Kavitha and S. Rajarajeswari

The increasing demand for reliable Web applications gives a central role to Web testing. Most existing work focuses on the definition of novel testing techniques specifically tailored to the Web; however, little attempt has been made so far to understand the specific nature of Web faults. This paper presents a user-session-based testing technique that clusters user sessions based on the service profile, selects a set of representative user sessions from each cluster, and tailors the selection by augmenting it with additional requests to cover the dependence relationships between web pages (see the sketch below). The created suite not only significantly reduces the size of the collected user sessions but is also able to exercise fault-sensitive paths. The results demonstrate that our approach consistently detected the majority of known faults using a relatively small number of test cases, and that it becomes a powerful system as more and more user sessions are clustered.
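A sketch of the clustering-and-selection step under simple assumptions (sessions reduced to their requested-page sets, Jaccard similarity as a proxy for the service profile; the paper's actual profile and augmentation rules are richer):

    # Illustrative: group user sessions by requested-page overlap, then
    # keep one representative session per group as a test case.
    def jaccard(a, b):
        return len(a & b) / len(a | b)

    def select_representatives(sessions, threshold=0.6):
        """sessions: list of sets of page URLs, one set per user session."""
        clusters = []
        for s in sessions:
            for cluster in clusters:
                if jaccard(s, cluster[0]) >= threshold:
                    cluster.append(s)
                    break
            else:
                clusters.append([s])
        # the longest session in each cluster stands for the whole cluster
        return [max(cluster, key=len) for cluster in clusters]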

——————————————————————————————————————————————————————————————————–

Computational Analysis of .NET Remoting and Mobile Agent in Distributed Environment [ Full-Text ]
Vivek Tiwari, Shailendra G., Renu Tiwari and Malam K.

A mobile agent is a program that is not bound to the system on which it began execution, but rather travels amongst the hosts in the network with its code and current execution state (i.e., a distributed environment). The implementation of distributed applications can be based on a multiplicity of technologies, e.g. plain sockets, Remote Procedure Call (RPC), Remote Method Invocation (RMI), Java Message Service (JMS), .NET Remoting, or Web Services. These technologies differ widely in complexity, interoperability, standardization, and ease of use. Mobile Agent technology is emerging as an alternative for building a smart generation of highly distributed systems. In this work, we investigate the performance of agent-based technologies for information retrieval. We present a comparative performance evaluation model of Mobile Agents versus .NET Remoting by means of an analytical approach. Quantitative measurements are performed to compare .NET Remoting and mobile agents, using communication time, code size (agent code), data size, and number of nodes as the performance parameters. The results show that the Mobile Agent paradigm offers superior performance compared to the .NET Remoting paradigm: it offers faster computation and incurs lower invocation cost by making local invocations instead of remote invocations over the network, thereby reducing network bandwidth consumption.
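As an illustration of the analytical approach (the model below is a generic one with assumed symbols, not necessarily the paper's exact formulation), the trade-off can be captured by comparing total interaction time: for n interactions over a link of latency d and bandwidth B, a remote-invocation client pays a round trip per call, while a mobile agent of code-plus-state size s migrates once and invokes locally:

    T_{remoting} \approx n\,(2d + t_{srv}), \qquad T_{agent} \approx 2\left(d + \frac{s}{B}\right) + n\,t_{srv}

so the agent paradigm wins once n is large enough that the repeated round trips outweigh the one-time migration cost, consistent with the results reported above.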

——————————————————————————————————————————————————————————————————–

A Study of User’s Performance and Satisfaction on the Web Based Photo Annotation with Speech Interaction [ Full-Text ]
Siti Azura Ramlan and Nor Azman Ismail

This paper reports on an empirical evaluation study of users’ performance and satisfaction with a prototype of Web-based photo annotation with speech interaction. The participants were Johor Bahru citizens from various backgrounds. They completed two parts of an annotation task: part A involving PhotoASys, a photo annotation system with the proposed speech interaction, and part B involving the Microsoft Vista speech interaction style. They completed eight tasks for each part, including system login and selection of album and photos. Users’ performance was recorded using computer screen recording software, and data were captured on task completion time and subjective satisfaction. Participants completed a questionnaire on subjective satisfaction when the tasks were finished. The performance data compare the proposed speech interaction with the Microsoft Vista speech interaction as applied in the photo annotation system PhotoASys. On average, the proposed speech interaction style reduced annotation time by 64.72% relative to the Microsoft Vista style. Data analysis showed statistically significant differences in annotation performance and subjective satisfaction between the two styles of interaction. These results could be used in the next design of related software involving personal belonging management.

——————————————————————————————————————————————————————————————————–

A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee Colony Optimization [ Full-Text ]
N. Suguna and K. Thanushkodi

Feature selection refers to the problem of selecting the relevant features that produce the most predictive outcome. The feature selection task is particularly important for datasets containing a huge number of features. Rough set theory has been one of the most successful methods used for feature selection; however, it is still not always able to find optimal subsets. This paper proposes a new feature selection method based on rough set theory hybridized with Bee Colony Optimization (BCO) in an attempt to combat this. The proposed method is applied in the medical domain to find minimal reducts, and is experimentally compared with Quick Reduct, Entropy Based Reduct, and other hybrid rough set methods based on the Genetic Algorithm (GA), Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO).
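A sketch of the hybrid idea under stated assumptions: the rough-set dependency degree (positive-region measure) serves as the fitness that bee-like agents use while shrinking a candidate reduct; the perturbation scheme below is a simplification, not the authors' exact BCO formulation:

    # Illustrative: rough-set dependency as fitness for a bee-style
    # search over feature subsets.
    import random
    from collections import defaultdict

    def dependency(data, features, decision):
        """gamma(features -> decision): fraction of rows whose feature
        values determine the decision unambiguously (positive region)."""
        groups = defaultdict(set)
        for row in data:                       # data: list of dict rows
            groups[tuple(row[f] for f in features)].add(row[decision])
        consistent = sum(1 for row in data
                         if len(groups[tuple(row[f] for f in features)]) == 1)
        return consistent / len(data)

    def bco_reduct(data, all_features, decision, bees=10, rounds=50):
        full = dependency(data, all_features, decision)
        best = list(all_features)
        for _ in range(rounds):
            for _ in range(bees):   # each bee perturbs the best subset so far
                cand = [f for f in best if random.random() > 0.2] or best
                if (dependency(data, cand, decision) >= full
                        and len(cand) < len(best)):
                    best = cand
        return best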

——————————————————————————————————————————————————————————————————–

Algorithm and Implementation of the Blog-Post Supervision Process [ Full-Text ]
Kamanashis Biswas, Md. Liakat Ali and S. A. M. Harun

A web log, or blog for short, is a popular way to share personal entries with others through a website. A typical blog may consist of text, images, audio and video. Most blogs work as personal online diaries, while others focus on a specific interest such as photographs (photoblog), art (artblog), travel (tourblog) or IT (techblog). Another type of blogging, called microblogging, consisting of very short posts, is also very well known nowadays. As in developed countries, the number of blog users is gradually increasing in developing countries such as Bangladesh. Because blogs are openly accessible to all users, some people misuse them to spread fake news for individual or political goals. Some also post vulgar material that creates an embarrassing situation for other bloggers and can even damage the reputation of the victim. One way to overcome this problem is to bring all posts under the supervision of the blog moderator, but that totally contradicts the blogging concept. In this paper, we have implemented an algorithm that helps prevent offensive entries from being posted: entries go through a supervision process to justify themselves as legitimate posts. From the analysis of the results, we show that this approach can eliminate chaotic situations in the blogosphere to a great extent. Our experiment shows that about 90% of offensive posts can be detected and stopped from being published using this approach.
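The abstract does not spell out the detection rules, but the supervision step can be pictured as a scoring filter: entries whose offensiveness score crosses a threshold are queued for the moderator instead of being published (the wordlist, weights and threshold below are purely illustrative):

    # Purely illustrative: route a post to moderation if its offensiveness
    # score (weighted banned-term hits) exceeds a threshold.
    import re

    BANNED = {"offensiveword": 3.0, "fakenews": 2.0}   # hypothetical wordlist
    THRESHOLD = 2.5

    def score(post):
        words = re.findall(r"[a-z]+", post.lower())
        return sum(BANNED.get(w, 0.0) for w in words)

    def submit(post):
        return "hold for moderator" if score(post) >= THRESHOLD else "publish"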

——————————————————————————————————————————————————————————————————–

Enhancing and Analyzing Search Performance in Unstructured Peer to Peer Networks Using Enhanced Guided Search Protocol (EGSP) [ Full-Text ]
Anusuya R., Kavitha V. and Golden Julie E.

Peer-to-peer (P2P) networks establish loosely coupled application-level overlays on top of the Internet to facilitate efficient sharing of resources. They can be roughly classified as either structured or unstructured networks. Without stringent constraints on the network topology, unstructured P2P networks can be constructed very efficiently and are therefore considered suitable for the Internet environment. However, the random search strategies adopted by these networks usually perform poorly as the network grows. This paper enhances the search performance in unstructured P2P networks by exploiting users’ common interest patterns, captured within a probability-theoretic framework termed the user interest model (UIM). A search protocol and a routing table updating protocol are further proposed to expedite the search process by self-organizing the P2P network into a small world. Both theoretical and experimental analyses are conducted, demonstrating the effectiveness and efficiency of the approach.
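One way to picture the interest-guided search (an illustrative sketch; the paper's probability-theoretic UIM is more elaborate): represent each peer's interests as a vector and forward queries to the neighbors most similar to the query's profile rather than at random:

    # Illustrative: bias query forwarding toward neighbors whose interest
    # vectors best match the query profile, instead of a random walk.
    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norms = (math.sqrt(sum(a * a for a in u))
                 * math.sqrt(sum(b * b for b in v)))
        return dot / norms if norms else 0.0

    def pick_next_hops(query_vec, neighbors, k=2):
        """neighbors: dict mapping neighbor id -> interest vector."""
        ranked = sorted(neighbors,
                        key=lambda n: cosine(query_vec, neighbors[n]),
                        reverse=True)
        return ranked[:k]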

——————————————————————————————————————————————————————————————————–

Human Disease Diagnosis Using a Fuzzy Expert System [ Full-Text ]
Mir Anamul Hasan, Khaja Md. Sher-E-Alam and Ahsan Raja Chowdhury

Human disease diagnosis is a complicated process and requires a high level of expertise. Any attempt to develop a web-based expert system dealing with human disease diagnosis has to overcome various difficulties. This paper describes a project aiming to develop a web-based fuzzy expert system for diagnosing human diseases. Nowadays fuzzy systems are being used successfully in an increasing number of application areas; they use linguistic rules to describe systems. This project focuses on the research and development of a web-based clinical tool designed to improve the quality of the exchange of health information between health care professionals and patients. Practitioners can also use this web-based tool to corroborate diagnoses. The proposed system is tested on various scenarios in order to evaluate its performance, and in all cases it exhibits satisfactory results.
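A minimal sketch of the fuzzy-rule machinery such a system rests on (the symptom, disease and membership shapes below are hypothetical, not the paper's knowledge base):

    # Minimal fuzzy-rule sketch: linguistic inputs -> degree of diagnosis.
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def diagnose(temp_c, headache):          # headache graded 0..10
        fever_high = tri(temp_c, 37.5, 39.0, 42.0)
        headache_severe = tri(headache, 4, 8, 10)
        # Rule: IF fever is high AND headache is severe THEN flu is likely
        return min(fever_high, headache_severe)   # Mamdani-style AND = min

    print(diagnose(38.6, 7))   # degree of belief in "flu", between 0 and 1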

——————————————————————————————————————————————————————————————————–

CORMEN: Coding-Aware Opportunistic Routing in Wireless Mesh Network [ Full-Text ]
Jeherul Islam and P K Singh

Network coding improves network operation beyond traditional routing or store-and-forward by mixing data streams within a network. Network coding techniques explicitly reduce the total number of transmissions in a wireless network. Coding-aware routing maximizes the coding opportunities by finding a coding-feasible path for every packet in the network. Here we propose CORMEN, a new coding-aware routing mechanism based on opportunistic routing. In CORMEN, every node can independently decide whether or not to code packets, and the forwarding of packets is based on the coding opportunities available.
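The classic illustration of how coding cuts transmissions (the standard two-flow relay example from the network coding literature, not CORMEN itself): when a relay holds packets destined in opposite directions, one XOR-coded broadcast replaces two unicasts, since each endpoint decodes with the packet it already knows:

    # Standard relay example: R broadcasts A XOR B once instead of
    # forwarding A and B separately; each side decodes with what it has.
    def xor(p, q):
        return bytes(a ^ b for a, b in zip(p, q))

    pkt_from_alice = b"hello bob!"
    pkt_from_bob   = b"hi, alice!"

    coded = xor(pkt_from_alice, pkt_from_bob)         # one broadcast by relay
    assert xor(coded, pkt_from_alice) == pkt_from_bob   # Alice decodes
    assert xor(coded, pkt_from_bob) == pkt_from_alice   # Bob decodes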

——————————————————————————————————————————————————————————————————–

Effectiveness of Intrusion Prevention Systems (IPS) in Fast Networks [ Full-Text ]
Muhammad Imran Shafi, Muhammad Akram, Sikandar Hayat, and Imran Sohail

Computer systems face their biggest threat in the form of malicious data causing denial of service, information theft, and financial and credibility loss. No defense technique has proved fully successful in handling these threats. Intrusion Detection and Prevention Systems (IDPSs), being among the best available solutions, are getting more and more attention. Although Intrusion Prevention Systems (IPSs) show a good level of success in detecting and preventing intrusion attempts, they show a visible deficiency in performance when employed on fast networks. In this paper we present a design including quantitative and qualitative methods to identify improvement areas in IPSs: a focus group is used for the qualitative analysis and an experiment for the quantitative analysis. This paper also describes how to reduce the response time of an IPS when an intrusion occurs on the network, and how an IPS can be made to perform its tasks successfully without negatively affecting network speed.

——————————————————————————————————————————————————————————————————–

New Quantitative Study for Dissertations Repository System [ Full-Text ]
Fahad H. Alshammari, Rami Alnaqeib, M. A. Zaidan, Ali K. Hmood, B. B. Zaidan and A. A. Zaidan

In the age of technology, information and communication technology has become very important, especially in the education field. Students must be allowed to learn anytime, anywhere and at their own pace, and university library facilities should be developed accordingly. In this paper we present a new quantitative study for a dissertations repository system and also recommend future applications of the approach.

——————————————————————————————————————————————————————————————————–

Gender Based Emotion Recognition System for Telugu Rural Dialects Using Hidden Markov Models [ Full-Text ]
Prasad Reddy P.V.G.D, Prasad A, Srinivas Y and Brahmaiah P

Automatic emotion recognition in speech is a research area with a wide range of applications in human interaction. The basic mathematical tool used for emotion recognition is pattern recognition, which involves three operations: pre-processing, feature extraction and classification. This paper introduces a procedure for emotion recognition using Hidden Markov Models (HMMs), used to distinguish five emotional states: anger, surprise, happiness, sadness and the neutral state. The approach is based on standard speech recognition technology using continuous hidden Markov models, through the selection of low-level features and the design of the recognition system. An emotional speech database for Telugu Rural Dialects of Andhra Pradesh (TRDAP) was designed using several speakers’ voices covering the emotional states. The accuracy of recognizing the five emotions for both genders reaches 80% for anger, achieved by using the best combination of a 39-dimensional feature vector for every frame (13 MFCCs, 13 delta coefficients and 13 acceleration coefficients) and an HMM classifier. This outcome closely matches that acquired on the same database through subjective evaluation by human judges. Both gender-dependent and gender-independent experiments are conducted on the TRDAP emotional speech database.
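A sketch of the described classifier under common-library assumptions (librosa for the 13 MFCCs plus delta and acceleration coefficients, hmmlearn for the HMMs; the paper's exact topology and training regime are not given in the abstract):

    # Sketch: one Gaussian HMM per emotion over 39-dim MFCC+delta+delta2
    # frames; classify an utterance by maximum log-likelihood.
    import numpy as np
    import librosa
    from hmmlearn import hmm

    EMOTIONS = ["anger", "surprise", "happiness", "sadness", "neutral"]

    def features(path):
        y, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        feats = np.vstack([mfcc, librosa.feature.delta(mfcc),
                           librosa.feature.delta(mfcc, order=2)])
        return feats.T                      # shape: (frames, 39)

    def train(utterances_by_emotion):       # {emotion: [wav paths]}
        models = {}
        for emo, paths in utterances_by_emotion.items():
            X = [features(p) for p in paths]
            model = hmm.GaussianHMM(n_components=5)
            model.fit(np.concatenate(X), lengths=[len(x) for x in X])
            models[emo] = model
        return models

    def classify(models, path):
        x = features(path)
        return max(models, key=lambda e: models[e].score(x))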

——————————————————————————————————————————————————————————————————–

Stronger Enforcement of Security Using AOP & Spring AOP [ Full-Text ]
Kotrappa Sirbi and Prakash Jayanth Kulkarni

Application security has two primary goals: first, to prevent unauthorised personnel from accessing information at a higher classification than their authorisation; second, to prevent personnel from declassifying information. Using an object-oriented approach to implementing application security results not only in code scattering and code tangling, but also in weaker enforcement of security. This weaker enforcement could be due to the inherent design of the system or to a programming error. Aspect Oriented Programming (AOP) complements Object-Oriented Programming (OOP) by providing another way of thinking about program structure: the key unit of modularity in OOP is the class, whereas in AOP it is the aspect. AOP allows weaving a security aspect into an application, providing additional security functionality or introducing completely new security mechanisms; implementing security with AOP is a flexible method for developing separated, extensible and reusable pieces of code called aspects. The goal of this paper is to show that Aspect Oriented Programming, with AspectJ integrated with Spring AOP, provides very powerful mechanisms for stronger enforcement of security. In this comparative study we argue that Spring AOP provides stronger enforcement of security than AspectJ, and we show that Spring AOP and AspectJ both strive to provide comprehensive AOP solutions and complement each other.
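The paper's mechanisms are AspectJ and Spring AOP (Java); as a language-neutral analogy only, the cross-cutting idea, separating an authorisation check from business logic and weaving it in at join points, can be pictured with a Python decorator (all names below are hypothetical):

    # Conceptual analogy of an AOP security aspect: the authorisation
    # "advice" is woven around business methods instead of scattered in them.
    import functools

    def requires_clearance(level):
        def aspect(fn):                      # "advice" wrapping a join point
            @functools.wraps(fn)
            def woven(user, *args, **kwargs):
                if user["clearance"] < level:
                    raise PermissionError(
                        f"{fn.__name__} needs clearance {level}")
                return fn(user, *args, **kwargs)
            return woven
        return aspect

    @requires_clearance(level=2)
    def read_classified_report(user, report_id):
        return f"report {report_id} contents"

    print(read_classified_report({"name": "ana", "clearance": 3}, 17))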

——————————————————————————————————————————————————————————————————–

Vagueness of Linguistic Variable [ Full-Text ]
Supriya Raheja and Smita Rajpal

Artificial intelligence is the area of computer science focused on creating machines that can engage in behaviors that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and 50 years of research into various programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems that can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible. The human ability to estimate information shows itself most clearly in the use of natural languages: when using words of a natural language to evaluate qualitative attributes, a person embeds uncertainty, in the form of vagueness, into those estimations. Vague sets, vague judgments and vague conclusions arise wherever and whenever a reasoning subject exists and is interested in something. Vague set theory arose as an answer to the vagueness of the language that the reasoning subject speaks: the language of a reasoning subject is generated by vague events, which are created by reason and operated on by the mind. The theory of vague sets represents an attempt to find an approximation of vague groupings that is more convenient than the classical theory of sets in situations where natural language plays a significant role. Such a theory was offered by Gau and Buehrer. In our paper we describe how the vagueness of linguistic variables can be handled using vague set theory. The paper is mainly devoted to one direction of eventology (the theory of random vague events), which has arisen within the limits of probability theory and which pursues the single purpose of describing, eventologically, a movement of reason.
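For reference, the underlying definition by Gau and Buehrer: a vague set V over a universe U assigns to each element u a truth membership t_V(u) and a false membership f_V(u) subject to

    0 \le t_V(u) + f_V(u) \le 1,

so the grade of membership of u is not a single number but the subinterval

    [\,t_V(u),\; 1 - f_V(u)\,] \subseteq [0, 1],

whose width 1 - t_V(u) - f_V(u) measures the vagueness, i.e., the lack of knowledge about u. A fuzzy set is recovered as the special case t_V(u) + f_V(u) = 1, where the interval collapses to a single membership value.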

——————————————————————————————————————————————————————————————————–

Evolution of Biped Walking Using Neural Oscillators Controller and Harmony Search Algorithm Optimizer [ Full-Text ]
Ebrahim Yazdi and Abolfazl Toroghi Haghighat

In this paper, a simple neural controller is used to achieve stable walking in a NAO biped robot with 22 degrees of freedom, implemented in the virtual physics-based RoboCup soccer simulation environment. The algorithm uses a Matsuoka-based neural oscillator to generate the control signal for the biped robot. To find the best angular trajectories and optimize the network parameters, a new population-based search algorithm, the Harmony Search (HS) algorithm, is used; it is conceptualized as a group of musicians together trying to search for a better state of harmony. Simulation results demonstrate that the modification of the step period and the walking motion due to the sensory feedback signals improves the stability of the walking motion.
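The standard Matsuoka oscillator dynamics behind such a controller (the textbook form; the gains and couplings are the parameters a search such as HS tunes):

    \tau \dot{x}_i = -x_i - \beta v_i - \sum_{j} w_{ij} y_j + s_i, \qquad
    \tau' \dot{v}_i = -v_i + y_i, \qquad
    y_i = \max(0, x_i)

where x_i is the state of neuron i, v_i its self-inhibition (fatigue) state, y_i its output, w_{ij} the mutual-inhibition weights, s_i the tonic input, and \tau, \tau', \beta the time and adaptation constants. A pair of such neurons inhibiting each other produces the rhythmic joint signal used for the gait.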

——————————————————————————————————————————————————————————————————–

Law-Aware Access Control and its Information Model [ Full-Text ]
Michael Stieghahn and Thomas Engel

Cross-border access to a variety of data, such as market information, strategic information, or customer-related information, defines the daily business of many global companies, including financial institutions. These companies are obliged by law to keep data processing legal for all offered services, and they need to fulfill different security objectives specified by the legislation; they therefore control access to prevent unauthorized users from using data. Those security objectives, for example confidentiality or secrecy, are often defined in the eXtensible Access Control Markup Language (XACML), which promotes interoperability between different systems.
In this paper, we show the necessity of incorporating the requirements of legislation into access control. Based on the workflow in a banking scenario, we describe a variety of available contextual information and its interrelations. Unlike other access control systems, our main focus is on law-compliant cross-border data access: by including legislation directly in access decisions, this lawfulness can be ensured. We also describe our information model to demonstrate how these policies can be implemented in an existing network and how the components and contextual information interrelate. Finally, we outline the event flow for a request made by a remote user, exemplifying how such a system decides about access.

——————————————————————————————————————————————————————————————————–

A New Energy Efficient Routing Algorithm Based on a New Cost Function in Wireless Ad hoc Networks [ Full-Text ]
Mehdi Lotfi, Sam Jabbehdari and Majid Asadi Shahmirzadi

Wireless ad hoc networks are power constrained, since nodes operate with limited battery energy; thus, energy consumption is crucial in the design of new ad hoc routing protocols. In order to maximize the lifetime of an ad hoc network, traffic should be sent via a route that avoids nodes with low energy. In addition, since the nodes of ad hoc networks are mobile, an established path may break because of node mobility, and a new path must then be established; the additional control packets this requires increase energy consumption. Routing should also avoid nodes holding many buffered packets, because long queues mean some of those packets may be dropped and retransmitted, wasting energy. In this paper we propose a new energy-efficient algorithm that uses a new cost function to avoid nodes with the characteristics mentioned above (an illustrative cost of this shape is sketched below), and we show that this algorithm improves network energy consumption.
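The abstract does not give the cost function itself; an illustrative cost of the shape it describes, penalizing both low residual energy and long queues, would be

    C(n) = \alpha\,\frac{1}{E_{res}(n)} + \beta\,\frac{Q(n)}{Q_{max}}, \qquad
    C(\text{route}) = \sum_{n \in \text{route}} C(n)

where E_{res}(n) is the residual battery energy of node n, Q(n) its buffer occupancy, and \alpha, \beta weights; route selection then minimizes C(route), steering traffic away from the nodes described above. This formula is illustrative only, not the paper's actual function.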

——————————————————————————————————————————————————————————————————–

Search Engine Optimization Techniques Practiced in Organizations: A Study of Four Organizations [ Full-Text ]
Muhammad Akram, Imran Sohail, Sikandar Hayat, M. Imran Shafi and Umer Saeed

Web spammers use Search Engine Optimization (SEO) techniques to increase the search ranking of web sites. In this paper we study the essential SEO techniques, such as directory submission, keyword generation and link exchange. SEO techniques can be applied as a marketing technique to get a top listing in major search engines like Google, Yahoo, and MSN. Our study examines these techniques from the perspectives of four different companies in the United Kingdom and Pakistan. According to these companies, the techniques are low in cost and high in impact on profit, because most customers use major search engines to find products on the internet, so SEO provides an excellent opportunity to grow a business. This paper also describes the pros and cons of using these search engine optimization techniques in the four companies. We conclude that these techniques are essential for the companies to increase their business profit and minimize their marketing cost.

——————————————————————————————————————————————————————————————————–

Analytical Study on Internet Banking System [ Full-Text ]
Fadhel.S. AlAbdullah, Fahad H. Alshammari, Rami Alnaqeib, Hamid A. Jalab, A. A. Zaidan and B. B. Zaidan

The Internet era is a period in the information age in which communication and commerce via the Internet became a central focus for businesses, consumers, government, and the media. The Internet era also marks the convergence of the computer and communications industries and their associated services and products. Nowadays, the availability of the Internet makes it widely used in everyday life. To lead a business to success, the business, and especially its services, should be comfortable for customers to use. Banking is one of the most important businesses that may use the web, and web-based banking systems must satisfy special requirements to achieve the business goal. This paper therefore presents the functional and non-functional requirements for a web-based banking system.

——————————————————————————————————————————————————————————————————–

An Efficient Technique for Similarity Identification between Ontologies [ Full-Text ]
Amjad Farooq, Syed Ahsan and Abad Shah

Ontologies usually suffer from semantic heterogeneity when used simultaneously in information sharing, merging, integrating and querying processes. Therefore, identifying the similarity between the ontologies being used becomes a mandatory task for all these processes in order to handle the problem of semantic heterogeneity. In this paper, we propose an efficient technique for similarity measurement between two ontologies. The proposed technique identifies all candidate pairs of similar concepts without omitting any similar pair, and can be used in different types of operations on ontologies such as merging, mapping and aligning. Analysis of its results shows a reasonable improvement in the completeness, correctness and overall quality of the results.

——————————————————————————————————————————————————————————————————–

Engineering Semantic Web Applications by Using Object-Oriented Paradigm [ Full-Text ]
Amjad Farooq, Syed Ahsan and Abad Shah

Web information resources are growing explosively in number and volume, and retrieving relevant data from the web has become very difficult and time-consuming. The Semantic Web envisions that these web resources should be developed in a machine-processable way in order to handle the problems of irrelevancy and manual processing. The Semantic Web is an extension of the current web in which web resources are equipped with formal semantics about their interpretation by machines. These web resources are usually contained in web applications and systems, and their formal semantics are normally represented in the form of web ontologies. In this paper, an object-oriented design methodology (OODM) is upgraded for developing semantic web applications. OODM was developed for designing web applications for the current web; it is good enough for developing web applications and provides a systematic approach for their development, but it does not help in generating machine-processable content during development. The methodology therefore needs to be extended, and in this paper we propose that extension to OODM. The new extended version is referred to as the semantic web object-oriented design methodology (SW-OODM).

——————————————————————————————————————————————————————————————————–

The State of the Art: Ontology Web-Based Languages: XML Based [ Full-Text ]
Mohammad Mustafa Taye

Many formal languages have been proposed to express or represent ontologies, including RDF, RDFS, DAML+OIL and OWL. Most of these languages are based on XML syntax but differ in terminology and expressiveness. Therefore, choosing a language for building an ontology is a key step, and the choice depends mainly on what the ontology will represent or be used for. The language should offer a range of quality-support features such as ease of use, expressive power, compatibility, sharing and versioning, and internationalisation, because different kinds of knowledge-based applications need different language features. The main objective of these languages is to add semantics to the existing information on the web. The aim of this paper is to provide a good knowledge and understanding of these existing languages and of how they can be used.

——————————————————————————————————————————————————————————————————–

An Overview: Extensible Markup Language Technology [ Full-Text ]
Rami Alnaqeib, Fahad H. Alshammari, M. A. Zaidan, A. A. Zaidan, B. B. Zaidan and Zubaidah M. Hazza

XML stands for the Extensible Markup Language, a markup language for documents. Nowadays XML is a development tool and is likely to become a much more common tool for sharing and storing data. XML can communicate structured information between users: if a group of users agrees to implement the same kinds of tags to describe a certain kind of information, XML applications can assist these users in communicating their information in a more robust and efficient manner, making it easier to exchange information between cooperating entities. In this paper we present XML technology through four factors: the strengths of XML, XML parsers, XML goals, and the types of XML parsers.
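As a concrete instance of the two broad parser types such surveys cover, tree-based parsing loads the whole document into memory while event-driven parsing visits elements incrementally; with Python's standard library (an illustrative example, not taken from the paper):

    # Tree-based parse vs. incremental (event-driven) parse of the same XML.
    import io
    import xml.etree.ElementTree as ET

    doc = ("<library><book id='1'>XML Basics</book>"
           "<book id='2'>Parsers</book></library>")

    root = ET.fromstring(doc)            # tree-based: whole document in memory
    for book in root.findall("book"):
        print(book.get("id"), book.text)

    for event, elem in ET.iterparse(io.StringIO(doc)):  # element-end events
        if elem.tag == "book":
            print("finished element", elem.get("id"))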

——————————————————————————————————————————————————————————————————–

Understanding Semantic Web and Ontologies: Theory and Applications [ Full-Text ]
Mohammad Mustafa Taye

The Semantic Web is an extension of the current web that represents information more meaningfully for humans and computers alike. It enables the description of contents and services in machine-readable form, and enables annotating, discovering, publishing, advertising and composing services to be automated. It was developed based on ontology, which is considered the backbone of the Semantic Web; in other words, the current web is transformed from being machine-readable to machine-understandable. Ontology is a key technique for annotating semantics and providing a common, comprehensible foundation for resources on the Semantic Web. Moreover, an ontology can provide a common vocabulary and a grammar for publishing data, and can supply a semantic description of data which can be used to preserve the ontologies and keep them ready for inference. This paper provides the basic concepts of web services and the Semantic Web, defines the structure and the main applications of ontology, and explains many relevant terms in order to provide a basic understanding of ontologies.

——————————————————————————————————————————————————————————————————–

Approaches, Challenges and Future Direction of Image Retrieval [ Full-Text ]
Hui Hui Wang, Dzulkifli Mohamad and N. A. Ismail

This paper discusses the evolution of image retrieval approaches, focusing on the development, challenges and future directions of image retrieval, and highlights both the issues already addressed and those still outstanding. The explosive growth of image data drives the need for research and development in image retrieval. Image retrieval research has moved from keywords, to low-level features, to semantic features. The drive towards semantic features stems from the fact that keywords can be very subjective and time-consuming, while low-level features cannot always describe the high-level concepts in users’ minds; this introduces an interpretation inconsistency between image descriptors and high-level semantics, known as the semantic gap. This paper also discusses the semantic gap issue, user query mechanisms, and common ways of bridging the gap in image retrieval.

——————————————————————————————————————————————————————————————————–

Advanced Trace Pattern For Computer Intrusion Discovery [ Full-Text ]
Siti Rahayu S., Robiah Y., Shahrin S., Mohd Zaki M., Faizal M. A. and Zaheera Z.A

The number of crimes committed through malware intrusion is ever growing, as the number of malware variants increases tremendously and internet usage expands globally. Malicious code is easily obtained and used as a weapon to attain objectives illegally. Hence, in this research, diverse logs from different OSI layers are explored to identify the traces left in attacker and victim logs, in order to establish worm trace patterns for defending against attacks and helping to reveal the true attacker or victim. This paper focuses on malware intrusion and a traditional worm, namely the Sasser worm and its variants. The concept of a trace pattern is created by fusing the attacker’s and the victim’s perspectives. The objective of this paper is therefore to propose general worm trace patterns for the attacker, the victim and the multi-step (attacker/victim) case by combining both perspectives. These three proposed worm trace patterns can be extended into research areas such as alert correlation and computer forensic investigation.

——————————————————————————————————————————————————————————————————–

Optimization of Reversible Sequential Circuits [ Full-Text ]
Abu Sadat Md. Sayem and Masashi Ueda

In recent years reversible logic has been considered an important issue for designing low-power digital circuits. It has numerous applications in today's emerging nanotechnologies, such as DNA computing, quantum computing, low-power VLSI and quantum-dot cellular automata. In this paper we propose optimized designs of reversible sequential circuits in terms of number of gates, delay and hardware complexity. We design the latches with a new reversible gate and reduce the required number of gates, garbage outputs, delay and hardware complexity. Since the number of gates and garbage outputs increases the complexity of reversible circuits, this design significantly enhances performance. We propose a reversible D latch and a JK latch that are better than the existing designs available in the literature.
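To make the design target concrete: one well-known reversible D-latch construction from the literature (not necessarily the new gate proposed in this paper) uses a single Fredkin gate, whose controlled-swap behaviour directly realises Q+ = E·D + E'·Q:

    # Fredkin (controlled-swap) gate and the classic reversible D-latch
    # mapping (E, Q, D) -> (E, Q+, garbage), with Q+ = E*D + (not E)*Q.
    def fredkin(a, b, c):
        """If a == 1, swap b and c; reversible (it is its own inverse)."""
        return (a, c, b) if a else (a, b, c)

    def d_latch_step(enable, q, d):
        _, q_next, _garbage = fredkin(enable, q, d)
        return q_next        # holds q when enable=0, follows d when enable=1

    assert d_latch_step(1, 0, 1) == 1   # transparent: Q+ = D
    assert d_latch_step(0, 1, 0) == 1   # opaque: Q+ = Q

One garbage output per latch is unavoidable here, which is exactly why gate count and garbage outputs are the cost metrics the paper optimizes.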