
Friday, April 27, 2018

IJIRST – Call for Paper – Submit Paper – May 2018

International Journal for Innovative Research in Science and Technology – IJIRST
Call for Papers | Vol. 4 Issue 12 – May 2018

High Impact Factor: 4.371 | IC Value: 71.12
For more information and queries, contact us: 07405046536
Email us: ijirst.journal@gmail.com
Submit your Paper @ IJIRST.org

Wednesday, February 24, 2016

Innovations in Micro-electronics, Signal Processing and Communication Technologies "National Conference (V-IMPACT-2016)" at VIVEKANANDA INSTITUTE OF TECHNOLOGY, Jaipur, Rajasthan, India

"National Conference(V-IMPACT-2016)"  

on

Innovations in Micro-electronics, Signal Processing and Communication Technologies

VIT Campus is organizing a conference on 'Innovations in Micro-Electronics, Signal Processing and Communication Technologies'. This conference is the fifth in succession, following the conferences held in 2012, 2013, 2014 and 2015. The aim of the conference is to review recent advances in the underlying science and technology, to facilitate the exchange of new ideas, and to explore emerging directions in both the basic sciences and the technological applications of electronics, signal processing and communication. Scientific activity in MEMS, VLSI, DSP and communication has surged recently, and research interest in these areas continues to grow.

Emerging fields such as CAD, VLSI and MATLAB-based design are on the horizon, highlighting many important issues in the preparation and application of these systems. These topics, which constitute the frontiers of devices and technology, are expected to lead to the development of new systems and new technologies.

The conference is intended to bring theorists, experimentalists and experts onto a common platform and to foster interdisciplinary research. Its thrust will be to facilitate the emergence of collaborations between the participants. The informal atmosphere that will prevail is expected to encourage interactions between young researchers and experts, which will be particularly useful for graduate and research students. We invite you all to participate, deliver talks, present your work and make this event a great success.



VIVEKANANDA INSTITUTE OF TECHNOLOGY
Sisyawas, NRI Road, Jagatpura, Jaipur-303012

Publication Partner:
Website:- www.ijirst.org

Tuesday, January 12, 2016

#IJIRST Journal: Dynamic Clustering in Wireless Sensor Networks Based on the Data Traffic Flow and the Node Residual Battery Life Computation



Department of Computer Science and Engineering 

Suresh Gyan Vihar University, Jagatpura

Abstract:- Wireless Sensor Networks form the core of the infrastructural facilities and amenities that constitute a major part of modern living. Wireless Sensor Networks find tremendous applications in domains such as theft alarms, wildlife monitoring and radiation/pressure/light/heat sensing, and the list is endless. They constitute the core of the modern Internet of Things (IoT) that is set to revolutionize modern living. The IoT describes a scenario in which devices communicate with each other over the internet on a flexible framework and can be programmed to perform specific actions based on customizations made by the users. For example, a refrigerator that runs out of milk or bread can email the requirement to the dairy, which can act on the mail and ship a delivery to the refrigerator's location. As sensor nodes are battery powered, conserving battery power is a critical concern. This is possible only by minimizing in-network communication as much as possible, and a fraction of the communication overhead can be removed through clustering. In this paper, an approach for dynamic clustering is proposed based on the varying traffic loads at the various PAN coordinators, so as to maximize the battery life and therefore the network lifetime.

Keywords:- Wireless Sensor Network, Clustering Protocols, Battery Life, etc.

I.    Clustering in Wireless Sensor Networks

Clustering forms the backbone of sensor-node persistence, allowing nodes to keep sensing data in such a way that a single lithium-ion battery can last even a year and a half of continuous operation. This is achieved by reducing in-network communication to the central node: clusters are created so that all the nodes in a cluster transmit their data to the cluster head, and the cluster head is responsible for transmitting the aggregated data to the central node. The scenario is expressed in the following figures.
Fig. 1: Wireless Sensor Network without clustering

Fig. 2: Wireless Sensor Network with clustering and Data Aggregation

       The individual collections shown in Fig. 2 are known as clusters, and the nodes that belong to a particular cluster send their data only to the cluster head. This reduces the long-distance data transmission from the individual nodes to the central computer: in the clustered approach, the nodes transmit their data to the cluster head over a relatively short distance, thus conserving battery life and enhancing the network lifetime.

II.    Dynamic Clustering over the Wireless Sensor Network

Consider a network of N nodes with a static number k, set initially, as the total number of clusters over the network. Thus, on average, there are N/k nodes in each cluster. Also consider a rectangular plane of dimension a x a over which the sensor nodes are (approximately evenly) spread.
      As stated previously, there are k clusters, each having (N/k)-1 ordinary sensing nodes and a cluster head that holds the responsibility of aggregating data from each of the (N/k)-1 nodes. Also assume that each node senses the medium and sends its data packet to the cluster head in a specified TDMA frame.
      Considering the first-order radio energy dissipation model, let the energy consumption per bit in the transmission circuitry be Et and the energy consumption per bit in the processing circuitry be Ep. Let there be B bits in a TDMA packet. Taking the initial energy level in the battery to be E, one can approximate the residual battery life after N rounds.
    Let Me be the number of rounds after which the leader election takes place and a message is broadcast to all the other nodes in the cluster announcing the node elected as the leader, so that all the nodes may transmit their data to that specific node. The elected node then aggregates the data from all the nodes in its cluster and transmits it to the central computer.
      It is important to note that the leader election process is an overhead, incurred only to manage the network traffic. Rapidly electing new heads, and consequently broadcasting the message to all other nodes in the network, induces an overhead that is to be avoided. On the other hand, the node elected as cluster head depletes its energy very quickly, as it has to perform all the data aggregation processing by itself for all the nodes in the cluster. Thus, frequent leader election leads to even consumption of battery power across all the nodes of the cluster, whereas if no leader election takes place, the node which handles the task of leader will soon run out of battery.
     In addition to the depletion of the battery in the normal rounds during data gathering, the leader depletes the energy
E = Ebroad * n * [(N/k) - 1]
in broadcasting the message, where n is the number of bits in the broadcast message, and every node depletes an amount of energy equal to
E = n * Ep
in receiving the message announcing the leader of the cluster.
      Let p be the average number of packets transmitted by any node, and let the length of each packet be l. For implementation, the case study of ZigBee radio sensors is considered, in which the underlying operating system is TinyOS with a packet size of l = 114 bytes.
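
To make the energy accounting above concrete, the following Python sketch tracks the residual energy of an ordinary node and of a cluster head over a number of rounds, including the periodic leader-election broadcast overhead. It is a rough illustration under the first-order radio model described above; all constant values are assumed placeholders, not figures from the paper.

# Minimal sketch of per-round energy accounting in a clustered WSN,
# assuming the first-order radio model described above. All constants
# are illustrative placeholders, not values from the paper.

E_INIT = 2.0        # initial battery energy per node, joules (assumed)
E_T = 50e-9         # transmission energy per bit Et, J/bit (assumed)
E_P = 50e-9         # processing/reception energy per bit Ep, J/bit (assumed)
E_BROAD = 50e-9     # broadcast energy per bit Ebroad (assumed)
B = 114 * 8         # bits per TDMA packet (114-byte TinyOS packet)
N, K = 100, 5       # nodes and clusters (assumed)
M_E = 20            # rounds between leader elections, Me (assumed)
N_BROAD = 64        # bits n in the leader-election broadcast (assumed)

def residual_energy(rounds: int) -> tuple[float, float]:
    """Approximate residual energy of an ordinary node and a cluster head."""
    members = N // K - 1                     # ordinary nodes per cluster
    node = E_INIT
    head = E_INIT
    for r in range(1, rounds + 1):
        node -= B * (E_T + E_P)              # sense + transmit to cluster head
        head -= members * B * E_P + B * E_T  # aggregate members, forward to sink
        if r % M_E == 0:                     # leader-election overhead
            head -= E_BROAD * N_BROAD * members   # E = Ebroad * n * [(N/k)-1]
            node -= N_BROAD * E_P                 # E = n * Ep (receive broadcast)
    return node, head

print(residual_energy(1000))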
The important points to analyze in this scenario are:

For More Information Click Here

Wednesday, December 9, 2015

Paper Title:- Development of ANN and ANFIS Models for Age Prediction of In-Service Transformer Oil Samples


Author Name:- Mohammad Aslam Ansari 

Department of Electrical Engineering 

Abstract:- The power transformer is one of the most important and expensive pieces of equipment in an electrical network. The transformer oil is a very important component of power transformers, with the twin functions of cooling and insulation. Oil properties like viscosity, specific gravity, flash point, oxidation stability, total acid number, breakdown voltage, dissipation factor, volume resistivity and dielectric constant change with time. Hence it is necessary that the oil condition be monitored regularly to predict, if possible, the remaining lifetime of the transformer oil. Six properties, namely breakdown voltage, moisture content, resistivity, tan delta, interfacial tension and flash point, have been considered. The data for the six properties with respect to age, in days, has been taken from the literature, whereby samples from ten working power transformers of 16 to 20 MVA installed at different substations in Punjab, India were considered. This paper aims at developing ANN and ANFIS models for predicting the age of in-service transformer oil samples. Both models use the six properties as inputs and age as the target. The ANN (Artificial Neural Network) model uses a multi-layer feedforward network employing the backpropagation algorithm, and the ANFIS (Adaptive Neuro-Fuzzy Inference System) model is based on the Sugeno model. The two models have been simulated for estimating the age of unknown transformer oil samples taken from generator transformers of the Anpara Thermal Power Project in the state of U.P., India. A comparative analysis of the two models has been made, whereby the ANFIS model has been found to yield better results than the ANN model.

Keywords: ANN, ANFIS, Power Transformer, Regression, Performance, Backpropagation Algorithm   

I.         Introduction

The power transformer is one of the most important constituents of an electrical power system. The transformer oil, a very important ingredient of power transformers, acts as a heat-transfer fluid and also serves the purpose of electrical insulation. Its insulating property is subject to degradation because of ageing, high temperature, electrical stress and other chemical reactions. Hence it is necessary that the oil condition be monitored regularly. This will help to predict, if possible, the in-service period or remaining lifetime of the transformer oil.
       There are several characteristics which can be measured to assess the present condition of the oil. The main oil characteristics are broadly classified as physical, chemical and electrical; some of these are viscosity, specific gravity, flash point, oxidation stability, total acid number, breakdown voltage, dissipation factor, volume resistivity and dielectric constant. Some of the oil properties are correlated, and their values change with time [2]. This variation of oil properties with respect to time has been utilised to develop the two models mentioned earlier.
      The training data for the proposed work have been obtained from the literature, whereby ten working transformers of 16 to 20 MVA, 66/11 kV installed at different substations in the state of Punjab, India have been considered. Six properties of the transformer oil, namely breakdown voltage (BDV), moisture, resistivity, tan delta, interfacial tension and flash point, have been considered as inputs and age as the target. Test data have been taken from generator transformers of 250 MVA, 15.75 kV/400 kV from the Anpara Thermal Power Project in the state of U.P., India.

II.     ANN and ANFIS Methods

Classical models need linear data for their processing; therefore models like ANN and ANFIS, which are based on soft computing techniques, play an important role in solving these kinds of non-linear problems.
        Neural networks exhibit characteristics such as mapping capability, pattern association, generalization, robustness, fault tolerance, and parallel, high-speed processing. Neural networks can be trained with known examples of a problem to acquire knowledge about it. Once trained successfully, the network can be put to effective use in solving unknown or untrained instances of the problem. The ANN model, which uses a multilayer feedforward network, is based on the backpropagation (BP) learning algorithm. Backpropagation gives very good answers when presented with inputs never seen before. This property of generalization makes it possible to train a network on a given set of input-target pairs and get good outputs.
          ANFIS stands for Adaptive Neuro-Fuzzy Inference System. Using a given input/output data set, the toolbox function ANFIS constructs a fuzzy inference system (FIS) whose membership function parameters are tuned (adjusted) using either a backpropagation algorithm alone or in combination with a least-squares type of method. This allows the fuzzy system to learn from the data it is modelling. These techniques provide a method for the fuzzy modeling procedure to learn information about a data set, in order to compute the membership function parameters that best allow the associated fuzzy inference system to track the given input/output data. This learning method works similarly to that of neural networks.

III.       Development of ANN Model

The proposed ANN model uses the Levenberg-Marquardt (trainlm) algorithm, which is independent of the learning rate; hence, by simply changing the number of neurons in the hidden layer, the training and testing error could be reduced. A total of 700 data sets obtained from the literature [2] were arranged in tabular form and used for training the neural network. The model uses a simple two-layer network: one hidden layer and one output layer. The input layer comprises six neurons, one for each input, while the output layer has a single neuron for the single output, the age of the oil sample.
          It has been found that the network architecture using 20 neurons in the hidden layer gave the best performance, with a regression of 0.999 and a mean square error (MSE) of 83.0 (the data are non-normalized, so the error looks large). The training continued for 184 iterations, with the logsig transfer function in the hidden layer and purelin in the output layer.
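
For readers who want to experiment, the sketch below mirrors the 6-input, 20-hidden-neuron, 1-output topology in Python using scikit-learn. Note the hedges: scikit-learn has no Levenberg-Marquardt (trainlm) solver, so the LBFGS quasi-Newton solver stands in; the logistic activation plays the role of logsig (the output layer is linear, like purelin); and the data here are synthetic placeholders, not the 700 sets from [2].

import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of the 6-input / 20-hidden / 1-output topology described above.
# LBFGS substitutes for trainlm, which scikit-learn does not provide;
# the output layer of MLPRegressor is linear, like purelin. Data synthetic.
rng = np.random.default_rng(0)
X = rng.random((700, 6))            # six oil properties (BDV, moisture, ...)
y = X @ rng.random(6) * 1000        # stand-in "age in days" target

model = MLPRegressor(hidden_layer_sizes=(20,), activation="logistic",
                     solver="lbfgs", max_iter=500, random_state=0)
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))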

For More Information Click Here

Monday, December 7, 2015

A Time Domain Reference-Algorithm for Shunt Active Power Filters



Abstract:- The aim of this paper is to identify an optimum control strategy for three-phase shunt active filters to minimize the total harmonic distortion factor of the supply current. Power Quality (PQ) is an important measure of an electrical power system. The term PQ refers to maintaining a purely sinusoidal current waveform in phase with a purely sinusoidal voltage waveform. The power generated at the generating station is purely sinusoidal in nature. The deteriorating quality of electric power is mainly due to current and voltage harmonics from the widespread application of static power electronic converters, zero- and negative-sequence components originated by the use of single-phase and unbalanced loads, reactive power, voltage sag, voltage swell, flicker, voltage interruption, etc. The simulation and the experimental results of the shunt active filter, along with the estimated value of the reduction in rating, show that the shunt filtering system is quite effective in compensating for harmonics and reactive power, in addition to being cost-effective.

Keywords: Shunt voltage inverter APF, Time domain, instantaneous active power, carrier based PWM, Control strategy etc.

I.     Introduction

The wide use of power devices (based on semiconductor switches) in power electronic equipment (diode and thyristor rectifiers, electronic starters, UPS and HVDC systems, arc furnaces, etc.) gives rise to the dangerous phenomenon of harmonic current flow in the electrical feeder networks, producing distortions in the current/voltage waveforms. As a result, harmful consequences occur: equipment overheating, malfunction of solid-state devices, interference with telecommunication systems, etc. Harmonic damping devices must be investigated when the distortion rate exceeds the thresholds fixed by the IEC 61000 and IEEE 519 standards. For a long time, tuned LC and high-pass shunt passive filters were adopted as a viable harmonics cancellation solution.

II.    Shunt active filtering algorithms

The control algorithm used to generate the reference compensation signals for the active power filter determines its effectiveness. The control scheme derives the compensation signals using voltage and/or current signals sensed from the system. The control algorithm may be based on frequency-domain or time-domain techniques. In the frequency domain, the compensation signals are computed using Fourier analysis of the input voltage/current signals. In the time domain, the instantaneous values of the compensation voltages/currents are derived from the sensed values of the input signals. There are a large number of time-domain control algorithms, such as the instantaneous PQ algorithm, the synchronous detection algorithm, the synchronous reference frame algorithm and the DC bus voltage algorithm. The instantaneous PQ algorithm by Akagi is based on the Clarke transformation of the input voltage and current signals, from which instantaneous active and reactive powers are calculated to arrive at the compensation signals. This scheme is the most widely used because of its fast dynamic response, but it gives inaccurate results under distorted and asymmetrical source conditions.
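
As a concrete illustration of the time-domain idea, the sketch below computes the instantaneous real and imaginary powers from a snapshot of three-phase voltages and currents after projecting them onto the alpha-beta frame. It is a generic sketch of the instantaneous PQ computation, not the authors' implementation, and the sign convention for q varies across the literature.

import numpy as np

# Sketch of the instantaneous PQ (p-q) computation: project three-phase
# quantities onto the alpha-beta frame, then form instantaneous powers.
# Illustrative only; not the authors' implementation.

# Clarke (alpha-beta) transformation matrix, power-invariant form
C = np.sqrt(2 / 3) * np.array([[1, -0.5, -0.5],
                               [0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])

def instantaneous_pq(v_abc, i_abc):
    """v_abc, i_abc: arrays of shape (3,) with one sample per phase."""
    v_ab = C @ v_abc
    i_ab = C @ i_abc
    p = v_ab[0] * i_ab[0] + v_ab[1] * i_ab[1]   # instantaneous real power
    q = v_ab[0] * i_ab[1] - v_ab[1] * i_ab[0]   # instantaneous imaginary power
    return p, q                                 # q sign convention varies

# Example: balanced sinusoidal snapshot at t = 0 (expect constant p, q = 0)
theta = np.array([0, -2 * np.pi / 3, 2 * np.pi / 3])
print(instantaneous_pq(230 * np.cos(theta), 10 * np.cos(theta)))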

For More Information Click Here

Friday, November 27, 2015

Evaluation of Response Reduction Factor using Nonlinear Analysis #IJIRST Journal


Author Name:- Tia Toby

Department of Civil Engineering

Abstract:- The main objective of the study is to evaluate the response reduction factor of RC frames. We know that the actual earthquake force is considerably higher than what structures are designed for. Structures can't be designed for the actual value of earthquake intensity, as the cost of construction would be too high. The actual intensity of the earthquake is reduced by a factor called the response reduction factor R. The value of R depends on the ductility factor, strength factor, structural redundancy and damping. The concept of the R factor is based on the observation that well-detailed seismic framing systems can sustain large inelastic deformations without collapse and have an excess of lateral strength over design strength. Here, nonlinear static analysis is conducted on regular and irregular RC frames, considering OMRF and SMRF, to calculate the response reduction factor, and the codal provisions for the same are critically evaluated.

Keywords: Response Reduction Factor, Ductility Factor, Strength Factor, Nonlinear Analysis, Regular and Irregular Frames, OMRF, SMRF

I.    Introduction

The devastating potential of an earthquake can have major consequences on infrastructure and lifelines. In the past few years, the earthquake engineering community has been reassessing its procedures in the wake of devastating earthquakes which have caused extensive damage and loss of life and property. These procedures involve assessing the seismic force demands on the structure and then developing design procedures for the structure to withstand the applied actions. Seismic design follows the same procedure, except that inelastic deformations may be utilized to absorb certain levels of energy, leading to a reduction in the forces for which structures are designed. This leads to the creation of the Response Modification Factor (R factor), the all-important parameter that accounts for over-strength, energy absorption and dissipation, as well as the structural capacity to redistribute forces from inelastic, highly stressed regions to other, less stressed locations in the structure. This factor is unique and different for different types of structures and materials. The objective of this paper is to evaluate the response reduction factor of an RC frame designed and detailed as per Indian standards IS 456, IS 1893 and IS 13920. The codal provisions for the same will be critically evaluated. Moreover, parametric studies will be done on both regular and irregular buildings, and finally a comparison of the R value between OMRF and SMRF is also made.

II.  Definition of R Factor and its Components

During an earthquake, structures may experience a certain degree of inelasticity, and the R factor defines the level of inelasticity. The R factor reflects a structure's capability of dissipating energy via inelastic behavior. A statically determinate structure's response to stress is linear until yielding takes place; but as yielding prevails, the behavior of the structure changes from elastic to inelastic, and linear elastic structural analysis can no longer be applied. The seismic energy exerted on the structure is so high that the cost of designing a structure based on the elastic spectrum becomes prohibitive. To reduce the seismic loads, IS 1893 introduces a "response reduction factor" R. So, in order to obtain the exact response, it is recommended to perform nonlinear analysis. Strictly speaking, the R factor is a measure of overstrength and redundancy. It may be defined as a function of various parameters of the structural system, such as strength, ductility, damping and redundancy.
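
The component view in the last sentence is often written as a product of factors; for example, ATC-19 expresses it as R = Rs * Rmu * RR for strength, ductility and redundancy. The snippet below simply evaluates that product for assumed component values, as an illustration of how the factors combine.

# Sketch of the common product formulation of the response reduction factor,
# R = Rs * R_mu * R_R (strength, ductility, redundancy), as in ATC-19.
# Damping is sometimes included as a fourth factor. Values assumed.

def response_reduction_factor(strength: float, ductility: float,
                              redundancy: float) -> float:
    """R as the product of its over-strength, ductility and redundancy parts."""
    return strength * ductility * redundancy

print(response_reduction_factor(strength=2.0, ductility=2.5, redundancy=1.0))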

For More Information Click Here

Friday, November 20, 2015

Performance Assessment for Students using Different Defuzzification Techniques


Author Name:- Anjana Pradeep, Jeena Thomas

Department of Computer Science & Engineering

Abstract:- The aim of this study is to evaluate the performance of students using a fuzzy expert system. The fuzzy process is based solely on the principle of taking non-precise inputs on the factors affecting the performance of students and subjecting them to fuzzy arithmetic to obtain a crisp value of the performance. The system classifies each student's performance by considering various factors using fuzzy logic. Aimed at improving the performance of the fuzzy system, several defuzzification methods other than the built-in methods in MATLAB have been devised in this system to produce more accurate and quantifiable results. This study provides a comparison and in-depth examination of various defuzzification techniques, like the Weighted Average Formula (WAF), the WAF-max method and the Quality Method (QM). A new defuzzification method named Max-QM, extended from the Quality Method and falling within the same general framework, is also presented and commented upon in this study.

Keywords: Fuzzy logic, Fuzzy Expert System, Defuzzification, Weighted Average Formula, Quality Method 

I.   Introduction

An expert system is a software program that can be used to solve complex reasoning tasks that usually require a (human) expert. In other words, an expert system should help a novice, or partly experienced, problem solver to match acknowledged experts in the particular domain of problem solving that the system is designed to assist. To be more specific, expert systems are generally conceptualized as shown in Fig. 1. The user interacts through the interface system, and the system questions the user through the same interface in order to obtain the vital information upon which a decision is to be made. Behind this interface there are two other sub-systems: the knowledge base, which is made up of all the domain-specific knowledge that human experts use when solving that category of problems, and the inference engine, a system that performs the necessary reasoning and uses knowledge from the knowledge base in order to come to a judgment with respect to the problem modelled [1].
     Expert systems have been playing a major role in many disciplines: in medicine, assisting physicians in the diagnosis of diseases; in agriculture, for crop management and insect control; in space technology; and in power systems, for fault diagnosis [5]. Some expert systems have been developed to replace human experts, and others to aid them. The use of expert systems is increasing day by day in today's world [40]. Expert systems are becoming an integral part of engineering education, and even other courses like accounting and management are accepting them as a better way of teaching [4]. Another feature that makes expert systems more appealing for students is their ability to adaptively adjust the training for each particular student on the basis of the individual student's learning pace. This feature can be used most effectively in teaching engineering students. Such a system should be able to monitor a student's progress and make a decision about the next step in training.

Fig. 1: Expert system structure
        The few expert systems available in the market present a lot of opportunities for students who desire more attention and time to learn their subjects. Some expert systems present an interactive and friendly environment for students, which encourages them to study and adopt a more practical approach towards learning. Expert systems can also act as an assistant or substitute for the teacher. Expert systems focus on each student individually and also keep track of their learning pace. This behavior provides an autonomous learning procedure for both student and teacher, where the teacher acts as a mentor and students can judge their own performance. An expert system is beneficial not only for the students but also for the teachers, helping them guide students in a better way.
        The integration of fuzzy logic with an expert system enhances its capability; the result is called a fuzzy expert system, and it is useful for solving real-world problems which do not require a precise solution. So, there is a need to develop a fuzzy expert system, as it can handle imprecise data efficiently and reduces manual work while enhancing the use of the expert system [40].

      There are various factors inside and outside college that result in poor academic performance of students [2, 3]. To determine all the influencing factors in a single effort is a complex and difficult task. It requires a lot of resources and time for an educator to identify all these factors first and then plan the classroom activities and the approaches to teaching and learning. It also requires appropriate training, organizational planning and skills to conduct such studies for determining the contributing factors inside and outside college. This process of identifying the determinants must be given full attention and priority, so that teachers may be able to develop instructional strategies that ensure all students are provided with the opportunities to attain their fullest potential in learning and performance. Using a suitable statistical package, it was found that communication, learning facilities, proper guidance and family stress were the factors that affect student performance. Communication, learning facilities and proper guidance showed a positive impact on student performance, while family stress showed a negative impact. It is indicated that communication is a more important factor affecting student performance than learning facilities and proper guidance [3].

      In this research article, the seven most important factors affecting students' performance are included. These are personal factors, college environment, family factors, university factors, teaching factors, attendance and the marks obtained by students. All these factors are scaled and ranked based on the various sub-factors into which the base factors are further divided. In this study the focus has been on students' marks and not solely on social, economic and cultural features. To evaluate students' performance, a fuzzy expert system has been developed by considering all seven factors as inputs to the system. The system has been developed using data on students collected from St. Josephs College of Engineering and Technology, Palai, affiliated to M.G. University.
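
As a concrete illustration of one of the defuzzification techniques compared in this study, the sketch below implements a generic Weighted Average Formula (WAF): each fired rule's output level is weighted by its firing strength, and the weighted mean is returned as the crisp score. The rule strengths and output levels are assumed placeholders, not the paper's rule base.

# Minimal sketch of Weighted Average Formula (WAF) defuzzification:
# crisp output = sum(w_i * z_i) / sum(w_i), where w_i is each rule's
# firing strength and z_i its output level. Placeholder values only.

def waf_defuzzify(firing_strengths, output_levels):
    num = sum(w * z for w, z in zip(firing_strengths, output_levels))
    den = sum(firing_strengths)
    return num / den

# Three fired rules mapping to performance scores (assumed 0-100 scale)
weights = [0.2, 0.7, 0.4]        # rule firing strengths
levels = [40.0, 65.0, 90.0]      # crisp output level of each rule
print(waf_defuzzify(weights, levels))  # crisp performance score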

II.   Literature review

In recent years, many researchers have worked on the applications of fuzzy logic and fuzzy sets in educational assessment and grading systems. Biswas [25] presented two methods for evaluating students' answer scripts using fuzzy sets and a matching function: a fuzzy evaluation method (FEM) and a generalized fuzzy evaluation method. He used fuzzy set theory in student evaluation and found it potentially finer than awarding grades or numbers when evaluating answer scripts. He also highlighted that the aim of an education system should be to provide students with evaluation reports on their tests/examinations that are as complete as possible, with the unavoidable error as small as possible, so as to make the evaluation system more transparent and fairer to students.

                Chen and Lee [26] presented two methods for applying fuzzy sets to overcome the problem of assigning two different fuzzy marks to students with the same total score, which could arise from Biswas' method. Their methods perform the calculations much faster, and complicated matching operations are not required. Echauz and Vachtsevanos [27] proposed a fuzzy logic system for translating traditional scores into letter grades. Law [28] built a fuzzy structure model, with its algorithm, to aggregate different test scores in order to produce a single score for each individual student in an educational grading system. A method to build the membership functions (MFs) of several linguistic values with different weights was also proposed in that paper.

For more Information CLICK HERE

Wednesday, November 18, 2015

#IJIRST Journal : A Review on Thermal Insulation and Its Optimum Thickness to Reduce Heat Loss

Title:- A Review on Thermal Insulation and Its Optimum Thickness to Reduce Heat Loss

Author Name: Dinesh Kumar Sahu, Prakash Kumar Sen, Gopal Sahu, Ritesh Sharma, Shailendra Bohidar

Department of Mechanical Engineering

Abstract:- An understanding of the mechanisms of heat transfer is becoming increasingly important in today's world. Conduction and convection heat transfer phenomena are found throughout virtually all of the physical world and the industrial domain. A thermal insulator is a poor conductor of heat and has a low thermal conductivity. In this paper we studied how insulation is used in buildings and in manufacturing processes to prevent heat loss or heat gain. Although its primary purpose is an economic one, it also provides more accurate control of process temperatures and protection of personnel. It prevents condensation on cold surfaces and the resulting corrosion. We also studied the critical radius of insulation, the radius at which the heat loss is maximum; above this radius, the heat loss reduces with increasing radius. We also gave the concept of selecting an economical insulation material and the optimum thickness of insulation that gives the minimum total cost.

Keywords: Heat, Conduction, Convection, Heat Loss, Insulation

I.    Introduction

Heat flow is an inevitable consequence of contact between objects of differing temperature. Thermal insulation provides a region in which thermal conduction is reduced or thermal radiation is reflected rather than absorbed by the lower-temperature body. To change the temperature of an object, energy is required, in the form of heat generation to increase the temperature or heat extraction to reduce it. Once the heat generation or extraction is terminated, a reverse flow of heat returns the temperature to ambient. Maintaining a given temperature therefore requires considerable continuous energy; insulation reduces this energy loss.
     Heat may be transferred by three mechanisms: conduction, convection and radiation. Thermal conduction is the molecular transport of heat under the effect of a temperature gradient. Convection occurs in liquids and gases, whereby flow processes transfer heat. Free convection is flow caused by differences in density resulting from temperature differences; forced convection is flow caused by external influences (wind, ventilators, etc.). Thermal radiation occurs when thermal energy is emitted, similar to light radiation.


      Heat transfer through insulation material occurs by means of conduction, while heat loss to, or heat gain from, the atmosphere occurs by means of convection and radiation. Materials which have a low thermal conductivity are those which have a high proportion of small voids containing air or gases. These voids are not big enough to transmit heat by convection or radiation, and therefore reduce the flow of heat. Thermal insulation materials come into this category. Thermal insulation materials may be natural substances or man-made.
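
The critical radius mentioned in the abstract falls out of this conduction/convection balance: for a cylinder it is r_c = k/h, the insulation conductivity divided by the outer convective film coefficient, and below r_c adding insulation actually increases heat loss. The sketch below evaluates the critical radius and the heat loss per unit length of an insulated pipe for assumed values.

import math

# Sketch of the critical radius of insulation for a cylinder, r_c = k / h,
# and the heat loss per unit length through an insulated pipe. Values assumed.

def critical_radius_cylinder(k: float, h: float) -> float:
    """k: insulation conductivity (W/m.K); h: outer convection coeff (W/m^2.K)."""
    return k / h

def heat_loss_per_metre(t_in, t_out, r_pipe, r_ins, k, h):
    """Conduction through the insulation in series with outer convection."""
    r_cond = math.log(r_ins / r_pipe) / (2 * math.pi * k)
    r_conv = 1 / (2 * math.pi * r_ins * h)
    return (t_in - t_out) / (r_cond + r_conv)

k, h = 0.05, 10.0                   # assumed conductivity and film coefficient
print("critical radius (m):", critical_radius_cylinder(k, h))
print("Q per metre (W/m):", heat_loss_per_metre(150, 25, 0.01, 0.02, k, h))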

II.   The Need for Insulation


A thermal insulator is a poor conductor of heat and has a low thermal conductivity. Insulation is used in buildings and in manufacturing processes to prevent heat loss or heat gain. Although its primary purpose is an economic one, it also provides more accurate control of process temperatures and protection of personnel. It prevents condensation on cold surfaces and the resulting corrosion. Such materials are porous, containing a large number of dormant air cells. Thermal insulation delivers the following benefits: [1][2]

A.      Energy Conservation

Conserving energy by reducing the rate of heat flow (Fig. 1) is the primary reason for insulating surfaces. Insulation materials that perform satisfactorily in the temperature range of -268°C to 1000°C are widely available.

For more information Click Here

Thursday, September 24, 2015

Applications and Challenges of Human Activity Recognition using Sensors in a Smart Environment #IJIRST Journal


Department of Computer Science and Engineering

St. Joseph’s College of Engineering and Technology, Palai, Kerala, India

Abstract:- We are currently using smart phone sensors to detect physical activities. The sensors currently being used are the accelerometer, gyroscope, barometer, etc. Recently, smart phones, equipped with a rich set of sensors, have been explored as alternative platforms for human activity recognition. Automatic recognition of physical activities, commonly referred to as human activity recognition (HAR), has emerged as a key research area in human-computer interaction (HCI) and mobile and ubiquitous computing. One goal of activity recognition is to provide information on a user's behavior that allows computing systems to proactively assist users with their tasks. Human activity recognition requires running classification algorithms originating from statistical machine learning techniques. Mostly, supervised or semi-supervised learning techniques are utilized, and such techniques rely on labeled data, i.e., data associated with a specific class or activity. In most cases, the user is required to label the activities, and this, in turn, increases the burden on the user. Hence, user-independent training and activity recognition are required to foster the use of human activity recognition systems, where the system can use the training data from other users in classifying the activities of a new subject.

Keywords:- Human Activity Recognition

I.       Introduction

Mobile phones or smart phones are rapidly becoming the central computing and communication devices in people's lives. Smart phones, equipped with a rich set of sensors, are explored as an alternative platform for human activity recognition in the ubiquitous computing domain. Today's smartphone not only serves as the key computing and communication mobile device of choice, but also comes with a rich set of embedded sensors [1], such as an accelerometer, digital compass, gyroscope, GPS, microphone, and camera. Collectively, these sensors are enabling new applications across a wide variety of domains, such as healthcare, social networks, safety, environmental monitoring, and transportation, and give rise to a new area of research called mobile phone sensing. Human activity recognition systems using different sensing modalities, such as cameras or wearable inertial sensors, have been an active field of research. Besides the inclusion of sensors such as the accelerometer, compass, gyroscope, proximity, light, GPS, microphone and camera, the ubiquity and unobtrusiveness of the phones and the availability of different wireless interfaces, such as Wi-Fi, 3G and Bluetooth, make them an attractive platform for human activity recognition. Current research in activity monitoring and reasoning has mainly targeted elderly people, sportsmen and patients with chronic conditions.
The percentage of elderly people in today's societies keeps on growing. As a consequence, there arises the problem of supporting older adults with loss of cognitive autonomy who wish to continue living independently in their homes, as opposed to being forced to live in a hospital. Smart environments have been developed in order to provide support to elderly people, or people with risk factors, who wish to continue living independently in their homes rather than in institutional care. In order to be a smart environment, the house should be able to detect what the occupant is doing in terms of daily activities. It should also be able to detect possible emergency situations. Furthermore, once such a system is complete and fully operational, it should be able to detect anomalies or deviations in the occupant's routine, which could indicate a decline in his or her abilities. In order to obtain accurate results, as much information as possible must be retrieved from the environment, enabling the system to locate and track the supervised person at each moment, and to detect the position of the limbs and the objects the person interacts with or intends to interact with. Sometimes details like gaze direction or hand gestures [1] can provide important information in the process of analyzing the human activity. Thus, the supervised person must be located in a smart environment, equipped with devices such as sensors, multiple-view cameras or speakers.
Although smartphone devices are powerful tools, they are still passive communication enablers rather than active assistive devices from the user's point of view. The next step is to introduce intelligence into these platforms to allow them to proactively assist users in their everyday activities. One method of accomplishing this is by integrating situational awareness and context recognition into these devices. Smartphones represent an attractive platform for activity recognition, providing built-in sensors and powerful processing units. They are capable of detecting complex everyday activities of the user (e.g. standing, walking, biking) or of the device (e.g. calling), and they are able to exchange information with other devices and systems using a large variety of data communication channels.
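
As a minimal illustration of the supervised pipeline described above, the following sketch extracts simple statistical features from windowed accelerometer data and trains an off-the-shelf classifier. The data and labels are synthetic stand-ins; a real HAR system would use labeled sensor recordings and cross-subject evaluation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of a supervised HAR pipeline: windowed accelerometer samples ->
# simple statistical features -> classifier. Synthetic data stands in for
# real labeled recordings.
rng = np.random.default_rng(1)
windows = rng.normal(size=(200, 128, 3))     # 200 windows, 128 samples, 3 axes
labels = rng.integers(0, 3, size=200)        # 0=standing, 1=walking, 2=biking

def features(w):
    """Per-axis mean and std, plus signal magnitude area, for one window."""
    sma = np.mean(np.sum(np.abs(w), axis=1))
    return np.concatenate([w.mean(axis=0), w.std(axis=0), [sma]])

X = np.array([features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))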

For more information, follow the link below.


Friday, September 11, 2015

A Novel High Resolution Adaptive Beam Forming Algorithm for High Convergence

Abstract: This paper introduces a new robust four-way LMS and variable step size NLMS beamforming algorithm to reduce interference in a smart antenna system. The algorithm is able to resolve signals arriving from narrowband sources propagating plane waves close to the array end-fire. Results of the previously used adaptive algorithms show that the fixed step size NLMS leads to a trade-off between the convergence rate and the steady-state MSE of the NLMS algorithm. This issue is solved by using the four-way LMS and VSSNLMS, which improves the efficiency of convergence. The proposed algorithm reduces the mean square error (MSE) and shows a faster convergence rate when compared to the conventional NLMS.
Keywords: Adaptive Antenna, Beamforming, Mean Square Error (MSE), Convergence

I.   Introduction

A.      Introduction

In today's world, the number of mobile users is increasing day by day; hence it is necessary to serve such a huge market of mobile users with high QoS even though the spectrum is limited. This is a major challenge for service providers to solve. A major limitation in capacity and performance is co-channel interference, caused by the increasing number of users, multipath fading and delay spread. Research efforts investigating effective technologies to mitigate such effects have been ongoing, and among these methods adaptive antenna deployment is the most promising technology. This work uses an adaptive antenna, which ensures high capacity while providing the same Quality of Service (QoS). Currently, in a typical scenario, mobile towers employ a parabolic dish or a horn antenna, but these suffer when the SNR is low: signals have to be repeatedly retransmitted from the mobile station to the base station. The adaptive antenna approach considers an array of antennas which receives delayed versions of the electromagnetic wave and adds them to achieve a high SNR.

B.      Problem Statement

In earlier antennas, radiation was directed based on frequency or time; therefore the spectrum was not utilized efficiently, because as the number of users increases the quality of service decreases. Hence, in this work, a solution using adaptive antenna frameworks has been proposed as an efficient means to meet the quickly expanding traffic volume. The importance of various advanced antenna schemes for serving a large number of mobile users within the same amount of spectrum is discussed. This is done by separating the users with respect to direction.

II.    Adaptive Antenna

An adaptive antenna is one which adapts itself to pick up the user signal in any direction without user intervention. Basically, it goes through a two-phase process:
-    Direction Detection Estimation (DDE) using a suitable algorithm and sensor data.
-    Beam forming, which forms a beam in the desired direction and nulls in the interference directions.
Direction Detection Estimation (DDE) methods are used to detect the incoming wave; the signals arriving from different parts of space can be processed to extract different types of data, including the direction of the desired incoming signal falling on the antenna array.

Beam forming is the process of forming the main beam in the desired direction and nulls in the directions of the jammers. The block diagram in Figure 1 shows an adaptive antenna structure with N antenna elements, DDE blocks, and adaptive signal processing algorithms that make the antenna system smart; the incoming signal is processed by beam forming algorithms, and the figure also shows the main beam formed in the direction of the desired signal and nulls in the jammers' directions.
Fig. 1: Adaptive Antenna
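
For reference, the sketch below shows the core NLMS weight update used in such adaptive beamformers, with a crude variable step size that shrinks as the error power falls. The array geometry, signals and parameter values are all illustrative assumptions, not the algorithm proposed in the paper.

import numpy as np

# Sketch of NLMS adaptive beamforming: w <- w + mu * conj(e) * x / (eps + ||x||^2),
# with a simple variable step size. All signals and parameters are assumed.
rng = np.random.default_rng(2)
n_elements, n_snapshots = 8, 500
d = rng.normal(size=n_snapshots)                   # desired reference signal
steering = np.exp(1j * np.pi * np.arange(n_elements)
                  * np.sin(np.deg2rad(20)))        # plane wave from 20 degrees
noise = rng.normal(size=(n_snapshots, n_elements)) \
        + 1j * rng.normal(size=(n_snapshots, n_elements))
X = np.outer(d, steering) + 0.1 * noise            # array snapshots

w = np.zeros(n_elements, dtype=complex)
mu, eps = 1.0, 1e-6
for x, dk in zip(X, d):
    y = np.vdot(w, x)                              # array output w^H x
    e = dk - y                                     # estimation error
    mu_k = mu / (1 + abs(e) ** 2)                  # crude variable step size
    w += mu_k * np.conj(e) * x / (eps + np.vdot(x, x).real)
print("final weight magnitudes:", np.round(np.abs(w), 3))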
http://ijirst.org/Article.php?manuscript=IJIRSTV2I3036
ijirst.org

Monday, August 24, 2015

Automatic Vs. Selective Criteria based Policy Network Extraction over Routers Data

Abstract:- Policy networks are generally utilized by political scientists and economists to explain various financial and social phenomena, for example, the development of associations between political actors or institutions from diverse levels of administration. The examination of policy networks requires prolonged manual steps, including interviews and questionnaires. In this paper, we propose an automatic procedure for evaluating the relations between actors in policy networks using web documents and other digitally documented information gathered from the web. The proposed technique incorporates web page information extraction and out-link analysis. The proposed methodology is automatic and does not require any outside information source other than the documents that relate to the political actors. The proposal assesses both engagement and disengagement for both positive and negative (opposing) actor relations. The proposed algorithm is tested on political science writing from the Routers document database collections. Performance is measured in terms of the correlation and mean square error between the human-rated and the automatically extracted relations.
Keywords: Policy Networks, Social Networks, Relatedness Metrics, Similarity Metrics, Web Search, Policy Actors, Link Analysis

I.       Introduction

The expression "network" is frequently used to describe groups of various types of actors who are connected together in political, social or economic concerns. Networks may be loosely organized, but must be capable of spreading data or participating in aggregate activity. The structure of these networks is frequently unclear, or dynamic, or both. In any case, developing such networks is required because they reflect how modern society, culture and economy are related. Linkages between different organizations have turned into an important aspect for some social scientists. The term policy network implies "a cluster of actors, each of which has an interest, or "stake" in a given...policy sector and the capacity to help determine policy success or failure" [1]. In other words, a policy network is defined "as a set of relatively stable relationships which are of non-hierarchical and interdependent nature linking a variety of actors, who share common interests with regard to a policy and who exchange resources to pursue these shared interests acknowledging that co-operation is the best way to achieve common goals" [3]. Analysts of governance often try to clarify policy results by examining how networks, which relate stakeholders over policy plans and points of interest, are organized in a specific sector. Policy networks are also acknowledged as an important analytical tool for analyzing the relations amongst the actors who interact with each other in a selected policy area. Furthermore, they can also be used as a technique of social structure analysis. Overall, it can be said that policy networks provide a useful toolbox for analyzing public policy-making [2]. Although policy networks are required for the analysis of different relations, they are difficult to extract, because policymaking involves a large number and wide variety of actors, which makes the task very time consuming and complex. Considering the importance of policy networks, and knowing that no computational technique is available for efficiently and automatically extracting policy networks, in this paper we present an efficient approach for doing so.

II.    Related work on policy network

The application of computational analysis to large datasets has been gaining popularity in the recent past, because most of the relevant documents are available in digital format and because it makes the process automated and fast. Since a policy network is a structure that presents the relations amongst actors, who appear in documents as names or known words, while the sentences in the text describe the relations between them, the extraction technique in its basic form relies on text data mining; it can be seen as an extension of text and web mining. Michael Laver et al. [14] presented a new technique for extracting policy relations from political texts that treats texts not as sentences to be analyzed but rather as data in the form of individual words. Kenneth Benoit et al. [13] presented computer word scoring for the same task. Their experiment on an Irish election shows that a statistical analysis of the words in related texts is well able to describe the relations amongst the parties on key policy considerations. They also showed that such estimation does not require knowledge of the language in which the texts were written, because it calculates mutual relations, not the meaning of words. The WORDFISH scaling algorithm estimates policy positions using word counts in the related texts. This method allows investigators to detect the positions of parties in one or multiple elections. An analysis of German political parties from 1990 to 2005, applying this technique to party manifestos, shows that the extracted positions reflect changes in the party system very precisely. In addition, the method allows investigators to inspect which words are significant for placing parties at opposite positions; finally, words with strong political associations are the best for differentiating between parties. As already discussed, the semantic difference of documents is important for characterizing their differences and is also useful in policy network extraction. Krishnamurthy Koduvayur Viswanathan et al. [7] describe several text-based similarity metrics to estimate the relation between Semantic Web documents and evaluate these metrics for specific cases of similarity. Elias Iosif et al. [6] presented web-based metrics for semantic similarity calculation between words which appear in web documents. The context-based metrics use web documents and then exploit the retrieved related information for the words of interest. The algorithms can be applied to other languages and do not require any pre-annotated knowledge resources.

III.  Similarity computation techniques in documents

Metrics that measure semantic similarity between words or terms can be classified into four main classes, depending on whether knowledge resources are used or not [5]:
-    Supervised resource-based metrics, consulting only human-built knowledge resources, such as ontologies.
-    Supervised knowledge-rich text-mining metrics, i.e., metrics that perform text mining while also relying on knowledge resources.
-    Unsupervised co-occurrence metrics, i.e., unsupervised metrics that assume that the semantic similarity among words or terms can be expressed by an association ratio that is a measure of their co-occurrence.
-    Unsupervised text-based metrics, i.e., metrics that are fully text-based and exploit the context or proximity of words or terms to compute semantic similarity.
The last two classes of metrics do not use any language resources or expert knowledge; both rely only on mutual relations. In this sense, these metrics are referred to as "unsupervised": no linguistically labeled, human-annotated information is needed to calculate the semantic distance between words or terms.
Resource-based and knowledge-rich text-mining metrics, however, do use such knowledge, and are henceforward referred to as "supervised" metrics. Many resource-based methods have been proposed in the literature that use, e.g., WordNet, for semantic similarity computation.
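
To make the unsupervised co-occurrence idea concrete, the sketch below computes a simple Jaccard-style association ratio between two actors from document hit counts, the kind of quantity such metrics are built on. The counts are placeholders; the paper's actual metrics are not reproduced here.

# Sketch of an unsupervised co-occurrence association ratio between two
# policy actors, Jaccard-style: |D(a) & D(b)| / |D(a) | D(b)|, computed
# here from assumed document hit counts rather than real web counts.

def cooccurrence_similarity(hits_a: int, hits_b: int, hits_both: int) -> float:
    union = hits_a + hits_b - hits_both
    return hits_both / union if union else 0.0

# Placeholder counts: documents mentioning actor A, actor B, and both
print(cooccurrence_similarity(hits_a=1200, hits_b=800, hits_both=150))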

This paper is published in our journal; for more information about this paper, follow the link below.


http://ijirst.org/Article.php?manuscript=IJIRSTV2I2001

http://ijirst.org/index.php?p=SubmitArticle




Saturday, August 22, 2015

Modeling of Student’s Performance Evaluation

Abstract:- We propose a Fuzzy Model System (FMS) for student performance evaluation. A suitable fuzzy inference mechanism is discussed in the paper. We describe how fuzzy principles can be applied to student performance prediction. This model can be useful for educational organizations, educators, teachers, and students. We propose this model especially for first-year students, who need extra monitoring of their performance. The modeling is based on past academic results and on information the students submitted earlier for admission purposes.

Keywords: Fuzzy Logic, Membership Functions, Academic Evaluation

I.       Introduction

The success rate of any educational institute or organization may depend upon prior evaluation of students' performance. Institutes use different methods for evaluating student performance; usually an educational organization uses a grading system on the basis of academic performance, especially in higher education. Other key points can be involved in evaluating student performance, such as communication skills, marketing skills, leadership skills, etc.
Performance evaluation provides information, and the information generated by evaluation can help students, teachers, educators, etc. to take decisions [6]. In the corporate field, employers are highly concerned about all the mentioned skills. If an educational institute involves more than academic performance in its evaluation, then it will be beneficial for the students as well as for the organization.

A.      Traditional Evaluation Method

Traditionally, students' performance evaluation is done on academic performance such as class assignments, model exams, yearly exams, etc. This primary technique involves either a numerical value, like 6.0 to 8.0, which may be called a grade point average, or 60% to 80%, i.e. an average percentage. Some organizations also use linguistic terms like pass, fail and supplementary for performance evaluation. Such evaluation schemes depend upon criteria decided by experienced evaluators, so the evaluation may be approximate.
The objective of this paper is to present a model which may be very useful for teachers, organizations and students. It helps in better understanding the weak points which act as barriers to a student's progress.

B.      Method Used

Fuzzy logic can be described by fuzzy sets. It provides a reasonable method/technique through an input and output process (Fig. 1). A fuzzy set can be defined as a class of objects with no sharp boundaries between objects [1]. A fuzzy set is formed by the combination of a linguistic variable with linguistic modifiers.
A linguistic modifier links a numerical value to a linguistic variable [2]. In our work, the linguistic variable is performance, and the linguistic modifiers are good, very good, excellent, and outstanding.
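
As a minimal sketch of how such linguistic modifiers can be represented, the snippet below defines triangular membership functions over a 0-100 mark scale and evaluates a crisp mark against 'good', 'very good', 'excellent' and 'outstanding'. The breakpoints are assumed for illustration only.

# Sketch of triangular membership functions for the linguistic modifiers
# named above (good, very good, excellent, outstanding) over a 0-100 mark.
# The breakpoints are assumed for illustration only.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangle rising from a to peak b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

MODIFIERS = {
    "good":        (50, 60, 70),
    "very good":   (60, 70, 80),
    "excellent":   (70, 80, 90),
    "outstanding": (80, 95, 110),   # open-ended top band
}

mark = 78.0
for name, (a, b, c) in MODIFIERS.items():
    print(f"{name}: {triangular(mark, a, b, c):.2f}")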

For more information, go to the link below.

http://ijirst.org/Article.php?manuscript=IJIRSTV2I3022

http://ijirst.org/index.php?p=SubmitArticle

ijirst.org