
International Journal for Innovative Research in Science and Technology (IJIRST) is a popular international multidisciplinary, open-access, peer-reviewed, fully refereed journal. It aims to contribute to continuous innovative research and training and to promote research in the field of science and technology.
Saturday, December 12, 2015
Retrofitting of Reinforced Concrete Beam with Externally Bonded CFRP
Author Name:- J. Gopi Krishna
Abstract:- In our country, many existing reinforced concrete structures are in need of repair, rehabilitation or reconstruction because of deterioration due to factors such as corrosion, lack of detailing, failure of bonding at beam-column joints, increases in service loads, improper design, unexpected external lateral loads such as wind or seismic forces, environmental effects and accidental events, leading to cracking, spalling, loss of strength and excessive deflection. Strengthening of existing reinforced concrete structures is necessary to obtain the expected life span and to meet specific requirements. The need for efficient rehabilitation and strengthening techniques for existing concrete structures has driven research and development of composite strengthening systems. Recent experimental and analytical research has demonstrated that using composite materials to retrofit existing structural components is more cost-effective and requires less effort and time than traditional means. Fiber Reinforced Polymer (FRP) composites have been accepted in the construction industry as a capable substitute for repairing and strengthening RCC structures. The superior properties of FRP composite materials, such as high corrosion resistance, high strength, high stiffness, excellent fatigue performance and good resistance to chemical attack, have motivated researchers and practicing engineers to use polymer composites in the rehabilitation of structures. During the past two decades, much research has been carried out on the shear and flexural strengthening of reinforced concrete beams using different types of fiber reinforced polymers and adhesives. A detailed literature review based on previous experimental and analytical research on retrofitting of reinforced concrete beams is presented, and the proposed method of strengthening the RC beam is selected on that basis. The behaviour of retrofitted reinforced concrete beams externally bonded with CFRP using various types of resins (epoxy, orthophthalic (GP) resin, ISO resin) after an initial load (60% of the control beam) is investigated. The static load response of all the beams under two-point loading is evaluated in terms of flexural strength, crack observation, compositeness between the CFRP fabric and concrete, and the associated failure modes.
Keywords: Fiber Reinforced Polymer (FRP), CFRP fabric, reinforced concrete structures
I. Introduction
Concrete is the most widely used man-made construction material in the world. It is obtained by mixing cementing materials, water and aggregates, and sometimes admixtures, in required proportions. Concrete offers high compressive strength, low cost and abundant raw materials, but its tensile strength is very low; reinforced concrete, which is concrete with steel bars embedded in it, compensates for this weakness. Concrete is an affordable material that is used extensively throughout a nation's infrastructure: construction, industry, transportation, defense, utilities and the residential sector. The flexibility and mouldability of the material, its high compressive strength, and the discovery of reinforcing and prestressing techniques, which helped make up for its low tensile strength, have contributed largely to its widespread use.
Reinforced concrete structures often have to be modified and their performance improved during their service life. In such circumstances there are two possible solutions: replacement or retrofitting. Full structure replacement has definite disadvantages such as high material and labour costs, a stronger environmental impact and inconvenience due to interruption of the function of the structure, e.g. traffic problems. So, where possible, it is often better to repair or upgrade the structure by retrofitting. Retrofitting methods are shown in figure 2.1.1. In recent years the repair and retrofit of existing structures such as buildings and bridges have been among the most important challenges in civil engineering.
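The abstract above mentions evaluating static load response under two-point (four-point bending) loading. As a minimal illustration of how the applied load translates into the bending moment used when reporting flexural strength, the Python sketch below computes the maximum mid-span moment of a simply supported beam loaded at two symmetric points; the load and shear-span values are hypothetical placeholders, not data from the paper.

def max_moment_two_point(P_total_kN, shear_span_m):
    """Maximum bending moment (kN.m) for a simply supported beam loaded
    symmetrically at two points: each point load is P_total/2, and the
    constant-moment region between the loads carries M = (P_total/2) * a."""
    return (P_total_kN / 2.0) * shear_span_m

# Hypothetical example: 60 kN total load, 0.65 m shear span
P = 60.0   # total applied load in kN (assumed value)
a = 0.65   # distance from support to nearest load point in m (assumed value)
print(f"Maximum bending moment: {max_moment_two_point(P, a):.1f} kN.m")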
For more Information Click Here
Saturday, November 28, 2015
Performance of WRF (ARW) over River Basins in Odisha, India During Flood Season 2014
Author Name:- Sumant Kr. Diwakar
India Meteorological Department, New Delhi, India
Abstract:- Rainfall forecasts from the operational Weather Research & Forecasting – Advanced Research WRF, in short WRF (ARW), 9 km x 9 km model of the India Meteorological Department (IMD) are used to compute rainfall forecasts over the river basins in Odisha during the flood season of 2014. The performance of the WRF model at the sub-basin level is studied in detail. It is observed that IMD's WRF (ARW) day-1, day-2 and day-3 correct-forecast percentages lie in the ranges 31-47%, 37-43% and 28-47% respectively during the flood season of 2014.
Keywords: GIS; WRF (ARW); IMD; Flood 2014; Odisha
I. Introduction
Issuing river sub-basin-wise rainfall forecasts during the monsoon season is a difficult task for meteorologists, since the country has large spatial and temporal variations in rainfall. The India Meteorological Department (IMD), through its Flood Meteorological Offices (FMO), issues sub-basin-wise Quantitative Precipitation Forecasts (QPF) for all flood-prone river basins in India (IMD, 1994). There are 10 FMOs spread across the flood-prone river basins of India, and FMO Bhubaneswar, Odisha is one of them. The categories in which QPF are issued are as follows:
Rainfall (in mm): 0 | 1-10 | 11-25 | 26-50 | 51-100 | >100
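Since the paper reports the percentage of correct day-1 to day-3 forecasts, one simple way to reproduce such a score is to bin both forecast and observed basin rainfall into the QPF categories above and count matches. The Python sketch below is an illustrative verification routine under that assumption; the sample rainfall values are hypothetical, not IMD data.

def qpf_category(rain_mm):
    """Map rainfall (mm) to an IMD QPF category: 0, 1-10, 11-25, 26-50, 51-100, >100."""
    thresholds = [0, 10, 25, 50, 100]
    for i, t in enumerate(thresholds):
        if rain_mm <= t:
            return i
    return len(thresholds)  # >100 mm

def percent_correct(forecast_mm, observed_mm):
    """Percentage of days on which forecast and observed rainfall fall in the same category."""
    hits = sum(qpf_category(f) == qpf_category(o)
               for f, o in zip(forecast_mm, observed_mm))
    return 100.0 * hits / len(observed_mm)

# Hypothetical day-1 forecasts vs. observations for one sub-basin (mm)
forecast = [0, 5, 30, 80, 120, 12, 0, 45]
observed = [0, 8, 22, 60, 150, 9, 3, 55]
print(f"Day-1 correct forecasts: {percent_correct(forecast, observed):.0f}%")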
Odisha is an Indian state on the subcontinent's east coast, by the Bay of Bengal. It is located between the parallels of 17°49' N and 22°34' N latitude and the meridians of 81°27' E and 87°29' E longitude. It is surrounded by the Indian states of West Bengal to the north-east and east, Jharkhand to the north, Chhattisgarh to the west and north-west, and Andhra Pradesh to the south. Bhubaneswar is the capital of Odisha.
Odisha is the 9th largest state by area in India and the 11th largest by population. Odisha has a coastline about 480 km long. The narrow, level coastal strip, including the Mahanadi river delta, supports the bulk of the population. On the basis of homogeneity, continuity and physiographical characteristics, Odisha has been divided into five major morphological regions: the Odisha Coastal Plain in the east, the Middle Mountainous and Highlands Region, the Central Plateaus, the western rolling uplands and the major flood plains.
A. River System
The river system of Odisha comprises the Mahanadi, Brahmani, Baitarani, Subarnarekha, Vamasadhara, Burhabalanga, Rushikulya, Nagavali, Indravati, Kolab, Bahuda, Jambhira and other tributaries and distributaries.
For More Information Click Here
Wednesday, November 18, 2015
#IJIRST Journal : A Review on Thermal Insulation and Its Optimum Thickness to Reduce Heat Loss
Author Name: Dinesh Kumar Sahu, Prakash Kumar Sen, Gopal Sahu, Ritesh Sharma, Shailendra Bohidar
Department of Mechanical Engineering
Abstract:- An understanding of the mechanisms of heat transfer is becoming increasingly important in today's world. Conduction and convection heat transfer phenomena are found throughout virtually all of the physical world and the industrial domain. A thermal insulator is a poor conductor of heat and has a low thermal conductivity. In this paper we discuss how insulation is used in buildings and in manufacturing processes to prevent heat loss or heat gain. Although its primary purpose is economic, it also provides more accurate control of process temperatures and protection of personnel, and it prevents condensation on cold surfaces and the resulting corrosion. We also review the critical radius of insulation, the radius at which the heat loss is maximum and above which the heat loss reduces with increasing radius. Finally, we present the concept of selecting an economical insulation material and the optimum insulation thickness that gives the minimum total cost.
Keywords: Heat, Conduction, Convection, Heat Loss, Insulation
I. Introduction
Heat flow is an inevitable consequence of contact between objects of differing temperature. Thermal insulation provides a region in which thermal conduction is reduced, or in which thermal radiation is reflected rather than absorbed by the lower-temperature body. To change the temperature of an object, energy is required in the form of heat generation to increase the temperature, or heat extraction to reduce it. Once heat generation or extraction is terminated, a reverse flow of heat returns the temperature to ambient; maintaining a given temperature therefore requires considerable continuous energy, and insulation reduces this energy loss.
Heat may be transferred by three mechanisms: conduction, convection and radiation. Thermal conduction is the molecular transport of heat under the effect of a temperature gradient. Convection occurs in liquids and gases, where flow processes transfer heat. Free convection is flow caused by differences in density as a result of temperature differences; forced convection is flow caused by external influences (wind, ventilators, etc.). Thermal radiation occurs when thermal energy is emitted in a manner similar to light radiation.
Heat transfer through an insulation material occurs by conduction, while heat loss to or heat gain from the atmosphere occurs by convection and radiation. Materials with a low thermal conductivity are those with a high proportion of small voids containing air or gases. These voids are not big enough to transmit heat by convection or radiation, and therefore reduce the flow of heat; thermal insulation materials fall into this category. Thermal insulation materials may be natural substances or man-made.
II. The Need for Insulation
A thermal insulator is a poor conductor of heat and has a low thermal conductivity. Insulation is used in buildings and in manufacturing processes to prevent heat loss or heat gain. Although its primary purpose is economic, it also provides more accurate control of process temperatures and protection of personnel. It prevents condensation on cold surfaces and the resulting corrosion. Such materials are porous, containing a large number of dormant air cells. Thermal insulation delivers the following benefits: [1][2]
A. Energy Conservation
Conserving energy by reducing the rate of heat flow (fig 1) is the primary reason for insulating surfaces. Insulation materials that will perform satisfactorily in the temperature range of -268°C to 1000°C are widely available.
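To make the energy-conservation argument concrete, the sketch below estimates steady-state heat loss from an insulated pipe using the standard series thermal-resistance model (conduction through the insulation plus convection to ambient), and prints the critical radius r_cr = k/h mentioned in the abstract. This is a minimal Python illustration with assumed material properties and dimensions, not values taken from the paper.

import math

def pipe_heat_loss(r_in, r_out, k_ins, h_out, T_surface, T_ambient, length=1.0):
    """Heat loss (W) per `length` metres of pipe through a cylindrical insulation
    layer, using conduction and outside-convection resistances in series."""
    R_cond = math.log(r_out / r_in) / (2 * math.pi * k_ins * length)
    R_conv = 1.0 / (h_out * 2 * math.pi * r_out * length)
    return (T_surface - T_ambient) / (R_cond + R_conv)

# Assumed values for illustration only
k_ins = 0.05            # insulation conductivity, W/m.K
h_out = 10.0            # outside convective coefficient, W/m^2.K
r_pipe = 0.05           # bare pipe outer radius, m
T_s, T_a = 150.0, 25.0  # surface and ambient temperatures, deg C

print("Critical radius r_cr = k/h =", k_ins / h_out, "m")
for t in (0.01, 0.03, 0.05):  # insulation thicknesses, m
    q = pipe_heat_loss(r_pipe, r_pipe + t, k_ins, h_out, T_s, T_a)
    print(f"thickness {t*100:.0f} cm -> heat loss {q:.0f} W per metre")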
For more information Click Here
Tuesday, September 29, 2015
Study on the Ductile Characteristics of Hybrid Ferrocement Slab #IJIRST Journal
Department of Civil Engineering
Poojya Doddappa Appa college of Engineering, Kalaburagi – 585102, Karnataka, India
Abstract:- This paper presents the ductile characteristics of hybrid ferrocement slabs incorporating polypropylene fibres and GFRP sheet. A total of 9 slabs have been tested under two-point flexural loading. The size of each slab is 1000 mm (length) x 1000 mm (width) x 60 mm (thickness). The parameters studied in this investigation include the number of weld-mesh layers, the polypropylene fibre content (0.3%) and the GFRP sheet. From the studies, the load-carrying capacity and deformation are observed. The stiffness of the specimens with zero layers of weld mesh is lower than that of the specimens with two layers and three bundled layers. Further, the number of cracks reduces with increasing fibre content.
Keyword:- Ferrocement Slabs, GFRP Wrapping, Fibre Reinforcement, Ductility Factors, Crack Pattern
I. Introduction
The development of new technology in materials science is progressing rapidly. In the last two or three decades, much research has been carried out around the globe on how to improve the performance of concrete in terms of strength and durability. Consequently, plain concrete is no longer the only construction material of interest: a material consisting of wire meshes and cement mortar, called ferrocement, is one of the construction materials able to fill the need for light structures. A ferrocement composite consists of cement-sand mortar and single or multiple layers of steel wire mesh, producing elements of small thickness with high durability and, when properly shaped, high strength and rigidity. These thin elements can be shaped to produce structural members such as folded plates, flanged beams and wall panels for use in the construction of economical structures. Ferrocement elements are generally more ductile than conventional reinforced concrete elements, but the post-peak portion of the load-deflection curve in bending tests of ferrocement elements reveals that failure occurs either due to mortar failure in compression or due to failure of the extreme layers of mesh. From the above discussion it can be noted that research on the ductile behaviour of hybrid ferrocement slabs with fibres is needed; the present investigation is therefore aimed at studying the ductile behaviour of hybrid ferrocement slabs with and without the effect of fibres (see the sketch after this paragraph for how the ductility factor is obtained from a load-deflection curve). Compared with conventional reinforced concrete, ferrocement is reinforced in two directions and therefore has homogeneous, isotropic properties in both directions. Benefiting from its usually high reinforcement ratio, ferrocement generally has a high tensile strength and a high modulus of rupture. In addition, since the specific surface of reinforcement in ferrocement is one to two orders of magnitude higher than that of reinforced concrete, larger bond forces develop with the matrix, resulting in average crack spacing and width more than one order of magnitude smaller than in conventional reinforced concrete (Shah and Naaman 1997, Guerra et al 1978). Other appealing features of ferrocement include ease of prefabrication and low cost of maintenance and repair. Based on these advantages, typical applications of ferrocement are water tanks, boats, housing wall panels, roofs, formwork and sunscreens (Nimityongskul et al 1980 and Kadir 1997). Ferrocement has over the years gained respect for its superior performance and versatility. Ferrocement is a form of reinforced concrete using closely spaced multiple layers of mesh and/or small-diameter rods completely infiltrated with, or encapsulated in, mortar. In 1940 Pier Luigi Nervi, an Italian engineer, architect and contractor, first used ferrocement for the construction of aircraft hangars, boats, buildings and a variety of other structures. It is a very durable, cheap and versatile material.
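Since the paper reports ductility factors obtained from flexural tests, a common way to reduce a measured load-deflection curve is to compute the ductility factor as the ratio of deflection at peak load to deflection at the yield (first-crack) point, and the initial stiffness as the secant slope up to yield. The Python sketch below shows that reduction under those assumptions; the sample load-deflection points are hypothetical, not test data from this study.

def ductility_and_stiffness(deflections_mm, loads_kN, yield_index):
    """Ductility factor = deflection at peak load / deflection at yield point;
    initial stiffness = secant slope up to the yield point (kN/mm).
    `yield_index` marks the first-crack / yield point on the curve."""
    peak_i = max(range(len(loads_kN)), key=lambda i: loads_kN[i])
    ductility = deflections_mm[peak_i] / deflections_mm[yield_index]
    stiffness = loads_kN[yield_index] / deflections_mm[yield_index]
    return ductility, stiffness

# Hypothetical load-deflection curve for one slab
defl = [0.5, 1.2, 2.0, 4.5, 9.0, 14.0]     # mm
load = [4.0, 8.5, 11.0, 14.0, 15.5, 13.0]  # kN
mu, k = ductility_and_stiffness(defl, load, yield_index=2)
print(f"Ductility factor = {mu:.2f}, initial stiffness = {k:.2f} kN/mm")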
II. Experimental investigation
The experimental investigation consists of testing nine hybrid ferrocement slabs. The variables considered in the study are (i) the number of welded square mesh reinforcement layers, (ii) the percentage of polypropylene fibres in the mortar, and (iii) the number of GFRP wrapping layers. The details of the experimental studies, including material characterization, are presented below.
A. Materials Used
The materials that are used in this experiment are cement, steel fiber, fine aggregate, super plasticizer and water.
1) Cement:
OPC 53 grade cement from a single batch has been used throughout the course of the project work; the properties of the cement are shown in table 2.
2) Fine aggregates:
Only fine aggregate is used in ferrocement. The aggregate consists of well-graded fine aggregate (sand) that passes a 4.75 mm sieve. Since a salt-free source is recommended, sand should preferably be selected from river beds and be free from organic or other deleterious matter. Good consistency and compactibility are achieved by using a well-graded, rounded, natural sand having a maximum top size of about one-third of the smallest opening in the reinforcing mesh, to ensure proper penetration. The moisture content of the aggregate should be considered in the calculation of the required water.
For more information, follow the link below.
Thursday, September 24, 2015
Applications and Challenges of Human Activity Recognition using Sensors in a Smart Environment #IJIRST Journal
Department of Computer Science and Engineering
St. Joseph’s College of Engineering and Technology, Palai, Kerala, India
Abstract:- We currently use smart-phone sensors to detect physical activities. The sensors currently being used are the accelerometer, gyroscope, barometer, etc. Recently, smart phones equipped with a rich set of sensors have been explored as alternative platforms for human activity recognition. Automatic recognition of physical activities – commonly referred to as human activity recognition (HAR) – has emerged as a key research area in human-computer interaction (HCI) and mobile and ubiquitous computing. One goal of activity recognition is to provide information on a user's behavior that allows computing systems to proactively assist users with their tasks. Human activity recognition requires running classification algorithms originating from statistical machine learning techniques. Mostly, supervised or semi-supervised learning techniques are utilized, and such techniques rely on labeled data, i.e., data associated with a specific class or activity. In most cases the user is required to label the activities, which in turn increases the burden on the user. Hence, user-independent training and activity recognition are required to foster the use of human activity recognition systems, where the system can use training data from other users to classify the activities of a new subject.
Keyword:- Human Activity Recognition
I. Introduction
Mobile phones or smart phones are rapidly becoming the central computing and communication device in people's lives. Smart phones, equipped with a rich set of sensors, are explored as an alternative platform for human activity recognition in the ubiquitous computing domain. Today's smartphone not only serves as the key computing and communication mobile device of choice, but it also comes with a rich set of embedded sensors [1], such as an accelerometer, digital compass, gyroscope, GPS, microphone and camera. Collectively, these sensors are enabling new applications across a wide variety of domains, such as healthcare, social networks, safety, environmental monitoring and transportation, and give rise to a new area of research called mobile phone sensing. Human activity recognition systems using different sensing modalities, such as cameras or wearable inertial sensors, have been an active field of research. Besides the inclusion of sensors such as the accelerometer, compass, gyroscope, proximity, light, GPS, microphone and camera, the ubiquity and unobtrusiveness of the phones and the availability of different wireless interfaces, such as Wi-Fi, 3G and Bluetooth, make them an attractive platform for human activity recognition. Current research in activity monitoring and reasoning has mainly targeted elderly people, sportsmen and patients with chronic conditions.
The percentage of elderly people in today's societies keeps growing. As a consequence, supporting older adults who are losing cognitive autonomy and wish to continue living independently at home, as opposed to being forced to live in a hospital, has become an important problem. Smart environments have been developed in order to provide support to elderly people, or people with risk factors, who wish to continue living independently in their homes, as opposed to institutional care. To be a smart environment, the house should be able to detect what the occupant is doing in terms of daily activities. It should also be able to detect possible emergency situations. Furthermore, once such a system is completed and fully operational, it should be able to detect anomalies or deviations in the occupant's routine, which could indicate a decline in his or her abilities. In order to obtain accurate results, as much information as possible must be retrieved from the environment, enabling the system to locate and track the supervised person at each moment and to detect the position of the limbs and the objects the person interacts with or intends to interact with. Sometimes details like gaze direction or hand gestures [1] can provide important information in the process of analyzing the human activity. Thus, the supervised person must be located in a smart environment equipped with devices such as sensors, multiple-view cameras or speakers.
Although smart phone devices are powerful tools, they are still passive communication enablers rather than active assistive devices from the user's point of view. The next step is to introduce intelligence into these platforms to allow them to proactively assist users in their everyday activities. One method of accomplishing this is by integrating situational awareness and context recognition into these devices. Smart phones represent an attractive platform for activity recognition, providing built-in sensors and powerful processing units. They are capable of detecting complex everyday activities of the user (e.g. standing, walking, biking) or the device (e.g. calling), and they are able to exchange information with other devices and systems using a large variety of data communication channels.
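As a minimal illustration of the supervised HAR pipeline described above (windowed sensor data, hand-crafted features, a statistical classifier), the Python sketch below extracts simple per-window statistics from tri-axial accelerometer samples and trains a scikit-learn classifier. The window length, feature set, use of RandomForestClassifier and the synthetic data are illustrative assumptions, not the method of any specific paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc_xyz, win=128, step=64):
    """Split an (N, 3) accelerometer stream into overlapping windows and
    return mean, std and signal magnitude area per window."""
    feats = []
    for start in range(0, len(acc_xyz) - win + 1, step):
        w = acc_xyz[start:start + win]
        sma = np.mean(np.sum(np.abs(w), axis=1))  # signal magnitude area
        feats.append(np.hstack([w.mean(axis=0), w.std(axis=0), sma]))
    return np.array(feats)

# Hypothetical labelled recordings: synthetic "walking" vs "standing" streams
rng = np.random.default_rng(0)
walking = rng.normal(0, 2.0, size=(1024, 3))   # larger motion variance
standing = rng.normal(0, 0.2, size=(1024, 3))  # small variance

Xw, Xs = window_features(walking), window_features(standing)
X = np.vstack([Xw, Xs])
y = np.array([1] * len(Xw) + [0] * len(Xs))    # 1 = walking, 0 = standing

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("Training accuracy:", clf.score(X, y))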
For more information, follow the link below.
Tuesday, September 22, 2015
Dynamic Power Reduction in NOC by Encoding Techniques #IJIRST Journal
Abstract:- As technology improves and feature sizes shrink, the power dissipated by the links of a network-on-chip (NoC) starts to compete with the power dissipated by the other elements of the communication system, such as the routers and the network interfaces (NIs). We present a set of data-encoding techniques, using different schemes, to decrease the power dissipated by the links of a NoC, optimizing the on-chip communication system not only in terms of performance but also in terms of power. The idea presented in this paper is based on encoding the packets before they are injected into the network in such a way as to minimize both the self-switching activity and the coupling-switching activity on the NoC's links, which represent the main factors of power dissipation. These schemes are general and transparent with respect to the underlying NoC fabric, meaning that they do not require any change to the router or link architecture. They are evaluated on both synthetic and real traffic scenarios. The proposed schemes save energy and reduce power dissipation without any significant performance degradation and with little additional area in the NI.
Keywords: switching action, encoding, network-on-chip (NoC), low power, router, Network interfaces (NIs)
I. Introduction
Moving from one silicon technology node to the next results in faster and more efficient gates, but slower and more power-hungry wires. More than 50% of the total dynamic power is dissipated in interconnects in current processors, and this is expected to increase over the next several years. Global interconnect lengths do not scale down with smaller transistors and local wires: chip size remains relatively constant because chip functionality continues to grow, and as a result the RC delay increases exponentially. The RC delay of a 1-mm global wire at the minimum pitch already exceeds the intrinsic delay of a two-input NAND gate driving its fan-out. While raw computational horsepower seems to be unlimited, thanks to the ability to place more cores on a single silicon chip, scalability issues arise: making communication among the increasing number of cores efficient and reliable becomes the real problem. The NoC paradigm is recognized as the most viable way to tackle the scalability and variability issues that characterize ultra-deep sub-micrometer technologies.
Nowadays, on-chip communication issues are relevant, in some cases more relevant than computation-related issues. The communication subsystem increasingly impacts the traditional design objectives, including cost (i.e., silicon area), performance, power dissipation, energy consumption and reliability. As technology shrinks, an ever larger fraction of the total power budget of a complex many-core system-on-chip (SoC) is due to the communication subsystem.
Here we focus on techniques aimed at minimizing the power dissipated by the network links. The power dissipated in the links is as relevant as that dissipated by the NIs and routers, and it tends to increase as technology scales. We present a set of data-encoding schemes that operate on binary data at the flit level and on an end-to-end basis; this allows us to minimize both the self-switching activity and the coupling-switching activity on the links traversed by a packet. These encoding schemes are transparent with respect to the router implementation. They are presented and discussed at both the algorithmic and the architectural level, and they are assessed via simulation on synthetic and real traffic scenarios. The analysis considers different design metrics, including silicon area, energy consumption and power dissipation. From the results we can conclude that the proposed encoding schemes save power and energy without any major degradation of performance and with only a small overhead in the NIs.
II. Motivation and Related Work
The complexity of chips is growing every year; in the next few years, chips with 1000 cores are foreseen. Since the focus of this paper is to decrease the power dissipated by the links, which reduces the dynamic power, we review here prior work in the area of link power reduction. Existing techniques include the use of shielding, increased line-to-line spacing and repeater insertion; these techniques incur a large area overhead. Another method is data encoding, whose main focus is to reduce link power. Encoding techniques are categorized into two groups: in the first group, the power due to the self-switching activity of each bus line is decreased while the power dissipated by the coupling-switching activity is ignored.
Prior works concentrate on the different components of the interconnection network, such as the NIs, the routers and the links. Techniques that reduce the power dissipated by the links include shielding, increased line-to-line spacing and repeater insertion, all of which increase the silicon area. Data encoding is an additional technique employed to reduce link power dissipation. Data-encoding techniques can be classified into two classes. In the first class, encoding techniques concentrate on reducing the power due to the self-switching activity of individual bus lines while ignoring the power dissipated by their coupling-switching activity. In this class, bus invert (BI) and INC-XOR have been proposed for the case where random data patterns are transmitted through the lines, whereas gray code, T0, working-zone encoding and T0-XOR were suggested for correlated data patterns. Application-specific approaches have also been proposed.
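As an illustration of the first class of schemes, the bus-invert (BI) technique named above transmits either the data word or its bitwise complement, whichever toggles fewer bus lines relative to the previous cycle, and signals the choice on one extra invert line. The Python sketch below shows that decision rule; it is the generic textbook formulation, not the specific encoder proposed in this paper.

def bus_invert_encode(prev_bus, data, width=32):
    """Classic bus-invert coding: send `data` or its bitwise complement,
    whichever causes fewer transitions w.r.t. the previous bus value.
    Returns (bus_value, invert_bit, data-line transitions).
    Note: the invert line itself adds one more transition when it toggles."""
    mask = (1 << width) - 1
    toggles = bin((prev_bus ^ data) & mask).count("1")
    if toggles > width // 2:
        inverted = (~data) & mask
        return inverted, 1, bin((prev_bus ^ inverted) & mask).count("1")
    return data & mask, 0, toggles

# Hypothetical 8-bit example
prev = 0b00000000
word = 0b11111101  # would toggle 7 of 8 lines if sent as-is
bus, inv, t = bus_invert_encode(prev, word, width=8)
print(f"sent={bus:08b} invert={inv} transitions={t}")  # complement sent, 1 toggle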
This class of encoding is not appropriate for deep sub-micrometer technology nodes, where the coupling capacitance constitutes a major part of the total interconnect capacitance. The power consumption due to the coupling-switching activity then becomes a large fraction of the total power consumed in the links, making the aforementioned techniques, which ignore this contribution, inefficient. The works in the second class concentrate on reducing power dissipation by reducing the coupling-switching activity. Among these schemes, the switching activity is reduced by using many additional control lines; for example, the data bus width grows from 32 to 55 lines. Other proposed techniques use a smaller number of control lines, but the complexity of their decoding logic is high. The technique considered here works as follows: first, the data are both odd-inverted and even-inverted, and then transmission is performed using whichever inversion reduces the switching activity more. The coupling-switching activity is reduced compared with the other schemes, while a simple decoder is used and a higher activity reduction is achieved.
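To show the flavour of the odd/even inversion idea described above, the simplified Python sketch below picks, per flit, whichever of no inversion, odd-bit inversion, even-bit inversion or full inversion minimizes an activity cost. The cost used here is a crude, unweighted sum of self-transitions and opposite-direction transitions on adjacent lines; it is not the exact coupling-transition classification or decoder of the paper, only an illustrative assumption.

def transitions(prev, curr, width):
    """Self-switching: number of lines that toggle between consecutive flits."""
    return bin((prev ^ curr) & ((1 << width) - 1)).count("1")

def coupling(prev, curr, width):
    """Crude coupling-activity proxy: adjacent line pairs switching in opposite
    directions (one rises while its neighbour falls)."""
    count = 0
    for i in range(width - 1):
        a = ((curr >> i) & 1) - ((prev >> i) & 1)          # -1, 0 or +1
        b = ((curr >> (i + 1)) & 1) - ((prev >> (i + 1)) & 1)
        if a * b == -1:
            count += 1
    return count

def odd_even_invert_encode(prev, data, width=32):
    """Pick no/odd/even/full inversion minimizing self + coupling activity (assumes even width)."""
    odd_mask = int("10" * (width // 2), 2)   # bits 1, 3, 5, ... set
    even_mask = odd_mask >> 1                # bits 0, 2, 4, ... set
    full_mask = (1 << width) - 1
    candidates = {"none": data, "odd": data ^ odd_mask,
                  "even": data ^ even_mask, "full": data ^ full_mask}
    cost = {k: transitions(prev, v, width) + coupling(prev, v, width)
            for k, v in candidates.items()}
    choice = min(cost, key=cost.get)
    return candidates[choice], choice

prev, word = 0b10101010, 0b01010101
encoded, mode = odd_even_invert_encode(prev, word, width=8)
print(f"mode={mode} sent={encoded:08b}")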
For more information, follow the link below.
Tuesday, September 15, 2015
#IJIRST Journal
Top-rated international journal recommended by many universities
Impact Factor : 1.638
ISSN : 2349-6010
Publish Your Research article with ijirst.org
We Accept Only Quality Papers...
A no-profit, no-loss international journal to promote research scholars.
www.facebook.com/ijirst
submit Your Article : www.ijirst.org
Friday, September 11, 2015
A Novel High Resolution Adaptive Beam Forming Algorithm for High Convergence
Abstract: This paper introduces a new robust four-way LMS and variable-step-size NLMS beamforming algorithm to reduce interference in a smart antenna system. This algorithm is able to resolve signals arriving from narrowband sources propagating plane waves close to the array end-fire. Previously used adaptive algorithms with a fixed step size in NLMS suffer from a trade-off between the convergence rate and the steady-state MSE of the NLMS algorithm. This issue is solved by using four-way LMS and VSSNLMS, which improves the convergence behaviour. The proposed algorithm reduces the mean square error (MSE) and shows a faster convergence rate when compared to the conventional NLMS.
I. Introduction
A. Introduction
In today's world the number of mobile users is increasing day by day, so it is necessary to serve such a huge market of mobile users with high QoS even though the spectrum is limited. This becomes a major challenge for service providers. A major limitation in capacity and performance is co-channel interference caused by the increasing number of users, together with multipath fading and delay spread. Research efforts investigating effective technologies to mitigate such effects are ongoing, and among these methods adaptive antennas are the most promising technology. This work uses an adaptive antenna, which ensures high capacity while providing the same quality of service (QoS). In the usual scenario mobile towers employ a parabolic dish or a horn antenna, but if the SNR is low the signals have to be repeatedly retransmitted from the mobile station to the base station. An adaptive antenna uses an array of antennas that receives delayed versions of the electromagnetic wave and combines them to achieve a high SNR.
B. Problem Statement
Earlier, antenna radiation was directed based on frequency or time, so the spectrum was not utilized efficiently: as the number of users increases, the quality of service decreases. Hence, in this work the adaptive antenna framework is proposed and used as an efficient means to meet the quickly expanding traffic volume. The importance of various advanced antenna schemes for serving a large number of mobile users within the same amount of spectrum is discussed. This is achieved by separating the users with respect to direction.
II. Adaptive antenna
An adaptive antenna is one which adapts itself to pick up the user signal in any direction without user intervention. It basically performs a two-phase process:
- Direction detection Estimation (DDE) using a suitable algorithm and sensor data.
- Beam forming which forms a beam in the desired direction and nulls in the interference direction.
Direction Detection Estimation (DDE) methods are used to detect the incoming wave; the signals arriving from different parts of space can be processed to extract different types of data, including the direction of the desired incoming signal falling on the antenna array.
Beam forming is the process of forming the main beam in the desired direction and nulls in the directions of the jammers. The block diagram in Figure 1 shows an adaptive antenna structure with N antenna elements, DDE blocks and adaptive signal-processing algorithms that make the antenna system smart; the incoming signal is processed by the beamforming algorithms, and the figure also shows the main beam formed in the direction of the desired signal and nulls in the jammer directions.
Fig. 1: Adaptive Antenna
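The conventional NLMS baseline mentioned in the abstract adapts the array weights by normalizing the step size with the instantaneous input power, which is where the fixed-step trade-off between convergence speed and steady-state MSE arises. The Python sketch below implements that baseline NLMS weight update for a uniform linear array; the array geometry, signal model and parameter values are illustrative assumptions and not the paper's four-way LMS/VSSNLMS algorithm.

import numpy as np

def nlms_beamformer(X, d, mu=0.5, eps=1e-6):
    """Baseline NLMS adaptive beamforming.
    X: (num_snapshots, num_elements) complex array snapshots.
    d: (num_snapshots,) desired/reference signal.
    Returns the final weights and the error magnitude per snapshot."""
    w = np.zeros(X.shape[1], dtype=complex)
    errs = []
    for x, dk in zip(X, d):
        y = np.vdot(w, x)                                   # array output w^H x
        e = dk - y                                          # error against reference
        w = w + (mu / (eps + np.vdot(x, x).real)) * np.conj(e) * x
        errs.append(abs(e))
    return w, np.array(errs)

# Hypothetical 8-element half-wavelength ULA, desired source at 20 deg, interferer at -40 deg
N, M = 8, 2000
angles = np.deg2rad([20, -40])
steer = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(angles)))
s = np.sign(np.random.randn(M))                             # desired BPSK symbols
i = np.sign(np.random.randn(M))                             # interference symbols
noise = 0.05 * (np.random.randn(M, N) + 1j * np.random.randn(M, N))
X = np.outer(s, steer[:, 0]) + np.outer(i, steer[:, 1]) + noise

w, errs = nlms_beamformer(X, s)
print("final MSE:", np.mean(errs[-200:] ** 2))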
http://ijirst.org/Article.php?manuscript=IJIRSTV2I3036
ijirst.org
Tuesday, September 8, 2015
IJIRST Journal
Top-rated international journal recommended by many universities
Impact Factor : 1.638
ISSN : 2349-6010
Publish Your Research article with ijirst.org
We Accept Only Quality Papers...
A no-profit, no-loss international journal to promote research scholars.
www.facebook.com/ijirst
submit Your Article : www.ijirst.org
Monday, August 24, 2015
Automatic Vs. Selective Criteria based Policy Network Extraction over Routers Data
Abstract:- Policy networks are generally utilized by political scientists and economists to explain various financial and social phenomena, for example the development of associations between political actors or institutions at diverse levels of administration. The analysis of policy networks traditionally requires prolonged manual steps, including interviews and questionnaires. In this paper we propose an automatic procedure for estimating the relations between actors in policy networks using web documents and other digitally documented information gathered from the web. The proposed technique incorporates web-page information extraction and out-link analysis. The methodology is automatic and does not require any external information source other than the documents that relate to the political actors. The proposal assesses both engagement and disengagement, for both positive and negative (opposing) actor relations. The proposed algorithm is tested on political-science writing from the Routers document database collections. Performance is measured in terms of correlation and mean square error between the human-rated and the automatically extracted relations.
Keywords: Policy Networks, Social Networks, Relatedness Metrics, Similarity Metrics, Web Search, Policy Actors, Link Analysis
I. Introduction
The expression "network" is frequently used to describe groups of different types of actors who are connected together in political, social or economic concerns. Networks may be loosely organized but must be capable of spreading data or participating in collective action. The structure of these networks is frequently unclear or dynamic, or both. Nevertheless, constructing such networks is necessary because they reflect how modern society, culture and the economy are related; linkages between different organizations have become an important subject for many social scientists. The term policy network implies "a cluster of actors, each of which has an interest, or "stake" in a given…policy sector and the capacity to help determine policy success or failure" [1]. In other words, a policy network is defined "as a set of relatively stable relationships which are of non-hierarchical and interdependent nature linking a variety of actors, who share common interests with regard to a policy and who exchange resources to pursue these shared interests acknowledging that co-operation is the best way to achieve common goals" [3]. Analysts of governance often try to explain policy outcomes by examining how the networks that link stakeholders over policy planning and points of interest are organized in a specific sector. Policy networks are also acknowledged as an important analytical tool for analyzing the relations among the actors interacting with each other in a selected policy area, and they can also be used as a technique of social structure analysis. Overall, policy networks provide a useful toolbox for analyzing public policy-making [2]. Although policy networks are needed for the analysis of different relations, they are difficult to extract, because policymaking involves a large number and wide variety of actors, which makes the task very time-consuming and complex. Considering the importance of policy networks, and knowing that no computational technique is available for efficiently and automatically extracting them, in this paper we present an efficient approach for doing so.
II. Related work on policy network
The application of computational analysis to large datasets is gaining popularity, because most of the relevant documents are available in digital format and such analysis makes the process automated and fast. Since a policy network is a structure presenting the relations among actors who appear in documents as names or known words, and the sentences in the text describe the relations between them, the extraction technique in its basic form relies on text data mining; it can be considered an extension of text and web mining. For example, Michael Laver et al [14] presented a new technique for extracting policy relations from political texts that treats texts not as sentences to be analyzed but rather as data in the form of individual words. Kenneth Benoit et al [13] presented computer word scoring for the same task. Their experiment on the Irish election shows that a statistical analysis of the words in related texts is well able to describe the relations among the parties on key policy considerations. They also showed that such estimation does not require knowledge of the language in which the texts were written, because the method calculates mutual relations rather than the meaning of words. The WORDFISH scaling algorithm estimates policy positions using the word counts in the related texts; this method allows investigators to detect the positions of parties in one or multiple elections. An analysis of German political parties from 1990 to 2005, applying this technique to party manifestos, shows that the extracted positions reflect changes in the party system very precisely. In addition, the method allows investigators to inspect which words are significant for placing parties at opposite positions; the words with strong political associations are the best for differentiating between parties. As already discussed, the semantic difference between documents is important for characterizing their differences and is also useful in policy network extraction. Krishnamurthy Koduvayur Viswanathan et al [7] describe several text-based similarity metrics to estimate the relation between Semantic Web documents and evaluate these metrics for specific cases of similarity. Elias Iosif et al [6] presented web-based metrics for semantic similarity computation between words appearing in web documents. The context-based metrics retrieve web documents and then exploit the retrieved information for the words of interest. The algorithms can be applied to other languages and do not require any pre-annotated knowledge resources.
III. Similarity computation techniques in documents
Metrics that measure semantic similarity between words or terms can be classified into four main classes, depending on whether knowledge resources are used or not [5]:
- Supervised resource-based metrics, consulting only human-built knowledge resources, such as ontologies.
- Supervised knowledge-rich text-mining metrics, i.e., metrics that perform text mining while also relying on knowledge resources.
- Unsupervised co-occurrence metrics, i.e., unsupervised metrics that assume that the semantic similarity among words or terms can be expressed by an association ratio that is a measure of their co-occurrence.
- Unsupervised text-based metrics, i.e., metrics that are fully text-based and exploit the context or proximity of words or terms to compute semantic similarity.
The last two classes of metrics do not use any language resources or expert knowledge; both rely only on mutual relations, and in this sense the metrics are referred to as "unsupervised": no linguistically labeled, human-annotated data is needed to calculate the semantic distance between words or terms.
Resource-based and knowledge-rich text-mining metrics, however, do use such knowledge and are henceforth referred to as "supervised" metrics. Many resource-based methods have been proposed in the literature that use, e.g., WordNet for semantic similarity computation.
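To illustrate the unsupervised co-occurrence class listed above, the Python sketch below computes a simple association ratio between two actor names over a set of retrieved documents: the Jaccard co-occurrence of the documents mentioning each name, with a pointwise-mutual-information variant alongside. The document snippets and actor names are hypothetical placeholders, not data or metrics from the paper.

import math

def cooccurrence_similarity(term_a, term_b, documents):
    """Jaccard association ratio between two terms over a document set:
    |docs containing both| / |docs containing either|."""
    docs_a = {i for i, d in enumerate(documents) if term_a.lower() in d.lower()}
    docs_b = {i for i, d in enumerate(documents) if term_b.lower() in d.lower()}
    union = docs_a | docs_b
    return len(docs_a & docs_b) / len(union) if union else 0.0

def pmi(term_a, term_b, documents):
    """Pointwise mutual information of the two terms' document occurrence."""
    n = len(documents)
    pa = sum(term_a.lower() in d.lower() for d in documents) / n
    pb = sum(term_b.lower() in d.lower() for d in documents) / n
    pab = sum(term_a.lower() in d.lower() and term_b.lower() in d.lower()
              for d in documents) / n
    return math.log(pab / (pa * pb)) if pab > 0 else float("-inf")

# Hypothetical mini-corpus of web snippets about two policy actors
docs = [
    "Ministry of Finance and the Central Bank agreed on the reform package.",
    "The Central Bank criticised the proposal of the Ministry of Finance.",
    "Farmers union protests against the new trade policy.",
]
print(cooccurrence_similarity("Ministry of Finance", "Central Bank", docs))
print(pmi("Ministry of Finance", "Central Bank", docs))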
This paper is published in our journal; for more information about this paper, follow the link below.
http://ijirst.org/Article.php?manuscript=IJIRSTV2I2001
http://ijirst.org/index.php?p=SubmitArticle
Saturday, August 22, 2015
Modeling of Student’s Performance Evaluation
Abstract:- We propose a Fuzzy Model System (FMS) for student performance evaluation. A suitable fuzzy inference mechanism is discussed in the paper. We describe how fuzzy principles can be applied to student performance prediction. This model can be useful for educational organizations, educators, teachers and students. We propose this model especially for first-year students, who need extra monitoring of their performance. The modeling is based on past academic results and on information the students submitted earlier for admission purposes.
Keywords: Fuzzy Logic, Membership Functions, Academic Evaluation
I. Introduction
The success rate of any educational institute or organization may depend upon prior evaluation of student performance. Institutes use different methods for student performance evaluation; usually an educational organization uses a grading system based on academic performance, especially in higher education. Other key attributes can also be involved in evaluating student performance, such as communication skills, marketing skills, leadership skills, etc.
Performance evaluation provides information. The information generated by evaluation can help students, teachers and educators to take decisions [6]. In the corporate field, employers are highly concerned about all of the skills mentioned above. If an educational institute involves more than academic performance in evaluation, it will be beneficial for the students as well as for the organization.
A. Traditional Evaluation Method
Traditionally, student performance evaluation is done through academic performance such as class assignments, model exams, yearly exams, etc. This primary technique involves either a numerical value such as 6.0 to 8.0, which may be called a grade point average, or 60% to 80%, i.e. an average percentage. Some organizations also use linguistic terms such as pass, fail and supplementary for performance evaluation. Such evaluation schemes depend upon criteria decided by experienced evaluators, so the evaluation may be approximate.
The objective of this paper is to present a model which may be very useful for teachers, organizations and students. It helps to better understand the weak points which act as a barrier to a student's progress.
B. Method Used
Fuzzy logic can be described by fuzzy sets. It provides a reasonable method through an input and output process (fig. 1). A fuzzy set is defined by a class of objects with no sharp boundaries between membership and non-membership [1]. A fuzzy set is formed by combining linguistic variables with linguistic modifiers.
A linguistic modifier links a numerical value to a linguistic variable [2]. In our work the linguistic variable is performance and the linguistic modifiers are good, very good, excellent and outstanding.
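As a minimal sketch of the fuzzy machinery described above, the Python snippet below defines triangular membership functions for the linguistic modifiers named in the paper (good, very good, excellent, outstanding) over a 0-100 mark scale, and fuzzifies one crisp score. The breakpoints of the triangles and the example score are assumptions chosen for illustration, not the paper's actual membership functions.

def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed membership functions for the 'performance' linguistic variable (marks 0-100)
PERFORMANCE = {
    "good":        lambda x: triangular(x, 40, 55, 70),
    "very good":   lambda x: triangular(x, 55, 70, 85),
    "excellent":   lambda x: triangular(x, 70, 85, 95),
    "outstanding": lambda x: triangular(x, 85, 100, 101),
}

def fuzzify(mark):
    """Degree of membership of a crisp mark in each linguistic modifier."""
    return {label: round(mu(mark), 2) for label, mu in PERFORMANCE.items()}

print(fuzzify(78))  # e.g. partly 'very good', partly 'excellent'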
For more information, follow the link below.
http://ijirst.org/Article.php?manuscript=IJIRSTV2I3022
http://ijirst.org/index.php?p=SubmitArticle
ijirst.org