Monday, August 24, 2015

Automatic vs. Selective Criteria Based Policy Network Extraction over Reuters Data

Abstract:- Policy networks are widely used by political scientists and economists to explain various financial and social phenomena, for example the development of associations between political actors or institutions from different levels of government. The analysis of policy networks traditionally requires prolonged manual steps, including interviews and questionnaires. In this paper, we propose an automatic procedure for estimating the relations between actors in policy networks using web documents and other digitally documented information gathered from the web. The proposed technique incorporates web page information extraction and out-link analysis. The approach is fully automatic and does not require any external knowledge source beyond the documents that relate to the political actors. It assesses both engagement and disengagement, i.e., both positive and negative (opposing) actor relations. The proposed algorithm is tested on political science writing from the Reuters document collections. Performance is measured in terms of correlation and mean square error between the human-rated and the automatically extracted relations.
Keywords: Policy Networks, Social Networks, Relatedness Metrics, Similarity Metrics, Web Search, Policy Actors, Link Analysis

I.       Introduction

The expression "network" is frequently used to describe groups of various types of actors who are linked together in political, social, or economic concerns. Networks may be loosely organized, but they must be capable of spreading information or participating in collective action. The structure of these networks is often unclear, dynamic, or both. Nevertheless, constructing such networks is necessary because they reflect how modern society and the economy are interrelated. Linkages between different organizations have become an important subject for many social scientists. The term policy network implies "a cluster of actors, each of which has an interest, or "stake" in a given…policy sector and the capacity to help determine policy success or failure" [1]. In other words, a policy network can be defined "as a set of relatively stable relationships which are of non-hierarchical and interdependent nature linking a variety of actors, who share common interests with regard to a policy and who exchange resources to pursue these shared interests acknowledging that co-operation is the best way to achieve common goals" [3]. Analysts of governance often try to explain policy outcomes by examining how the networks that connect stakeholders across policy planning and points of interest are organized in a specific sector. Policy networks are also acknowledged as an important analytical tool for analyzing the relations among actors who interact with each other in a selected policy area, and they can likewise be used as a technique of social structure analysis. Overall, it can be said that policy networks provide a useful toolbox for analyzing public policy-making [2].
Although policy networks are valuable for analyzing such relations, they are difficult to extract because policymaking involves a large number and wide variety of actors, which makes the task very time-consuming and complex. Considering the importance of policy networks, and given that no computational technique is currently available for extracting them efficiently and automatically, in this paper we present an efficient approach for doing so.

II.    Related work on policy network

The application of computational analysis to large datasets has gained popularity in the recent past, both because most of the relevant documents are available in digital format and because it makes the process automated and fast. A policy network is a structure that presents the relations among actors, who appear in documents as names or known words, while the sentences in the text describe the relations between them; hence the extraction technique in its basic form relies on text data mining, or can be seen as an extension of text and web mining. For example, Michael Laver et al. [14] presented a new technique for extracting policy relations from political texts that treats texts not as sentences to be analyzed but rather as data in the form of individual words. Kenneth Benoit et al. [13] presented computer word scoring for the same task. Their experiment on an Irish election shows that a statistical analysis of the words in related texts is well able to describe the relations among the parties on key policy considerations. They also showed that such estimation does not require knowledge of the language in which the texts were written, because it calculates mutual relations rather than the meaning of words. The WORDFISH scaling algorithm estimates policy positions using word counts in the related texts. This method allows investigators to detect the positions of parties in one or multiple elections. An analysis of German political parties from 1990 to 2005, applying this technique to party manifestos, shows that the extracted positions reflect changes in the party system very precisely. In addition, the method allows investigators to inspect which words are significant for placing parties at opposite positions; words with strong political associations are the best for differentiating between parties.
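The word-scoring idea sketched above can be illustrated in a few lines. This is a minimal sketch, not the authors' implementation: the reference texts, their position values, and the averaging scheme are illustrative assumptions. Each word is scored by where its occurrences concentrate among reference texts of known position, and a new text is placed at the mean score of its known words.

```python
from collections import Counter

def word_scores(ref_texts, ref_positions):
    """Score each word as the average of the reference positions,
    weighted by how often the word occurs in each reference text."""
    counts = [Counter(t.lower().split()) for t in ref_texts]
    vocab = set().union(*counts)
    scores = {}
    for w in vocab:
        # relative frequency of w in each reference text
        freqs = [c[w] / sum(c.values()) for c in counts]
        total = sum(freqs)
        # probability that an occurrence of w came from each reference text
        scores[w] = sum(f / total * p for f, p in zip(freqs, ref_positions))
    return scores

def score_text(text, scores):
    """Position of a new text: mean score of its known words."""
    words = [w for w in text.lower().split() if w in scores]
    return sum(scores[w] for w in words) / len(words)

# Invented toy data: two reference texts at positions -1 and +1
refs = ["lower taxes free market enterprise", "public welfare social spending"]
ws = word_scores(refs, [-1.0, +1.0])
print(round(score_text("free market spending", ws), 2))  # between the poles
```

Note that, as the paper stresses, nothing here depends on the meaning or language of the words, only on their distribution across the reference texts.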
As already discussed, the semantic difference between documents is important for characterizing them and is also useful in policy network extraction. Krishnamurthy Koduvayur Viswanathan et al. [7] describe several text-based similarity metrics to estimate the relation between Semantic Web documents and evaluate these metrics for specific cases of similarity. Elias Iosif et al. [6] presented web-based metrics for semantic similarity computation between words that appear in web documents. The context-based metrics retrieve web documents and then exploit the retrieved information related to the words of interest. The algorithms can be applied to other languages and do not require any pre-annotated knowledge resources.

III.  Similarity computation techniques in documents

Metrics that measure semantic similarity between words or terms can be classified into four main classes, depending on whether knowledge resources are used or not [5]:
-        Supervised resource-based metrics, consulting only human-built knowledge resources, such as ontologies.
-        Supervised knowledge-rich text-mining metrics, i.e., metrics that perform text mining while also relying on knowledge resources.
-        Unsupervised co-occurrence metrics, i.e., unsupervised metrics that assume that the semantic similarity among words or terms can be expressed by an association ratio that is a measure of their co-occurrence.
-        Unsupervised text-based metrics, i.e., metrics that are fully text-based and exploit the context or proximity of words or terms to compute semantic similarity.
The last two classes of metrics use no language resources or expert knowledge; both rely solely on mutual relations. In this sense the metrics are referred to as "unsupervised": no linguistically labeled, human-annotated data is needed to calculate the semantic distance between words or terms.
Resource-based and knowledge-rich text-mining metrics, however, do use such knowledge and are henceforth referred to as "supervised" metrics. Many resource-based methods have been proposed in the literature that use, e.g., WordNet, for semantic similarity computation.
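A common instance of the third class (unsupervised co-occurrence metrics) is the pointwise mutual information (PMI) between two terms, estimated from document counts. The sketch below uses a small invented in-memory corpus in place of web hit counts; the documents and terms are illustrative assumptions, not data from the paper.

```python
import math

# Toy "corpus" standing in for retrieved web documents (invented)
docs = [
    "minister proposes new tax policy",
    "party opposes tax policy reform",
    "minister meets party leaders",
    "weather report rain sunday",
]

def df(term):
    """Number of documents containing the term (document frequency)."""
    return sum(1 for d in docs if term in d.split())

def df2(a, b):
    """Number of documents containing both terms (co-occurrence count)."""
    return sum(1 for d in docs if a in d.split() and b in d.split())

def pmi(a, b):
    """Pointwise mutual information estimated from document counts:
    log of observed co-occurrence rate over the rate expected if the
    terms occurred independently."""
    n = len(docs)
    joint = df2(a, b)
    if joint == 0:
        return float("-inf")  # terms never co-occur
    return math.log2((joint / n) / ((df(a) / n) * (df(b) / n)))

print(pmi("tax", "policy"))  # co-occurring terms: positive PMI
print(pmi("tax", "rain"))    # unrelated terms: never co-occur
```

Replacing `docs` membership tests with search-engine hit counts for the terms and their conjunction yields the web-based variant of the same metric; no annotated resources are needed in either case.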

This paper is published in our journal. For more information about this paper, follow the links below:


http://ijirst.org/Article.php?manuscript=IJIRSTV2I2001

http://ijirst.org/index.php?p=SubmitArticle




Saturday, August 22, 2015

Modeling of Student’s Performance Evaluation

Abstract:- We propose a Fuzzy Model System (FMS) for student performance evaluation. A suitable fuzzy inference mechanism is discussed in the paper. We describe how fuzzy principles can be applied to student performance prediction. This model can be useful for educational organizations, educators, teachers, and students. We propose this model especially for first-year students, who need extra monitoring of their performance. The modeling is based on past academic results and on information submitted earlier for admission purposes.

Keywords: Fuzzy Logic, Membership Functions, Academic Evaluation

I.       Introduction

The success rate of any educational institute or organization may depend on prior evaluation of students' performance. Institutes use different methods for evaluating student performance; usually an educational organization uses a grading system based on academic performance, especially in higher education. Other key points can also be involved in evaluating student performance, such as communication skills, marketing skills, leadership skills, etc.
Performance evaluation provides information, and the information generated by evaluation can help students, teachers, educators, etc. to make decisions [6]. In the corporate field, employers are highly concerned about all of the skills mentioned above. If an educational institute takes more than academic performance into account for evaluation, it will be beneficial for students as well as the organization.

A.      Traditional Evaluation Method

Traditionally, student performance evaluation is done through academic measures such as class assignments, model exams, yearly exams, etc. This primary technique involves either a numerical value, such as 6.0 to 8.0, which may be called a grade point average, or 60% to 80%, i.e., an average percentage. Some organizations also use linguistic terms like pass, fail, and supplementary for performance evaluation. Such evaluation schemes depend on criteria decided by experienced evaluators, so the evaluation may be approximate.
The objective of this paper is to present a model which may be very useful for teachers, organizations, and students. It helps in better understanding the weak points that act as barriers to a student's progress.

B.      Method Used

Fuzzy logic can be described by fuzzy sets. It provides a reasonable method for mapping inputs to outputs (fig. [1]). A fuzzy set is defined over a class of objects with no sharp boundaries between them [1]. A fuzzy set is formed by combining a linguistic variable with linguistic modifiers.
A linguistic modifier links a numerical value to a linguistic variable [2]. In our work the linguistic variable is performance, and the linguistic modifiers are good, very good, excellent, and outstanding.
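The link between numerical marks and the four linguistic modifiers can be sketched with triangular membership functions, a common choice in fuzzy systems. This is a minimal illustration: the break-points of each modifier are invented assumptions, not values from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic modifiers of the variable "performance" over marks 0-100
# (break-points below are illustrative assumptions)
modifiers = {
    "good":        lambda x: tri(x, 50, 60, 70),
    "very good":   lambda x: tri(x, 60, 70, 80),
    "excellent":   lambda x: tri(x, 70, 80, 90),
    "outstanding": lambda x: tri(x, 80, 95, 110),
}

def fuzzify(mark):
    """Degree of membership of a numerical mark in each modifier."""
    return {name: round(mu(mark), 2) for name, mu in modifiers.items()}

# A mark of 75 belongs partly to "very good" and partly to "excellent",
# rather than falling into exactly one grade band.
print(fuzzify(75))
```

An inference mechanism would then combine these membership degrees with rules over the other inputs (past results, admission information) to produce the predicted performance; overlapping modifiers are what let the model express a mark as, say, half "very good" and half "excellent" instead of forcing a crisp grade boundary.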

For more information, follow the links below.

http://ijirst.org/Article.php?manuscript=IJIRSTV2I3022

http://ijirst.org/index.php?p=SubmitArticle
