iASSIST: An Intelligent Assistance System

Mr. Shah Sahil K. (ME-II Computer Engg.), Vidya Pratishthan's College of Engineering, Baramati.
Prof. Takale Sheetal A., Assistant Professor, Information Technology Department, Vidya Pratishthan's College of Engineering, Baramati.

Abstract
Good quality customer support is a need of everyone. Most companies provide customer support in the form of helpdesk/assistance systems. Helpdesk systems available today work on the principle of case-based reasoning. Such systems face the major challenge of maintaining an up-to-date case history for each and every customer problem. In the proposed system, we present the idea of utilising the results returned by a search engine as the case history. However, current search engines return keyword-matching results without considering the semantic relevance of the user query to the search engine results. Also, for a given keyword-based query, the user has to search down the list by checking each individual link till the desired result is obtained. This degrades the quality of service provided by search engines.

To address the aforementioned challenges, an Intelligent Assistance System, iASSIST, is proposed. It utilises search engine results as the case history of the user query, which resolves the problem of maintaining an up-to-date case history for each and every customer query. The proposed system ranks the search results based on their semantic relevance to the request. The semantic relevance of the search results to the user query is computed using NEC SENNA and WordNet. These relevant results are grouped into different document clusters based on the Minimum Description Length (MDL) principle and the Symmetric Non-negative Matrix Factorization (SNMF) algorithm. Each cluster is summarized using a request-focussed multi-document summarization technique to generate a concise solution. For performance analysis, the proposed system is evaluated by a user survey. The experiments conducted demonstrate the effectiveness of iASSIST in semantic text understanding, document clustering and summarization. The better performance of iASSIST benefits from the sentence-level semantic analysis and from clustering using the MDL principle and SNMF.

Key terms
Intelligent Helpdesk, Semantic Similarity, Web Search Results, Document Summarization

1. Introduction

An intelligent help desk system is the need of every individual. Many organizations use case-based help desk systems to improve the quality of customer service. For a given customer request, an intelligent helpdesk system tries to find earlier similar requests and the case history associated with the request. Helpdesk systems usually use databases to store past interactions between customers and companies. Interactions may be descriptions of a problem and recommended solutions. The major challenge faced by these help desk systems is the maintenance of an up-to-date case history. Maintaining an up-to-date case history for each and every problem is difficult and costly.

A search engine performs the task of intelligent help for all users of the internet. However, content on the Web and enterprise intranets is increasing day by day. The web is a vast collection of completely uncontrolled heterogeneous documents. It is huge, diverse, and dynamic. For a user keyword query, current web search engines return a list of pages with respect to the query. However, the information for a topic, especially for multi-topic queries in which individual query keywords occur relatively frequently in the document collection but rarely occur together in the same document, is often distributed among multiple physical pages. So the search engines are drowning in information, but starving for knowledge.

1.1. Existing System

Currently there are a number of helpdesk systems which try to find earlier similar requests and the case history associated with the customer request. These systems return solutions by keyword-based search techniques and are domain specific. However, these systems face two challenges: 1) Case retrieval measures: most case-based systems use traditional keyword-matching-based ranking schemes [12,13] for case retrieval and have difficulty capturing the semantic meanings of cases; and 2) Result representation: most case-based systems return a list of past cases ranked by their relevance to a new request, and customers have to go through the list and examine the cases one by one to identify their desired cases. Also, maintenance of an up-to-date case history is a major problem faced by these systems.

1.2. Motivation

• Help desk systems: Many industries use help desk systems/customer care for solving various customer queries. Companies provide the solution to the customer problem in three ways, viz. an online help desk system, a customer representative, or a customer care representative (telephonic enquiry). Many of the problems/queries solved by these systems are answered by referring to solutions for similar types of problems faced previously by customers, or by asking the user different questions related to the problem and narrowing (filtering) down the result/solution (case-based systems).

• Search engine: A search engine performs the task of intelligent help for all users of the internet. For a given user keyword query, current web search engines return a list of individual web pages. However, information for the query is often spread across multiple pages. The search engine results can be used as a data set providing solutions from different domains.

1.3. Proposed System

The proposed system addresses the challenges faced by present help desk systems and web search engines by developing an online helpdesk system: iASSIST. It automatically finds the problem-solution pattern from the web using search engines like Google, Yahoo, etc. For a given user query, iASSIST interacts with the search engine to retrieve the relevant solutions. These retrieved solutions are ranked based on their semantic similarity with the user query. Semantic similarity is based on semantic roles and semantic meanings. The semantically related documents are further grouped into clusters based on the Minimum Description Length (MDL) principle. Further, in order to support multi-topic queries, multi-document summarization is performed using the symmetric non-negative matrix factorization (SNMF) algorithm and a request-focused summarization technique.

1.4. Features of Proposed System

• It automatically finds the "problem-solution" pattern from a web search engine. There is no need to maintain an up-to-date case history, which enables the system to address queries from any domain.

• Semantic role labeling and a semantic dictionary are used to extract the semantics of sentences and of the query.

• For grouping semantically and contextually similar documents, a clustering algorithm based on the MDL principle and matrix factorization is used.

• It generates a concise description (summary) of the solution to the problem.

2. Related Work

2.1. Case Based Systems

There has been a major contribution of work in case-based recommender systems and decision guides, where the user provides a brief initial description of a problem. These systems use the initial information to retrieve the candidate set of cases that are similar to the given problem. A case-based system is a system that uses knowledge of past cases similar to a new customer request while finding a solution to that request. The working of a general case-based system is shown in Figure 1.
Example: An auto mechanic who fixes an engine by recalling another car that exhibited similar symptoms is using case-based reasoning.
Such existing systems can be described as follows:

Figure 1: Case-based problem-solving system [14]

2.1.1. Parameterized Search Engines [5]

This search engine is based on attributes rather than Boolean combinations of keywords. The search considers the preferences of users in all dimensions or various domains. This search engine helps to increase decision quality, decision confidence, perceived ease of use and perceived usefulness.

Drawbacks:
This model was developed for an online shopping system, but it considers very small domains tracing different decisions. The model was quantitative and structural, considering input, process and output variables, but it does not trace the processes themselves. Another approach would be to model the user as a Bayesian information processor. This approach would require the updating of probabilistic beliefs as users acquire information.

2.1.2. Conversational Recommender Systems with Feature Selection [10]

In these systems, given an initial user query, the recommender system asks the user to provide additional features describing the searched products, in order to generate questions/features that the user is likely to answer and which, if answered, would effectively reduce the result size of the initial query. Classical entropy-based feature selection methods can be effective in terms of result size reduction, but they select questions uncorrelated with user needs and therefore unlikely to be answered. Feature-selection methods that combine feature entropy with an appropriate measure of feature relevance can better capture questions relevant to the user and can avoid unwanted questions.

Drawbacks:
These systems require some background knowledge of user behaviour, such as feature popularity and feature probabilistic dependency, i.e. prior to problem solving, user preferences must be known so that proper decisions can be made.

2.1.3. Incremental Case Based Reasoning [4, 8]

Incremental Case-Based Reasoning (I-CBR) is an incremental case-retrieval technique based on information-theoretic analysis. The technique is incremental in the sense that it does not require the entire target case description to be available at the start, but in fact builds it up by asking focused questions of the user. The ordering of these questions reflects their power to discriminate effectively between the set of candidate cases at each step.

Drawbacks:
When the description of cases or items becomes complicated, these case-based systems suffer from the curse of dimensionality, and the similarity/distance between cases or items becomes difficult to measure. Furthermore, the similarity measurements used in these systems are usually based on keyword matching, which lacks the semantic analysis of customer requests and existing cases.

2.2. Database Search and Ranking [11]

In database search, many methods have been proposed to perform similarity search and rank the results of a query, such as context-sensitive ranking, which considers a user's preference for one item over another; automatic ranking of user query results, which uses TF-IDF [12,13] calculation to compute the relevance of the user query to previous cases; nearest neighbour search; etc.

Drawbacks:
Similar to the case-based systems, the similarity is measured based on keyword matching, which has difficulty understanding the text deeply, i.e. it does not consider the contextual relevance between user requests and stored past cases.

2.3. Clustering Search Results

Existing search engines often return a long list of search results, so clustering technologies are used in search result organization to minimize the user's effort in searching down the list. Such clustering has been implemented with a dynamic interface to a web search engine called the Grouper interface.

2.3.1. Grouper [4]

It is an interface to the results of the HuskySearch meta-search engine, which dynamically groups the search results into clusters labelled by phrases extracted from the snippets using different document clustering algorithms. However, existing document-clustering algorithms like suffix tree clustering (STC), web document clustering (snippet clustering), etc. do not consider the impact of the general and common information contained in the documents. In our proposed work, by filtering out this common information, the clustering quality can be improved, and better context organizations can then be obtained.

3. Programmer's Design

3.1. Mathematical Model

Table 1 and Table 2 give the mathematical model of the proposed system.

Table 1: Mathematical Model of Proposed System-I

Table 2: Mathematical Model of Proposed System-II

3.2. Data Flow Architecture

Figure 2 shows the system architecture of iASSIST. The system works in five modules: Preprocessing Module, Case Ranking Module, Document Clustering Module, Sentence Clustering Module and Sentence Cluster Summarization Module. As shown in the figure, the input to the system is a user query in the form of a question. The system retrieves relevant solutions or past cases from the search engine. Pre-processing of the user query and past cases involves removal of non-words; then each of the retrieved documents is split into sentences and passed through a semantic role parser for semantic role labelling. The Case Ranking module ranks the retrieved documents based on their sentence-level semantic similarity with the user query. Semantically ranked documents need to be grouped according to the context. Top-ranking documents are clustered using the Minimum Description Length (MDL) principle [1]. The Sentence Clustering Module groups sentences having similar meaning into a cluster using Symmetric Non-negative Matrix Factorization (SNMF) [3]. The Sentence Cluster Summarization module selects the most relevant sentences [2] from each cluster in order to form a concise summary, which is presented as a reference solution to the user.

Figure 2: System Architecture
3.2.1. Preprocessing Module

According to Luhn's idea [13], in order to remove the redundancy in documents as well as to reduce the document size, it is essential to consider only meaningful words. So, preprocessing of the problem-solution pattern involves removal of non-words, stop words and suffixes from both the user query and the documents retrieved from the search engine. In the proposed work, semantic role information contributes greatly to the decision on semantic similarity; therefore, preprocessing for the semantic similarity computation in this implementation does not involve removal of stop words and suffixes. Further, each sentence in the retrieved document is passed to a semantic role parser to find the semantic meaning of each sentence based on the frames (or verbs) in the sentence.
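
As an illustration only, the following Java sketch shows the kind of preprocessing described above for the problem-solution pattern: tokenization, removal of non-words and stop words, and crude suffix stripping. The stop-word list and suffix rules are abbreviated stand-ins, not the actual resources used in the implementation, and (as stated above) this step is skipped on the semantic-similarity path.

Listing (illustrative): preprocessing sketch

import java.util.*;

public class Preprocessor {
    // Abbreviated stop-word list; the real system would use a full list.
    private static final Set<String> STOP_WORDS = new HashSet<>(
            Arrays.asList("a", "an", "the", "is", "are", "of", "to", "in", "and", "for"));

    // Remove non-words, stop words and a few common suffixes from raw text.
    public static List<String> preprocess(String text) {
        List<String> tokens = new ArrayList<>();
        for (String raw : text.toLowerCase().split("\\s+")) {
            String word = raw.replaceAll("[^a-z0-9]", "");   // drop non-word characters
            if (word.isEmpty() || STOP_WORDS.contains(word)) {
                continue;                                     // skip stop words / empty tokens
            }
            tokens.add(stripSuffix(word));
        }
        return tokens;
    }

    // Very crude suffix removal standing in for a real stemmer (e.g. Porter).
    private static String stripSuffix(String word) {
        for (String suffix : new String[]{"ing", "ed", "es", "s"}) {
            if (word.length() > suffix.length() + 2 && word.endsWith(suffix)) {
                return word.substring(0, word.length() - suffix.length());
            }
        }
        return word;
    }

    public static void main(String[] args) {
        System.out.println(preprocess("The computer in the printing room needs to add memory."));
        // prints [computer, print, room, need, add, memory]
    }
}
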
Semantic Role Labeling
Semantic role labelling, sometimes also called shallow semantic parsing, is a task in natural language processing consisting of the detection of the semantic arguments associated with the predicate or verb of a sentence and their classification into their specific roles. A semantic role is "a description of the relationship that a constituent plays with respect to the verb in the sentence". For example, given a sentence like "Riya sold the book to Abbas", the task would be to recognize the verb "to sell" as representing the predicate, "Riya" as representing the seller (agent), "the book" as representing the goods (theme), and "Abbas" as representing the recipient.
This is an important step towards making sense of the meaning of a sentence. A semantic representation of this sort is at a higher level of abstraction than a syntax tree. For instance, the sentence "The book was sold by Riya to Abbas" has a different syntactic form, but the same semantic roles.

In order to analyze the user query and documents, the semantic roles of each sentence are computed by passing these sentences through the semantic role parser. This helps in categorizing the documents based on their semantic importance with respect to the user query. In iASSIST, NEC SENNA is used as the semantic role labeler, which is based on PropBank [9] semantic annotation. This semantic role labeler labels each verb in a sentence with its propositional arguments, and the labelling for each particular verb is called a "frame." Therefore, for each sentence, the number of frames generated by the parser equals the number of verbs in the sentence. A set of abstract arguments given by the labeler indicates the semantic role of each term in a frame. In general, Arg[m] represents the role of a term in the given sentence, where m indicates the argument number within the sentence. For example, Arg0 is the actor, and Arg-NEG indicates negation.

In general, a given sentence is parsed into different arguments by the semantic role labeler with the syntax shown below.

Figure 3: Semantic Role Syntax
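
Since Figure 3 itself is not reproduced here, the short Java sketch below illustrates the frame notation informally (it is not SENNA's actual output format): for the example sentence above, the single verb "sold" yields one frame mapping role labels to text spans.

Listing (illustrative): a PropBank-style frame for the example sentence

import java.util.*;

public class FrameExample {
    public static void main(String[] args) {
        // One frame per verb; roles follow the PropBank-style labels mentioned above
        // (Arg0 = actor/agent, Arg1 = theme, Arg2 = recipient, rel = the verb itself).
        Map<String, String> sellFrame = new LinkedHashMap<>();
        sellFrame.put("Arg0", "Riya");        // agent (the seller)
        sellFrame.put("rel",  "sold");        // the predicate
        sellFrame.put("Arg1", "the book");    // theme (the goods)
        sellFrame.put("Arg2", "to Abbas");    // recipient

        // "The book was sold by Riya to Abbas" has a different syntax
        // but would receive the same role assignments.
        System.out.println("Riya sold the book to Abbas -> " + sellFrame);
    }
}
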

3.2.2. Sentence-Level Semantic Similarity (SLSS) Computation and Top Relevant Document Ranking

To assist users in finding answers relevant to their query, once a new user query arrives, the documents retrieved from the search engine are required to be ranked based on their semantic importance to the input user query. In order to rank these documents, the similarity scores between the retrieved documents and the input user query are computed. Simple keyword-based similarity measurements, such as the cosine similarity, cannot capture the semantic similarity. Thus, this system uses a method to calculate the semantic similarity between the sentences in the documents retrieved from the search engine and the user query based on semantic role analysis. Along with this, the similarity computation uses WordNet in order to better capture semantically related words. Figure 4 gives the algorithmic design of SLSS calculation and top document ranking.

Algorithm: Sentence-Level Semantic Similarity Calculation and Top Document Ranking

Figure 4: Sentence Level Semantic Similarity Computation
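
Because the algorithm of Figure 4 is given only as a figure, the following Java sketch is a simplified stand-in for how the SLSS computation could be organized: terms that fill the same semantic role in a query frame and in a sentence frame are compared, and the best frame-to-frame match gives the sentence score. The termSim method uses exact string matching here, whereas the actual system consults WordNet; the aggregation details are assumptions, not the paper's exact procedure.

Listing (illustrative): role-based sentence similarity sketch

import java.util.*;

public class SlssSketch {
    // Stand-in for the WordNet-based term similarity used by iASSIST:
    // here, exact (case-insensitive) match only.
    static double termSim(String a, String b) {
        return a.equalsIgnoreCase(b) ? 1.0 : 0.0;
    }

    // Similarity of two frames: average similarity of terms that fill the same role.
    static double frameSim(Map<String, String> f1, Map<String, String> f2) {
        double sum = 0;
        int shared = 0;
        for (Map.Entry<String, String> e : f1.entrySet()) {
            String other = f2.get(e.getKey());
            if (other != null) {
                sum += termSim(e.getValue(), other);
                shared++;
            }
        }
        return shared == 0 ? 0.0 : sum / shared;
    }

    // Sentence-level semantic similarity: best match between any query frame
    // and any frame of the sentence.
    static double slss(List<Map<String, String>> queryFrames,
                       List<Map<String, String>> sentenceFrames) {
        double best = 0.0;
        for (Map<String, String> qf : queryFrames)
            for (Map<String, String> sf : sentenceFrames)
                best = Math.max(best, frameSim(qf, sf));
        return best;
    }

    public static void main(String[] args) {
        Map<String, String> query = new HashMap<>();
        query.put("rel", "add");
        query.put("Arg1", "memory");

        Map<String, String> sentence = new HashMap<>();
        sentence.put("rel", "add");
        sentence.put("Arg1", "memory");
        sentence.put("Arg2", "to your computer");

        // Documents would then be ranked by aggregating these sentence scores.
        System.out.println(slss(Collections.singletonList(query),
                                Collections.singletonList(sentence))); // 1.0
    }
}
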
3.2.3. Document Clustering Using MDL Principle

The identified top-ranking cases are all relevant to the user query. But these relevant cases may actually belong to different categories. For example, if the user query is "Give Information about Taj Mahal", the relevant cases may involve Taj Mahal as a tea brand, Taj as a five-star hotel, or the Taj Mahal as a white marble mausoleum, etc. Therefore, it is necessary to further group these cases into different contexts. The proposed system makes use of the Minimum Description Length (MDL) principle in order to cluster documents with similar meaning into one group. The MDL principle states that "the best model inferred from given data is the one which minimizes the length of the model, in bits, and the length of the encoding of the data, in bits." Figure 5 describes the detailed document clustering steps using the MDL approach.

Algorithm: Document Clustering using MDL Principle

MDL cost equation:

MDL Cost of C = (β / α) · (no. of 1s in M_TC + no. of 1s and −1s in M_Δ) + |D| · log2 |D|

where α and β are computed using the document-term matrix M_TD:

β = Σ_{x ∈ {0,1}} −Pr(x) · log2 Pr(x)

α = total no. of 1s in matrix M_TD

Algorithm AggloMDL(D)
Begin
1. Let C := {c1, c2, ..., cn}, with ci = ({di})
2. Select the best cluster pair (ci, cj) from C for merging and form a new cluster ck:
   (ci, cj, ck) := GetBestPair(C)
3. while (ci, cj, ck) is not empty do {
4.    C := C − {ci, cj} ∪ {ck}
5.    (ci, cj, ck) := GetBestPair(C);
6. }
7. return C
End

procedure GetBestPair(C)
Begin
1. MDLcost_min := ∞
2. for each pair (ci, cj) of clusters in C do
3. {
4.    (MDLcost, ck) := GetMDLCost(ci, cj, C);
      /* GetMDLCost returns the optimal MDL cost when ck is made by merging ci and cj */
5.    if MDLcost < MDLcost_min then
6.    {
7.       MDLcost_min := MDLcost;
8.       (ciB, cjB, ckB) := (ci, cj, ck);
9.    }
10. }
11. return (ciB, cjB, ckB)
End

procedure GetMDLCost(ci, cj, C)
Begin
1. Dk := Di ∪ Dj;
2. ck := (Dk);
3. C := C − {ci, cj} ∪ {ck};
4. MDL := approximate MDL cost of C by the MDL cost equation;
5. return (MDL, ck);
End

Figure 5: Document Clustering using MDL
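
To make the agglomerative procedure above concrete, the Java sketch below implements the same greedy merge loop with a simplified MDL cost: each cluster is encoded as a "template" of terms common to all of its documents plus per-document deviations, which only loosely approximates the M_TC and M_Δ encodings defined in [1]. It is a sketch under these assumptions, not the system's actual cost function.

Listing (illustrative): greedy MDL-style clustering sketch

import java.util.*;

public class AggloMdlSketch {
    static double log2(double x) { return Math.log(x) / Math.log(2); }

    // Hypothetical, simplified MDL cost: template terms (common to a cluster)
    // plus per-document deviations, scaled by beta/alpha as in the equation above.
    static double mdlCost(List<List<Set<String>>> clustering, int totalDocs,
                          double alpha, double beta) {
        double encodedCells = 0;
        for (List<Set<String>> cluster : clustering) {
            Set<String> template = new HashSet<>(cluster.get(0));
            for (Set<String> doc : cluster) template.retainAll(doc);   // common terms
            encodedCells += template.size();                            // "1s in M_TC"
            for (Set<String> doc : cluster)
                encodedCells += doc.size() - template.size();           // deviations ("M_delta")
        }
        return (beta / alpha) * encodedCells + totalDocs * log2(totalDocs);
    }

    // Greedy agglomeration in the spirit of AggloMDL: start from singleton clusters
    // and keep merging the pair that lowers the MDL cost, until no merge helps.
    static List<List<Set<String>>> aggloMdl(List<Set<String>> docs, int vocabSize) {
        double alpha = 0;
        for (Set<String> d : docs) alpha += d.size();                   // total 1s in M_TD
        double p1 = alpha / (docs.size() * (double) vocabSize);
        double beta = -(p1 * log2(p1) + (1 - p1) * log2(1 - p1));       // entropy of a cell

        List<List<Set<String>>> clusters = new ArrayList<>();
        for (Set<String> d : docs) clusters.add(new ArrayList<>(Collections.singletonList(d)));

        while (clusters.size() > 1) {
            double bestCost = mdlCost(clusters, docs.size(), alpha, beta);
            int bi = -1, bj = -1;
            for (int i = 0; i < clusters.size(); i++) {
                for (int j = i + 1; j < clusters.size(); j++) {
                    List<List<Set<String>>> trial = new ArrayList<>(clusters);
                    List<Set<String>> merged = new ArrayList<>(clusters.get(i));
                    merged.addAll(clusters.get(j));
                    trial.remove(j);                 // remove higher index first
                    trial.remove(i);
                    trial.add(merged);
                    double cost = mdlCost(trial, docs.size(), alpha, beta);
                    if (cost < bestCost) { bestCost = cost; bi = i; bj = j; }
                }
            }
            if (bi < 0) break;                       // no merge improves the cost
            List<Set<String>> merged = new ArrayList<>(clusters.get(bi));
            merged.addAll(clusters.get(bj));
            clusters.remove(bj);
            clusters.remove(bi);
            clusters.add(merged);
        }
        return clusters;
    }

    public static void main(String[] args) {
        List<Set<String>> docs = Arrays.asList(
                new HashSet<>(Arrays.asList("taj", "mahal", "marble")),
                new HashSet<>(Arrays.asList("taj", "mahal", "agra")),
                new HashSet<>(Arrays.asList("taj", "hotel", "mumbai")));
        System.out.println(aggloMdl(docs, 6).size() + " clusters");     // prints "2 clusters"
    }
}
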
3.2.4. Clustering Using Symmetric Non-Negative Matrix Factorization (SNMF) Algorithm

Once document clusters of similar context are obtained by the MDL clustering algorithm, in order to generate a summary by extracting important sentences (extractive summarization), the pairwise similarity between sentences in the document clusters needs to be considered. To achieve this, sentences with similar meaning (having a higher value of similarity) are grouped into sentence clusters.

Most clustering algorithms deal with a rectangular data matrix representation, i.e. either a document-term matrix or a sentence-term matrix. If such a representation is considered, it will not capture the pairwise similarity between neighbouring sentences. In this paper, for clustering the sentences we use a sentence similarity matrix (sentence-sentence matrix), which better captures the pairwise similarity. We use the symmetric non-negative matrix factorization (SNMF) algorithm to find the sentence clusters.

Non-negative Matrix Factorization (NMF)
It is a group of algorithms in multivariate analysis and linear algebra where a matrix V is factorized into (usually) two matrices W and H such that nmf(V) → WH. The factorization of matrices is generally non-unique, and here it is carried out with the constraint that the factors W and H must be non-negative, i.e., all elements must be equal to or greater than zero. This factorization is carried out in order to extract important objects. As the input matrix is symmetric, we use the SNMF algorithm here. The stepwise procedure to cluster sentences using SNMF is shown in Figure 6.

Algorithm: Symmetric Non-negative Matrix Factorization (SNMF)

Figure 6: Sentence Clustering using SNMF
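
Because the SNMF procedure in Figure 6 is given as a figure, the sketch below shows one common multiplicative-update formulation of symmetric NMF: the sentence-similarity matrix A is factorized as approximately H·H^T with H non-negative, and each sentence is assigned to the cluster with the largest entry in its row of H. The specific update rule and initialization are assumptions; the variant used in [3] may differ in detail.

Listing (illustrative): symmetric NMF sketch

public class SnmfSketch {
    // Factorize a symmetric, non-negative similarity matrix A (n x n) as A ~ H * H^T,
    // H >= 0 (n x k), using the multiplicative update
    // H <- H * ((1 - beta) + beta * (A*H) / (H*H^T*H)), a common choice for
    // minimizing ||A - H*H^T||_F^2.
    static double[][] snmf(double[][] a, int k, int iterations) {
        int n = a.length;
        double beta = 0.5;
        double[][] h = new double[n][k];
        java.util.Random rnd = new java.util.Random(42);
        for (int i = 0; i < n; i++)
            for (int j = 0; j < k; j++)
                h[i][j] = rnd.nextDouble() + 0.01;          // small positive start

        for (int it = 0; it < iterations; it++) {
            double[][] ah = multiply(a, h);                  // A * H        (n x k)
            double[][] hth = multiply(transpose(h), h);      // H^T * H      (k x k)
            double[][] hhth = multiply(h, hth);              // H * H^T * H  (n x k)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < k; j++)
                    h[i][j] *= (1 - beta) + beta * ah[i][j] / (hhth[i][j] + 1e-12);
        }
        return h;
    }

    static double[][] multiply(double[][] x, double[][] y) {
        double[][] out = new double[x.length][y[0].length];
        for (int i = 0; i < x.length; i++)
            for (int p = 0; p < y.length; p++)
                for (int j = 0; j < y[0].length; j++)
                    out[i][j] += x[i][p] * y[p][j];
        return out;
    }

    static double[][] transpose(double[][] x) {
        double[][] t = new double[x[0].length][x.length];
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < x[0].length; j++)
                t[j][i] = x[i][j];
        return t;
    }

    public static void main(String[] args) {
        // Two obvious blocks of mutually similar sentences.
        double[][] sim = {
                {1.0, 0.9, 0.1, 0.0},
                {0.9, 1.0, 0.0, 0.1},
                {0.1, 0.0, 1.0, 0.8},
                {0.0, 0.1, 0.8, 1.0}};
        double[][] h = snmf(sim, 2, 200);
        for (double[] row : h) {
            int cluster = row[0] >= row[1] ? 0 : 1;          // argmax over the k = 2 columns
            // With this clearly blocked matrix, sentences 0,1 and 2,3 typically
            // end up in separate clusters (label ordering may vary).
            System.out.println("cluster " + cluster);
        }
    }
}
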
3.2.5. Summarization of Each Sentence Cluster

Once the sentence clusters are obtained, there is a need to generate a concise summary by extracting important sentences from these clusters. In order to generate a reference solution, this system performs multi-document summarization to generate a concise solution (summary) for each sentence cluster. The summarization method we use is extractive summarization with the main focus on the customer request (request-focused extractive summarization).

While generating a concise solution from multiple documents, some issues arise:

1. The information contained in different documents often overlaps with each other; therefore, it is necessary to find an effective way to merge the documents while recognizing and removing redundancy.
2. Identifying important discrimination between documents and covering the informative content as much as possible.

To resolve the aforementioned issues, we use the technique of selecting only semantically important sentences using a measure combining the internal information (the computed similarity between sentences in the sentence cluster) and the external information (the input query by users).

Algorithm: Within-Cluster Sentence Selection

After grouping the sentences into clusters by the SNMF algorithm,
1. Remove the noisy clusters (clusters containing fewer than three sentences).
2. Then, in each sentence cluster, rank the sentences based on the sentence score calculation shown in the following equations. The score of a sentence measures the importance of the sentence for inclusion in the final concise solution (summary).

Score(Si) = λ · F1(Si) + (1 − λ) · F2(Si)

Internal similarity measure:
F1(Si) = (1 / (N − 1)) · Σ_{Sj ∈ Ck, Sj ≠ Si} Sim(Si, Sj)

External similarity measure:
F2(Si) = Sim(Si, request)

where F1(Si) measures the average similarity score between sentence Si and all other sentences in cluster Ck, and N is the number of sentences in Ck. F2(Si) represents the similarity between sentence Si and the input request. λ (the weight parameter) is set to 0.7 by trial and error. A high value of λ indicates that more weight is given to internal similarity.

Table 3: Algorithm: Within-Cluster Sentence Selection
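
A compact Java rendering of the scoring step above is given below. The sim function is a word-overlap stand-in for the sentence-level semantic similarity actually used by iASSIST; λ = 0.7 follows the value stated above, and the example sentences are made up for illustration.

Listing (illustrative): within-cluster sentence scoring sketch

import java.util.*;

public class SentenceScorer {
    // Stand-in for the sentence-level semantic similarity used by iASSIST;
    // here, word-overlap (Jaccard) only.
    static double sim(String a, String b) {
        Set<String> sa = new HashSet<>(Arrays.asList(a.toLowerCase().split("\\W+")));
        Set<String> sb = new HashSet<>(Arrays.asList(b.toLowerCase().split("\\W+")));
        Set<String> inter = new HashSet<>(sa);
        inter.retainAll(sb);
        Set<String> union = new HashSet<>(sa);
        union.addAll(sb);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    // Score(Si) = lambda * F1(Si) + (1 - lambda) * F2(Si), with lambda = 0.7 as in the paper.
    static double score(int i, List<String> cluster, String request, double lambda) {
        double f1 = 0.0;                            // internal measure: average similarity
        for (int j = 0; j < cluster.size(); j++)    // to the other sentences in the cluster
            if (j != i) f1 += sim(cluster.get(i), cluster.get(j));
        if (cluster.size() > 1) f1 /= (cluster.size() - 1);
        double f2 = sim(cluster.get(i), request);   // external measure: similarity to request
        return lambda * f1 + (1 - lambda) * f2;
    }

    public static void main(String[] args) {
        List<String> cluster = Arrays.asList(
                "Open the case and insert the memory module into a free slot",
                "Add memory by inserting a module into an empty slot",
                "Restart the computer after adding memory");
        String request = "The computer in the printing room needs to add memory";
        // Clusters with fewer than three sentences would already have been
        // discarded as noise before this scoring step, as stated above.
        for (int i = 0; i < cluster.size(); i++)
            System.out.printf("Score(S%d) = %.3f%n", i, score(i, cluster, request, 0.7));
    }
}
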
After extracting the sentences from the different sentence clusters, a concise reference solution set is generated for a given user query. A facility for visiting the web page/document from which a sentence in the summarized reference solution was extracted is also provided. Figure 7 shows the flow of the proposed system.

Figure 7: Flow of Proposed System

4. Results and Discussion

In this section, the results obtained by iASSIST for different user queries are presented. This section also deals with performance analysis and comparison of iASSIST with current helpdesk systems. All the classes in the proposed system were coded and compiled in Java 1.7. The obtained results might differ slightly for other settings. All tests were carried out on an Intel Core i3 CPU at 2.27 GHz with 4 GB RAM under the MS Windows 7 (64-bit) operating system. In the proposed system, search engine results are used as the data set, serving as the case history for the user query.
In the set of experiments, questions/queries were randomly selected from different contexts, and the search results returned by the search engine were used as the dataset. During the user survey, the user was asked to manually generate a solution for the selected queries by referring to the dataset. The sentences in the solution generated by the user are considered the relevant sentence set.

In this section, some illustrative scenarios are presented, in which the proposed request-focused case-ranking results are analyzed against user-evaluated summarization, which is assumed to have high accuracy.

Scenario 1: Give information about taj mahal.
Table 4 shows the concise solution generated by iASSIST and the manually evaluated summary, respectively. For iASSIST, the word "give" is a verb, and the corresponding semantic role is "rel." Therefore, the cases related to the keyword "give" will have a lower similarity score compared to the cases containing actual information about the Taj Mahal.

Table 4: Top-ranking Summary Samples by Manual Evaluation and iASSIST in Scenario I

Scenario 2: The computer in the printing room needs to add memory.
In this scenario (Table 5), the search engine takes "printing" as a keyword and returns many cases related to printing or printers as the search results. Obviously, these are not results which are useful to the user. In iASSIST, while ranking different cases, the semantic role of the word "printing" is the location tag, which decides that the cases related to "printing" will not be retrieved. In this case, more importance is given to the term "add", as its semantic role is rel (verb). This helps in returning cases which are related to how to add memory to a computer.

Table 5: Top-ranking Summary Samples by Manual Evaluation and iASSIST in Scenario II

The performance of the proposed system is analysed by comparing the solution generated by iASSIST with the results of a standard automated summarization tool. The performance of iASSIST is measured using the standard IR measures precision and recall:

Recall = |Sman ∩ Ssys| / |Sman|

Precision = |Sman ∩ Ssys| / |Ssys|

where Sman is the set of sentences selected by manual evaluation and Ssys is the set of sentences selected by iASSIST in the final summary. We assume that the sentences selected by the user during manual evaluation are always relevant from the user's perspective; thus, Sman is considered the relevant sentence set.
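
The evaluation measures above reduce to simple set operations; the sketch below computes them for hypothetical sentence sets (the sentence identifiers are made up for illustration).

Listing (illustrative): precision and recall computation

import java.util.*;

public class PrecisionRecall {
    // Precision = |Sman ∩ Ssys| / |Ssys|,  Recall = |Sman ∩ Ssys| / |Sman|
    static double[] evaluate(Set<String> sman, Set<String> ssys) {
        Set<String> inter = new HashSet<>(sman);
        inter.retainAll(ssys);
        double precision = ssys.isEmpty() ? 0.0 : (double) inter.size() / ssys.size();
        double recall = sman.isEmpty() ? 0.0 : (double) inter.size() / sman.size();
        return new double[]{precision, recall};
    }

    public static void main(String[] args) {
        // Hypothetical example: 4 of the 5 system-selected sentences are relevant,
        // and the manual summary contains 6 relevant sentences in total.
        Set<String> sman = new HashSet<>(Arrays.asList("s1", "s2", "s3", "s4", "s5", "s6"));
        Set<String> ssys = new HashSet<>(Arrays.asList("s1", "s2", "s3", "s4", "s9"));
        double[] pr = evaluate(sman, ssys);
        System.out.printf("precision = %.2f, recall = %.2f%n", pr[0], pr[1]); // 0.80, 0.67
    }
}
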
Table 6 shows the precision and recall values for sample user queries. Figures 8 and 9 show the average precision and recall of the two techniques. Graphically, recall and precision for different user queries are shown in Figures 10 and 11. The higher precision and recall values of iASSIST as compared to automated summarization tools demonstrate that the semantic similarity calculation can better capture the meanings of the user requests and the case documents returned by the search engine. A comparison of the proposed iASSIST system with current helpdesk systems is shown in Table 7.

From the analysis, it is observed that user satisfaction can be improved by capturing semantically related cases rather than only keyword-matched cases. From the values of recall and precision obtained for the sample scenarios, we conclude that combining the MDL principle, which groups documents according to different contexts, with the SNMF clustering algorithm can help users to easily find their desired solutions from multiple physical pages. The problem of maintaining an up-to-date history of past cases is solved by making use of the search engine as a database. Also, the user can query a problem related to any domain.

Table 6: Performance Analysis of iASSIST

Table 7: Comparison of iASSIST with current helpdesk systems

Figure 8: Precision of retrieved solutions

Figure 9: Recall of retrieved solutions

Figure 10: Precision Analysis for sample user queries

Figure 11: Recall Analysis for sample user queries

5. Conclusion

The proposed system presents a new approach to the problem of intelligent help desk systems and addresses the problem of search result summarization. The proposed iASSIST system provides its users with a single point of access to their problems by providing solutions from different domains. The system automatically finds the problem-solution pattern for a new request given by the user by making use of the search results returned by the search engine. The use of semantic case ranking, MDL clustering and SNMF with request-focused multi-document summarization helps to improve the performance of iASSIST. In this work, we presented a new technique in which text documents can be clustered using the MDL principle. The basic idea of clustering using MDL was originally applied for clustering web pages and extracting templates [1]. We adapted this technique in order to cluster the text documents returned from the search engine. As the proposed system uses search engine results as the case history for the user query, the problem of maintaining an updated case history for each and every problem is automatically resolved. As compared to keyword-based document similarity and summarization methods, the proposed method is efficient in extracting semantic information. This in turn contributes to improving the overall result of summarization.
References

[1] C. Kim and K. Shim, "TEXT: Automatic Template Extraction from Heterogeneous Web Pages," IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 4, April 2011.

[2] D. Wang, T. Li, S. Zhu, and Y. Gong, "iHelp: An Intelligent Online Helpdesk System," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 41, no. 1, February 2011.

[3] D. Wang, S. Zhu, T. Li, and C. Ding, "Multi-document summarization via sentence-level semantic analysis and symmetric matrix factorization," in Proc. SIGIR, 2008, pp. 307-314.

[4] R. Agrawal, R. Rantzau, and E. Terzi, "Context-sensitive ranking," in Proc. SIGMOD, 2006, pp. 383-394.

[5] A. A. Kamis and E. A. Stohr, "Parametric search engines: What makes them effective when shopping online for differentiated products?" Inf. Manage., vol. 43, no. 7, pp. 907-918, Oct. 2006.
[6] C. Ding, T. Li, W. Peng, and H. Park, "Orthogonal nonnegative matrix t-factorizations for clustering," in Proc. 12th ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, 2006, pp. 126-135.

[7] T. Li and C. Ding, "The relationships among various nonnegative matrix factorization methods for clustering," in Proc. 6th ICDM, 2006, pp. 362-371.

[8] D. Bridge, M. H. Goker, L. Mcginty, and B. Smyth, "Case-based recommender systems," Knowl. Eng. Rev., vol. 20, no. 3, pp. 315-320, Sep. 2005.

[9] M. Palmer, P. Kingsbury, and D. Gildea, "The proposition bank: An annotated corpus of semantic roles," Comput. Linguist., vol. 31, no. 1, pp. 71-106, Mar. 2005.

[10] N. Mirzadeh, F. Ricci, and M. Bansal, "Feature selection methods for conversational recommender systems," in Proc. IEEE Int. Conf. e-Technology, e-Commerce and e-Service, 2005, pp. 772-777.

[11] A. Leuski and J. Allan, "Improving interactive retrieval by combining ranked list and clustering," in Proc. RIAO, 2000, pp. 665-681.

[12] R. Baeza-Yates and B. Ribeiro-Neto, Modern Information Retrieval. New York: ACM Press, 1999.

[13] C. J. van Rijsbergen, Information Retrieval (www.dcs.gla.ac.uk).

[14] Web Resources
