Conference Paper

Multi-Label Learning by Exploiting Label Dependency

MPS-Authors

Zhang,  K
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Zhang, M.-L., & Zhang, K. (2010). Multi-Label Learning by Exploiting Label Dependency. In B. Rao, B. Krishnapuram, A. Tomkins, & Q. Yang (Eds.), 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2010) (pp. 999-1008). New York, NY, USA: ACM Press.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BF42-5
Abstract
In multi-label learning, each training example is associated with a set of labels and the task is to predict the proper label set for the unseen example. Due to the tremendous (exponential) number of possible label sets, the task of learning from multi-label examples is rather challenging. Therefore, the key to successful multi-label learning is how to effectively exploit correlations between different labels to facilitate the learning process. In this paper, we propose to use a Bayesian network structure to efficiently encode the conditional dependencies of the labels as well as the feature set, with the feature set as the common parent of all labels. To make it practical, we give an approximate yet efficient procedure to find such a network structure. With the help of this network, multi-label learning is decomposed into a series of single-label classification problems, where a classifier is constructed for each label by incorporating its parental labels as additional features. Label sets of unseen examples are predicted recursively according to the label ordering given by the network. Extensive experiments on a broad range of data sets validate the effectiveness of our approach against other well-established methods.
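
The decomposition described in the abstract can be illustrated with a minimal Python sketch. This is an illustration of the idea only, not the paper's actual algorithm: it assumes the Bayesian-network structure over the labels has already been found (the paper's approximate structure-learning procedure is not reproduced here) and is supplied as a parents dictionary, and it uses scikit-learn's LogisticRegression as an arbitrary base learner. The names fit_label_classifiers, predict_labels, and parents are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def topological_order(parents):
    # parents: {label_index: [parent_label_indices]}; must contain an entry
    # for every label (an empty list if the label has no label parents).
    # Assumes the structure is a DAG, as a Bayesian network requires.
    order, visited = [], set()
    def visit(label):
        if label in visited:
            return
        visited.add(label)
        for p in parents[label]:
            visit(p)          # parents are appended before their children
        order.append(label)
    for label in parents:
        visit(label)
    return order

def fit_label_classifiers(X, Y, parents):
    # X: (n, d) feature matrix; Y: (n, q) binary label matrix.
    # One binary classifier per label; each label's parent labels are
    # appended to the feature set, using their true values at training time.
    order = topological_order(parents)
    models = {}
    for label in order:
        X_aug = np.hstack([X, Y[:, parents[label]]])
        models[label] = LogisticRegression(max_iter=1000).fit(X_aug, Y[:, label])
    return models, order

def predict_labels(X, models, order, parents):
    # Predict recursively in topological order, so each classifier sees the
    # already-predicted values of its parent labels as extra features.
    Y_pred = np.zeros((X.shape[0], len(order)), dtype=int)
    for label in order:
        X_aug = np.hstack([X, Y_pred[:, parents[label]]])
        Y_pred[:, label] = models[label].predict(X_aug)
    return Y_pred

# Example usage with a toy structure where label 2 depends on labels 0 and 1:
#   parents = {0: [], 1: [], 2: [0, 1]}
#   models, order = fit_label_classifiers(X_train, Y_train, parents)
#   Y_hat = predict_labels(X_test, models, order, parents)

Predicting in topological order mirrors the recursive prediction step described in the abstract: by the time a label's classifier is applied, all of its parent labels have already been predicted.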