@article{Srihari1995,
abstract = {The interaction of textual and photographic information in an integrated
text/image database environment is being explored. Specifically,
our research group has developed an automatic indexing system for
captioned pictures of people; the indexing information and other
textual information is subsequently used in a content-based image
retrieval system. Our approach presents an alternative to traditional
face identification systems; it goes beyond a superficial combination
of existing text-based and image-based approaches to information
retrieval. By understanding the caption accompanying a picture, we
can extract information that is useful both for retrieving the picture
and for identifying the faces shown. In designing a pictorial database
system, two major issues are (1) the amount and type of processing
required when inserting new pictures into the database and (2) efficient
retrieval schemes for query processing. Our research has focused
on developing a computational model for understanding pictures based
on accompanying descriptive text. Understanding a picture can be
informally defined as the process of identifying relevant people
and objects. Several current vision systems employ the idea of top-down
control in picture understanding. We carry the notion of top-down
control one step further, exploiting not only general context but
also picture-specific context.},
author = {Srihari, R.K.},
biburl = {https://www.bibsonomy.org/bibtex/2f4683d30283a53249c56a1e42cd82dca/mozaher},
doi = {10.1109/2.410153},
issn = {0018-9162},
journal = {Computer},
keywords = {automatic indexing, captioned images, captions, content-based retrieval, computational model, database insertion, descriptive text, face identification, human face recognition, image databases, information extraction, information retrieval, integrated text/image environment, photography, Piction, pictorial databases, picture understanding, picture-specific context, query processing, retrieval schemes, semantics, top-down control, vision},
month = sep,
number = 9,
pages = {49--56},
title = {Automatic indexing and content-based retrieval of captioned images},
volume = 28,
year = 1995
}