CWIS (pronounced see-wis) is software to assemble, organize, and share collections of data about resources, like Yahoo! or Google Directory but conforming to international and academic standards for metadata. CWIS was specifically created to help build collections of Science, Technology, Engineering, and Math (STEM) resources and connect them into NSF's National Science Digital Library, but can be (and is being) used for a wide variety of other purposes.
Some of the features of CWIS include:
* resource annotations and ratings (a la Amazon)
* keyword searching (with phrase and exclusion support a la Google)
* fielded searching
* recommender system (a la Amazon)
* OAI 2.0 export (with oai_dc and nsdl_dc schemas)
* RSS feed support
* integrated metadata editing tool
* user-definable schema (comes with full qualified Dublin Core)
* prepackaged taxonomies (includes GEM Subject taxonomy)
* user interface themes
* turnkey installation
CWIS also separates functionality (PHP) from appearance (HTML), making it relatively easy to customize for your own site.
Hibernate Annotations are my preferred way to map entity classes: they don't require an external mapping file (keeping the mapping info in your Java files), they are fully integrated with all of Hibernate's mapping capabilities, and the Hibernate documentation encourages this kind of configuration because it's more efficient.
Annotation-driven mapping in Hibernate uses the standard JPA annotations and introduces some Hibernate-specific extensions for features the JPA standard doesn't cover. You can find a full reference in the official documentation.
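As a minimal sketch of what annotation-driven mapping looks like, here is a hypothetical entity class (the `Book` class, table, and column names are illustrative, not from any real project; depending on your Hibernate version the annotation package is `javax.persistence` or `jakarta.persistence`):

```java
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

// The mapping lives directly on the class -- no external XML file needed.
@Entity
@Table(name = "books")
public class Book {

    // Primary key, generated by the database (e.g. an auto-increment column).
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // Maps the field to the "title" column; NOT NULL is enforced in the schema.
    @Column(name = "title", nullable = false, length = 255)
    private String title;

    public Long getId() { return id; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}
```

Hibernate picks such classes up at startup (via configuration or classpath scanning) and derives the schema and SQL from the annotations alone.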
This paper presents an approach to semi-automate photo annotation. Instead of using content-recognition techniques, this approach leverages context information available at the scene of the photo, such as time and location, in combination with existing photo annotations to provide suggestions to the user. An algorithm exploits a number of technologies, including the Global Positioning System (GPS), the Semantic Web, Web services, and online social networks, considering all available information and making a best-effort attempt to suggest both the people and the places depicted in the photo. The user then selects which of the suggestions are correct to annotate the photo. This process accelerates photo annotation dramatically, which in turn aids photo search for the wide range of query tools that currently trawl the millions of photos on the Web.
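The core idea of suggesting places from context can be sketched as follows. This is a hypothetical simplification, not the paper's actual algorithm: places already annotated in the collection are scored by GPS distance to the new photo, and those within a radius are offered as suggestions.

```java
import java.util.*;

// Hypothetical sketch: suggest candidate places for a new photo by comparing
// its GPS position against places seen in existing annotations.
public class PlaceSuggester {

    // Haversine great-circle distance in kilometres between two lat/lon points.
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 6371.0 * 2 * Math.asin(Math.sqrt(a));
    }

    // Names of previously annotated places within radiusKm of the photo,
    // sorted alphabetically; values are {latitude, longitude} pairs.
    static List<String> suggest(double photoLat, double photoLon,
                                Map<String, double[]> annotatedPlaces,
                                double radiusKm) {
        List<String> suggestions = new ArrayList<>();
        for (Map.Entry<String, double[]> e : annotatedPlaces.entrySet()) {
            double[] p = e.getValue();
            if (distanceKm(photoLat, photoLon, p[0], p[1]) <= radiusKm) {
                suggestions.add(e.getKey());
            }
        }
        Collections.sort(suggestions);
        return suggestions;
    }

    public static void main(String[] args) {
        Map<String, double[]> places = new HashMap<>();
        places.put("Eiffel Tower", new double[]{48.8584, 2.2945});
        places.put("Big Ben",      new double[]{51.5007, -0.1246});
        // A photo taken in Paris: the London landmark is not suggested.
        System.out.println(suggest(48.8600, 2.2950, places, 5.0));
    }
}
```

A full system would combine this distance score with time proximity and social-network signals, and let the user accept or reject each suggestion.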
Lexical ambiguity is a fundamental problem in Information Retrieval (IR), especially in the medical domain. Many systems use a subset of the words contained in the document to represent the content, but they are faced with the problem of ambiguity.
In the past few years, object detection has attracted a lot of attention in the context of human–robot collaboration and Industry 5.0 due to enormous quality improvements in deep learning technologies. In many applications, object detection models have to be able to quickly adapt to a changing environment, i.e., to learn new objects. A crucial but challenging prerequisite for this is the automatic generation of new training data which currently still limits the broad application of object detection methods in industrial manufacturing. In this work, we discuss how to adapt state-of-the-art object detection methods for the task of automatic bounding box annotation in a use case where the background is homogeneous and the object’s label is provided by a human. We compare an adapted version of Faster R-CNN and the Scaled-YOLOv4-p5 architecture and show that both can be trained to distinguish unknown objects from a complex but homogeneous background using only a small amount of training data. In contrast to most other state-of-the-art methods for bounding box labeling, our proposed method neither requires human verification, a predefined set of classes, nor a very large manually annotated dataset. Our method outperforms the state-of-the-art, transformer-based object discovery method LOST on our simple fruits dataset by large margins.
Thumbtack: collect, organize, share. Use Thumbtack to collect a list of your favorite restaurants and share them with your friends. Plan a trip: collect information about places to stay and things to do. Research your next purchase: store, analyze, and sift through your options in Thumbtack. Take notes and share them with your team.
Concept search, full-text search, and annotation structure search in one scalable index: "Mímir is a multi-paradigm information management index and repository which can be used to index and search over text, annotations, semantic schemas (ontologies), and semantic meta-data (instance data). It allows queries that arbitrarily mix full-text, structural, linguistic and semantic queries and that can scale to gigabytes of text. A typical semantic annotation project deals with large quantities of data of different kinds. Mímir provides a framework for implementing indexing and search functionality across all these data types."
M. Haouach, G. Venturini, and C. Guinot. HT '09: Proceedings of the Twentieth ACM Conference on Hypertext and Hypermedia, New York, NY, USA, ACM, (July 2009)
R. Kawase, E. Herder, and W. Nejdl. Learning in the Synergy of Multiple Disciplines, Proceedings of the EC-TEL 2009, volume 5794 of Lecture Notes in Computer Science, Berlin/Heidelberg, Springer, (October 2009)
J. Lowe, C. Baker, and C. Fillmore. Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics, pages 18-24, Washington, D.C., ACL, (1997)
B. Usadel, F. Poree, A. Nagel, M. Lohse, A. Czedik-Eysenberg, and M. Stitt. Plant, Cell & Environment, 32 (9): 1211-1229 (September 2009). Epub 2009 Mar 24.