Article

Deep Learning for X-ray Image to Text Generation

International Journal of Trend in Scientific Research and Development, 3 (3): 1679-1682 (March 2019)
DOI: https://doi.org/10.31142/ijtsrd23168

Abstract

Motivated by the recent success of supervised and weakly supervised common object discovery, in this work we move one step further and tackle common object discovery in a fully unsupervised way. Object co-localization aims to simultaneously localize objects of the same class across a group of images. Traditional object localization and detection usually trains class-specific object detectors, which require bounding-box annotations of object instances, or at least image-level labels indicating the presence or absence of objects in an image. Given a collection of images without any annotations, our proposed fully unsupervised method simultaneously discovers which images contain common objects and localizes those objects in the corresponding images.

It has long been envisioned that machines will one day understand the visual world at a human level of intelligence. We can now build very deep convolutional neural networks (CNNs) and achieve impressively low error rates on tasks such as large-scale image classification. However, in image classification the content of an image is usually simple, containing a single predominant object to be classified. The situation is much more challenging when we want computers to understand complex scenes, and image captioning is one such task. To train a model to predict the category of a given X-ray image, we first annotate each X-ray image in a training set with a label from a predefined set of categories. Through such fully supervised training, the computer learns how to classify an X-ray image and convert it into text.

Mahima Chaddha | Sneha Kashid | Snehal Bhosale | Prof. Radha Deoghare, "Deep Learning for X-ray Image to Text Generation", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-3, April 2019. URL: https://www.ijtsrd.com/papers/ijtsrd23168.pdf
