@swarm-ai

Signed up since Jan. 25, 2018

Points

Timestamp      Points  Contributor  Ad-hoc  References
Jan. 29, 2018  5       @swarm-ai    No      PR #300
Jan. 26, 2018  6       @swarm-ai    No      Issue #230
Jan. 26, 2018  15      @swarm-ai    No      Issue #271, PR #301
Jan. 26, 2018  2       @swarm-ai    No      Issue #131

Activity

@swarm-ai commented on PR #300: Add Training Process for Nodule Detection and Classification - added customized datasets

Hi @isms, I only just saw these comments. Can you give me 1-2 days to work on resolving these issues?
10 months, 2 weeks ago

@swarm-ai opened a new pull request: #301: Added detection evaluation method for detection

## Description
A detection can be treated as a correct detection if the intersection-over-union (IoU) of the ground-truth and detected bounding boxes is larger than a predefined threshold. The concept is illustrated below. (Images are borrowed from https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/)

## Reference to official issue
Issue #271

## Motivation and Context
We extend the IoU calculation from 2D to 3D, but for simplicity we only handle 3D boxes/cubes, even though the detections we have are 3D spheres output by the grt123 system. The threshold we currently use is 0.5, but it can easily be changed (`correct_detection_threshold` in evaluate_detection.py). We also ran experiments with different thresholds; please see the evaluation results below.

## How Has This Been Tested?
We have tested the performance on the NSCLC Radiogenomics data set found here: http://www.cibl-harvard.org/data We include a results file that can be compared with the ground-truth file in concept-to-clinic/prediction/src/algorithms/training/detector/label/custom_annos.csv

## Screenshots (if appropriate):
![screen shot 2018-01-25 at 3 57 17 pm](https://user-images.githubusercontent.com/35554754/35418651-73b2a844-01e8-11e8-8fe5-70af2989de5f.png)

## CLA
- [x] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
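The 3D-IoU correctness check described in the PR can be sketched as below. This is an illustrative reimplementation, not the actual code in evaluate_detection.py; the axis-aligned box format `(x_min, y_min, z_min, x_max, y_max, z_max)` and the function names are assumptions.

```python
def iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes given as
    (x_min, y_min, z_min, x_max, y_max, z_max)."""
    # Overlap along each axis; zero if the boxes are disjoint on that axis.
    dx = max(0.0, min(box_a[3], box_b[3]) - max(box_a[0], box_b[0]))
    dy = max(0.0, min(box_a[4], box_b[4]) - max(box_a[1], box_b[1]))
    dz = max(0.0, min(box_a[5], box_b[5]) - max(box_a[2], box_b[2]))
    intersection = dx * dy * dz

    vol_a = (box_a[3] - box_a[0]) * (box_a[4] - box_a[1]) * (box_a[5] - box_a[2])
    vol_b = (box_b[3] - box_b[0]) * (box_b[4] - box_b[1]) * (box_b[5] - box_b[2])
    union = vol_a + vol_b - intersection
    return intersection / union if union > 0 else 0.0


def is_correct_detection(gt_box, det_box, threshold=0.5):
    """A detection counts as correct when IoU exceeds the threshold
    (0.5 by default, as in the PR)."""
    return iou_3d(gt_box, det_box) > threshold
```

For example, two 2x2x2 cubes offset by one unit on every axis overlap in a 1x1x1 region, giving IoU = 1/15, which falls below the 0.5 threshold.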
10 months, 3 weeks ago

@swarm-ai opened a new pull request: #300: Add Training Process for Nodule Detection and Classification - added customized datasets

## Description
Using the documented process in the Training/Readme, a developer can prepare custom datasets from radiologists who have annotated series of CT scans. The data should have lesion box annotations in a .csv file using the format specified, as well as cancer/non-cancer labels. An example using a CT scan data set from a Taiwan-based clinic is included.

## Reference to official issue
Issue #130
Issue #131

## Motivation and Context
The motivation is to increase the available training examples so that the concept-to-clinic classifier can handle complex lung cancer cases beyond those in the Luna and LIDC data sets. We have seen improved model accuracy in a preliminary run with the additional data sets. A new model is currently being trained and is now on epoch 80.

## How Has This Been Tested?
We have run the training process using the Luna, LIDC, and NSCLC-Radiomics data sets. The NSCLC-Radiomics data set contains 422 cases of non-small cell lung cancer. We labeled these data sets with lesion location information and cancer/non-cancer labels using the software Horos, then imported this data for training alongside the Luna16 and LIDC data sets.
Here is a reference link to download the data sets: http://www.cibl-harvard.org/data

## CLA
- [x] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
10 months, 3 weeks ago

@swarm-ai commented on issue #230: Ask a Clinician! (add a question, get points)

A couple of questions to inform the work to help radiologists and patients with better software:

1. Given that search, detection, and classification of lung nodules are one part of a pipeline of clinical tasks in the radiological evaluation of at-risk lung cancer patients' chest CT scans, what are 3 key attributes you would favor in a lung nodule diagnosis system that integrates with your clinical software and related clinical workflows? _For example: high accuracy (AUROC), speed, operational software cost, ease of use, a description of the classification rationale for each read, incorporation of additional clinical tasks beyond detection/diagnosis, etc._
2. What are the key challenges you see to implementing a lung nodule diagnosis system in an actual clinical environment? _Culture? System cost? Accuracy? Quality of scientific data or studies? Something else?_
3. What type of data and evidence would you require in order to actually use a lung nodule diagnosis program in your clinical practice: a prospective clinical trial, a minimal level of accuracy, etc.?
10 months, 3 weeks ago

@swarm-ai commented on issue #131: Continuous improvement of nodule classification models (see #2)

Hi @reubano, I have been working on retraining the classifier and detector models for better performance. I plan to document the process for both the detector and classifier models and submit a pull request to the concept-to-clinic clone of the GRT code base here: https://github.com/concept-to-clinic/DSB2017. Will that work? I did not find any training code set up in the concept-to-clinic repo.
11 months ago