@reiinakano

Signed up since Aug. 22, 2017

Points

| Timestamp | Points | Contributor | Ad-hoc | References |
| --- | --- | --- | --- | --- |
| Sept. 5, 2017 | 20 | @reiinakano | No | Issue #14, PR #65 |
| Aug. 25, 2017 | 2 | @reiinakano | No | Issue #69 |
| Aug. 25, 2017 | 5 | @reiinakano | No | Issue #10, PR #66 |

Activity

@reiinakano created a new issue: #315: What's the status of this project?

Sorry for asking it here, but I couldn't find it addressed anywhere. Any news on what happened to this project after the contest ended? Has it been picked up by a company now working on it privately?
4 months, 2 weeks ago

@reiinakano commented on PR #66: Self-documenting endpoint

Hi @lamby, GitHub has a one-click option to squash all commits when merging to master; it's what most open-source projects do now. Wouldn't that be enough? This branch will be deleted after merging anyway, so all that'll be left is one nice commit in the master branch history.
1 year, 3 months ago

@reiinakano commented on issue #68: Cannot view the Travis CI logs

@lamby This has already been fixed, although there is a new issue stemming from it: #69.
1 year, 3 months ago

@reiinakano created a new issue: #69: Travis CI tests failing at `git clone`

## Expected Behavior

Tests shouldn't fail at the `git clone` step.

## Current Behavior

This error happens while the Travis server downloads the `.dcm` files:

```
Error downloading object: tests/assets/test_image_data/full/LIDC-IDRI-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.298806137288633453246975630178/1.3.6.1.4.1.14519.5.2.1.6279.6001.179049373636438705059720603192/000101.dcm (75aa4c8): Smudge error: Error downloading tests/assets/test_image_data/full/LIDC-IDRI-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.298806137288633453246975630178/1.3.6.1.4.1.14519.5.2.1.6279.6001.179049373636438705059720603192/000101.dcm (75aa4c8288eb31e7f6a40b8103019f664cfdd4e5b80e7a1e5d3d842c65cecfa5): batch response: Rate limit exceeded: https://github.com/concept-to-clinic/concept-to-clinic.git/info/lfs/objects/batch
Errors logged to /home/travis/build/concept-to-clinic/concept-to-clinic/.git/lfs/objects/logs/20170824T153416.634441969.log
Use `git lfs logs last` to view the log.
error: external filter 'git-lfs filter-process' failed
fatal: tests/assets/test_image_data/full/LIDC-IDRI-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.298806137288633453246975630178/1.3.6.1.4.1.14519.5.2.1.6279.6001.179049373636438705059720603192/000101.dcm: smudge filter lfs failed
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry the checkout with 'git checkout -f HEAD'

The command "eval git clone --depth=50 https://github.com/concept-to-clinic/concept-to-clinic.git concept-to-clinic/concept-to-clinic " failed. Retrying, 2 of 3.
fatal: destination path 'concept-to-clinic/concept-to-clinic' already exists and is not an empty directory.
The command "eval git clone --depth=50 https://github.com/concept-to-clinic/concept-to-clinic.git concept-to-clinic/concept-to-clinic " failed. Retrying, 3 of 3.
fatal: destination path 'concept-to-clinic/concept-to-clinic' already exists and is not an empty directory.
The command "eval git clone --depth=50 https://github.com/concept-to-clinic/concept-to-clinic.git concept-to-clinic/concept-to-clinic " failed 3 times.
The command "git clone --depth=50 https://github.com/concept-to-clinic/concept-to-clinic.git concept-to-clinic/concept-to-clinic" failed and exited with 128 during .
Your build has been stopped.
```

## Possible Solution

Can't help here, not familiar with LFS.

## Steps to Reproduce

1. https://travis-ci.org/concept-to-clinic/concept-to-clinic/builds/268024624?utm_source=github_status&utm_medium=notification

## Context (Environment)

Can't get my PRs to be green because of this.

## Checklist before submitting

- [ ] I have confirmed this using the officially supported Docker Compose setup using the `local.py` configuration and ensured that I built the containers again and they reflect the most recent version of the project at the `HEAD` commit on the `master` branch
- [ ] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug
- [ ] I provided a **minimal code snippet** or list of steps that reproduces the bug
- [ ] I provided **screenshots** where appropriate
- [ ] I filled out all the relevant sections of this template
1 year, 3 months ago

@reiinakano commented on PR #66: Self-documenting endpoint

@lamby Fixed, but the test failed again due to LFS issues.
1 year, 3 months ago

@reiinakano commented on PR #66: self-documenting endpoint for issue #10

The Travis test failed, probably due to Git LFS issues:

```
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry the checkout with 'git checkout -f HEAD'
```
1 year, 3 months ago

@reiinakano opened a new pull request: #66: self-documenting endpoint for issue #10

## Description

This is a very simple fix: I changed the endpoint to return the *docstring* of the selected `predict` function. Personally, I think this is the best way to keep it "self-documenting" while keeping the description in one place only. Let me know if you have problems with this approach. I also added an `__init__.py` file to the `algorithms` folder to make it a valid Python package.

## Reference to official issue

#10

## How Has This Been Tested?

Modified the `test_endpoint_documentation` test to make sure that calls return the appropriate docstring.

## CLA

- [x] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
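As a rough sketch of the approach (the view function and module path below are assumptions for illustration, not the PR's actual code), the endpoint can simply surface the docstring:

```python
# Hypothetical sketch of a self-documenting endpoint: the response body is
# just the docstring of the selected predict function.
from django.http import JsonResponse

from src.algorithms import classify  # assumed module path


def classify_docs(request):
    # The predict docstring is the single place the description lives.
    return JsonResponse({'description': classify.trained_model.predict.__doc__})
```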
1 year, 3 months ago

@reiinakano commented on PR #65: adjust segment model interface to return volumes of each nodule

As I'm unclear on the exact API you're looking for, I went with what made the most sense to me. I'll wait for your feedback before starting the documentation for the endpoint.
1 year, 3 months ago

@reiinakano opened a new pull request: #65: adjust segment model interface to return volumes of each nodule

## Description

To solve issue #14, I went with simply modifying the output of `/segment/predict/` to a dictionary of the format:

```python
{'binary_mask_path': str, 'volumes': list[float]}
```

## Reference to official issue

Issue #14

## How Has This Been Tested?

I changed the `test_segment` unit test to check for the new format of the returned response.

## CLA

- [x] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
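For illustration, the `volumes` entry of such a response could be computed from the binary mask along these lines (a sketch with assumed names; `voxel_volume` in particular is a hypothetical parameter, not the PR's code):

```python
# Hypothetical sketch: one volume per connected component in a binary mask.
import numpy as np
from scipy import ndimage


def nodule_volumes(binary_mask, voxel_volume=1.0):
    """Return a list of volumes, one per nodule found in the mask."""
    labelled, num_nodules = ndimage.label(binary_mask)
    # Voxel count of each component times the physical volume of one voxel.
    return [float(np.sum(labelled == i) * voxel_volume)
            for i in range(1, num_nodules + 1)]
```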
1 year, 3 months ago

@reiinakano commented on issue #14: Pass calculated summary statistics from segment model to API

Interested in this issue, but I'm a bit unclear on the exact details of the expected API. Should I modify the return result of `POST segment/predict` to include summary statistics? Or should I create a new endpoint, say `GET/POST segment/statistics`, that takes the binary mask path and outputs the volumes?
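Sketched roughly in Python, the two options might look like this (`run_segmentation` and `compute_volumes` are hypothetical helpers named only for illustration):

```python
# Option 1 (hypothetical): fold the statistics into the existing predict response.
def segment_predict(dicom_path):
    binary_mask_path = run_segmentation(dicom_path)        # hypothetical helper
    return {'binary_mask_path': binary_mask_path,
            'volumes': compute_volumes(binary_mask_path)}  # hypothetical helper


# Option 2 (hypothetical): a separate statistics endpoint keyed on the mask path.
def segment_statistics(binary_mask_path):
    return {'volumes': compute_volumes(binary_mask_path)}
```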
1 year, 3 months ago

@reiinakano commented on issue #2: Feature: Implement classification algorithm

Thanks for the answer. My concern is that you're considering a lot of different algorithms from the get-go (as seen in #18–#28). What happens when multiple people decide to work on different algorithms? Personally, I don't think it would be a terrible idea to separate them into subpackages from the start to facilitate parallel development. As long as they share the same interface, switching to the best available model is just a matter of changing the subpackage name in calls. Also, if the best model moving forward is an ensemble of the algorithms (as is virtually *always* the case), they're all already nicely separated for easy integration into an ensemble.
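A toy sketch of that idea (the subpackage paths and `predict` signature here are hypothetical, used only to illustrate the point):

```python
# Hypothetical sketch: identical interfaces make model swapping and
# ensembling trivial.
import numpy as np

# Switching to the best-available model is one changed import:
from prediction.classify import vgg as model      # hypothetical subpackage
# from prediction.classify import resnet as model # drop-in replacement


def ensemble_predict(image, models):
    """Average the predictions of several models sharing the same interface."""
    return np.mean([m.predict(image) for m in models], axis=0)
```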
1 year, 3 months ago

@reiinakano commented on issue #2: Feature: Implement classification algorithm

Not sure if this is a dumb question, but given that there will probably be multiple models in the end, how do you intend to separate them? Would it make sense to further divide `prediction/classify` into a subfolder for each model? E.g. `prediction/classify/vgg` would contain `prediction/classify/vgg/src/`, `prediction/classify/vgg/trained_model/predict/`, and `prediction/classify/vgg/assets/`. The question also applies to #1 and #3.
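One way such a layout could be wired up later (a sketch; the loader itself is hypothetical, only the package path comes from the comment above):

```python
# Hypothetical sketch: resolve a classifier subpackage by name at runtime.
import importlib


def load_classifier(name):
    """Import e.g. prediction.classify.vgg when name == 'vgg'."""
    return importlib.import_module('prediction.classify.{}'.format(name))


# model = load_classifier('vgg')
# prediction = model.predict(image_path)
```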
1 year, 3 months ago