@WGierke

Signed up on Aug. 20, 2017

Points

| Timestamp | Points | Contributor | Ad-hoc | References |
|---|---|---|---|---|
| Dec. 12, 2017 | 5 | @WGierke | No | Issue #229, PR #255 |
| Dec. 4, 2017 | 5 | @WGierke | No | Issue #229, PR #256 |
| Nov. 27, 2017 | 5 | @WGierke | No | Issue #229, PR #252 |
| Nov. 26, 2017 | 5 | @WGierke | No | Issue #222, PR #223 |
| Nov. 23, 2017 | 3 | @WGierke | No | Issue #247, PR #248 |
| Nov. 21, 2017 | 3 | @WGierke | No | Issue #195, PR #243 |
| Nov. 21, 2017 | 6 | @WGierke | No | Issues #187, #151; PR #237 |
| Nov. 19, 2017 | 3 | @WGierke | No | Issue #160, PR #218 |
| Nov. 3, 2017 | 3 | @WGierke | No | Issue #159, PR #199 |
| Oct. 31, 2017 | 100 | @WGierke | No | Issue #3, PR #147 |
| Oct. 16, 2017 | 30 | @WGierke | No | Issue #120, PR #133 |
| Oct. 16, 2017 | 5 | @WGierke | No | Issue #152, PR #154 |
| Oct. 16, 2017 | 2 | @WGierke | No | Issue #151 |
| Oct. 16, 2017 | 3 | @WGierke | No | Issue #153, PR #155 |
| Sept. 25, 2017 | 100 | @WGierke | No | Issue #1, PR #118 |
| Sept. 25, 2017 | 5 | @WGierke | No | Issue #127, PR #128 |
| Sept. 25, 2017 | 5 | @WGierke | No | Issue #123, PR #124 |
| Sept. 24, 2017 | 3 | @WGierke | No | Issue #28, PR #135 |
| Sept. 23, 2017 | 5 | @WGierke | No | Issue #34, PR #101 |
| Sept. 15, 2017 | 3 | @WGierke | No | Issue #23, PR #115 |
| Sept. 15, 2017 | 5 | @WGierke | No | Issue #30, PR #94 |
| Sept. 14, 2017 | 3 | @WGierke | No | Issue #111, PR #112 |
| Sept. 13, 2017 | 3 | @WGierke | No | Issue #113 |
| Sept. 13, 2017 | 3 | @WGierke | No | Issue #106 |
| Sept. 11, 2017 | 1 | @WGierke | No | Issue #105 |
| Sept. 11, 2017 | 3 | @WGierke | No | Issue #103, PR #104 |
| Sept. 11, 2017 | 20 | @WGierke | No | Issue #20, PR #108 |
| Sept. 11, 2017 | 20 | @WGierke | No | Issue #18, PR #82 |
| Sept. 9, 2017 | 3 | @WGierke | No | Issue #19, PR #99 |
| Sept. 9, 2017 | 5 | @WGierke | No | Issue #37, PR #92 |
| Sept. 7, 2017 | 5 | @WGierke | No | Issue #36, PR #93 |
| Sept. 6, 2017 | 0 | @WGierke | No | |
| Sept. 6, 2017 | 5 | @WGierke | No | Issue #35, PR #88 |
| Sept. 2, 2017 | 4 | @WGierke | No | Issue #32, PR #84 |
| Aug. 31, 2017 | 5 | @WGierke | No | Issue #9, PR #73 |
| Aug. 28, 2017 | 3 | @WGierke | No | Issue #79, PR #80 |
| Aug. 28, 2017 | 3 | @WGierke | No | Issue #29, PR #71 |
| Aug. 28, 2017 | 5 | @WGierke | No | Issue #39, PR #74 |
| Aug. 28, 2017 | 2 | @WGierke | No | Issue #59 |
| Aug. 26, 2017 | 5 | @WGierke | No | Issue #8, PR #61 |
| Aug. 23, 2017 | 2 | @WGierke | No | Issue #63 |
| Aug. 23, 2017 | 20 | @WGierke | No | Issue #16, PR #60 |
| Aug. 21, 2017 | 2 | @WGierke | No | Issue #54 |

Activity

@WGierke commented on PR #267: Improved lungs segmentation algorithm

Good stuff! Would it be possible to provide a test for processing a LIDC scan and one for processing a LUNA scan? This would also make it easier to understand how to use the provided methods, since an explanation in a pull request is harder to find later than a test in the code base IMO :)
1 day, 9 hours ago

@WGierke commented on PR #221: Update PR TEMPLATE to include prediction algorithm metrics

I consulted my pillow and it said that one could count an identification as correct if the predicted nodule location is within the boundaries of the nodule, but not necessarily at the centroid (for LIDC images we can get the boundaries easily using [pylidc](https://github.com/pylidc/pylidc/blob/ea04c829a50e3598934e3c2f603e20969fe8381f/pylidc/Annotation.py#L227)). Another way would be to count an identification as correct if the predicted location is within a Ɛ-pipe of the true location (x_true - Ɛ <= x_hat <= x_true + Ɛ, y_true - Ɛ <= y_hat ...). Ɛ could vary for z since the size of this dimension differs from x and y. This brings me to my third point: the accuracy is the ratio of correct classifications among all the classifications the algorithm performed. If the algorithm only predicts one nodule out of five and that prediction is correct, its accuracy would be 100%, yet it still wouldn't be very helpful since it missed four nodules. A metric that covers this case is the [recall](https://en.wikipedia.org/wiki/Precision_and_recall), which can also easily be computed. Any thoughts on that? :)
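For illustration, a minimal sketch of both ideas in Python (the Ɛ values and helper names are made up for this comment, not taken from the code base):

```python
def within_epsilon(pred, true, eps_xy=10, eps_z=3):
    """True if the predicted centroid lies inside an epsilon box around the true one."""
    (x_hat, y_hat, z_hat), (x, y, z) = pred, true
    return abs(x_hat - x) <= eps_xy and abs(y_hat - y) <= eps_xy and abs(z_hat - z) <= eps_z


def recall(predicted, annotated):
    """Fraction of annotated nodules that are matched by at least one prediction."""
    if not annotated:
        return 1.0
    hits = sum(any(within_epsilon(p, t) for p in predicted) for t in annotated)
    return hits / len(annotated)
```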
1 day, 13 hours ago

@WGierke commented on issue #131: Continuous improvement of nodule classification models (see #2)

Maybe we should proceed in small steps by e.g. first adding evaluation methods that cover most of the metrics introduced in #221 so we can have a more standardized way to determine those metrics? This way we would also be able to quickly determine the quality of the current implementations of the identification, classification and segmentation algorithms. This would in turn make it easier to focus first on the algorithm that is performing the worst so far. Any thoughts @isms @reubano @lamby ?
1 day, 21 hours ago

@WGierke commented on PR #221: Update PR TEMPLATE to include prediction algorithm metrics

@reubano one thing that just came to my mind: isn't identification about predicting a (numerical) location of a nodule? How should we then calculate the accuracy based on that? From my understanding, accuracy can only be used for classification problems; since this is a regression problem, something like the mean squared error would be more appropriate, wouldn't it? Or do you want to put every x, y and z value in its own class? That way the accuracy wouldn't differ between a prediction where e.g. x_hat is x_true+1 and one where x_hat is x_true+100.
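A minimal sketch of the suggested alternative, assuming predictions have already been matched to their true nodules (the helper name is illustrative):

```python
import numpy as np


def centroid_mse(predicted, true):
    """Mean squared error over matched predicted vs. true (x, y, z) centroids."""
    predicted = np.asarray(predicted, dtype=float)
    true = np.asarray(true, dtype=float)
    return float(np.mean((predicted - true) ** 2))
```

Unlike accuracy over binned coordinates, this penalizes a prediction at x_true+100 much more than one at x_true+1.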
1 day, 21 hours ago

@WGierke opened a new pull request: #256: #229 Fix Classification

There was no method implemented for getting the origin of a DICOM image series, which resulted in an error. I initially set it to [0, 0, 0], but due to my lack of domain knowledge I don't know how to correctly calculate it from DICOM images. If you know the answer, feel free to comment :)

## Reference to official issue
This references #229 - classification doesn't work on DICOM images.

## How Has This Been Tested?
I added the test given in #229

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
1 week, 5 days ago

@WGierke opened a new pull request: #255: #229 Fix Segmentation

Apparently the segmentation issues noted in #229 have been fixed as a byproduct of #237 . This PR just adds the test given in #229 to show that all tests pass. ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
1 week, 5 days ago

@WGierke commented on issue #229: Algorithms don't work across both dicom and luna scans

@reubano could you do me a favor and check out my branch [229_fix_segmentation](https://github.com/concept-to-clinic/concept-to-clinic/compare/master...WGierke:229_fix_segmentation?expand=1)? All I did was to apply the patch for the segmentation tests and merge the current master. The outcome was that all segmentation tests passed (I also ran slow tests):

```bash
➜ concept-to-clinic git:(229_fix_segmentation) ✗ docker-compose -f local.yml run prediction pytest -vrsk src/tests/test_segmentation.py
Starting base ...
Starting base ... done
======================================== test session starts ========================================
platform linux -- Python 3.6.1, pytest-3.1.3, py-1.4.34, pluggy-0.4.0 -- /usr/bin/python3.6
cachedir: .cache
rootdir: /app, inifile:
collected 7 items

src/tests/test_segmentation.py::test_correct_paths PASSED
src/tests/test_segmentation.py::test_segment_predict_load PASSED
src/tests/test_segmentation.py::test_segment_dicom PASSED
src/tests/test_segmentation.py::test_segment_luna PASSED
src/tests/test_segmentation.py::test_nodule_segmentation PASSED
src/tests/test_segmentation.py::test_lung_segmentation PASSED
src/tests/test_segmentation.py::test_stop_timeout PASSED

==================================== 7 passed in 514.55 seconds =====================================
```

I think the issues behind the failing segmentation tests at the moment of opening this issue could have been resolved in #237 . Can you confirm?
2 weeks ago

@WGierke commented on PR #252: #229 Fix identification on LUNA

I really appreciate your offer to run the identification tests on your machine and to send me any stack traces that occur, @lamby ! :grin:
2 weeks, 3 days ago

@WGierke commented on PR #252: #229 Fix identification on LUNA

@lamby Done. I changed test_docker for faster testing purposes, but that has been reverted again :) Shouldn't have committed that.
2 weeks, 4 days ago

@WGierke opened a new pull request: #252: #229 Fix identification on LUNA

I fixed the identification of LUNA nodules as far as I can get on my machine. This means that currently all tests in `src/tests/test_identification.py` are still failing for me, since I don't have 21 GB of RAM. Maybe someone else can execute them and send me tracebacks for further error investigation?

## Description
The problem was that sometimes the direct path to an MHD file was handed to the function instead of the path to its directory. The changes should now support both alternatives (see the sketch below).

## Reference to official issue
This addresses #229 - identification test(s) should pass for luna scans.

## How Has This Been Tested?
I added the test given in #229 .

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
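A hedged sketch of the path handling described above (the helper name is made up for this example and is not the function from the PR):

```python
import os
from glob import glob


def resolve_mhd_path(path):
    """Return the path to the .mhd file, whether given the file itself or its directory."""
    if path.endswith('.mhd'):
        return path
    candidates = glob(os.path.join(path, '*.mhd'))
    if not candidates:
        raise ValueError("No .mhd file found under {}".format(path))
    return candidates[0]
```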
2 weeks, 5 days ago

@WGierke opened a new pull request: #248: #247 Only use TimeoutExit for timeouts

This PR fixes the case in which a `Termination` exception can be thrown in tests that should time out.

## Reference to official issue
This addresses #247 .

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 weeks ago

@WGierke created a new issue: #247: Race Condition with Timeout Tests

As discussed in #239 , apparently #243 introduced a race condition involving tests that should time out. Sometimes a `TimeoutExit` is thrown (which is correctly caught) and sometimes a `Termination` exception is thrown which is currently not caught and makes the tests fail (e.g. [here](https://travis-ci.org/concept-to-clinic/concept-to-clinic/builds/306144770?utm_source=github_status&utm_medium=notification)).

## Expected Behavior
A test that runs into the timeout should be caught by `xfail` and thus not fail.

## Possible Solution
Only throw `TimeoutExit` exceptions, since there is no reason to throw two kinds of exceptions when a test should be killed.

## Checklist before submitting
- [X] I have confirmed this using the officially supported Docker Compose setup using the `local.yml` configuration and ensured that I built the containers again and they reflect the most recent version of the project at the `HEAD` commit on the `master` branch
- [X] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug
- [X] I filled out all the relevant sections of this template
3 weeks ago

@WGierke commented on PR #239: Router Guard + Refactor Front-end Test Suite

I'm not able to do that but you can do a `git commit --amend` and `git push -f` to manually restart the Travis test. However, give me a second to file an issue for that and (hopefully) to directly propose a patch :)
3 weeks ago

@WGierke commented on issue #165: Keep docker container size under control

From [Allow compose to run arbitrary commands at the host level after or before containers are started](https://github.com/docker/compose/issues/2012): `command is always run in a container. docker-compose is not a tool for running commands on the host.` Seems like we need to put your suggested commands into an extra bash script :/
3 weeks, 2 days ago

@WGierke opened a new pull request: #243: #195 Stop slow tests after set period of time

We don't want to completely skip slow tests since this can hide bugs (e.g. #191). Instead, we want to start the slow tests but stop them after a specified period of time.

## Description
If the environment variable RUN_SLOW_TESTS is set to True, all tests are run in full. Otherwise, tests that exceed the time specified by the variable TESTS_TIMEOUT (default is 15 seconds) are stopped without causing an error.

## Reference to official issue
This addresses #195

## Motivation and Context
It was necessary to create a new [hook](https://github.com/concept-to-clinic/concept-to-clinic/compare/master...WGierke:195_stop_slow_tests?expand=1#diff-507495becfd1e874bad6e0b1cf4437b2R115) that checks whether a test is allowed to time out, kills the test when it exceeds the timeout and then reports the test as not failing (if the timeout is the only reason for the test exit). It wasn't possible to just use pytest-timeout together with the xfail marker, since pytest-timeout does not raise an exception that the xfail marker could catch when it kills a test (see the simplified sketch below).

## How Has This Been Tested?
I added a [test](https://github.com/concept-to-clinic/concept-to-clinic/compare/master...WGierke:195_stop_slow_tests?expand=1#diff-ed63136572f32bc5f7500b864f9bdfa9R60) that should always time out (if we want slow tests to time out), otherwise it raises an uncaught exception. The tests that are failing are expected to fail (as stated in the issue and #229 ), namely `src/tests/test_endpoints.py::test_identify`, `src/tests/test_identification.py::test_identify_nodules_001` and `src/tests/test_identification.py::test_identify_nodules_003`.

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
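For readers who want the gist without opening the diff: the PR implements this via a pytest hook, but the idea can be sketched with a simplified, fixture-based version (names and defaults below are illustrative, not the actual code):

```python
import os
import signal

import pytest

TESTS_TIMEOUT = int(os.environ.get('TESTS_TIMEOUT', 15))


class TimeoutExit(Exception):
    """Raised inside a test once it exceeds TESTS_TIMEOUT seconds."""


@pytest.fixture(autouse=True)
def stop_slow_tests():
    """Arm a SIGALRM that raises TimeoutExit in the running test, then disarm it."""
    def handler(signum, frame):
        raise TimeoutExit('test exceeded {} seconds'.format(TESTS_TIMEOUT))

    previous = signal.signal(signal.SIGALRM, handler)
    signal.alarm(TESTS_TIMEOUT)
    yield
    signal.alarm(0)
    signal.signal(signal.SIGALRM, previous)


@pytest.mark.xfail(raises=TimeoutExit, reason='slow test is allowed to time out')
def test_something_slow():
    ...
```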
3 weeks, 3 days ago

@WGierke commented on PR #239: Router Guard

Cool stuff! Is it possible to introduce tests for the front-end as well?
3 weeks, 4 days ago

@WGierke commented on PR #237: #187 Fix Segmentation + #151 Training Data Shape

It would take 1 GB to connect all the input neurons to just 1 output neuron. Using max pooling and convolutional layers would of course crunch the number of trained weights. However, even the [2nd place of the NDSB 2017](http://juliandewit.github.io/kaggle-ndsb2017/) only used an input shape of 64x64x64 mm³ at max (which needed a Tesla K80 with 12 GB to train).
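Back-of-the-envelope check of the 1 GB figure, assuming the 512x512x1024 input shape from #237 and 32-bit float weights:

```python
input_voxels = 512 * 512 * 1024   # standardized scan shape from #237 -> 268,435,456 voxels
weight_bytes = input_voxels * 4   # one float32 weight per voxel for a single dense output neuron
print(weight_bytes / 1024 ** 3)   # 1.0 (GiB)
```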
3 weeks, 4 days ago

@WGierke commented on issue #195: Stop slow tests after a set timeout period, instead of skipping them

If I understand [this](https://stackoverflow.com/questions/46766899/pytest-timeout-fail-test-instead-killing-whole-test-run) correctly, it doesn't appear to be so easy to catch the case when pytest-timeout stops a method because the SIGALRM signal does not produce an exception. I was able to implement the remaining logic but currently I'm struggling with somehow catching that timeout exception: `git checkout bcb743766386794bf373bd14972cb3b05c60d727` `git apply patch.txt` [patch.txt](https://github.com/concept-to-clinic/concept-to-clinic/files/1485956/patch.txt) Does someone have an idea how to still let tests pass even when they timeout?
3 weeks, 5 days ago

@WGierke commented on issue #229: Algorithms don't work across both dicom and luna scans

I took a quick glance at the identification problems. When running the tests, I get a `RuntimeError: $ Torch: not enough memory: you tried to allocate 21GB. Buy new RAM! at /pytorch/torch/lib/TH/THGeneral.c:270`. This basically means that our client / web server (depending on where the project will be deployed in the end) needs at least 21 GB of RAM. Is it realistic to assume that this will be the case? Otherwise we might need to crunch the model a bit ... PS: :+1: for providing tests that need to pass :)
3 weeks, 5 days ago

@WGierke commented on PR #218: #160 Add Route Testing

@lamby Yes ;) It should be done now.
3 weeks, 5 days ago

@WGierke commented on issue #229: Algorithms don't work across both dicom and luna scans

Once #237 is merged, I'd like to work on "segmentation test(s) should pass for luna scans".
3 weeks, 5 days ago

@WGierke opened a new pull request: #237: #187 Fix Segmentation + #151 Training Data Shape

The problem was that we didn't have a standardized way to handle training data. Now the scans and binary nodule images have the shape 512x512x1024, so all DICOM images that have been rescaled to voxels fit into that. However, since this is a giant input matrix which can easily blow up the memory when training a neural network, I added a `SegmentationModel` wrapper that takes this shape as input and also predicts this shape, while the models that implement this interface can rescale / crop / ... the input data as long as the predicted output is again of size 512x512x1024. For example, consider the [`Simple3DModel`](https://github.com/concept-to-clinic/concept-to-clinic/compare/master...WGierke:187_fix_segmentation?expand=1#diff-7ebde87c389dab64f0c9e8a81e6b6391R49): it scales the input by 1/4 along each axis, learns a model based on that, and after predicting it scales the predicted binary nodule mask up by a factor of 4 again, so the output shape equals the given training data shape (see the sketch below). Note that I mainly moved the code from `models.py` to the classes in the `models/` directory to accomplish this.

## Reference to official issue
This references #187 and #151 since I wasn't able to clearly separate them.

## How Has This Been Tested?
I added the piece of code that threw an error given in #187 as a test. All tests pass.

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
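A minimal sketch of that wrapper contract (class and method names are assumed for this example, not taken from the PR; the thresholding stand-in only keeps the snippet runnable):

```python
import numpy as np
from scipy.ndimage import zoom

INPUT_SHAPE = (512, 512, 1024)


class RescalingSegmentationModel:
    """Work on a 1/4-scaled copy internally but always return a mask of INPUT_SHAPE."""

    factor = 0.25

    def predict(self, volume):
        assert volume.shape == INPUT_SHAPE
        small = zoom(volume, self.factor)                  # 128 x 128 x 256 working copy
        small_mask = self._predict_small(small)            # a trained 3D model would go here
        mask = zoom(small_mask, 1 / self.factor, order=0)  # scale back to 512 x 512 x 1024
        return mask.astype(np.uint8)

    def _predict_small(self, small):
        # stand-in for a real network: threshold above the mean intensity
        return (small > small.mean()).astype(np.uint8)
```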
3 weeks, 5 days ago

@WGierke commented on issue #230: Ask a Clinician! (add a question, get points)

I have some pretty obvious questions: 1. If you could wish for a piece of software that could help you detect lung cancer nodules, what would be its most useful features and how would it be different from any possible "competitors" that already exist? 2. What would be the most important properties such a software has to fulfill for you to use it on a regular basis (intuitive user interface, responsiveness, high accuracy, ...)?
3 weeks, 6 days ago

@WGierke commented on issue #195: Stop slow tests after a set timeout period, instead of skipping them

`The identification tests should fail on travis even when RUN_SLOW_TESTS isn't set to true.` Assuming that an error occurs in the seconds before the timeout, right? Otherwise the test passes because the timeout is not seen as an error?
4 weeks, 1 day ago

@WGierke commented on PR #218: #160 Add Route Testing

` this is about waiting for things to spin-up, no` Exactly, that's what I meant. Sorry for the confusion :)
4 weeks, 1 day ago

@WGierke commented on PR #218: #160 Add Route Testing

Okay, but what if a port never gets opened because the service died for some reason? In that case we would wait endlessly, which might be no problem on Travis since there is a timeout, but it would be on local computers.
1 month ago

@WGierke opened a new pull request: #223: #222 Fix docker-compose build error

This assigns a static name to the generated `base` image such that docker still works if your directory is named something other than "concept-to-clinic". This addresses #222 . ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
1 month ago

@WGierke commented on issue #222: Fix docker-compose build error

Setting the image name automatically would be the cleanest solution IMO. [SO](https://stackoverflow.com/questions/32230577/how-do-i-define-the-name-of-image-built-with-docker-compose) says `docker-compose --project-name foo build bar` would build the image foo_bar no matter what the parent directory is named. [This answer](https://stackoverflow.com/questions/32230577/how-do-i-define-the-name-of-image-built-with-docker-compose/33951705#33951705) even shows how to do that using the docker-compose YAML file.
1 month ago

@WGierke commented on issue #222: Fix docker-compose build error

Can you give us the output of `docker images | grep concepttoclinic_base` ? I think the base image might get another name on your computer (since it works on Travis).
1 month ago

@WGierke commented on PR #218: #160 Add Route Testing

:D Okay so how about, instead of `sleep 60`, we try to connect to the 4 ports and after 60 seconds we time out and exit with 1, otherwise we perform the tests?
1 month ago

@WGierke commented on PR #218: #160 Add Route Testing

@lamby Thanks for the hint, that indeed is a way better solution to test for unsuccessful responses. However, apparently one has to do some more bash magic than expected. The docker containers must be started so the ports are served at localhost. However, the `compile_docs` service produces such a big output that it hits the Travis log limit of 4MB and the test fails (apart from that, fixing `compile_docs` might be a new separate issue?). Thus, I'm piping the output to /dev/null. While the tests now work on my machine, they're still failing on Travis:

```bash
➜ concept-to-clinic git:(160_route_errors) ✗ git log | head -n 1
commit 0e8525940f3fda572745e400f8cb1fe01b35c329
➜ concept-to-clinic git:(160_route_errors) ✗ bash tests/test_docker.sh
...
+ docker-compose -f local.yml run interface python manage.py makemigrations --dry-run --check
Starting concepttoclinic_postgres_1 ...
Starting base ...
Starting base ... done
Starting concepttoclinic_prediction_1 ... done
Postgres is up - continuing...
/usr/local/lib/python3.6/dist-packages/environ/environ.py:618: UserWarning: Error reading /app/.env - if you're not configuring your environment separately, check this.
  "environment separately, check this." % env_file)
No changes detected
+ sleep 60
+ docker-compose -f local.yml up
+ python tests/test_routes.py
Fetching http://localhost:8000/ ...
Fetching http://localhost:8000/api ...
Fetching http://localhost:8080/ ...
Fetching http://localhost:8001/ ...
Fetching http://localhost:8001/classify/predict/ ...
Fetching http://localhost:8001/identify/predict/ ...
Fetching http://localhost:8001/segment/predict/ ...
Fetching http://localhost:8002/ ...
➜ concept-to-clinic git:(160_route_errors) ✗ echo "$?"
0
```

I'd be happy about any suggested solutions :)
1 month ago

@WGierke opened a new pull request: #218: #160 Add Route Testing

I added a simple Python script that can test whether routes return errors. It is highly customizable (e.g. one could add checks for the occurrence of the words "error" or "exception" etc.).

## Reference to official issue
This addresses #160

## Motivation and Context
Currently, the Travis test passes even if some routes are non-functional (like http://0.0.0.0:8001/).

## How Has This Been Tested?
It's currently not passing due to the unfixed issue #203 .

```bash
➜ concept-to-clinic git:(160_route_errors) ✗ bash tests/test_docker.sh
...
+ python tests/test_routes.py
http://localhost:8001/ returns 500
+ echo 'ERROR: test_routes did not pass. Check above for details.'
ERROR: test_routes did not pass. Check above for details.
+ exit 1
```

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
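A minimal sketch of what such a script can look like (the route list and behaviour below are assumptions for illustration, not the contents of `tests/test_routes.py`):

```python
import sys
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

ROUTES = [
    'http://localhost:8000/',
    'http://localhost:8000/api',
    'http://localhost:8001/',
]

failed = False
for url in ROUTES:
    print('Fetching', url, '...')
    try:
        urlopen(url, timeout=10)           # raises HTTPError for 4xx/5xx responses
    except (HTTPError, URLError) as exc:
        print(url, 'returned an error:', exc)
        failed = True

sys.exit(1 if failed else 0)
```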
1 month ago

@WGierke commented on PR #217: #203 Fix error showing no module named 'src' found.

Would it make sense to write a test that checks whether the traceback still exists when visiting the page? This would enable us to notice when the page breaks again.
1 month ago

@WGierke commented on issue #216: Invert docker-compose `interface` service dependency

I guess the `docs` service should still use the cache that was built by the `interface` service (instead of re-downloading all the Python dependencies)? Do you have any ideas how to accomplish that?
1 month, 1 week ago

@WGierke opened a new pull request: #199: #159 Docker: Reuse Caches

First, a base image is built that installs all needed pip requirements. The docker containers that need parts of the requirements (the prediction, interface and documentation services) are then built on top of that base image. Thus, building the documentation service doesn't require installing all pip requirements again, resulting in a great speed-up of the container builds.

## Reference to official issue
This addresses #159

## How Has This Been Tested?
`docker-compose -f local.yml build && docker-compose -f local.yml up` builds all containers properly and fires them up.

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
1 month, 1 week ago

@WGierke commented on PR #147: #3 Segmentation Algorithm

@reubano Done. Travis had some temporary issues so I had to `git commit --amend && git push -f` several times to restart the builds. In case you received any emails about me pushing the same commit message again and again to the branch ;)
1 month, 3 weeks ago

@WGierke commented on PR #147: #3 Segmentation Algorithm

Thanks @reubano - works like a charm :)
1 month, 3 weeks ago

@WGierke commented on issue #159: Make the docker-compose documentation service reuse the caches from prediction and interface

A friend of mine proposes that we should build our own base image which is based on the union of the pip requirements. Thus, the base image is only supposed to be compiled once. If services use different versions of a library, they have to install them themselves after the base image was created. Just throwing in my two cents :)
1 month, 3 weeks ago

@WGierke commented on PR #147: #3 Segmentation Algorithm

Thanks a lot @reubano . Much appreciated :)
1 month, 3 weeks ago

@WGierke commented on PR #147: #3 Segmentation Algorithm

@lamby @reubano Would you mind reviewing again, please?
1 month, 3 weeks ago

@WGierke commented on PR #147: #3 Segmentation Algorithm

We use the scans of the [LIDC](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI) dataset. You can already find 3 patient scans in [tests/assets/test_image_data/full](https://github.com/concept-to-clinic/concept-to-clinic/tree/master/tests/assets/test_image_data/full). The neat thing is that there is a library (pylidc) that makes it really easy to query scan and annotation information. After downloading the dataset, set up pylidc by creating a `.pylidc` file locally that points to your local LIDC directory (like [here](https://github.com/concept-to-clinic/concept-to-clinic/pull/147/files#diff-ff90b371f444f3305e167198719a5333)) and run [prepare_training_data](https://github.com/concept-to-clinic/concept-to-clinic/pull/147/files#diff-24a9ce10839958291d5e9180703e0d79R9) which will save binary lung masks of each patient in which 1 indicates that the voxel was annotated to be cancerous by the radiologist(s). For further questions please refer to the [gitter](https://gitter.im/concept-to-clinic/) chat, so succeeding contributors can also find that information and don't have to look through closed pull requests ;) ( @lamby The aforementioned workflow might fit well into the docs as soon as the PR is merged, don't you think so?)
2 months ago

@WGierke commented on PR #147: #3 Segmentation Algorithm

Hi @rracinskij , Thanks for your question! I just expected a 3D model to perform well as it can also capture the context between the scan slices, in contrast to a vanilla 2D model. But I think that any model that yields good results is a valuable contribution (especially since we currently don't have any nicely working segmentation model) :)
2 months ago

@WGierke commented on issue #153: Add to docs how to generate some JSON data to work with

Is there anything left that prevents this issue from getting closed?
2 months ago

@WGierke opened a new pull request: #155: #153 Add how to create JSON data to docs

I added @isms' post to the documentation. It references issue #153 . ![screenshot from 2017-10-15 21-17-27](https://user-images.githubusercontent.com/6676439/31588169-83484fc4-b1ee-11e7-84ad-68e3811c4722.png) ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
2 months ago

@WGierke opened a new pull request: #154: #152 Fix RuntimeError

I fixed the RuntimeError described in #152 by setting the appropriate variables. ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
2 months ago

@WGierke commented on PR #147: #3 Segmentation Algorithm

@lamby Would you mind reviewing again?
2 months ago

@WGierke created a new issue: #152: Python 3.6: RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII

`docker-compose build` throws a RuntimeError in the prediction service:

```bash
prediction_1  | Traceback (most recent call last):
prediction_1  |   File "/usr/local/bin/flask", line 11, in <module>
prediction_1  |     sys.exit(main())
prediction_1  |   File "/usr/local/lib/python3.6/dist-packages/flask/cli.py", line 513, in main
prediction_1  |     cli.main(args=args, prog_name=name)
prediction_1  |   File "/usr/local/lib/python3.6/dist-packages/flask/cli.py", line 380, in main
prediction_1  |     return AppGroup.main(self, *args, **kwargs)
prediction_1  |   File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 676, in main
prediction_1  |     _verify_python3_env()
prediction_1  |   File "/usr/local/lib/python3.6/dist-packages/click/_unicodefun.py", line 118, in _verify_python3_env
prediction_1  |     'for mitigation steps.' + extra)
prediction_1  | RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Consult http://click.pocoo.org/python3/ for mitigation steps.
prediction_1  |
prediction_1  | This system supports the C.UTF-8 locale which is recommended.
prediction_1  | You might be able to resolve your issue by exporting the
prediction_1  | following environment variables:
prediction_1  |
prediction_1  |     export LC_ALL=C.UTF-8
prediction_1  |     export LANG=C.UTF-8
prediction_1 exited with code 1
```

## Possible Solution
Adding

```bash
LC_ALL=C.UTF-8
LANG=C.UTF-8
```

as suggested to `local.yml` seems to fix the problem.

## Steps to Reproduce
`git fetch origin`
`git checkout master`
`git pull`
`docker-compose -f local.yml build`
`docker-compose -f local.yml up`

Can someone reproduce this?

## Checklist before submitting
- [X] I have confirmed this using the officially supported Docker Compose setup using the `local.yml` configuration and ensured that I built the containers again and they reflect the most recent version of the project at the `HEAD` commit on the `master` branch
- [X] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug
- [X] I provided a **minimal code snippet** or list of steps that reproduces the bug.
- [X] I provided **screenshots** where appropriate
- [X] I filled out all the relevant sections of this template
2 months ago

@WGierke created a new issue: #151: Segment nodules: find appropriate training data shape for training

We want to automatically segment nodules as described in #3 . To train a machine learning algorithm, the model expects a fixed input and output size. This is challenging because, on the one hand, the training data (CT scans rescaled to mm voxels) varies greatly in size between scans, so smaller scans have to be padded appropriately. On the other hand, having training data with shape 512x512x512 can blow up the memory consumption of the model if we're e.g. trying to implement a [3D U-Net](https://github.com/concept-to-clinic/concept-to-clinic/pull/147/files#diff-a5d50c079ff0f39a37a401c2499748a0R35). Currently, a fixed input size is implemented [here](https://github.com/concept-to-clinic/concept-to-clinic/pull/147/files#diff-6dba1c59850535ca155232ad51813988R23).

## Expected Behavior
Find an appropriate shape for the training data such that all LIDC scans can be imported after being rescaled to mm voxels. Show that you can still train a classifier on top of that which is not too demanding regarding GPU memory and training time (preferably even a convolutional neural network, since algorithms based on them are state-of-the-art solutions for pattern detection in many areas).

## Possible Implementation
You could try to use [get_max_scaled_dimensions](https://github.com/concept-to-clinic/concept-to-clinic/pull/147/files#diff-6dba1c59850535ca155232ad51813988R32), which iterates over all rescaled LIDC images and returns their maximal dimensions. Furthermore, you could try to use [cropping layers](https://faroit.github.io/keras-docs/1.2.2/layers/convolutional/#cropping3d) provided by keras.
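To make the padding part concrete, a hedged sketch (the common shape below is only illustrative, not the shape that should finally be chosen):

```python
import numpy as np

COMMON_SHAPE = (512, 512, 512)  # illustrative placeholder


def pad_to_common_shape(volume):
    """Zero-pad (and crop, if necessary) a rescaled scan so every sample shares one shape."""
    padded = np.zeros(COMMON_SHAPE, dtype=volume.dtype)
    slices = tuple(slice(0, min(s, c)) for s, c in zip(volume.shape, COMMON_SHAPE))
    padded[slices] = volume[slices]
    return padded
```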
2 months, 1 week ago

@WGierke commented on PR #147: #3 Segmentation Algorithm

@lamby I already updated the `get_dicom_paths` method [here](https://github.com/concept-to-clinic/concept-to-clinic/pull/147/files#diff-24a9ce10839958291d5e9180703e0d79R17) (didn't mention that in the commit message, sorry). Why does removing the ```python if CROP_SIZE != CUBE_SIZE: cube_img = helpers.rescale_patient_images2 ``` stuff need comments? If we're honest, this shouldn't even have made it to the current code base - I introduced it in #118 but only noticed now that it currently isn't executed and was probably used by Julian de Wit to play around with the `CROP_SIZE`.
2 months, 1 week ago

@WGierke commented on PR #147: #3 Segmentation Algorithm

Thanks @lamby and @reubano . Your proposed changes are way cleaner :)
2 months, 1 week ago

@WGierke commented on PR #147: #3 Segmentation Algorithm

Thanks @dchansen ! Do you mind sharing some stats about the input shape of your V-NET and how much GPU memory it consumes? I think the model took more than 8 GB of GPU memory for an input shape of 512x512x512. As described in #3 I think using cross validation on the LIDC dataset would make sense. The evaluation metrics introduced by #146 could be used to give a broader overview of how well the different approaches perform. Maybe some radiologists prefer precision (How many selected nodules are relevant?) over recall (How many relevant nodules are selected?) because they don't want to waste time with false positives, or vice versa because they have the time to evaluate false positives and prefer to have as many potential nodules in the suggestion as possible.
2 months, 2 weeks ago

@WGierke opened a new pull request: #147: #3 Segmentation Algorithm

As demanded by the ticket, I implemented a very simple segmentation algorithm that can be trained using Keras, whose model is saved in the `assets` folder, and that can predict nodules. The model is still pretty shitty, so hopefully there are some more experienced data scientists than me around to improve it :) However, the "environment" to train the model should be covered by this PR.

## Description
I created training data pairs based on the LIDC dataset using pylidc. The label image is a binary mask whose pixels are 1 if there is an annotation in the LIDC dataset whose malignancy is greater than or equal to 3 (out of 5). The label images are saved in the `prediction/src/algorithms/segment/assets/` directory. Since some 3D arrays are hard to imagine, I added two methods that help to visualize these arrays by navigating through one dimension and simultaneously displaying the remaining two:

![screenshot from 2017-10-01 14-52-21](https://user-images.githubusercontent.com/6676439/31057865-feb65f44-a6e9-11e7-97af-f11a4e52acea.png)

`display_training_pair()` shows the input image and the binary output mask. Note the segmented annotation on the right hand side and the slider for the Z axis at the bottom. When the mask is applied to the input data, `cuboid_show_slider` shows us:

![screenshot from 2017-10-01 14-51-05](https://user-images.githubusercontent.com/6676439/31057888-7abaa8b6-a6ea-11e7-9054-b745460cf32c.png)

[prepare_training_data() in prediction/src/algorithms/segment/src/data_generation.py](https://github.com/concept-to-clinic/concept-to-clinic/compare/master...WGierke:3_segmentation_algorithm_lung?expand=1#diff-24a9ce10839958291d5e9180703e0d79) saves a npy file including the binary segmented mask of a lung for each patient. [train() in prediction/src/algorithms/segment/src/training.py](https://github.com/concept-to-clinic/concept-to-clinic/compare/master...WGierke:3_segmentation_algorithm_lung?expand=1#diff-6dba1c59850535ca155232ad51813988) trains the model based on the previously generated binary masks and the rescaled volume images of each DICOM scan. The model configuration is saved in [prediction/src/algorithms/segment/src/model.py](https://github.com/concept-to-clinic/concept-to-clinic/compare/master...WGierke:3_segmentation_algorithm_lung?expand=1#diff-a5d50c079ff0f39a37a401c2499748a0).

## Reference to official issue
This addresses #3

## Motivation and Context
We want to suggest to the radiologist to have a look at nodules that are automatically found by one of our algorithms.

## How Has This Been Tested?
I wrote a [test](https://github.com/concept-to-clinic/concept-to-clinic/compare/master...WGierke:3_segmentation_algorithm_lung?expand=1#diff-ed63136572f32bc5f7500b864f9bdfa9) that checks whether the response of the segmentation endpoint has the correct format.

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well

I'd highly appreciate any feedback!
2 months, 2 weeks ago

@WGierke commented on issue #26: Document the Alex |Andre |Gilberto |Shize algorithm

@lamby same here - is something preventing this issue from getting closed? :)
2 months, 2 weeks ago

@WGierke commented on issue #21: Document the Aidence algorithm

@lamby Is there still anything missing to fulfill the issue? Otherwise it could be closed I guess? :)
2 months, 2 weeks ago

@WGierke created a new issue: #143: Segmentation Evaluation

As discussed in #142, to facilitate evaluating predicted segmentation images, it would make sense to implement metrics that compare the predicted images with the ground truth, so approaches can be compared more efficiently. Some measures/distances that should at least be considered for this are sensitivity, specificity, the Dice coefficient and the Hausdorff distance. [The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4833122/), chapter "E. Evaluation Metrics and Ranking", describes them well.

## Expected Behavior
Find metrics and distances that measure the deviation of a predicted, segmented image from its ground truth. Implement them so that 2D as well as 3D images / data cubes can be passed to them.

## Possible Solution
[scipy.spatial.distance](https://docs.scipy.org/doc/scipy/reference/spatial.distance.html#module-scipy.spatial.distance) seems to already offer some nice stuff.

## Acceptance criteria
- [ ] explain why you chose to implement the selected metrics / distances
- [ ] implement the comparison methods for 2D and 3D numpy arrays
- [ ] write appropriate tests and show that the methods work
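As a concrete starting point, a minimal NumPy sketch of the Dice coefficient mentioned above (illustrative only, not code from the repository):

```python
import numpy as np


def dice_coefficient(predicted, truth):
    """Dice = 2 * |P ∩ T| / (|P| + |T|) for binary 2D or 3D masks."""
    predicted = np.asarray(predicted, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = predicted.sum() + truth.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(predicted, truth).sum() / total
```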
2 months, 2 weeks ago

@WGierke commented on issue #138: Lungs Segmentation | long term

Yep, we're definitely supposed to refine the lung segmentation. Have a look at e.g. patient 0001 and the nodule at 315.04852321, 365.87447257, -116.12078059 (slice ~91): ![screenshot from 2017-09-30 10-12-50](https://user-images.githubusercontent.com/6676439/31044067-217e7ba6-a5c8-11e7-8644-7db092746b47.png) Its state is labeled as solid, yet it isn't included in the current lung segmentation: ![screenshot from 2017-09-30 10-12-22](https://user-images.githubusercontent.com/6676439/31044079-4dd01ec6-a5c8-11e7-9b5b-714c1e7c0ec8.png) However, some of the slices of the nodule are included in the lung segmentation, but currently this would still falsify statistics such as the predicted volume of a nodule, which is to be implemented in #3 .
2 months, 2 weeks ago

@WGierke commented on PR #142: Implemented simple nodule segmentation algorithm

Nice stuff! Do you have some measurement for the accuracy of your segmentation algorithm on the LIDC dataset? That would make it easier to compare it against further approaches :)
2 months, 2 weeks ago

@WGierke commented on PR #122: Converted Team 'grt123' identification and classification algorithms

Instead of rebasing, I prefer to squash the commits like [so](https://gist.github.com/patik/b8a9dc5cd356f9f6f980). It fulfills the purpose as well, namely having only 1 commit in the end :)
2 months, 2 weeks ago

@WGierke commented on PR #133: #120 Lung Segmentation (Hounsfield based)

@isms I added a [test](https://github.com/concept-to-clinic/concept-to-clinic/pull/133/files#diff-e6c519f70a6bee9cab764b751df70348) that checks whether the annotations of the local LIDC images are inside the segmented lung masks. I'm using [pylidc](https://github.com/pylidc/pylidc) for that which makes it easier to query attributes of the LIDC images (like annotations). However, the library has to be installed after the other requirements, otherwise it throws some `ModuleNotFoundError`s.
2 months, 3 weeks ago

@WGierke commented on PR #133: #120 Lung Segmentation (Hounsfield based)

@isms Oh alright, sorry, now I got it. Yes, that definitely makes sense. I'll add that :)
2 months, 3 weeks ago

@WGierke opened a new pull request: #135: #28 Document the Owkin algorithm

I added an initial documentation of the Owkin algorithm. The algorithm scored 10th place in the DSB 2017. Unfortunately, there is no technical report or blog post about the approach so far. Thus, I had to derive the algorithm design from the source code and black-box models only, which resulted in a shorter documentation than for other algorithms.

## Reference to official issue
This pull request addresses #28

## Motivation and Context
At some point we need to implement an algorithm that can segment the nodules and detect whether they are malignant or not. It makes sense to first evaluate the top 10 solutions of the Data Science Bowl 2017 regarding how performant they are and how easy it would be to use them in our application.

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
2 months, 3 weeks ago

@WGierke commented on PR #133: #120 Lung Segmentation (Hounsfield based)

@isms Thanks for your input. If I understand you correctly, the issue with the current identification algorithm by De Wit will be that it only considers tissue that is inside this mask, so it's not possible at all to find nodules outside the lung. I could rewrite the identification algorithm to consider the whole image and not just the lung but what would we expect to happen in that case? Only detecting nodules in an area that corresponds to the lung would be nice. However, if the algorithm detects nodules in areas that do not belong to the lung, the reason could also be that the model simply wasn't trained to distinguish between lung and not lung on its own - so the model could also be "false".
2 months, 3 weeks ago

@WGierke commented on PR #101: #34 Select Lung Orientation

@lamby I filed #134 . You're right, it blocked this PR when the front-end of the lung orientation selection was still part of this PR. Since that moved to #125 (and the issue has been filed), I'd say that this PR shouldn't be blocked anymore?
2 months, 3 weeks ago

@WGierke created a new issue: #134: Global CSRF Protection

This issue is a follow-up of the discussion from [here](https://github.com/concept-to-clinic/concept-to-clinic/pull/101#issuecomment-330208795). [Cross-Site Request Forgery (CSRF) is an attack that forces an end user to execute unwanted actions on a web application in which they're currently authenticated.](https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)) As a defensive measure, one configures the server to send a random token to the client which has to be sent back when e.g. a POST request is performed. This ensures that the origin of the POST request really is our own application. To solve this issue, please send a CSRF token by default to the (Vue.js) client and pass it back when performing POST/PUT/... requests to the server, by default as well.

## Expected Behavior
The server should set a CSRF token by default with every response. When sending data to the server (e.g. to select the lung orientation as specified in #34), this token should be added automatically to the payload.

## Current Behavior
We're not using CSRF tokens.

## Possible Implementation
Letting Django respond to every request with a CSRF token seems pretty easy by e.g. adding `'config.middleware.ForceCsrfCookieMiddleware',` to `MIDDLEWARE` and adding

```
from django.utils.deprecation import MiddlewareMixin
from django.middleware.csrf import get_token


class ForceCsrfCookieMiddleware(MiddlewareMixin):
    """
    Forces the CSRF cookie to be set for each request
    """
    def process_request(self, request):
        get_token(request)
```

in `interface/config/middleware.py`. [Reference](https://stackoverflow.com/questions/32231607/django-forcing-csrf-token-on-all-responses)

Regarding #125 , letting Vue (with axios) respond with the correct CSRF token seems to be as easy as

```
import axios from 'axios';

axios.defaults.xsrfCookieName = 'csrftoken'
axios.defaults.xsrfHeaderName = 'X-CSRFToken'
```

[Reference](https://cbuelter.wordpress.com/2017/04/10/django-csrf-with-axios/)

## Acceptance criteria
- [ ] the token is added to every Django response by default
- [ ] the token is added to every payload sent to the server by default
- [ ] appropriate tests
2 months, 3 weeks ago

@WGierke commented on issue #120: Lung Segmentation

@reubano To me it looks like PR #132 is not directly connected to this issue, as the PR is more about implementing ways to distort input data to artificially generate more data to train on. The sentence that references this issue (`The aforementioned pipeline mainly consists of Hounsfield scaling, lungs segmentation (#120), CT re-orientation and a batch of trivial, yet resource consumptive spacial operations: zoom, rotation, shear, shift, flip and combinations of them. The convenient solution for the latter process is to build a generator which will yield a processed patches via affine transformations.`) sounds to me like the author simply wanted to describe the context in which the data generator fits. Telling from the code, I don't see a possibility to segment lungs - neither in `src/preprocess/lungs_segmentation.py` (which doesn't exist in the PR) nor anywhere else - or am I overlooking something, @vessemer ?
2 months, 3 weeks ago

@WGierke commented on issue #120: Lung Segmentation

@reubano Why has this been closed? Should I discard PR #133 now?
2 months, 3 weeks ago

@WGierke opened a new pull request: #133: #120 Lung Segmentation (Hounsfield based)

I refactored the current identification algorithm, which is based on the approach of [Julian de Wit](https://github.com/concept-to-clinic/concept-to-clinic/blob/master/docs/algorithm_de_wit.md) and uses Hounsfield values between -1000 and 400 to segment lungs. As a byproduct, the algorithm already saves images of pre-processed scans and the related lung masks. I refactored the structure so the segmentation method can be accessed more easily by other algorithms as well.

## Reference to official issue
This addresses #120 .

## Motivation and Context
Every nodule identification algorithm is supposed to somehow segment the lung first. Thus, it makes sense to have one separate function that is shared among the different identification algorithms.

## How Has This Been Tested?
I ran the [nodule identification test](https://github.com/concept-to-clinic/concept-to-clinic/blob/master/prediction/src/tests/test_endpoints.py#L76), which saved files of the segmented lungs as PNG files to `data/extracted/USER_ID/`. I constructed the following GIF based on them, which shows how well the lung got segmented.

![result1](https://user-images.githubusercontent.com/6676439/30719717-dea5964c-9f24-11e7-87d0-10111b2d72d0.gif)

I'm very open to suggestions for more "code-heavy" tests ;)

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
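For readers new to the term: the Hounsfield-window idea boils down to a threshold like the one below (a hedged sketch with a made-up helper name; a real implementation additionally cleans the mask up, e.g. via morphological operations):

```python
import numpy as np


def hounsfield_lung_mask(ct_volume, min_hu=-1000, max_hu=400):
    """Binary mask of voxels whose Hounsfield value lies in the lung-typical range."""
    return np.logical_and(ct_volume >= min_hu, ct_volume <= max_hu)
```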
2 months, 3 weeks ago

@WGierke commented on PR #101: #34 Select Lung Orientation

Letting Django respond to every request with a CSRF token seems pretty easy by e.g. adding `'config.middleware.ForceCsrfCookieMiddleware',` to `MIDDLEWARE` and adding

```
from django.utils.deprecation import MiddlewareMixin
from django.middleware.csrf import get_token


class ForceCsrfCookieMiddleware(MiddlewareMixin):
    """
    Forces the CSRF cookie to be set for each request
    """
    def process_request(self, request):
        get_token(request)
```

in `interface/config/middleware.py`. Regarding #125 , letting Vue (with axios) respond with the correct CSRF token seems to be as easy as

```
import axios from 'axios';

axios.defaults.xsrfCookieName = 'csrftoken'
axios.defaults.xsrfHeaderName = 'X-CSRFToken'
```

However, is CSRF really something that should block this PR about handling the lung orientation in the back-end? Shouldn't the CSRF token management be added in another PR?
2 months, 3 weeks ago

@WGierke commented on issue #126: SimpleITK: "Unable to determine ImageIO reader for ..." during local pytest

Re-cloning the repository "solved" it. What the heck. But thanks for your input @vessemer !
2 months, 3 weeks ago

@WGierke commented on PR #125: Port existing views and add VueRouter and Axios

Great stuff! If you add one more empty line to `serializers.py:38`, the Travis tests should pass :)

```
(venv3.5) ➜ concept-to-clinic_old git:(71855eb) flake8 interface && flake8 prediction
interface/backend/api/serializers.py:38:1: E302 expected 2 blank lines, found 1
```
2 months, 3 weeks ago

@WGierke commented on issue #126: SimpleITK: "Unable to determine ImageIO reader for ..." during local pytest

Thanks for trying to reproduce it @vessemer . The tests work at another computer of mine, but my main tower is still having troubles. I `chown`ed everything to my user, restarted `dockerd` and deleted all images and containers. No success so far :/ I'm currently cloning the repository to a new location so I can test it there again. If the tests still fail, I'll re-install docker.
2 months, 3 weeks ago

@WGierke commented on PR #118: #1 Identify Nodules: Julian de Wit Algorithm

It displays a summary of skipped tests :) Otherwise, every attempt by a test to print something is captured and only shown if the test fails.
2 months, 3 weeks ago

@WGierke opened a new pull request: #128: #127: Fix typo in issue template: local.py -> local.yml

I replaced `local.py` with `local.yml` in the issue template. This addresses #127 . ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
2 months, 3 weeks ago

@WGierke created a new issue: #126: SimpleITK: "Unable to determine ImageIO reader for ..." during local pytest

When running the tests locally, I'm getting 4 failed tests due to a RuntimeError in SimpleITK (on the current master). ## Expected Behavior The tests should pass. ## Current Behavior 4 tests fail due to SimpleITK: ``` (venv3.5) ➜ concept-to-clinic git:(master) ✗ sudo bash tests/test_docker.sh + docker-compose -f local.yml run prediction pytest ======================================== test session starts ======================================== platform linux -- Python 3.6.2, pytest-3.1.3, py-1.4.34, pluggy-0.4.0 rootdir: /app, inifile: collected 26 items src/tests/test_calculate_volume.py ... src/tests/test_classify_trained_model_predict.py .. src/tests/test_crop_dicom.py . src/tests/test_endpoints.py ........ src/tests/test_load_ct.py FFFF src/tests/test_load_dicom.py .... src/tests/test_preprocess_dicom.py .... ============================================= FAILURES ============================================== ________________________________________ test_load_metaimage ________________________________________ metaimage_path = '../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797' dicom_path = '../images/LIDC-IDRI-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.298806137288633453246975630178/1.3.6.1.4.1.14519.5.2.1.6279.6001.179049373636438705059720603192' def test_load_metaimage(metaimage_path, dicom_path): path = glob(os.path.join(metaimage_path, '*.mhd'))[0] > ct_array, meta = load_ct.load_metaimage(path) src/tests/test_load_ct.py:25: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ src/preprocess/load_ct.py:72: in load_metaimage meta = SimpleITK.ReadImage(path) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = ('../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797.mhd',) def ReadImage(*args): """ ReadImage(VectorString fileNames, itk::simple::PixelIDValueEnum outputPixelType) -> Image ReadImage(std::string const & filename, itk::simple::PixelIDValueEnum outputPixelType) -> Image ReadImage is a procedural interface to the ImageSeriesReader class which is convenient for most image reading tasks. Note that when reading a series of images that have meta-data associated with them (e.g. a DICOM series) the resulting image will have an empty meta-data dictionary. It is possible to programmatically add a meta-data dictionary to the compounded image by reading in one or more images from the series using the ImageFileReader class, analyzing the meta-dictionary associated with each of those images and creating one that is relevant for the compounded image. 
        See: itk::simple::ImageFileReader for reading a single file
        """
>   return _SimpleITK.ReadImage(*args)
E   RuntimeError: Exception thrown in SimpleITK ReadImage: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:82:
E   sitk::ERROR: Unable to determine ImageIO reader for "../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797.mhd"

/usr/local/lib/python3.6/site-packages/SimpleITK/SimpleITK.py:8332: RuntimeError
___________________________________________ test_load_ct ____________________________________________

metaimage_path = '../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797'
dicom_path = '../images/LIDC-IDRI-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.298806137288633453246975630178/1.3.6.1.4.1.14519.5.2.1.6279.6001.179049373636438705059720603192'

    def test_load_ct(metaimage_path, dicom_path):
        ct_array, meta = load_ct.load_ct(dicom_path)
        assert isinstance(ct_array, np.ndarray)
        assert ct_array.shape[2] == len(meta)
>       ct_array, meta = load_ct.load_ct(metaimage_path)

src/tests/test_load_ct.py:47:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/preprocess/load_ct.py:103: in load_ct
    meta = load_metaimage(mhd_pattern, voxel=voxel)
src/preprocess/load_ct.py:72: in load_metaimage
    meta = SimpleITK.ReadImage(path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = ('../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797.mhd',)

    def ReadImage(*args):
        """
        ReadImage(VectorString fileNames, itk::simple::PixelIDValueEnum outputPixelType) -> Image
        ReadImage(std::string const & filename, itk::simple::PixelIDValueEnum outputPixelType) -> Image

        ReadImage is a procedural interface to the ImageSeriesReader class which is convenient for most image reading tasks.

        Note that when reading a series of images that have meta-data associated with them (e.g. a DICOM series) the resulting image will have an empty meta-data dictionary. It is possible to programmatically add a meta-data dictionary to the compounded image by reading in one or more images from the series using the ImageFileReader class, analyzing the meta-dictionary associated with each of those images and creating one that is relevant for the compounded image.

        See: itk::simple::ImageFileReader for reading a single file
        """
>   return _SimpleITK.ReadImage(*args)
E   RuntimeError: Exception thrown in SimpleITK ReadImage: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:82:
E   sitk::ERROR: Unable to determine ImageIO reader for "../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797.mhd"

/usr/local/lib/python3.6/site-packages/SimpleITK/SimpleITK.py:8332: RuntimeError
__________________________________________ test_load_meta ___________________________________________

metaimage_path = '../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797'
dicom_path = '../images/LIDC-IDRI-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.298806137288633453246975630178/1.3.6.1.4.1.14519.5.2.1.6279.6001.179049373636438705059720603192'

    def test_load_meta(metaimage_path, dicom_path):
        meta = load_ct.load_ct(dicom_path, voxel=False)
        assert isinstance(meta, list)
        assert len(meta) > 0
        assert all([isinstance(_slice, dicom.dataset.FileDataset) for _slice in meta])
>       meta = load_ct.load_ct(metaimage_path, voxel=False)

src/tests/test_load_ct.py:64:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/preprocess/load_ct.py:103: in load_ct
    meta = load_metaimage(mhd_pattern, voxel=voxel)
src/preprocess/load_ct.py:72: in load_metaimage
    meta = SimpleITK.ReadImage(path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = ('../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797.mhd',)

    def ReadImage(*args):
        """
        ReadImage(VectorString fileNames, itk::simple::PixelIDValueEnum outputPixelType) -> Image
        ReadImage(std::string const & filename, itk::simple::PixelIDValueEnum outputPixelType) -> Image

        ReadImage is a procedural interface to the ImageSeriesReader class which is convenient for most image reading tasks.

        Note that when reading a series of images that have meta-data associated with them (e.g. a DICOM series) the resulting image will have an empty meta-data dictionary. It is possible to programmatically add a meta-data dictionary to the compounded image by reading in one or more images from the series using the ImageFileReader class, analyzing the meta-dictionary associated with each of those images and creating one that is relevant for the compounded image.

        See: itk::simple::ImageFileReader for reading a single file
        """
>   return _SimpleITK.ReadImage(*args)
E   RuntimeError: Exception thrown in SimpleITK ReadImage: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:82:
E   sitk::ERROR: Unable to determine ImageIO reader for "../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797.mhd"

/usr/local/lib/python3.6/site-packages/SimpleITK/SimpleITK.py:8332: RuntimeError
___________________________________________ test_metadata ___________________________________________

metaimage_path = '../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797'
dicom_path = '../images/LIDC-IDRI-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.298806137288633453246975630178/1.3.6.1.4.1.14519.5.2.1.6279.6001.179049373636438705059720603192'

    def test_metadata(metaimage_path, dicom_path):
        meta = load_ct.load_ct(dicom_path, voxel=False)
        meta = load_ct.MetaData(meta)
        zipped = zip(meta.spacing, (0.703125, 0.703125, 2.5))
        assert all([m_axis == o_axis for m_axis, o_axis in zipped])
>       meta = load_ct.load_ct(metaimage_path, voxel=False)

src/tests/test_load_ct.py:74:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/preprocess/load_ct.py:103: in load_ct
    meta = load_metaimage(mhd_pattern, voxel=voxel)
src/preprocess/load_ct.py:72: in load_metaimage
    meta = SimpleITK.ReadImage(path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = ('../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797.mhd',)

    def ReadImage(*args):
        """
        ReadImage(VectorString fileNames, itk::simple::PixelIDValueEnum outputPixelType) -> Image
        ReadImage(std::string const & filename, itk::simple::PixelIDValueEnum outputPixelType) -> Image

        ReadImage is a procedural interface to the ImageSeriesReader class which is convenient for most image reading tasks.

        Note that when reading a series of images that have meta-data associated with them (e.g. a DICOM series) the resulting image will have an empty meta-data dictionary. It is possible to programmatically add a meta-data dictionary to the compounded image by reading in one or more images from the series using the ImageFileReader class, analyzing the meta-dictionary associated with each of those images and creating one that is relevant for the compounded image.

        See: itk::simple::ImageFileReader for reading a single file
        """
>   return _SimpleITK.ReadImage(*args)
E   RuntimeError: Exception thrown in SimpleITK ReadImage: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:82:
E   sitk::ERROR: Unable to determine ImageIO reader for "../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797.mhd"

/usr/local/lib/python3.6/site-packages/SimpleITK/SimpleITK.py:8332: RuntimeError
=============================== 4 failed, 22 passed in 28.94 seconds ================================
```

## Steps to Reproduce

1. `git fetch --all`
2. `git reset --hard origin/master`
3. `sudo git clean -x -i -d`
4. `sudo docker-compose -f local.yml build && sudo bash tests/test_docker.sh`

The tests fail with the aforementioned error. I'm running Ubuntu 16.04 and docker-compose version 1.15.0, build e12f3b9. Is anyone else experiencing this issue? I'm really confused since the tests pass on Travis...
## Checklist before submitting

- [X] I have confirmed this using the officially supported Docker Compose setup using the `local.py` configuration and ensured that I built the containers again and they reflect the most recent version of the project at the `HEAD` commit on the `master` branch
- [X] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug
- [X] I provided a **minimal code snippet** or list of steps that reproduces the bug.
- [X] I filled out all the relevant sections of this template
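To narrow this down outside the test suite, the failing call can be isolated in a few lines. This is only a diagnostic sketch: the series path is the one from the traceback, and where the images are actually mounted is an assumption.

```python
import os
import SimpleITK as sitk

series_dir = '../images/LUNA-0001/1.3.6.1.4.1.14519.5.2.1.6279.6001.102133688497886810253331438797'
mhd_path = os.path.join(series_dir, os.path.basename(series_dir) + '.mhd')

# A missing or truncated .mhd/.raw pair is one common cause of the
# "Unable to determine ImageIO reader" error.
print(os.path.exists(mhd_path), os.path.exists(mhd_path.replace('.mhd', '.raw')))

image = sitk.ReadImage(mhd_path)             # raises the sitk::ERROR from the log if no reader matches
print(image.GetSize(), image.GetSpacing())   # sanity check when the read succeeds
```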
2 months, 3 weeks ago
2 months, 4 weeks ago

@WGierke commented on issue #123: AttributeError: module 'src.algorithms.classify' has no attribute 'trained_model'

Thanks for filing the issue @vlavorini . I think I resolved it by adding the missing content to `__init__.py`, which has been added for the identify and segment algorithm folders but not for the classify one. Can you pull my branch `123_attribute_error` and verify whether it works for you there as well?
2 months, 4 weeks ago

@WGierke opened a new pull request: #124: #123 Fix AttributeError: module 'src.algorithms.classify' has no attribute

I added the missing content of `__init__.py`. ## Reference to official issue This addresses #123 . ## How Has This Been Tested? I rebuilt and fired up `docker-compose` again, which showed me the correct API landing page. ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
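For context, a minimal sketch of what such an `__init__.py` typically re-exports so that `classify.trained_model` resolves; it mirrors the identify/segment packages, and the exact contents live in the PR diff:

```python
# prediction/src/algorithms/classify/__init__.py -- sketch
from . import trained_model  # noqa: F401  makes src.algorithms.classify.trained_model importable
```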
2 months, 4 weeks ago

@WGierke commented on PR #101: #34 Select Lung Orientation

@antonow This sounds great since I'm a giant novice when it comes to Vue :) I removed my front-end code again and refactored the back-end code. Travis is still causing some trouble although all tests pass locally ...
2 months, 4 weeks ago

@WGierke opened a new pull request: #118: #1 Identify Nodules: Julian de Wit Algorithm

I adapted the source code of [Julian de Wit](https://github.com/juliandewit/kaggle_ndsb2017) whose solution scored 2nd place during the DSB 2017. I decided to use it as it was already written for Python 3, has a manageable code base and immediately ran using my CPU only. The algorithm is documented [here](https://github.com/concept-to-clinic/concept-to-clinic/blob/master/docs/algorithm_de_wit.md). It uses a Conv3D network to detect nodules. The network returns a set of nodules characterized by their position and the probability that the marked tissue really is a nodule. This output is wrapped into the placeholder function that's already implemented in [prediction/src/algorithms/identify/trained_model/predict](https://github.com/concept-to-clinic/concept-to-clinic/blob/master/prediction/src/algorithms/identify/trained_model.py#L11-L38). ## Description I took most of the code from steps 2 (preprocessing) and 4 (predicting) of De Wit's repository as well as the models. The models, which were trained on different data sets with different data transformations such as magnification, are saved using Git LFS. So far, I kept most of the commented code so it's easier to compare this ported version with his code, especially since his implementation is not very well documented, which makes it difficult to understand why certain code regions are important. ## Reference to official issue This addresses #1 ## Motivation and Context We need a way to automatically find nodules so we can then determine whether they are malignant or not. I believe that it might be even better to have multiple algorithms that detect nodules and their malignancy since this is supposed to be the core feature of this whole project. ## How Has This Been Tested? It takes approximately 8 minutes to detect nodules of a full DICOM scan using CPU only with my i7. Thus, I didn't want to write a test (as it probably would take days for that to finish on Travis) so I can only provide a screenshot of the output of the API endpoint: ![screenshot from 2017-09-18 00-41-46](https://user-images.githubusercontent.com/6676439/30525629-76bfcaa6-9c0a-11e7-850e-8b078fb57637.png) I'd highly appreciate any feedback on how to further simplify and beautify the code. ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
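For readers skimming the PR, a rough sketch of what such a prediction wrapper can look like. The helper names and result fields below are purely illustrative, not the project's or de Wit's actual interface:

```python
def predict(dicom_path):
    """Hypothetical wrapper around the de Wit Conv3D nodule detector."""
    voxels = preprocess_scan(dicom_path)          # assumed helper: resample + normalize the CT volume
    detections = conv3d_detect_nodules(voxels)    # assumed helper: returns (x, y, z, probability) tuples
    return [
        {'x': int(x), 'y': int(y), 'z': int(z), 'p_nodule': float(p)}
        for x, y, z, p in detections
    ]
```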
2 months, 4 weeks ago

@WGierke commented on PR #101: #34 Select Lung Orientation

Thanks for the feedback @lamby! I already started to implement it. However, I'm a bit blocked by the fact that currently the Vue.js front-end has neither a router nor any pages other than the "Open Image" page, and jQuery doesn't seem to work (when trying to run `$('select[name=lung_orientation]').val()` I'm getting the error `'$' is not defined`, and when trying it with `window.$...` I'm getting `window.$ is not a function`). @isms do you think you could have a look into that? I'm a complete novice when it comes to Vue :/ Otherwise I'd have to wait for #110 to be completely implemented.
2 months, 4 weeks ago

@WGierke commented on issue #116: DICOM test images raising error with pydicom

Hey @rahit, could you please provide a complete piece of code that throws the error? That would make it a bit easier to reproduce, I think :)
3 months ago

@WGierke opened a new pull request: #115: #23 Document the Therapixel algorithm

I added an initial documentation of the Therapixel algorithm. The algorithm scored 5th place during the DSB 2017. Unfortunately, there is no technical report or blog post about the approach so far. Thus, I had to derive the algorithm design from the source code and black-box models only, which resulted in shorter documentation than for other algorithms. ## Reference to official issue This pull request addresses #23 ## Motivation and Context At some point we need to implement an algorithm that can segment the nodules and detect whether they are malignant or not. It makes sense to first evaluate the top 10 solutions for the Data Science Bowl 2017 regarding how performant they are and how easy it would be to use them in our application. ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months ago

@WGierke commented on PR #94: #30 Add JSON case reports

@lamby Thanks again for suggesting some really nice and clean shortcuts. I implemented them and squashed the commits.
3 months ago

@WGierke created a new issue: #114: Always pass CSRF tokens when posting with jQuery

As stated [here](https://github.com/concept-to-clinic/concept-to-clinic/pull/101#discussion_r138825291), it would be nice to automatically add the current page's CSRF token to a POST request made with jQuery.

## Expected Behavior
When posting to the API, one is not required to manually parse the CSRF token from the current page and send it to the backend every time.

## Current Behavior
One has to grab the token from the page every time one wants to post something:
```
update: function(nodule) {
    this.$http.post(nodule.url + "update", {
        csrfmiddlewaretoken: $("input[name=csrfmiddlewaretoken]").val(),
        lung_orientation: $("select[name=lung_orientation]").val()
    }).then(function(response) {
        console.log(response);
    }
```
We don't want to always manually add `csrfmiddlewaretoken: $("input[name=csrfmiddlewaretoken]").val()`.

## Possible Solution
Use one of jQuery's global AJAX handlers like [.ajaxSend()](https://api.jquery.com/ajaxSend/).

## Checklist before submitting
- [X] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug
- [X] I provided a **minimal code snippet** or list of steps that reproduces the bug.
- [X] I filled out all the relevant sections of this template
3 months ago

@WGierke commented on issue #113: Fix Vue container - Error: Cannot find module 'chalk'

@isms I deleted all node_modules folders, removed all containers and built them from scratch. I'm still running into this issue. Can you reproduce it?
3 months ago

@WGierke created a new issue: #113: Fix Vue container - Error: Cannot find module 'chalk'

Currently, when attempting to build the project using `docker-compose -f local.yml build && docker-compose -f local.yml up`, the Vue container dies while trying to set up the npm dependencies.

## Expected Behavior
Building and firing up the containers should be enough to let the Vue container serve at http://localhost:8080.

## Current Behavior
Firing up the Vue container fails with `Error: Cannot find module 'chalk'`:
```
➜  concept-to-clinic git:(master) ✗ sudo docker-compose -f local.yml build && sudo docker-compose -f local.yml up
[...]
vue_1       | npm info it worked if it ends with ok
vue_1       | npm info using npm@5.3.0
vue_1       | npm info using node@v8.5.0
vue_1       | npm info lifecycle c2c-vue@1.0.0~predev: c2c-vue@1.0.0
vue_1       | npm info lifecycle c2c-vue@1.0.0~dev: c2c-vue@1.0.0
vue_1       |
vue_1       | > c2c-vue@1.0.0 dev /app/frontend
vue_1       | > node build/dev-server.js
vue_1       |
vue_1       | module.js:529
vue_1       |     throw err;
vue_1       |     ^
vue_1       |
vue_1       | Error: Cannot find module 'chalk'
vue_1       |     at Function.Module._resolveFilename (module.js:527:15)
vue_1       |     at Function.Module._load (module.js:476:23)
vue_1       |     at Module.require (module.js:568:17)
vue_1       |     at require (internal/module.js:11:18)
vue_1       |     at Object.<anonymous> (/app/frontend/build/check-versions.js:1:75)
vue_1       |     at Module._compile (module.js:624:30)
vue_1       |     at Object.Module._extensions..js (module.js:635:10)
vue_1       |     at Module.load (module.js:545:32)
vue_1       |     at tryModuleLoad (module.js:508:12)
vue_1       |     at Function.Module._load (module.js:500:3)
vue_1       | npm info lifecycle c2c-vue@1.0.0~dev: Failed to exec dev script
vue_1       | npm ERR! code ELIFECYCLE
vue_1       | npm ERR! errno 1
vue_1       | npm ERR! c2c-vue@1.0.0 dev: `node build/dev-server.js`
vue_1       | npm ERR! Exit status 1
vue_1       | npm ERR!
vue_1       | npm ERR! Failed at the c2c-vue@1.0.0 dev script.
vue_1       | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
vue_1       | npm WARN Local package.json exists, but node_modules missing, did you mean to install?
vue_1       |
vue_1       | npm ERR! A complete log of this run can be found in:
vue_1       | npm ERR!     /root/.npm/_logs/2017-09-13T16_15_20_703Z-debug.log
```

## Possible Solution
When I manually cd into frontend, install the dependencies using `npm install` and run `docker-compose -f local.yml up` again, it works.

## Steps to Reproduce
1. Check out the current master: `git reset --hard origin/master`
2. Remove all files and folders that are not explicitly tracked by Git (such as npm build files or modules): `sudo git clean -x -i -d`
3. Build and start the docker containers: `sudo docker-compose -f local.yml build && sudo docker-compose -f local.yml up`
4. You should already see that the Vue container throws an error and exits. When visiting http://localhost:8080/, you'll get a `Connection refused`.
5. Thus, we need to install the npm dependencies manually: `cd interface/frontend && npm install`
6. Run `sudo docker-compose -f local.yml up` and visit localhost:8080 again - it should work now.

I'd appreciate any feedback - maybe I made a mistake while setting it up?

## Checklist before submitting
- [X] I have confirmed this using the officially supported Docker Compose setup using the `local.py` configuration and ensured that I built the containers again and they reflect the most recent version of the project at the `HEAD` commit on the `master` branch
- [X] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug
- [X] I provided a **minimal code snippet** or list of steps that reproduces the bug.
- [X] I provided **screenshots** where appropriate
- [X] I filled out all the relevant sections of this template
3 months ago

@WGierke opened a new pull request: #112: #111 Replace JsonResponses by Responses

I replaced JsonResponses with Responses, which follows the DRF idioms. ## Reference to official issue This addresses #111 . ## How Has This Been Tested? I added two tests so that all functions that previously used JsonResponses are checked to still return valid JSON. ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
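For illustration, the change boils down to returning DRF `Response` objects from the API views; the view name and payload below are made up, not the project's actual endpoints:

```python
from rest_framework.decorators import api_view
from rest_framework.response import Response


@api_view(['GET'])
def candidate_count(request):
    # Response lets DRF pick the renderer (JSON, browsable API, ...) via content negotiation,
    # instead of hard-coding JSON as JsonResponse does.
    return Response({'candidates': 3})
```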
3 months ago

@WGierke commented on PR #94: #30 Add JSON case reports

Thanks @isms I wasn't aware of that. Now, ModelSerializers are used for the JSON serialization. However, so far I haven't found a way to render the prettified JSON nicely using a default HTML renderer. Currently, I have to use a custom renderer that wraps the JSON in `<pre>` tags. In case that's still not correct, could you please point me to the idiomatic way to solve this?
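A minimal sketch of the `<pre>`-wrapping renderer mentioned above; the class name and how it gets registered on the view are assumptions, not the code in the PR:

```python
import json

from rest_framework import renderers


class PreWrappedJSONRenderer(renderers.BaseRenderer):
    """Render pretty-printed JSON wrapped in <pre> so a plain HTML page shows it verbatim."""
    media_type = 'text/html'
    format = 'html'

    def render(self, data, accepted_media_type=None, renderer_context=None):
        return '<pre>' + json.dumps(data, indent=4, sort_keys=True) + '</pre>'
```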
3 months ago

@WGierke commented on issue #106: Fix and refactor npm build process

@antonow Thanks for the clarification! Sorry, I did not notice that conversation. So it might make sense to file a new issue about porting the front-end we already had with Django templates to Vue (@isms @lamby @reubano )? Otherwise, the implementation of the remaining front-end issues #30 #33 #34 might become problematic
3 months ago

@WGierke commented on PR #108: #20 Document the Julian de Wit algorithm

@reubano Alright, I squashed everything including the work of @timothyman into one commit.
3 months ago

@WGierke commented on PR #108: #20 Document the Julian de Wit algorithm

@timothyman Thanks a lot :) It looks way nicer with such detailed dependency versions! Oh I see :D I also didn't manage to run an algorithm so far because my internet connection always aborts before I finish downloading the whole training sets ...
3 months ago

@WGierke commented on PR #108: #20 Document the Julian de Wit algorithm

@timothyman Do you want to add any insights you talked about in PR #99 ? We'd highly appreciate it :)
3 months ago

@WGierke opened a new pull request: #108: #20 Document the Julian de Wit algorithm

I added an initial documentation of the Julian de Wit algorithm. The algorithm output was ensembled with the results of the [Daniel Hammack algorithm](https://github.com/concept-to-clinic/concept-to-clinic/blob/master/docs/algorithm_hammack.md). ## Reference to official issue This pull request addresses #20 ## Motivation and Context At some point we need to implement an algorithm that can segment the nodules and detect whether they are malignant or not. It makes sense to first evaluate the top 10 solutions for the Data Science Bowl 2017 regarding how performant they are and how easy it would be to use them in our application. ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months ago

@WGierke created a new issue: #106: Fix and refactor npm build process

Currently, when attempting to build the new npm front-end, I'm running into a few issues. Firstly, it's not enough anymore to run `docker-compose -f local.yml build && docker-compose -f local.yml up`. Now, one also needs to run `npm install` and `npm run build` manually in the `frontend` directory. Without these commands, the front-end cannot be served. Unfortunately, the latter command throws an error while building (which is easy to fix). However, even after fixing it I can only see the navigation but no content.

## Expected Behavior
The documentation says that building the docker containers and starting them should be enough to run the application locally. One would also expect to see the content we have implemented until now.

## Current Behavior
One also has to run some npm commands manually, which throws an error. Even after fixing that, only the navbar can be seen but no content like we had before with the Django templates.

## Possible Solution
- move the npm commands into the Dockerfiles (might be cleaner than adding them to the documentation)
- adapt the image path that's throwing an error during `npm run build`

## Steps to Reproduce
Check out the current master: `git reset --hard origin/master`
Remove all files and folders that are not explicitly tracked by Git (such as npm build files or modules): `sudo git clean -x -i -d`
Build and start the docker containers: `sudo docker-compose -f local.yml build && sudo docker-compose -f local.yml up`
When visiting http://localhost:8000/, you'll get a `TemplateNotFoundError`. ![screenshot from 2017-09-10 18-54-20](https://user-images.githubusercontent.com/6676439/30251086-7ee554e6-9659-11e7-8000-e52a6df8b318.png)
Thus, we need to install the npm dependencies manually and compile the application: `cd interface/frontend && npm install && npm run build`
You should get the error `Module not found: Error: Can't resolve '../assets/c-logo-light.svg' in '/app/frontend/src/components'`. ![screenshot from 2017-09-10 18-56-17](https://user-images.githubusercontent.com/6676439/30251102-c5117e72-9659-11e7-8a7c-5e85cceb6234.png)
This can be solved by adapting the image path (only a hot fix):
```
--- a/interface/frontend/src/components/AppNav.vue
+++ b/interface/frontend/src/components/AppNav.vue
@@ -1,6 +1,6 @@
 <template>
   <nav class="navbar navbar-toggleable-md navbar-inverse bg-inverse fixed-top">
-    <img src="../assets/c-logo-light.svg"/>
+    <img src="../assets/images/c-logo-light.svg"/>
     <a class="navbar-brand" href="#">Concept To Clinic</a>
```
`npm run build` should now succeed. However, when visiting http://localhost:8000/, I can only see the navbar without any content when switching tabs: ![screenshot from 2017-09-10 18-48-10](https://user-images.githubusercontent.com/6676439/30251040-b4b7fffc-9658-11e7-9074-89e216896ef6.png)

## Checklist before submitting
- [X] I have confirmed this using the officially supported Docker Compose setup using the `local.py` configuration and ensured that I built the containers again and they reflect the most recent version of the project at the `HEAD` commit on the `master` branch
- [X] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug
- [X] I provided a **minimal code snippet** or list of steps that reproduces the bug.
- [X] I provided **screenshots** where appropriate
- [X] I filled out all the relevant sections of this template
3 months ago

@WGierke created a new issue: #105: Fix failing tests

Currently, two tests on the master branch are failing; this was seemingly introduced by PR #97 . It would be nice if the tests passed again.

## Expected Behavior
All tests pass.

## Current Behavior
2 tests don't pass:
```
ERROR: test_landing (backend.static.tests.SmokeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/app/backend/static/tests.py", line 7, in test_landing
    url = reverse('static:open-image')
  File "/usr/local/lib/python2.7/dist-packages/django/urls/base.py", line 91, in reverse
    return force_text(iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)))
  File "/usr/local/lib/python2.7/dist-packages/django/urls/resolvers.py", line 497, in _reverse_with_prefix
    raise NoReverseMatch(msg)
NoReverseMatch: Reverse for 'open-image' not found. 'open-image' is not a valid view function or pattern name.

======================================================================
ERROR: test_get_create_image_series (backend.images.tests.SmokeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/app/backend/images/tests.py", line 17, in test_get_create_image_series
    image_series, created = ImageSeries.get_or_create(uri)
TypeError: unbound method get_or_create() must be called with ImageSeries instance as first argument (got str instance instead)
```

## Steps to Reproduce
Check the [Travis build of the merge commit](https://travis-ci.org/concept-to-clinic/concept-to-clinic/builds/273792503) or
1. `git reset origin/master`
2. `bash tests/test_docker.sh`

## Possible Implementation
For the first test, it might make sense to replace the 'open-image' view name with 'home', since the 'open-image' view name has been replaced with 'home' [on the current master](https://github.com/concept-to-clinic/concept-to-clinic/blob/master/interface/backend/static/urls.py#L5).

## Checklist before submitting
- [X] I have confirmed this using the officially supported Docker Compose setup using the `local.py` configuration and ensured that I built the containers again and they reflect the most recent version of the project at the `HEAD` commit on the `master` branch
- [X] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug
- [X] I provided a **minimal code snippet** or list of steps that reproduces the bug.
- [X] I provided **screenshots** where appropriate
- [X] I filled out all the relevant sections of this template
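A sketch of the suggested fix for the first failure, assuming the view really is registered as `home` under the `static` namespace:

```python
# interface/backend/static/tests.py -- sketch of the proposed change
from django.test import TestCase
from django.urls import reverse


class SmokeTest(TestCase):
    def test_landing(self):
        url = reverse('static:home')  # 'open-image' was renamed to 'home' on master
        response = self.client.get(url)
        self.assertEqual(response.status_code, 200)
```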
3 months ago

@WGierke opened a new pull request: #104: #103 Adapt Travis badge

This should update the Travis badge in the readme to reflect this repository's current build state. ## Reference to official issue This addresses #103 ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months ago

@WGierke created a new issue: #103: Adapt Travis badge in Readme to this repo

Currently, the Travis badge that displays the build state of the latest commit on the master branch belongs to the repository concept-to-clinic/point-platform. It would make more sense for the badge to display the Travis state of this repository instead. ![screenshot from 2017-09-10 16-16-13](https://user-images.githubusercontent.com/6676439/30249781-7417116e-9643-11e7-92f4-c0b5357e2860.png) ## Expected Behavior The badge for this repository should be displayed: ![](https://travis-ci.org/concept-to-clinic/concept-to-clinic.svg?branch=master) ![screenshot from 2017-09-10 16-16-24](https://user-images.githubusercontent.com/6676439/30249779-6cf21c30-9643-11e7-980b-31472f2c2f7d.png) ## Possible Solution Update the badge in the README. ## Context (Environment) The badge should be updated appropriately so we can e.g. see that currently the tests on the master branch are failing. ## Checklist before submitting - [X] I have confirmed this using the officially supported Docker Compose setup using the `local.py` configuration and ensured that I built the containers again and they reflect the most recent version of the project at the `HEAD` commit on the `master` branch - [X] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug - [X] I provided a **minimal code snippet** or list of steps that reproduces the bug. - [X] I provided **screenshots** where appropriate - [X] I filled out all the relevant sections of this template
3 months ago

@WGierke commented on PR #97: Set up base webpack config and install npm dependencies in Dockerfile

Hey @antonow thanks a lot for your contribution! Do you happen to have some time to fix the tests that are currently still failing? Otherwise it might make sense to open an issue for that so we can keep the tests passing on master :)
3 months ago

@WGierke opened a new pull request: #101: #34 Select Lung Orientation

This PR adds the ability to select whether a nodule belongs to the right or to the left lung. ## Description I added an accordion view of all nodules with a drop-down menu to select in which part of the lung a nodule is located. ## Reference to official issue This addresses #34 . ## Motivation and Context This should be a simple entry point for adding information to a nodule. ## How Has This Been Tested? I wrote a [test](https://github.com/concept-to-clinic/concept-to-clinic/compare/master...WGierke:34_set_lung_orientation?expand=1#diff-4a8b2a0cc0727d9b77d206172538ca85R36) that checks whether POSTing to the new back-end endpoint results in an appropriately updated nodule entity. Also, I tested it manually. ## Screenshots: ![screenshot from 2017-09-09 23-05-11](https://user-images.githubusercontent.com/6676439/30243935-391e8522-95b4-11e7-8275-019cd18d76ef.png) ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 1 week ago

@WGierke opened a new pull request: #99: #19 Document Daniel Hammack algorithm

I added an initial documentation of the Daniel Hammack algorithm. The algorithm output was ensembled with the results of the [Julian de Wit algorithm](https://github.com/concept-to-clinic/concept-to-clinic/issues/20) whose documentation should follow soon. Please feel free to add commits and extend it! ## Reference to official issue This pull request addresses #19 . ## Motivation and Context At some point we need to implement an algorithm that can segment the nodules and detect whether they are malignant or not. It makes sense to first evaluate the top 10 solutions for the Data Science Bowl 2017 regarding how performant they are and how easy it would be to use them in our application. ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 1 week ago

@WGierke commented on PR #85: #31 Show directory tree of possible images

@lamby conflicts are resolved again. I'm not fooling you - after merging other pull requests into the master this branch just gets conflicts again :)
3 months, 1 week ago

@WGierke commented on PR #97: Set up base webpack config and install npm dependencies in Dockerfile

If you remove the space in front of your 'X' in the PR description, the CLA test should pass :) Could you adapt the currently failing test, which checks whether the 'Open Image' page exists, to the new Vue.js frontend? I'm still not quite sure why the second test fails ...
3 months, 1 week ago

@WGierke commented on issue #19: Document the Daniel Hammack algorithm

Unfortunately, the author hasn't responded so far. @reubano you already wrote that the code runs on Windows 64-bit with Python 3.5. Where did you get this information? I'm a bit confused because I found a lot of `print` statements in the code that don't have parentheses, which would throw errors when run with Python 3.
3 months, 1 week ago

@WGierke commented on PR #82: #18 Document the grt123 algorithm

Hey @reubano unfortunately I had no success so far, sorry. Nevertheless, I updated the relevant sections based on your remarks.
3 months, 1 week ago

@WGierke opened a new pull request: #94: #30 Add JSON case reports

This PR is based on and blocked by #93 (because this already adds a proper creation of candidates). To see the raw changes this PR wants to add, please see [here](https://github.com/WGierke/concept-to-clinic/compare/36_accept_reject_candidates...WGierke:30_case_json_report). Now, a JSON report can be generated for a case that includes the patient id, candidates, nodules, etc. ## Description I fixed the creation of nodules, added the JSON report and wrote a test for that. ## Reference to official issue This addresses #30. ## Motivation and Context This should summarize the data of a case and help debugging. ## How Has This Been Tested? I wrote a test which adds a case with image series, candidates and nodules and which tests for a proper report. ## Screenshots (if appropriate): ![screenshot from 2017-09-06 15-43-02](https://user-images.githubusercontent.com/6676439/30115903-5b97ae02-931c-11e7-82ca-f5d92d41a175.png) ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 1 week ago

@WGierke commented on PR #92: #37 Show "X candidates found"

You're right, they've been a piece of cake - and resolved :)
3 months, 1 week ago

@WGierke commented on PR #85: #31 Show directory tree of possible images

@lamby Thanks, I'll stick to that in the future. The conflicts should now be resolved.
3 months, 1 week ago

@WGierke opened a new pull request: #93: #36 Accept and Reject Candidates

This pull request is based on #88 and also blocked by it. The buttons that have been added now also get functionality (which still needs to be implemented in the back-end). ## Description You can find the "raw" changes between PR #88 and this PR [here](https://github.com/WGierke/concept-to-clinic/compare/35_list_candidates...WGierke:36_accept_reject_candidates?expand=1), which might be easier to review. I added two methods that are invoked when the Dismiss / Mark Concerning buttons are clicked. The functions receive the ID of the candidate as soon as the user triggers the buttons. The entire page's results are not aggregated. ## Reference to official issue This addresses #36 . ## Motivation and Context The end-user should be able to specify whether candidates might be concerning or whether they can simply be dismissed. ## How Has This Been Tested? I fixed the creation of candidates (before, there was an error stating that the serializer can't handle nested `create` calls) and created three candidates. I clicked the Dismiss and Mark buttons of candidates 1 and 3 and was able to verify the correct API responses in the console: ![screenshot from 2017-09-05 23-25-23](https://user-images.githubusercontent.com/6676439/30084158-88451e48-9291-11e7-91b2-65228cbd3064.png) ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 1 week ago

@WGierke opened a new pull request: #92: #37 Show "X candidates found"

This pull request is based on #88 . The fetched candidates are now counted, pluralized and displayed. ## Description You can find the "raw" changes between PR #88 and this PR [here](https://github.com/WGierke/concept-to-clinic/compare/35_list_candidates...WGierke:37_candidates_found?expand=1). Filters have been removed in Vue.js 2.0, so I had to import [vue2-filters](https://github.com/freearhey/vue2-filters), which includes the `capitalize` filter. This is IMO a cleaner solution than writing `capitalize` and further filters ourselves. ## Reference to official issue This addresses #37 ## Motivation and Context The end-user should get a quick overview of the scale of the case. ## How Has This Been Tested? I used the same stubbed response as in PR #88 . The resulting screen can be seen here: ![screenshot from 2017-09-05 20-16-30](https://user-images.githubusercontent.com/6676439/30076319-c2244652-9278-11e7-9099-944d535bfebc.png) ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 1 week ago

@WGierke commented on issue #33: Display metadata about an image prior to selection

Do you know what the "mA" (probably milliampere?) information means and how to access it using `pydicom`?
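In case it helps, a small sketch of how the value can probably be read with `pydicom`; whether the tags are populated depends on the scanner's export:

```python
import dicom  # pydicom's pre-1.0 import name, as used in the tests

ds = dicom.read_file('path/to/slice.dcm')
# (0018, 1151) X-Ray Tube Current is reported in mA, so it is presumably the "mA" field
print(getattr(ds, 'XRayTubeCurrent', None))
# (0018, 1152) Exposure (in mAs) is a related value some viewers display instead
print(getattr(ds, 'Exposure', None))
```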
3 months, 1 week ago

@WGierke commented on PR #85: #31 Show directory tree of possible images

@lamby Thanks, I squashed the commits. Hopefully, merging should work now :)
3 months, 1 week ago

@WGierke commented on PR #88: #35 Show list of candidates nodules

@lamby Thanks again for your feedback! I implemented it and squashed the commits.
3 months, 1 week ago

@WGierke commented on PR #85: #31 Show directory tree of possible images

@lamby That sounds nice :) Are there any more remarks?
3 months, 1 week ago

@WGierke opened a new pull request: #88: #35 Show list of candidates nodules

This should add an "accordion" view that shows the candidate nodules for a selected image. The candidate fields `lidc_max_sensitiv, convnet_vgg, convnett_vgg_lidc, Slice` still need to be implemented in the back-end.

## Description
I added the front-end Vue.js implementation to display candidate nodules of an image based on the current API.

## Reference to official issue
This addresses #35 .

## Motivation and Context
The doctor should be able to quickly get an overview of candidate nodules including their probability of being cancerous.

## How Has This Been Tested?
I created two dummy candidates by letting the API return
```
[{
    "centroid": {"x": 54, "y": 78, "z": 23, "series": 0},
    "probability_concerning": 0.94,
    "case": 1
},
{
    "centroid": {"x": 1, "y": 2, "z": 3, "series": 1},
    "probability_concerning": 0.94,
    "case": 1
}]
```
The outcome was
![screenshot from 2017-09-03 01-00-39](https://user-images.githubusercontent.com/6676439/30002240-8cdeac24-90a4-11e7-99a8-760e074bd7c4.png)
![screenshot from 2017-09-03 01-00-55](https://user-images.githubusercontent.com/6676439/30002242-92e659d2-90a4-11e7-95ec-d5296cb57a1e.png)

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 1 week ago

@WGierke commented on PR #85: #31 Show directory tree of possible images

Thanks @lamby , I added appropriate url tags and filed a new issue. What do you think of rahit's question - are we expecting that the API returns a directory structure deeper than two levels? That would most likely require us to render the file tree recursively. To resolve #31, I have so far assumed that the tree always looks like the one in the picture of #31, i.e. it only has two levels.
3 months, 1 week ago

@WGierke created a new issue: #86: Global JS Error Handler

From [PR #85 ](https://github.com/concept-to-clinic/concept-to-clinic/pull/85#pullrequestreview-60289138): build a global JS error handler that displays a notification on any passed error. ## Expected Behavior There's one global error handler that is used as an error callback by JS functions and that displays the errors as "some kind of drop-down". ## Current Behavior Every JS function currently has to implement its own error handling. ## Detailed Description One example is `fetchData()` in `open_image.html`. Adding another method would mean implementing an error handler that does the same thing as the one in `fetchData()`. According to the DRY principle, this logic should be moved into one component that is globally accessible. ## Possible Implementation For displaying the errors, something like [bootstrap-notify](http://bootstrap-notify.remabledesigns.com/) could be used. ## Acceptance criteria - [ ] the error handler can be invoked by scripts on different pages - [ ] it displays the error (and optimally vanishes again without any user interaction)
3 months, 1 week ago

@WGierke commented on PR #82: #18 Document the grt123 algorithm

Thanks for your feedback! I'm not familiar with pyTorch at all, but I would never have thought that it's so complicated to port it to Python 3 and enable it to run on a CPU... In that case I should clearly update the relevant parts of the algorithm documentation accordingly. Did you happen to check how long it takes the model to predict the cancer probability for one image?
3 months, 2 weeks ago

@WGierke opened a new pull request: #85: #31 Show directory tree of possible images

A file tree is rendered using Vue.js that should show the images the user can select from the local disk.

## Description
I based this PR on rahit's work in #81 that adds a new endpoint to fetch images from. The changes render a file tree at the index page that visualizes the response from that new endpoint.

## Reference to official issue
This should address #31 .

## Motivation and Context
It already adds some functionality in the front-end to render the structure of the available images. This is already done in Vue.js instead of using the Django template language.

## How Has This Been Tested?
`ImageAvailableApiView.get()` is not implemented yet. So currently, the tree only has a lonely root:
![screenshot from 2017-09-02 00-41-58](https://user-images.githubusercontent.com/6676439/29990220-d59c15b4-8f78-11e7-91d2-3a04bdc786e3.png)
However, assuming the view would return the following response
```
JsonResponse({'directories': [{'name': 'test_cases, no modules',
                               'children': ['test_case_001', 'test_case_002', 'test_case_003']},
                              {'name': 'test_cases, benign',
                               'children': ['test_case_004', 'test_case_005', 'test_case_006']},
                              {'name': 'test_cases, cancerous',
                               'children': ['test_case_007', 'test_case_008', 'test_case_009']}]})
```
the picture referenced in issue #31 would be rendered:
![screenshot from 2017-09-02 00-40-31](https://user-images.githubusercontent.com/6676439/29990226-ec487820-8f78-11e7-8931-9d52937c95de.png)

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 2 weeks ago

@WGierke opened a new pull request: #84: #32 Breadcrumb Navigation

I implemented a basic DRY breadcrumb navigation. It's currently blocked by #73 as it's already using the basic bootstrap template. Since I don't have that much experience in front-end programming and Django, I'd be very happy about any feedback or commits.

## Description
I moved the navigation to [frontend/shared/tabs.html](https://github.com/concept-to-clinic/concept-to-clinic/compare/master...WGierke:32_breadcrumb_navigation?expand=1#diff-e6833f2ee1dbb5169558392ed92ee414). Every page includes that shared template by passing the name of the current page as `active_tab`:
```
{% block tabs %}
    {% include "shared/tabs.html" with active_tab='detect_and_select' %}
{% endblock %}
```
The navigation will then be rendered and the tab that indicates the current page should be highlighted:
```
...
<li class="nav-item {% if active_tab == 'detect_and_select' %}active{% endif %}">
    <a class="nav-link" href="/detect">Detect and Select</a>
</li>
...
```

## Reference to official issue
This addresses #32 .

## Motivation and Context
The user should be able to see in the navigation which page they are currently on.

## How Has This Been Tested?
I added placeholder pages and verified that, when visiting every page, the tab that belongs to it is highlighted.

## Screenshots (if appropriate):
![screenshot from 2017-08-31 11-44-54](https://user-images.githubusercontent.com/6676439/29917319-eaba8cce-8e41-11e7-9810-a5b4991552f9.png)
![screenshot from 2017-08-31 11-45-06](https://user-images.githubusercontent.com/6676439/29917320-eabb90e2-8e41-11e7-912f-c50d068de013.png)
![screenshot from 2017-08-31 11-45-20](https://user-images.githubusercontent.com/6676439/29917321-eabc5e78-8e41-11e7-9826-dde728ff3759.png)
![screenshot from 2017-08-31 11-45-31](https://user-images.githubusercontent.com/6676439/29917322-eabceb0e-8e41-11e7-87d6-951ec2bfb724.png)

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 2 weeks ago

@WGierke opened a new pull request: #82: #18 Document the grt123 algorithm

I added an initial documentation of the grt123 algorithm. Please feel free to add commits and extend it! ## Reference to official issue This pull request addresses #18 . ## Motivation and Context At some point we need to implement an algorithm that can segment the nodules and detect whether they are malignant or not. It makes sense to first evaluate the top 10 solutions for the Data Science Bowl 2017 regarding how performant they are and how easy it would be to use them in our application. ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 2 weeks ago

@WGierke commented on PR #70: Added first draft of template for documentation of prediction algorithm

Thanks for the template! I have a small question: what is the difference between [Prerequisites - is porting to Python 3.5 feasible?](https://github.com/concept-to-clinic/concept-to-clinic/pull/70/commits/f1aa17bd538be2166d55a7428d031e440c7d32b4#diff-946fedbc22c9d2d2db3c74954de7fd11R22) and [Adaptation into Concept to Clinic - Comment on possible problems/solutions for porting the algorithm to Python 3.5+](https://github.com/concept-to-clinic/concept-to-clinic/pull/70/commits/f1aa17bd538be2166d55a7428d031e440c7d32b4#diff-946fedbc22c9d2d2db3c74954de7fd11R105)? Should the latter one contain more detailed steps / comments?
3 months, 2 weeks ago

@WGierke commented on PR #73: #9 Layout base Django template to match wireframes

@lamby Alright, sorry for the confusion. I merged the current master and added a screenshot of the current version (including the Git version in the top right corner).
3 months, 2 weeks ago

@WGierke commented on PR #78: Calculate tumor volume

I'd suggest using `flake8` here as well :)
```
➜  concept-to-clinic git:(13_calculate_tumor_volume) flake8 prediction
prediction/src/algorithms/segment/trained_model.py:53:66: W291 trailing whitespace
prediction/src/algorithms/segment/trained_model.py:54:81: W291 trailing whitespace
prediction/src/algorithms/segment/trained_model.py:62:77: W291 trailing whitespace
prediction/src/algorithms/segment/trained_model.py:63:65: W291 trailing whitespace
prediction/src/algorithms/segment/trained_model.py:65:12: W291 trailing whitespace
prediction/src/algorithms/segment/trained_model.py:67:1: W293 blank line contains whitespace
prediction/src/algorithms/segment/trained_model.py:76:1: W293 blank line contains whitespace
prediction/src/tests/test_calculate_volume.py:17:1: W293 blank line contains whitespace
prediction/src/tests/test_calculate_volume.py:24:1: W293 blank line contains whitespace
prediction/src/tests/test_calculate_volume.py:35:48: W291 trailing whitespace
prediction/src/tests/test_calculate_volume.py:38:1: W293 blank line contains whitespace
prediction/src/tests/test_calculate_volume.py:41:47: W291 trailing whitespace
prediction/src/tests/test_calculate_volume.py:43:1: W293 blank line contains whitespace
prediction/src/tests/test_calculate_volume.py:46:1: E303 too many blank lines (3)
prediction/src/tests/test_calculate_volume.py:52:1: W293 blank line contains whitespace
prediction/src/tests/test_calculate_volume.py:59:1: W293 blank line contains whitespace
prediction/src/tests/test_calculate_volume.py:64:1: W293 blank line contains whitespace
```
3 months, 2 weeks ago

@WGierke commented on PR #76: Classification algorithm

Thanks for your contribution! I'd suggest using `flake8` to check your code for proper PEP8 formatting, as Travis will do this as soon as it works again. Here is the output that was generated for me:
```
➜  concept-to-clinic git:(2_classify_algorithm) flake8 prediction
prediction/classify/trained_model.py:12:20: W291 trailing whitespace
prediction/classify/trained_model.py:18:1: E402 module level import not at top of file
prediction/classify/trained_model.py:18:49: W291 trailing whitespace
prediction/classify/trained_model.py:21:47: W291 trailing whitespace
prediction/classify/trained_model.py:39:82: W291 trailing whitespace
prediction/classify/trained_model.py:64:5: E303 too many blank lines (2)
prediction/classify/src/preprocess_patch.py:18:76: W291 trailing whitespace
prediction/classify/src/preprocess_patch.py:29:31: W291 trailing whitespace
prediction/classify/src/preprocess_patch.py:30:31: W291 trailing whitespace
prediction/classify/src/preprocess_patch.py:31:31: W291 trailing whitespace
prediction/classify/src/preprocess_patch.py:33:86: W291 trailing whitespace
prediction/classify/src/preprocess_patch.py:34:86: W291 trailing whitespace
prediction/classify/src/preprocess_patch.py:37:1: W293 blank line contains whitespace
prediction/classify/src/preprocess_patch.py:45:76: W291 trailing whitespace
prediction/src/tests/test_preprocess_dicom.py:2:1: F401 'os' imported but unused
prediction/src/tests/test_preprocess_dicom.py:4:1: F401 'shutil' imported but unused
prediction/src/tests/test_preprocess_dicom.py:19:5: F841 local variable 'voxel_shape' is assigned to but never used
prediction/src/tests/test_preprocess_dicom.py:39:5: E303 too many blank lines (2)
prediction/src/tests/test_preprocess_dicom.py:41:1: W293 blank line contains whitespace
prediction/src/tests/test_classify_trained_model_predict.py:1:1: F401 'numpy as np' imported but unused
prediction/src/tests/test_classify_trained_model_predict.py:4:1: F401 'shutil' imported but unused
prediction/src/tests/test_classify_trained_model_predict.py:5:1: F401 'glob.glob' imported but unused
prediction/src/tests/test_classify_trained_model_predict.py:12:1: E402 module level import not at top of file
prediction/src/tests/test_classify_trained_model_predict.py:13:1: E402 module level import not at top of file
prediction/src/tests/test_classify_trained_model_predict.py:28:50: W291 trailing whitespace
prediction/src/tests/test_classify_trained_model_predict.py:29:42: W291 trailing whitespace
prediction/src/tests/test_classify_trained_model_predict.py:30:50: W291 trailing whitespace
prediction/src/tests/test_classify_trained_model_predict.py:31:61: W291 trailing whitespace
prediction/src/tests/test_classify_trained_model_predict.py:38:55: W291 trailing whitespace
prediction/src/tests/test_classify_trained_model_predict.py:39:53: W291 trailing whitespace
prediction/src/tests/test_classify_trained_model_predict.py:42:50: W291 trailing whitespace
prediction/src/tests/test_classify_trained_model_predict.py:43:69: W291 trailing whitespace
prediction/src/tests/test_classify_trained_model_predict.py:44:50: W291 trailing whitespace
prediction/src/tests/test_classify_trained_model_predict.py:45:67: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:13:92: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:14:62: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:15:78: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:16:65: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:18:80: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:25:17: E211 whitespace before '('
prediction/src/preprocess/preprocess_dicom.py:25:58: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:26:70: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:35:37: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:41:25: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:45:90: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:61:92: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:62:62: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:63:78: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:64:65: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:66:80: W291 trailing whitespace
prediction/src/preprocess/preprocess_dicom.py:74:5: E303 too many blank lines (2)
prediction/src/preprocess/preprocess_dicom.py:82:5: E303 too many blank lines (2)
prediction/src/preprocess/preprocess_dicom.py:89:1: W293 blank line contains whitespace
prediction/src/preprocess/preprocess_dicom.py:101:78: W291 trailing whitespace
prediction/src/preprocess/load_dicom.py:42:92: W291 trailing whitespace
prediction/src/preprocess/load_dicom.py:58:17: E128 continuation line under-indented for visual indent
prediction/src/preprocess/load_dicom.py:59:1: W293 blank line contains whitespace
```
3 months, 2 weeks ago

@WGierke opened a new pull request: #80: #79 Fix: Code Documentation is not being displayed

I added the auto-generated .rst files. ## Description I removed `_apidoc_{prediction, interface}` from `.gitignore`, removed the underscore prefix and updated the references to the folders accordingly. ## Reference to official issue This should fix #79 ## Motivation and Context This should enable us to see the code documentation on http://concept-to-clinic.readthedocs.io/en/latest/ as well. ## How Has This Been Tested? In `docs/`, I ran `make html && python -m http.server` and saw the documentation on `localhost:8000`. ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 2 weeks ago

@WGierke created a new issue: #79: Readthedocs does not include code documentation from #8

When visiting http://concept-to-clinic.readthedocs.io/en/latest/, the auto-generated code documentation should be visible. ## Current Behavior It is not visible. ![screenshot from 2017-08-26 13-30-54](https://user-images.githubusercontent.com/6676439/29747869-6f1423aa-8b08-11e7-9088-992ea59e3f2b.png) ## Possible Solution I guess the problem is that readthedocs needs the auto-generated .rst files that are located in `_apidoc_{prediction, interface}` which are currently ignored by Git. ## Steps to Reproduce Visit http://concept-to-clinic.readthedocs.io/en/latest/ and search for the "Code Documentation" section which has been added with #8 . ## Possible Implementation Remove `_apidoc_interface` and `_apidoc_prediction` from `.gitignore`. ## Checklist before submitting - [X] I have confirmed this using the officially supported Docker Compose setup using the `local.py` configuration and ensured that I built the containers again and they reflect the most recent version of the project at the `HEAD` commit on the `master` branch - [X] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug - [X] I provided a **minimal code snippet** or list of steps that reproduces the bug. - [X] I provided **screenshots** where appropriate - [X] I filled out all the relevant sections of this template
3 months, 2 weeks ago

@WGierke commented on issue #19: Document the Daniel Hammack algorithm

I've reached out to the author since he mentions in the README that he knows how to simplify and speed-up his code without major performance loss.
3 months, 3 weeks ago

@WGierke commented on PR #73: #9 Layout base Django template to match wireframes

@lamby This way we are independent of third-party servers and can be sure that Bootstrap/jQuery/... are really always served (with the specific version we expect), and you can also develop locally. Though, when checking out [this SO thread](https://stackoverflow.com/questions/3832446/benefits-vs-pitfalls-of-hosting-jquery-locally), it appears to be faster to include the assets from third-party servers because the browser can only open a small number of connections to the same domain at a time. What do you think? Should I change it again so we load the assets from external servers for better performance?
3 months, 3 weeks ago

@WGierke opened a new pull request: #74: #39 Inspect a DICOM directory and create appropriate model instances

Get or create an `ImageSeries` instance given the path to a DICOM directory. ## Description I added the class function `get_or_create` to the `ImageSeries` class. ## Reference to official issue This should solve #39 ## Motivation and Context The `ImageSeries` model needs to know the data it's holding. ## How Has This Been Tested? I wrote a test for it. ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
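A rough sketch of the idea behind the class method; the model fields and the DICOM attributes used are assumptions, and the actual implementation is in the PR diff:

```python
import glob

import dicom  # pydicom
from django.db import models


class ImageSeries(models.Model):
    patient_id = models.CharField(max_length=64)
    series_instance_uid = models.CharField(max_length=256)
    uri = models.CharField(max_length=512)

    @classmethod
    def get_or_create(cls, uri):
        """Read one slice from the DICOM directory and get or create the matching series."""
        first_slice = dicom.read_file(sorted(glob.glob(uri + '/*.dcm'))[0])
        return cls.objects.get_or_create(
            patient_id=first_slice.PatientID,
            series_instance_uid=first_slice.SeriesInstanceUID,
            uri=uri,
        )
```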
3 months, 3 weeks ago

@WGierke opened a new pull request: #73: #9 Layout base Django template to match wireframes

This should add a simple and extendable Django base template on which to build further UI. ## Description I added a base template and made index.html extend it. ## Reference to official issue This should implement #9 ## Motivation and Context We'd like to start at some point with a nice and extendable UI. ## How Has This Been Tested? I fired up `docker-compose` and saw the following page ![screenshot from 2017-08-25 23-59-44](https://user-images.githubusercontent.com/6676439/29734364-150c39f4-89f2-11e7-8929-e81794845046.png) ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 3 weeks ago

@WGierke commented on PR #61: #8 Add Sphinx Autodoc

@lamby Sorry for the delay, I finally figured out how to solve it. Now sphinx-apidoc should be used correctly and the paths in [conf.py](https://github.com/concept-to-clinic/concept-to-clinic/pull/61/files#diff-85987f48f1258d9ee486e3191495582dR23) should also be fixed. ![screenshot from 2017-08-25 19-17-50](https://user-images.githubusercontent.com/6676439/29724780-2c77d10c-89ca-11e7-8536-6bfe313c2943.png) ![screenshot from 2017-08-25 19-18-17](https://user-images.githubusercontent.com/6676439/29724785-2f5e5242-89ca-11e7-8942-067a388febcd.png)
3 months, 3 weeks ago

@WGierke commented on PR #71: #29 Show current Git version in top-level navigation

Any feedback is highly appreciated, especially since I'm a complete novice concerning Docker :)
3 months, 3 weeks ago

@WGierke opened a new pull request: #71: #29 Show current Git version in top-level navigation

For debugging purposes it would be really nice to see in the top right corner which version is currently used. The version is the short SHA-1 commit hash. ## Description Since I didn't want to add the whole `.git` directory to the Docker container, I just added `.git/logs/HEAD` as a volume which contains historical information about branch changes etc. ## Reference to official issue This should solve #29 ## Motivation and Context The big advantage is that by just adding `HEAD` we don't have the full Git history in our container. Furthermore, every time we switch branches or commit, the `HEAD` file is synchronized with the container (as well as the Docker volumes). In the container, a Django view parses the short current commit hash out of that file and displays it. This way, when switching branches not only the code in the container is synchronized with the code at the host but also the commit hash is updated accordingly. The container can then also be deployed to production. ## How Has This Been Tested? 1. `docker-compose -f local.yml up` yields ![screenshot from 2017-08-24 23-11-18](https://user-images.githubusercontent.com/6676439/29688898-94e29b54-8921-11e7-804e-95e81144959d.png) (with the short version of the commit of this branch: a9f0e6a) 2. add an empty commit: `git commit -m "empty" --allow-empty` 3. I refreshed the page and ended up with ![screenshot from 2017-08-24 23-12-13](https://user-images.githubusercontent.com/6676439/29688923-b009ea54-8921-11e7-8944-6e408e371950.png) which was the short version of the commit hash of the empty commit (1bd5c33) ## CLA - [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
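The parsing itself is only a few lines; a sketch of the idea follows, where the mount path of `HEAD` inside the container is an assumption:

```python
def short_git_sha(logs_head='/app/.git/logs/HEAD'):
    """Return the short hash of the most recent commit or checkout recorded in .git/logs/HEAD."""
    with open(logs_head) as handle:
        last_entry = handle.readlines()[-1]
    # each reflog line starts with "<old_sha> <new_sha> <author> <timestamp> ..."
    return last_entry.split()[1][:7]
```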
3 months, 3 weeks ago

@WGierke commented on PR #61: #8 Add Sphinx Autodoc

@isms oh I'm sorry. I wasn't able to see that as I have no access to the Travis site. Are you able to cancel them? Would another possibility be to squash the commits when merging the branch?
3 months, 3 weeks ago

@WGierke commented on PR #61: #8 Add Sphinx Autodoc

Thanks a lot @isms , I was able to fix everything :)
3 months, 3 weeks ago

@WGierke commented on PR #61: #8 Add Sphinx Autodoc

Thanks a lot @isms that clearly helps :)
3 months, 3 weeks ago

@WGierke commented on issue #63: Travis: "Could not find repository"

Hey @isms Thanks for your fast comment! Sorry for the missing screenshot but yes, I can confirm it even when I'm logged in ![screenshot from 2017-08-22 17-30-44](https://user-images.githubusercontent.com/6676439/29573624-bfe82be8-875f-11e7-96f5-b7bc8e153833.png)
3 months, 3 weeks ago

@WGierke created a new issue: #63: Travis: "Could not find repository"

When trying to see the Travis logs of a build, you end up on a Travis page stating "We couldn't find the repository concept-to-clinic/concept-to-clinic".

## Expected Behavior
When trying to access the Travis logs of a build, you should be able to see the logs of the respective build.

## Current Behavior
You see the described error page.

## Possible Solution
Are the builds only visible to admins? If so, it might be helpful to change some access rights.

## Steps to Reproduce
Check a build like [this one](https://travis-ci.com/concept-to-clinic/concept-to-clinic/builds/52855581?utm_source=github_status&utm_medium=notification)

## Context (Environment)
I want to see any problems that occurred when running a Travis build.

## Detailed Description
Contributors should be able to access the Travis logs so they know what went wrong.

## Checklist before submitting
- [X] I have confirmed this using the officially supported Docker Compose setup using the `local.py` configuration and ensured that I built the containers again and they reflect the most recent version of the project at the `HEAD` commit on the `master` branch
- [X] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug
- [X] I provided a **minimal code snippet** or list of steps that reproduces the bug.
- [X] I provided **screenshots** where appropriate
- [X] I filled out all the relevant sections of this template
3 months, 3 weeks ago

@WGierke opened a new pull request: #61: #8 Add Sphinx Autodoc

## Description
This pull request should add the ability to auto-generate code documentation based on Google-style docstrings.

## Reference to official issue
This should solve #8

## Motivation and Context
Instead of manually adapting the documentation to the written code and its docstrings, we can now auto-generate the Sphinx documentation based on the docstrings in the code.

## How Has This Been Tested?
1. I merged my code into #55 (so it's easier for me to update the documentation :-) )
2. I started docker-compose
3. I saw the auto-summaries of all modules, classes and methods that have proper docstrings
4. To confirm that it really supports Google-style docstrings, I downloaded [this](http://www.sphinx-doc.org/en/stable/ext/example_google.html) sample code containing lots of Google-style docstrings
5. I restarted docker-compose
6. I confirmed that the docstrings of that file were added to the Sphinx documentation

## Screenshots (if appropriate):
Overview
![screenshot from 2017-08-22 01-12-20](https://user-images.githubusercontent.com/6676439/29541838-16715f5e-86d7-11e7-959d-d0877003cfd3.png)
Details of the interface.backend package
![screenshot from 2017-08-22 01-12-35](https://user-images.githubusercontent.com/6676439/29541847-22af0bd6-86d7-11e7-95db-32e346dd851d.png)
One legacy method that already has a complete docstring
![screenshot from 2017-08-22 01-13-10](https://user-images.githubusercontent.com/6676439/29541863-3cad0f7e-86d7-11e7-8cd1-4eb6bd03e03a.png)
The Google-docstring file I added for testing purposes was rendered correctly
![screenshot from 2017-08-22 01-04-16](https://user-images.githubusercontent.com/6676439/29541817-e63bf164-86d6-11e7-97f0-96e87bd7a074.png)

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
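For readers unfamiliar with this kind of setup: autodoc plus Google-style docstrings is typically enabled through a few lines in Sphinx's `conf.py`, roughly as sketched below. This is an illustrative excerpt rather than the exact diff of this pull request; in particular, the path added to `sys.path` is an assumption.

```python
# Illustrative conf.py excerpt; the exact extension list and the sys.path
# entry are assumptions, not necessarily what this pull request changed.
import os
import sys

# Make the project packages importable so autodoc can resolve the modules
# generated by sphinx-apidoc.
sys.path.insert(0, os.path.abspath('../interface'))

extensions = [
    'sphinx.ext.autodoc',      # pull documentation from docstrings
    'sphinx.ext.autosummary',  # generate per-module summary tables
    'sphinx.ext.napoleon',     # understand Google-style docstrings
]

autosummary_generate = True
napoleon_google_docstring = True
```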
3 months, 3 weeks ago

@WGierke opened a new pull request: #60: Add details on how to use Git LFS to the documentation

## Description
I added some details on how to clone the repo using LFS, how to track files and how to remove files from the Git history.

## Reference to official issue
This should implement #16

## Motivation and Context
If possible, people should use Git LFS for big files so the repository size does not exceed the quota.

## How Has This Been Tested?
I checked out #55 and altered the documentation on that branch to directly see the compiled documentation on my computer.

## Screenshots (if appropriate):
![screencapture-localhost-8002-contribute-html-1503316676828](https://user-images.githubusercontent.com/6676439/29518229-d227e95c-8678-11e7-9c1a-d2a4a69e3885.png)

## CLA
- [X] I have signed the CLA; if other committers are in the commit history, they have signed the CLA as well
3 months, 3 weeks ago

@WGierke commented on issue #54: External links in https://concepttoclinic.drivendata.org/documentation do not work

I'd completely agree with dropping the iframe in favor of, e.g., opening the documentation page in a new tab. This way, the user stays on https://concepttoclinic.drivendata.org while enjoying the expected browsing behavior in the documentation and when following external links.
3 months, 3 weeks ago

@WGierke created a new issue: #59: 140MB fresh repo size (without much code or data, not using LFS)

## Expected Behavior
After cloning with Git (and without LFS), after 30 commits and without test data, the repository should be at most a few MB in size.

## Current Behavior
It's over 140 MB when cloning master. ![screenshot from 2017-08-20 23-47-45](https://user-images.githubusercontent.com/6676439/29498742-5ce78358-8602-11e7-9b94-68c28be6a7b4.png)

## Possible Solution
I guess the reason is that in the beginning even big files were pushed using plain Git. When they were later moved to Git LFS, they weren't removed from the "normal" Git history.

## Steps to Reproduce
1. Clone the repository
2. Check its size (including hidden folders such as .git)

## Context (Environment)

## Detailed Description

## Possible Implementation
Would it be possible to remove those big files from the history using references like [Removing sensitive data from a repository](https://help.github.com/articles/removing-sensitive-data-from-a-repository/) or [How to remove/delete a large file from commit history in Git repository?](https://stackoverflow.com/questions/2100907/how-to-remove-delete-a-large-file-from-commit-history-in-git-repository)? Unfortunately, I'm not that familiar with LFS or history rewriting...

## Checklist before submitting
- [X] I have confirmed this using the officially supported Docker Compose setup using the `local.py` configuration and ensured that I built the containers again and they reflect the most recent version of the project at the `HEAD` commit on the `master` branch
- [X] I have searched through the other currently open issues and am confident this is not a duplicate of an existing bug
- [X] I provided a **minimal code snippet** or list of steps that reproduces the bug.
- [X] I provided **screenshots** where appropriate
- [X] I filled out all the relevant sections of this template
3 months, 3 weeks ago

@WGierke commented on issue #52: Fix broken links in main documentation

I guess this can be closed as it was implemented in #53?
3 months, 3 weeks ago

@WGierke commented on issue #42: Add default githooks for tests and flake8

I guess this can be closed as it was implemented in #51?
3 months, 3 weeks ago

@WGierke commented on issue #54: External links in https://concepttoclinic.drivendata.org/documentation do not work

@isms but we can't alter the CSP headers of third parties such as GitHub. If they set frame-ancestors to `none`, it's not possible to frame their content at all. So I agree with @alexanderrich that either opening the problematic links in a new tab using `target="_blank"` or linking directly to the readthedocs page instead of embedding it in an iframe are the only reasonable solutions here (from my point of view).
3 months, 3 weeks ago

@WGierke commented on issue #54: External links in https://concepttoclinic.drivendata.org/documentation do not work

Hey @alexanderrich the reason is that external pages (such as https://github.com/concept-to-clinic/concept-to-clinic in your example) set the [Content-Security-Policy frame-ancestors](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/frame-ancestors) directive to `none`, meaning that no third party is allowed to frame their content. Since we can't alter their CSP headers, the only solution from my POV is to open all links in our documentation that refer to such pages in a new tab by adding `target="_blank"` to every link tag.
3 months, 3 weeks ago