Computational Phenotyping of Critically Ill Children: A Deep Learning Approach
presented by David C. Kale1,2
Joint work with Zhengping Che1, Yan Liu1, and Randall Wetzel2
1 University of Southern California; 2 Laura P. and Leland K. Whittier VPICU, Children's Hospital LA
November 15, 2014
Kale (USC/VPICU) Computational Phenotyping November 15, 2014 1 / 22
Complex physiologic patterns as phenotypes [1][2]

[Figure: example PICU episode, one panel per physiologic variable: diastolic and systolic blood pressure (DBP, SBP), capillary refill rate (CRR), end-tidal CO2 (ETCO2), fraction of inspired O2 (FIO2), total Glasgow Coma Score (TGCS), glucose (Gluc), heart rate (HR), pH, respiratory rate (RR), O2 saturation (SAO2), temperature (Temp), and urine output (UO).]
Phenotypes vs. features
Medicine: phenotypes, biomarkers [3]
1 Measurable attributes of patient/disease.
2 Independent of other biomarkers.
3 Separate patients into meaningful groups.
4 Improve outcome prediction, risk assessment.
5 Clinically plausible, interpretable.
Machine learning: features, representations [4]
1 Measurable properties of objects.
2 Independent, disentangle factors of variation.
3 Form natural clusters.
4 Useful for discriminative, predictive tasks.
5 Interpretable, provide insight into problem.
Deep learning of representations
Representation learning: learn a transformation of the data useful for some task.
Main workhorse: deep learning (i.e., nouveau neural networks).
• Date back to the 1940s; abandoned in the 1990s.
• Revived as deep learning in the 2000s (new methods, big data, fast CPUs).
• State-of-the-art in vision, speech, NLP.
• Used at Google, Apple, Microsoft, Facebook.
• Biologically inspired (if not plausible).
• Maximally varying, nonlinear functions.
• Exploit labeled and unlabeled data.
• Layers yield increasing abstraction.
Deep learning: feed-forward neural networks
Outputs: $y$ (e.g., diagnosis or outcome);
binary label: $\hat{y} = 1 / (1 + \exp\{-(w^{(o)})^\top x^{(\ell)}\})$

Hidden layers: $x^{(\ell)} = f(W^{(\ell)} x^{(\ell-1)})$, where
$$x^{(\ell-1)} = \begin{cases} x^{(0)} & \text{for } \ell = 1 \\ f(a^{(\ell-1)}) & \text{otherwise} \end{cases}$$
and $f(u)$ is a nonlinear function (e.g., the sigmoid $1/(1 + \exp\{-u\})$).

Inputs: $x^{(0)}$ (vital signs, lab tests, etc.)

Trained via greedy layer-wise pretraining and back-propagation using (stochastic) gradient descent.
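The forward pass above can be sketched in a few lines of NumPy. This is an illustrative toy (random weights, made-up layer sizes), not the deck's actual model; the slide's equations omit bias terms, so the sketch does too:

```python
import numpy as np

def sigmoid(u):
    """Nonlinearity f(u) = 1 / (1 + exp(-u))."""
    return 1.0 / (1.0 + np.exp(-u))

def forward(x0, weights, w_out):
    """Forward pass: x(l) = f(W(l) x(l-1)), then a logistic output unit."""
    x = x0
    for W in weights:          # hidden layers
        x = sigmoid(W @ x)
    return sigmoid(w_out @ x)  # predicted probability of the binary label

rng = np.random.default_rng(0)
x0 = rng.normal(size=13)                # e.g., 13 physiologic variables
weights = [rng.normal(size=(32, 13)),   # hidden layer sizes are illustrative
           rng.normal(size=(16, 32))]
w_out = rng.normal(size=16)
y_hat = forward(x0, weights, w_out)     # probability in (0, 1)
```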
Deep learning for time series
[Figure: deep network performing feature extraction in its lower layers and classification at its output.]
• Apply to fixed-size windows.
• Classification, feature extraction.
• Correlations across variables, time.
• Relatively few, weak model assumptions.
• Can learn to detect smooth, trajectory-like patterns.
• Inputs to other tasks (e.g., classification), models (e.g., HMM).
Deep learning for time series: sliding window approach
[Figure: sliding-window application of the network for feature extraction and classification.]
• Sliding window classification or feature extraction for:
  • Sequential state estimation or detection (e.g., speech recognition).
  • Features for classification of longer time series.
• Can train, combine feature extractors for various lengths and scales.
Deep learning for time series: training on windows
• Sliding window to extract training data.
• Similar to training on smaller image patches.
• Leverage labels, etc., in different ways:
  • Unsupervised: all windows, to learn patterns.
  • Supervised: only windows with labels.
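A minimal NumPy sketch of the window-extraction step, assuming episodes are arrays of shape (T, V) (T hourly samples, V variables) and windows are flattened for a fully-connected net; all names are illustrative:

```python
import numpy as np

def sliding_windows(episode, S):
    """Extract all T - S + 1 overlapping length-S windows from a (T, V)
    episode, flattening each to a vector for a fully-connected net."""
    T, V = episode.shape
    return np.stack([episode[t:t + S].ravel() for t in range(T - S + 1)])

episode = np.zeros((24, 13))        # 24 hourly samples, 13 variables
X = sliding_windows(episode, S=12)  # 13 windows, each of dimension 12 * 13
```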
Experiments
Time series from ~8,500 PICU episodes from CHLA:
• 24 hours of data, sampled every 60 minutes (after preprocessing1).
• 13 variables: vitals, labs, inputs/outputs, assessments.
• Binary label: Respiratory diagnostic category.
• Note: 8,500 × (24 − S + 1) overlapping windows of length S (e.g., >1 million for S = 12).
Experiments:
• Train 3-layer fully-connected neural networks for different lengths S.2
• Classification AUC vs. baselines (raw data, sample statistics) for:
1 Classifying the first S hours.
2 Classifying the full 24 hours using sliding window features.
• Visualize physiologic patterns detected by hidden units.
1 Age correction, resampling, imputation, rescaling, etc.
2 See Appendix D for our fast incremental training procedure.
First 24-hour classification
Full 24-hour window only:
  Method            AUC              Precision        Recall
  NNetInc(Ft)       0.7953 ± 0.0141  0.6448 ± 0.0191  0.7213 ± 0.0257
  NNet              0.7845 ± 0.0208  0.6342 ± 0.0387  0.7086 ± 0.0176
  SampleStats       0.7767 ± 0.0130  0.6419 ± 0.0147  0.6733 ± 0.0226
  NNetInc(FtOnly)   0.7543 ± 0.0144  0.6015 ± 0.0173  0.7068 ± 0.0189
  Raw               0.7392 ± 0.0173  0.6096 ± 0.0173  0.6300 ± 0.0277
S = {8, 12, 16, 20, 24} sliding windows:
  NNetInc(FtOnly)   0.8065 ± 0.0085  0.6595 ± 0.0167  0.7296 ± 0.0135
  NNet              0.8014 ± 0.0087  0.6590 ± 0.0201  0.7204 ± 0.0187
  NNetInc(Ft)       0.8012 ± 0.0151  0.6597 ± 0.0227  0.7233 ± 0.0185
  SampleStats       0.7948 ± 0.0125  0.6791 ± 0.0155  0.6886 ± 0.0197
  Raw               0.7293 ± 0.0176  0.6094 ± 0.0402  0.6216 ± 0.0813
• Classifier: linear SVM with L2 regularization.
• NNet: Layer 3 hidden unit activations.
• SampleStats: mean, standard deviation, median, extremes, etc.
• 5-fold cross validation to estimate mean performance, 95% confidence intervals.
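As a rough illustration of the SampleStats baseline: the slide lists "mean, standard deviation, median, extremes, etc.", so a minimal version covering just those five per-variable statistics might look like the following (names illustrative):

```python
import numpy as np

def sample_stats(episode):
    """Concatenate per-variable summary statistics of a (T, V) episode
    into one feature vector for a linear classifier."""
    return np.concatenate([episode.mean(axis=0),
                           episode.std(axis=0),
                           np.median(episode, axis=0),
                           episode.min(axis=0),
                           episode.max(axis=0)])

episode = np.random.default_rng(0).normal(size=(24, 13))
feats = sample_stats(episode)   # 5 statistics x 13 variables = 65 features
```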
Example 12-hour respiratory phenotype #1
• Elevated capillary refill rate (CRR), end-tidal CO2 (ETCO2), fraction-inspired O2 (FIO2).
• Complications from chronic bronchopulmonary dysplasia (BPD).
Example 12-hour respiratory phenotype #2
• Elevated CRR, ETCO2, FIO2, heart rate (HR), respiratory rate (RR); declining cognitive function (GCS).
• Common symptoms of pneumonia.
References
1 B. Marlin, D. Kale, R. Khemani, and R. Wetzel. Unsupervised Pattern Discovery in Electronic Health Care Data Using Probabilistic Clustering Models. Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium (IHI), 2012.
2 T.A. Lasko, J.C. Denny, and M.A. Levy. Computational Phenotype Discovery Using Unsupervised Feature Learning over Noisy, Sparse, and Irregular Clinical Data. PLoS ONE, 8(6): e66341, 2013.
3 Biomarkers Definitions Working Group. Biomarkers and surrogate endpoints: preferred definitions and conceptual framework. Clin Pharmacol Ther, 69(3): 89-95, Mar 2001.
4 Y. Bengio, A. Courville, and P. Vincent. Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8): 1798-1828, Aug 2013.
5 S.V. Desai, T.J. Law, and D.M. Needham. Long-term complications of critical care. Crit Care Med, 39(2): 371-379, Feb 2011.
6 C.A. McKiernan and S.A. Lieberman. Circulatory shock in children: an overview. Pediatrics in Review, 26(12): 451-460, Dec 2005.
7 M. Denil, et al. Predicting Parameters in Deep Learning. Advances in Neural Information Processing Systems 26: 2148-2156, 2013.
Conclusion
Deep learning is a powerful tool for computational phenotyping. Next steps:
− More experiments: looking for collaborators to share data, validate results!
− Incorporate treatment effects; handle missing data, irregular sampling.
− Apply to mobile health data and multitask learning.
− Other architectures: recurrent and convolutional networks.
Dave Kale: http://www-scf.usc.edu/~dkale/
Yan Liu: http://www-bcf.usc.edu/~liu32/
Melady Lab: time series (health, climate, etc.); active/transfer learning
Whittier Virtual PICU (VPICU): http://vpicu.org/
Meaningful Use of Complex Medical Data Symposium: http://mucmd.org/
Support: NSF research grants IIS-1134990 and IIS-1254206, Okawa Foundation Research
Award, Alfred E. Mann Innovation in Engineering Fellowship, Laura P. and Leland K. Whittier
Foundation grant, Children’s Hospital Los Angeles.
Thank you and fight on!
Appendix A: critical care medicine
Among the most important areas of medicine.
• >5 million patients admitted to US ICUs annually.3
• Cost: $81.7 billion in US in 2005: 13.4% hospital costs, ∼1% GDP.1
• Mortality rates up to 30%, depending on condition, care, age.1
• Long-term impact: physical impairment, pain, depression [5].
Keys to improving outcomes and reducing costs:
• Rapid, early recognition and diagnosis of underlying illness.
• Targeted, personalized treatment.
3 Society of Critical Care Medicine website, Statistics page.
Appendix A: challenges of critical care medicine
Among the most challenging clinical decision-making environments.
• Time-pressure: patients can deteriorate rapidly; require fast response.
• Patients generate a data deluge (100’s of data points/patient/hour).
• Symptoms and signs often complex, overlap, vary across patients.
• Diverse conditions: respiratory distress, traumas, cardiovascularfailure, sepsis, shock, etc.
• Patient physiology/responses highly individualized.
Potential solution: computer-aided diagnosis?
Requires new definitions of disease and new methods of discovery.
Appendix B: septic shock phenotypes [6, Table 1]
Phenotype 1 — Pathophysiology: ↑ CO, ↓ SVR
  Signs and symptoms: ↑ HR, ↓ BP, ↑ pulses, delayed CR, hyperpnea, MS changes, third-spacing, edema
  Treatment: repeat boluses of 20 mL/kg crystalloid (may need >60 mL/kg in first hour); consider colloid if poor response to crystalloid; pharmacologic support of BP with dopamine or norepinephrine

Phenotype 2 — Pathophysiology: ↓ CO, ↑ SVR
  Signs and symptoms: ↑ HR, normal to ↓ BP, ↓ pulses, delayed CR, hyperpnea, MS changes, third-spacing, edema
  Treatment: repeat boluses of 20 mL/kg crystalloid (may need >60 mL/kg in first hour); consider colloid if poor response to crystalloid; pharmacologic support of CO with dopamine or epinephrine

Phenotype 3 — Pathophysiology: ↓ CO, ↓ SVR
  Signs and symptoms: ↑ HR, ↓ BP, ↓ pulses, delayed CR, hyperpnea, MS changes, third-spacing, edema
  Treatment: repeat boluses of 20 mL/kg crystalloid (may need >60 mL/kg in first hour); consider colloid if poor response to crystalloid; pharmacologic support of BP and CO with dopamine or epinephrine
+ Parsimonious, easy to remember.
+ Abstract, flexible.
– Imprecise, overlap, ignore other data.
– Difficult to automate.
Traditional: targeted to human caregivers, discovery driven by anecdote. [2]
Alternative: computational discovery, traditional verification. [1]
Appendix C: Back-propagation training of neural networks
Loss $L(\hat{y}, y)$ for predicted output $\hat{y}$:
e.g., $-\left(y \log \hat{y} + (1-y) \log(1-\hat{y})\right)$

Cost $C(\{W^{(\ell)}\}_\ell) = \sum_{(x,y) \in D} L(\hat{y}, y)$
for $\hat{y}$ predicted by parameters $\{W^{(\ell)}\}_\ell$.

Update for the $(j,i)$th weight of layer $\ell$:
$$w^{(\ell)}_{ji} \leftarrow w^{(\ell)}_{ji} - \gamma \frac{\partial C}{\partial w^{(\ell)}_{ji}}$$
where $\gamma$ is the learning rate.

We compute gradients in topological order, starting at the root (output node) and moving downward toward the leaves.
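The update above can be made concrete for a one-hidden-layer sigmoid network with the cross-entropy loss. This is a generic sketch (single example, no biases, illustrative names), not the deck's training code:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sgd_step(W1, w_out, x, y, gamma):
    """One stochastic gradient step on L = -(y log yhat + (1-y) log(1-yhat))."""
    # Forward pass.
    h = sigmoid(W1 @ x)                     # hidden activations
    yhat = sigmoid(w_out @ h)               # predicted probability
    # Backward pass, in topological order from the output node down.
    d_out = yhat - y                        # dL/d(output pre-activation)
    grad_w_out = d_out * h
    d_hid = d_out * w_out * h * (1.0 - h)   # chain rule through the sigmoid
    grad_W1 = np.outer(d_hid, x)
    # Gradient-descent update with learning rate gamma.
    return W1 - gamma * grad_W1, w_out - gamma * grad_w_out
```

Repeating the step on a positive example (y = 1) drives the predicted probability up and the loss down.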
Appendix D: varying length feature extractors
Combining feature extractors of different window lengths T :
Classify time series of any length T :
Appendix D: incremental training of neural networks
Question: can we incrementally learn neural nets of increasing size?
• $x^{(1)} = f(W^{(1)} x^{(0)} + b^{(1)})$
• Features $x^{(1)}_i$ exploit smoothness in the data.
• Weights $W$ often low-rank: $W = UV$ for $W \in \mathbb{R}^{n_h \times n_v}$, $U \in \mathbb{R}^{n_h \times n_\alpha}$, $V \in \mathbb{R}^{n_\alpha \times n_v}$, and $n_\alpha \ll n_h, n_v$.
• Denil et al. [7]: $W = k_\alpha^\top (K_\alpha + \lambda I)^{-1} W_\alpha$
• $K_\alpha$, $k_\alpha$ similarity matrices (e.g., covariance).
• Applies to higher layers, too.
Observation: Time series exhibit temporal structure and smoothness.Strategy: Use smaller network’s parameters to initialize larger network.
Appendix D: incremental training of neural networks
Insight: overlap in time series =⇒ overlap in parameters
[Figure: block structure of the enlarged first layer, (h1; Δh) = f(W (x; Δx) + (b1; Δb)), where W is partitioned into W1 (N1 × D, from the smaller net), ΔWne (N1 × d), ΔWen (n × D), and ΔWnn (n × d).]
Initialize training for the (D + d)-window net with the D-window net's parameters:
1 W1: weight matrix from D-window neural net.
2 ∆Wne: similarity-weighted linear combinations of columns in W1.
3 ∆Wen,∆Wnn: sample from empirical distribution of W1 weights.
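The three steps above can be sketched in NumPy. The slide does not specify the similarity weighting, so the exponential-decay kernel over column positions below is an assumption (and it ignores the per-variable layout of the flattened window); all shapes and names are illustrative:

```python
import numpy as np

def grow_first_layer(W1, d, n, rng):
    """Initialize a (D + d)-input, (N1 + n)-unit first layer from the
    D-input, N1-unit weight matrix W1 of the smaller net."""
    N1, D = W1.shape
    # Step 2: Delta W_ne -- columns for the d new inputs, built as
    # similarity-weighted linear combinations of W1's existing columns
    # (assumed kernel: exponential decay in column distance).
    new_cols = np.empty((N1, d))
    for j in range(d):
        sim = np.exp(-np.abs(np.arange(D) - (D + j)) / 2.0)
        new_cols[:, j] = W1 @ (sim / sim.sum())
    # Step 3: Delta W_en, Delta W_nn -- rows for the n new hidden units,
    # sampled from the empirical distribution of W1's entries.
    new_rows = rng.choice(W1.ravel(), size=(n, D + d))
    # Step 1: keep W1 itself as the top-left block.
    return np.vstack([np.hstack([W1, new_cols]), new_rows])

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 13 * 12))                    # 8 units, 12-hour window
W_big = grow_first_layer(W1, d=13 * 4, n=4, rng=rng)  # grow to 16-hour window
```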
Appendix D: incremental vs. classic training
[Figure: classification AUC (y-axis, 0.70-0.82) vs. window length S ∈ {8, 12, 16, 20} (x-axis) for NNet, NNetInc(Ft), NNetInc(FtOnly), and NNetInc(Pt).]
Method            # Epochs   Training Time   Val. Error   Test Error
NNetInc(FtOnly)   73         396.23          28.64%       28.89%
NNetInc(Pt)       24.4       419.71          31.08%       30.85%
NNet              23.6       471.74          32.23%       32.07%
NNetInc(Ft)       51.4       671.24          29.88%       29.91%