Caltech 20090903 Talk on T.C.P. for LSST/PTF workshop
Interesting near-galaxy sources
• identified by TCP in the last 2 days
• (last epoch observed 1 week ago)
• Classification triggered by the latest epoch added to the source
PI: Josh Bloom
Overview
• TCP Software & Data Architecture
• Classifiers & Cutting out “Junk”
• Continuing work...
PTF spectroscopically confirmed SN, subsequently classified by TCP as SN
Transients Classification Pipeline
Parallelized source correlation and classification
• Difference objects are retrieved from LBL
• Each difference-object is passed to an IPython client
• Each parallel IPython client performs:
• Source creation or correlation with existing sources
• “Feature” generation (or re-generation) for that source
• Classification of that source
feature generation
source generation
source classification
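The per-object work each parallel IPython client performs can be sketched as follows. This is an illustrative toy, not the actual TCP implementation: the positional-match rule, the feature set, and the classification rule are all assumptions.

```python
def process_difference_object(diff_obj, sources, match_radius=1.0 / 3600):
    """One worker's job for one difference object (illustrative sketch).

    diff_obj: dict with "ra", "dec", "t", "mag".
    sources:  list of known sources, each {"ra", "dec", "epochs": [...]}.
    """
    # 1. Source correlation: match by position, else create a new source.
    for src in sources:
        if (abs(src["ra"] - diff_obj["ra"]) < match_radius and
                abs(src["dec"] - diff_obj["dec"]) < match_radius):
            break
    else:
        src = {"ra": diff_obj["ra"], "dec": diff_obj["dec"], "epochs": []}
        sources.append(src)
    src["epochs"].append((diff_obj["t"], diff_obj["mag"]))

    # 2. Feature (re)generation: toy features from the light curve so far.
    mags = [m for _, m in src["epochs"]]
    features = {"n_epochs": len(mags),
                "mag_range": max(mags) - min(mags)}

    # 3. Classification: toy rule standing in for the real classifiers.
    label = "variable" if features["mag_range"] > 0.5 else "unclassified"
    return src, features, label
```

In the real pipeline each difference object is dispatched to an IPython client, so many such calls run concurrently against a shared source database.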
Parallelized source correlation and classification
• Realtime TCP runs on 22 dedicated cores
• LCOGT’s 96-core Beowulf cluster
• Non-realtime tasks:
• Classifier generation
• Additional resources
• To be used for future timeseries classification work
• Yahoo’s 4000 core Hadoop academic cluster
• Amazon EC2 cluster
feature generation
source generation
source classification
Warehouse of light-curves
• Need representative light-curves for all science classes
• With these we can model each science class
• We’ve built a warehouse of example light-curves
TCP-TUTOR (internal interface)
DotAstro.org (public interface)
Confusion Matrix
• Different ways of quantifying efficiencies:
• using the original good training set, training and evaluating efficiencies via folding
• using “noisified”, simulated sources matching the survey schedule, cadences, and limits
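The folding approach above can be sketched minimally: split the training set into k folds, cycle each fold through the held-out role, and tally true-vs-predicted counts. Helper names and the data layout here are illustrative assumptions, not TCP code.

```python
from collections import defaultdict
import random

def confusion_matrix(true_labels, pred_labels):
    """Counts of (true class -> predicted class) pairs."""
    cm = defaultdict(int)
    for t, p in zip(true_labels, pred_labels):
        cm[(t, p)] += 1
    return dict(cm)

def kfold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k disjoint folds for train/evaluate cycling."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]
```

Per-class efficiency then falls out of the matrix: the diagonal count for a class divided by that class's row total.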
“Noisification” (resampling light-curves)
• For PTF, the Noisification code references:
• 1000s of PTF pointing and survey observing plans
• This allows simulation of PTF cadenced light-curves
• Occasionally PTF observes using a faster cadence:
• 7.5 minutes between revisiting an RA, Dec
• This requires a separate set of noisified light-curves and classifiers.
• Other pointing and observing plans could be used.
• This means we can easily generate noisified light-curves for any survey.
• Thus we can generate science classifiers for any survey.
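A minimal sketch of the noisification idea, under simplifying assumptions: sample a dense model light curve at the survey's visit times, add Gaussian photometric scatter, and drop epochs fainter than the limiting magnitude. The real code uses actual PTF pointings and observing plans; everything here is illustrative.

```python
import math
import random

def noisify(dense_lc, cadence_times, limit_mag, mag_err=0.1, seed=0):
    """Resample a dense model light curve onto survey epochs (sketch).

    dense_lc:      list of (t, mag) pairs for the model light curve.
    cadence_times: survey visit times taken from an observing plan.
    Returns (t, noisy_mag) pairs, keeping only epochs brighter than limit_mag.
    """
    rng = random.Random(seed)
    out = []
    for t in cadence_times:
        # Nearest-neighbour sample of the model light curve.
        _, mag = min(dense_lc, key=lambda tm: abs(tm[0] - t))
        noisy = mag + rng.gauss(0.0, mag_err)
        if noisy <= limit_mag:          # smaller magnitude = brighter
            out.append((t, noisy))
    return out
```

Swapping in a different `cadence_times` list (e.g. the 7.5-minute fast cadence, or another survey's plan) yields the corresponding set of noisified light-curves.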
Constructing Light Curves from subtractions ain’t easy
[Build-up figure: schematic mag-vs-time plots comparing the true light curve to the reference magnitude (assumes the template doesn’t update), with the 3σ limiting mag, a 5σ exclusion band, and flags for whether each epoch is detected in the positive and/or negative subtraction.]
Constructing Light Curves from subtractions ain’t easy
For some source at RA, Dec & t_i, determine the best ref_mag at t = t_i, then:
• detection in the positive sub?
  • yes → total mag = TM+ [detection]
  • no → limit_mag fainter than ref_mag?
    • no → total mag = limit_mag [upper limit]
    • yes → detection in the negative sub?
      • no → total mag = ref_mag [detection]
      • yes → mag in negative sub < limit_mag − ref_mag?
        • yes → total mag = TM− [detection]
        • no → total mag = limit_mag [upper limit]
TM+ = −2.5 log10( f_aper × 10^(−0.4 (sub_zp − ref_zp)) + flux_aper ) + ub1_ref_zp
TM− = −2.5 log10( −f_aper × 10^(−0.4 (sub_zp − ref_zp)) + flux_aper ) + ub1_ref_zp
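The decision flow and the TM± formulas can be sketched together. Caveats: the yes/no branch assignments and the final comparison against the limiting magnitude are one reading of the flowchart, the variable names are illustrative, and the sign conventions follow the standard magnitude definition (smaller = brighter); none of this is the actual TCP code.

```python
import math

def total_mag(pos_detected, neg_detected, f_aper_pos, f_aper_neg,
              flux_aper_ref, sub_zp, ref_zp, ref_mag, limit_mag):
    """Total-magnitude decision flow for one epoch (sketch).

    Returns (magnitude, "detection" | "upper limit").
    """
    def tm(signed_sub_flux):
        # Scale the subtraction flux to the reference zeropoint and
        # combine it with the reference aperture flux.
        total = signed_sub_flux * 10 ** (-0.4 * (sub_zp - ref_zp)) + flux_aper_ref
        return -2.5 * math.log10(total) + ref_zp

    if pos_detected:                      # brighter than the template
        return tm(f_aper_pos), "detection"
    if limit_mag <= ref_mag:              # limit not fainter than ref: no info
        return limit_mag, "upper limit"
    if not neg_detected:                  # source sits at the reference level
        return ref_mag, "detection"
    tm_minus = tm(-f_aper_neg)            # fainter than the template
    if tm_minus < limit_mag:              # total still brighter than the limit
        return tm_minus, "detection"
    return limit_mag, "upper limit"
```

For example, a positive-subtraction flux of 100 on a reference flux of 1000 (equal zeropoints of 25) yields a total magnitude a bit brighter than the reference alone.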
Classifiers
• General Classifier
• Filter out: poorly subtracted sources
• Filter out: minor planets / rocks
• Filter out: well-sampled, long-baseline variables (periodic & non-periodic)
• Identify interesting sources near known galaxies
• Identify periodic variable science class when confidence is high
• Timeseries Classifier
• Weighted combination of machine learning classifiers
• Astronomer-crafted classifiers for specific science types
• Microlensing, Supernovae
Interesting with nearby galaxy context
Interesting without context information
Uninteresting
General Classification (Source)
Rock class
(general) Periodic variable class
Poor subtraction JUNK class
SN, AGN of various quality classes
Nicely subtracted, non-galaxy, non-periodic variable classes
• Three general classification groups.
• Periodic variables fall within the “uninteresting” group, although more specific sub-classifications are known.
Interesting with nearby galaxy context
Uninteresting
General Classification (Source)
• Applied to ~80 spectroscopically confirmed, user-classified (SN, AGN, galaxy) sources.
• An SN light-curve classifier is needed when galaxy context is not available, and to improve confidence in SN classification.
faint, poorly subtracted (11 SN)
SN, AGN, galaxy (58 SN)
Interesting without context information
General Classifier: components & cuts
• Crowd-source-modeled “RealBogus” metric
• Cut on: average RealBogus, derivatives of RB components
• Cut on: % of epochs in a source with good RealBogus
• PSF statistics
• Cuts on: PSF symmetry, eccentricity (averages)
• Neighboring object comparisons
• Cuts on the significance of the above metrics when compared to neighboring pixels
• Minor Planet check
• Does an epoch intersect a Minor Planet? (PyMPChecker)
• Well-sampled sources
• Cuts on: well-sampled periodic & non-periodic sources
PyEphem
PyMPChecker
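The cuts listed above might be chained as a simple gauntlet, as in this sketch. The metric names, thresholds, and the minor-planet flag are all hypothetical placeholders; the real cuts (and the PyMPChecker / PyEphem interfaces) differ.

```python
def passes_general_cuts(src, rb_min=0.3, good_frac_min=0.5,
                        ecc_max=0.8, is_minor_planet=False):
    """Return True if a source survives the junk/rock cuts (sketch).

    src: dict with hypothetical per-source summary metrics.
    """
    if is_minor_planet:                         # e.g. flagged via PyMPChecker
        return False
    if src["mean_realbogus"] < rb_min:          # poorly subtracted on average
        return False
    good = sum(rb >= rb_min for rb in src["epoch_realbogus"])
    if good / len(src["epoch_realbogus"]) < good_frac_min:
        return False                            # too few good-RealBogus epochs
    if src["mean_psf_eccentricity"] > ecc_max:  # asymmetric PSF -> likely junk
        return False
    return True
```

Ordering the cheap cuts first lets most junk fail fast before the more expensive neighbor comparisons run.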
Evaluating and Combining Classifiers
• Issues when using multiple classifiers:
• How to combine classifiers using weights or a tree hierarchy
• How to generate final classification “probabilities” when using:
• Widely varying types of classifiers
• Each classifier may contain sub-classifications with their own class probabilities.
• Evaluate the final combination of classifiers
• We classify PTF09xxx user classified sources
• We display success / failure cases for each general class
• Update classifier weights & cuts, try again.
• OR: Iteratively & algorithmically find best weights.
The “Netflix Prize” was won using a combination of ~1000 different classifiers.
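A weighted combination of classifier outputs, as described above, can be as simple as a normalized weighted average of each classifier's class-probability vector. This sketch assumes each classifier emits a dict of class probabilities; the weighting scheme is illustrative, not TCP's.

```python
def combine_classifiers(prob_dicts, weights):
    """Weighted average of per-classifier class-probability dicts (sketch)."""
    assert len(prob_dicts) == len(weights)
    total = sum(weights)
    combined = {}
    for probs, w in zip(prob_dicts, weights):
        for cls, p in probs.items():
            combined[cls] = combined.get(cls, 0.0) + w * p / total
    return combined
```

Iteratively tuning the `weights` against the user-classified evaluation set is then a small search problem, which is the "iteratively & algorithmically find best weights" option above.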
~0.4-day-period RR Lyrae using 10-epoch noisification
• Currently, science classes are determined by combining the weighted probabilities generated by different classification models for a source.
• Each machine-learned classification model is trained using “noisified” light-curves generated with different parameters.
0.1 - 0.17 day period RR Lyrae using 15 epoch noisification
Clicking on a class for one of dozens of ML models shows the highest-classification-probability sources for that model::class
~0.14-day-period RR Lyrae using 20-epoch noisification
Periodic variable classifiers
Overplotting of the period-folded model still needs work
period-fold plotting probably failed here
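Period folding itself, used for the overplots above, is standard: map each observation time onto a phase in [0, 1) modulo the candidate period. A minimal sketch (the epoch zero-point `t0` is an assumed parameter):

```python
def phase_fold(times, period, t0=0.0):
    """Fold observation times on a trial period; returns phases in [0, 1)."""
    return [((t - t0) / period) % 1.0 for t in times]
```

Plotting magnitude against these phases stacks all cycles of a periodic variable on top of one another, which is why a failed fold (wrong period or bad `t0`) produces the scatter seen in the screenshot annotation.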
Continuing Work
• Test, improve general classifier cuts
• Push general classifications to Followup Marshal
• Push specific variable science class identified sources to Followup Marshal
• Explore other timeseries classifiers for periodic variable classification.
TCP Explorer