hitArbiter Specifications
• Multiplex 5 inputs into 1
• provide address of hit
• provide multiple hits while the first hit is processed (pileup)
• constant latency for each pixel channel for rising and falling edge
• known inefficiencies:
  – multiple address bits
  – missing address bits (after hit)
  – missing pileup bits
change libraries to 1.2 V
• change timing libraries to 1.2 V
  – in: /projects/IBM_CMOS8/gtk2010/V3.0/workAreas/akluge/digital/hitArbiterAndController2010a/pnr/work
  – ln -s /vlsicad/micsoft/IBM_CMOS8_V1.7_DM_vcad/ibm_cmos8rf/std_cell/relDM/synopsys_1.2/ synopsys_std_cell_1.2
• change oa.conf in pnr/script:
    # original timing files
    #set rda_Input(ui_timelib,max) { ./synopsys_std_cell/slow_v140_t125/IBM_CMOS8RF_CMOS8RF_SC_SLOW_V140_T125.lib \
    #  ./synopsys_short_io/slow_v135_t125_pv162/IBM_CMOS8RF_BASE_SHORT_IO_SLOW_V135_T125_PV162.lib }
    #set rda_Input(ui_timelib,min) { ./synopsys_std_cell/fast_v160_tm55/IBM_CMOS8RF_CMOS8RF_SC_FAST_V160_TM55.lib \
    #  ./synopsys_short_io/fast_v165_tm40_pv363/IBM_CMOS8RF_BASE_SHORT_IO_FAST_V165_TM40_PV363.lib }
    # end original timing files

    # 1.2 V timing files
    set rda_Input(ui_timelib,max) { ./synopsys_std_cell_1.2/slow_v110_t125/IBM_CMOS8RF_STD_SLOW_V110_T125.lib \
      ./synopsys_short_io/slow_v108_t125_pv162/IBM_CMOS8RF_BASE_SHORT_IO_SLOW_V108_T125_PV162.lib }
    set rda_Input(ui_timelib,min) { ./synopsys_std_cell_1.2/fast_v130_tm55/IBM_CMOS8RF_STD_V12_FAST_V130_TM55.lib \
      ./synopsys_short_io/fast_v132_tm40_pv363/IBM_CMOS8RF_BASE_SHORT_IO_FAST_V132_TM40_PV363.lib }
    # end 1.2 V timing files
• change init.tcl in syn/script:
    # original settings
    #set ec::LIBRARY "$IBM_PDK/ibm_cmos8rf/std_cell/rel$PDK_OPT/synopsys/slow_v140_t125/IBM_CMOS8RF_CMOS8RF_SC_SLOW_V140_T125.lib \
    #  $IBM_PDK/ibm_cmos8rf/short_io/rel$PDK_OPT/synopsys/slow_v135_t125_pv162/IBM_CMOS8RF_BASE_SHORT_IO_SLOW_V135_T125_PV162.lib "
    # end original settings

    # 1.2 V libraries
    set ec::LIBRARY "$IBM_PDK/ibm_cmos8rf/std_cell/rel$PDK_OPT/synopsys_1.2/slow_v110_t125/IBM_CMOS8RF_STD_SLOW_V110_T125.lib \
      $IBM_PDK/ibm_cmos8rf/short_io/rel$PDK_OPT/synopsys/slow_v108_t125_pv162/IBM_CMOS8RF_BASE_SHORT_IO_SLOW_V108_T125_PV162.lib "
    # end 1.2 V libraries
work flow
• run gp, select gtk2010 version 1.7 / version 3
• run all procedures in the new window, except the HDL simulator
• hitArbiter2010a
  – schematic is in hitArbiterManual:hitArbiter2010a
  – verilog file created via 'launch simulation verilog', saves to ./hitArbiter2010a_run/ihnl/cds0/netlist
  – file is copied to
  – digital/source/hitArbiterManual2010a_noGndVddNS.v is created by removing gndVDDwellSub and replacing GND/VDD with 1'b0 and 1'b1; the module is called hitArbiterManual2010a
• hitArbiterController2010a syn
  – file in ./digital/sources and linked to ./digital/hitArbiterAndController2010a/syn/verilog
  – constraints file is in syn/sdc and needs optimisation
  – output is in syn/output/r2g.v
work flow hitArbiterAndController2010a
• digital/hitArbiterAndController2010a/pnr/scripts/import.tcl merges hitArbiter2010a and hitArbiterController2010a:
    catch {exec rm ../../syn/output/r2gMergeFiles.v}
    catch {exec cat ../../syn/output/r2g.v /projects/IBM_CMOS8/gtk2010/V3.0/workAreas/akluge/digital/source/hitArbiterManual2010a_noGndVddNS.v > tmpCat.v}
    catch {exec cat tmpCat.v /projects/IBM_CMOS8/gtk2010/V3.0/workAreas/akluge/digital/source/hitArbiterAndController2010a.v > tmpCat1.v}
    catch {exec cat catInfoFile.txt tmpCat1.v > tmpCat2.v}
    catch {exec rm tmpCat.v}
    catch {exec rm tmpCat1.v}
    catch {exec mv tmpCat2.v ../../syn/output/r2gMergeFiles.v}
• synthesize hitArbiterController2010a
  – in digital/syn/work run run_rc
• in digital/hitArbiterAndController2010a/pnr/work
  – run velocity -replay ../scripts/all.tcl
• simulation with backannotation
  – verilog output from pnr is dfm.v and linked to hitArbiterAndController2010a_dfm.v -> ../hitArbiterAndController2010a/pnr/output/dfm.v
  – /projects/IBM_CMOS8/gtk2010/V3.0/workAreas/akluge/hdlCompile.script compiles all required files and starts the simulations
  – in ./log, ncsim.log and logVerifyArbiter.txt are the log files of the simulations; in ./verilog/sources, simulationPkg.vhd controls the logFile details:
      constant logMessage : boolean := TRUE;
      constant logSuccessfulHit : boolean := TRUE;
  – ./digital/sdf_backannotate contains the backannotation scripts: check that they are error/warning free; at present the run is not error free
work flow virtuoso
• open dfm layout and convert to layout
  – Launch > Layout L
  – from layout view: Tools > Remaster Instances
  – search for view name "abstract" and update to view name "layout"
  – save as a different view from dfm
• Assura DRC
  – IBM_PDK > Checking > Assura > DRC
  – run directory: ./DRC
  – rules file: /vlsicad/micsoft/IBM_CMOS8_V1.7_DM_vcad/IBM_PDK/cmrf8sf/V1.7.0.2DM/Assura/DRC/drc.rul (echo $techdir)
  – options: GridCheck, BEOL_STACK_323, CELL
• Assura density/antenna checks
  – Assura > Run DRC
  – run directory: ./ANT
  – change technology field to cmos8sfTech (is greyed out and thus not possible? check Rule Set field to antenna)
  – click the switch names button and set the BEOL_STACK_323 and ALL_CHECKS options
work flow virtuoso
• IBM_PDK > Checking > Calibre > DRC menu
• BEOL_STACK: 3_2_3, density local: off, design_type: cell, last metal: MA, num metal: 8, OK
• Rules tab: /vlsicad/micsoft/IBM_CMOS8_V1.7_DM_vcad/IBM_PDK/cmrf8sf/relDM/Calibre/DRC/cmrf8sf.drc.cal
• Input tab: new directory, OK
• Output tab
• Run DRC
• look at the summary file
workflow LVS
• Assura > Run LVS
• set run directory field to ./LVS
• change technology field to cmos8sfTech
• set switches NO_SUBC_IN_GRLOGIC & SBAR_feature
• rule set field to default
• click OK
• click watch logFile
• after the run click YES
• from LVS debug
  – click on the Nets/Devices to open the nets mismatch tool or devices mismatch tool
  – click view LVS Error Report
• go to the LVS dir and check the different files (.err, .cls, .sum)
workflow LVS with CDL
• from schematic
  – IBM_PDK > netlist > CDL menu
• run directory ./
• check that an include "subcircuit.cdl" file is present in the "include file" field
• click OK; a CDL netlist is written into the ./"moduleName".netlist file
• from schematic: IBM_PDK > netlist > CDL Processor for LVS
• select the netlist file and put it into the "files to process" field
• click OK; a modified netlist is written into "moduleName".netlist.lvs
• from the layout view click Assura > Run LVS
• from the "Run Assura LVS" window change the schematic design source from DFII to netlist
• click add and add the .netlist.lvs
• enter the cell name in schematic source
• from the Assura LVS window click "use existing extracted netlist" in the layout design source
• check that the run directory field is ./LVS
• set the rule set field to default_cdl (does not work)
• click OK
• is not error free
Calibre LVS
• IBM_PDK > Checking > Calibre > LVS menu
• Default Runset (never appeared)
• BEOL_STACK: 3_2_3, last metal: MA, #layers = 8, No_subc_in_grlogic = true, use_resistance_multipliers: true
• Rules tab: /vlsicad/micsoft/IBM_CMOS8_V1.7_DM_vcad/IBM_PDK/cmrf8sf/relDM/Calibre/DRC/cmrf8sf.drc.cal
• set DRC run directory
• click inputs tab
• run LVS
• in the run directory check the files
hitArbiterController2010a
• This is the circuit that originally produced only the block signals for the hitArbiter; it is now modified to also produce the lead/trail_edge_trigger signals.
• I think that this circuit will also help your circuit for the coarse counter.
• The parallel_load signal is activated for one clock cycle after the first rising clock edge after the hit, as it assumes that the hitRegisters (fine and coarse) have been loaded and are ready to be transferred after 1 clk cycle to the synchronisation registers. In your present scheme this is not true for the coarse counters. I wonder whether you could not simply take the parallel_load_requ signal to enable the coarse counter storage.
• The lead/trail_edge_trigger signals send the hit lead/trail edges with constant latency and are reset upon parallel_load & rising clock edge; they are thus at least 1 clock cycle long and at most 2 clk cycles, if daq_ready is active.
• Lead/trail_edge_trigger is at least one and up to two clk cycles long, but depending on the clk phase relation it can be active during one or two rising clock edges.
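The one-to-two-cycle pulse-width statement above can be captured in a toy timing model (my own simplification, not derived from the netlist): assuming daq_ready is active, the trigger is set asynchronously at the hit and cleared at the second rising clock edge after it (parallel_load & rising edge).

```python
def trigger_pulse_len(t_hit: float, t_clk: float = 1.0) -> float:
    # Trigger set asynchronously at t_hit; cleared at the second rising
    # clock edge after the hit. The pulse therefore lasts between one
    # and two clock cycles, depending on the phase of the hit relative
    # to the clock.
    phase = t_hit % t_clk
    return 2 * t_clk - phase
```

A hit just after a clock edge gives almost two full cycles; a hit just before an edge gives just over one cycle.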
[Figure: schematic of hitArbiterController2010a, a chain of D flip-flops with set/reset inputs clocked by clk_int. Signals shown: hit_in, set_lead_edge_int, lead_edge_int = set_block, block_int = lead_edge_trigger = block_pileup, parallel_load, parallel_load_requ, set_trail_edge_int, trail_edge_int = set_trail_edge_trigger_present, trail_edge_trigger_present = block_hit = trail_edge_trigger, reset_lead_trail_edge_int, parallel_load_i_out, reset_pileup_int, reset_pileup_int_int, reset_pileup_i, daq_ready, reset, clk_ro_int = clk_int, hit_int. Timing annotations: latency ~0 with pulse width 1-2 clk cycles; latency 0-1 clk cycles with pulse width 1-2 clk cycles; latency 1-2 clk cycles with pulse width 1 clk cycle.]
hitArbiterController2010a
• verification of the asynchronous circuit
• based on a list of hit entries, which are cross-checked against the outputs of the hitArbiter
• the delay of each hit is histogrammed
verifyHitArbiter

  entity verifyhitarbiter is
    port (
      i_in            : in std_logic_vector(4 downto 0);
      hit             : in std_logic;
      address         : in std_logic_vector(4 downto 0);
      pileup          : in std_logic_vector(4 downto 0);
      clk             : in std_logic;
      parallel_load_i : in std_logic
    );
  end verifyhitarbiter;
verifyHitArbiter

• 5 x (one process per pixel input):

  proc_store_hit_in0 : process
  begin
    wait on i_in(0);
    store_hit_in (pixel_address => 0,
                  i_in          => i_in(0));
  end process;

  type hit_record_type is record
    hit_in_count  : integer;
    pixel_number  : integer range 0 to 4;
    in_rise_time  : time;
    in_fall_time  : time;
    out_rise_time : time;
    out_fall_time : time;
    hit_out_count : integer;
    active        : boolean;
  end record;
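For post-processing the verification log outside the simulator, the record type above translates naturally into a Python dataclass (a convenience sketch for offline analysis, not part of the VHDL testbench):

```python
from dataclasses import dataclass

@dataclass
class HitRecord:
    # Python analogue of hit_record_type; times kept as floats in ns.
    hit_in_count: int = 0       # sequence number of the injected hit
    pixel_number: int = 0       # 0..4, one of the five arbiter inputs
    in_rise_time: float = 0.0
    in_fall_time: float = 0.0
    out_rise_time: float = 0.0
    out_fall_time: float = 0.0
    hit_out_count: int = 0
    active: bool = False        # still awaiting an output assignment

rec = HitRecord(hit_in_count=1, pixel_number=3, active=True)
```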
verifyHitArbiter

  proc_evaluate_hit : process
  begin
    wait on trigger_evaluation_delayed;
    if (trigger_evaluation_delayed = '1') then
      evaluate_hit;
      read_hit_arbiter_fifo(time_rising, time_falling, address, pileup);
      assign_hit (time_rising       => time_rising,
                  time_falling      => time_falling,
                  address_int       => address_int,
                  address_or_pileup => "ADDRESS");
      assign_hit (time_rising       => time_rising,
                  time_falling      => time_falling,
                  address_int       => pileup_loop,
                  address_or_pileup => "PILEUP");
    end if;
  end process;
verifyHitArbiter

• Assign_hit:

  time_since_in_fall_time := time_falling - hit_record(search_pointer).in_fall_time;
  if (time_since_in_fall_time > hit_arbiter_time_out) then
    hit_record(search_pointer).active := FALSE;
    if (address_or_pileup = "ADDRESS") then
      send_hit_to_file(search_pointer, "TIMEOUT", -1, -1);
    elsif (address_or_pileup = "PILEUP") then
      send_hit_to_file(search_pointer, "PILEUP_TIMEOUT", -1, -1);
      assert (FALSE) report "TIME_OUT in pileup search, this should not happen as time_outs should have been found already in address search" severity failure;
    else
      assert (FALSE) report "string error" severity failure;
    end if;
  end if;
verifyHitArbiter

• Assign_hit:

  -- hit cannot be assigned
  if (search_pointer = shared_write_pointer) then
    send_not_assigned_hit_to_file(search_pointer,
                                  address_or_pileup,
                                  address_int,
                                  time_rising,
                                  time_falling);
  end if;

  found_match := (((hit_record(search_pointer).active = TRUE) and
                   (hit_record(search_pointer).pixel_number = address_int) and
                   (time_since_in_fall_time <= hit_arbiter_time_out))
                  or (search_pointer = shared_write_pointer));
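The found_match expression above can be restated as a small Python predicate (same logic, names kept from the VHDL):

```python
def found_match(active: bool, pixel_number: int, address_int: int,
                time_since_in_fall: float, time_out: float,
                search_pointer: int, shared_write_pointer: int) -> bool:
    # A record matches when it is still active, belongs to the reported
    # pixel, and lies within the time-out window; reaching the shared
    # write pointer also ends the search ("cannot be assigned").
    return ((active
             and pixel_number == address_int
             and time_since_in_fall <= time_out)
            or search_pointer == shared_write_pointer)
```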
verifyHitArbiter

• Assign_hit:

  -- hit successfully assigned
  hit_record(search_pointer).active        := FALSE;
  hit_record(search_pointer).hit_out_count := hit_out_count;
  hit_record(search_pointer).out_rise_time := time_rising;
  hit_record(search_pointer).out_fall_time := time_falling;
  if (address_or_pileup = "ADDRESS") then
    send_hit_to_file(search_pointer, "SUCCESS",
                     latency_loop_counter_rising,
                     latency_loop_counter_falling);
  elsif (address_or_pileup = "PILEUP") then
    send_hit_to_file(search_pointer, "PILEUP",
                     latency_loop_counter_rising,
                     latency_loop_counter_falling);
  else
    assert (FALSE) report "string error" severity failure;
  end if;
verifyHitArbiter

• Assign_hit:

  fill_latency_histogram(address_int, search_pointer, time_rising, "RISING",
                         latency_histogramm_rising,
                         latency_loop_counter_rising);

  fill_latency_histogram(address_int, search_pointer, time_falling, "FALLING",
                         latency_histogramm_falling,
                         latency_loop_counter_falling);
hitArbiter2010a 18.03.11
hitArbiter2010a 17.03.11
hitArbiter2010a 17.3.2010
*************** hit counters8 **********
HitArbiter received hit from column (hit_in_count): 21303
Hits sent to front-end (hit_count_before_frontend(index)): 21339
---
HitArbiter successfully assigns address (hit_out_count): 21144
hit_out_count/hit_count_after_frontend: 9.925363E-01
hit_out_count/hit_count_before_frontend: 9.908618E-01
---
Input to hitArbiter was not treated (hit_arbiter_time_out_counter): 16
out of which double_hit_shared: 5
---
Arbiter found pileup (pileup_input_counter): 143
Successfully assigned pileupAddress (hit_out_pileup_count): 143
hit_arbiter_not_assigned_pileup_counter: 0
hit_arbiter_not_assigned_address_counter: 5
*************************************
Proc. report_latency_histogram RISING (bins 1-3 empty for all pixels):
  Pixel 0  Bin 0  entries: 7247  latency: 1.341 ns
  Pixel 1  Bin 0  entries: 5655  latency: 1.326 ns
  Pixel 2  Bin 0  entries: 4092  latency: 1.313 ns
  Pixel 3  Bin 0  entries: 2598  latency: 1.309 ns
  Pixel 4  Bin 0  entries: 1552  latency: 1.306 ns
Proc. report_latency_histogram FALLING (bins 1-3 empty for all pixels):
  Pixel 0  Bin 0  entries: 7247  latency: 0.473 ns
  Pixel 1  Bin 0  entries: 5655  latency: 0.47 ns
  Pixel 2  Bin 0  entries: 4092  latency: 0.46 ns
  Pixel 3  Bin 0  entries: 2598  latency: 0.459 ns
  Pixel 4  Bin 0  entries: 1552  latency: 0.439 ns
***********************************
8 pileups missing
3 double hits in address (similar to pileup)
2 hit/pileup mixups
------
143 + 3 + 2 = 148 pileups
8 hits missing
-------
8 missing / 21339 = 0.037%
148 pileups / 21339 = 0.69%
21144 correct / 21339 = 99.09%
(21144 correct + 148 pileups) / 21339 = 99.78%
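The percentages above follow directly from the counters; a quick re-derivation:

```python
# Numbers taken from the 17.3 simulation run quoted above.
total   = 21339          # hits sent to the front-end
correct = 21144          # correctly assigned addresses
pileups = 143 + 3 + 2    # pileups + double hits + hit/pileup mixups = 148
missing = 8

pct = lambda n: 100 * n / total
# pct(missing) ~ 0.037 %, pct(pileups) ~ 0.69 %,
# pct(correct) ~ 99.09 %, pct(correct + pileups) ~ 99.78 %
```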
hitArbiter2010a 17.3.2010
• 2nd i_in arrives slightly before the clk edge, thus the pileup is not registered early enough to be sent and the hit is still blocked
• this is a hit which is not seen at all, neither as address nor as pileup
• 8/21339 = 3.7e-4
• (4516 2 10592004.435472 ns
   5977 2 14074866.797683 ns
   10388 3 24380982.30046 ns
   11063 0 25950847.9229 ns
   12721 1 29870235.41273 ns
   14829 1 34870941.847717 ns
   15343 3 36071488.918378 ns
   16283 0 38233623.290805 ns)
hitArbiter2010a 17.3.2010
• double i_in at almost the same time
• two hits but only one time-tag pair: the time measurement is ambiguous; similar to pileup but with time info
• 3/21339 = 1.4E-4
• produces three errors in the log file: not assigned + 2x timeout
  (10085 2 23673143.738033
   +10086 0 23673143.935083 ns
   -1 15 0 ns 0 ns 23673145.051033 ns
   -1 15 0 ns 0 ns 35384704.987393 ns
   +15054 2 35384704.223075 ns
   +15055 3 35384703.678393 ns
   -1 15 0 ns 0 ns 47231833.95059 ns
   +20096 4 47231832.63459 ns
   +20097 1 47231832.825357 ns)
hitArbiter2010a 17.3.2010
• a hit on the same pixel shortly after the end of the first can provoke a short spike on hit
• that way the correct address value is cleared (no hit bit) and the address value is copied to the pileup value
• for data analysis the problem is indicated by no bit in address (= 0, but pileup states the address); the time measurement is correct
• analysis reports 1 noassign, 1 pileup, 1 timeout
• the 2nd hit passes via the and gate after the first FF
• 2/21339 = 9.4E-5
  19657 2 46244212.462846 ns
  19703 0 46365878.985321 ns
• block_pileup connected to the clk input of the addres_prebuf FF should cure this problem, as the address is only latched when a read-out is actually initiated
• Modification G removes these two occurrences

hitArbiter2010a 21.3.2010
hitArbiter2010a 18.3.2010
*************** hit counters8 **********
HitArbiter received hit from column (hit_in_count): 21303
Hits sent to front-end (hit_count_before_frontend(index)): 21339
---
HitArbiter successfully assigns address (hit_out_count): 21146
hit_out_count/hit_count_after_frontend: 9.926301E-01
hit_out_count/hit_count_before_frontend: 9.909555E-01
---
Input to hitArbiter was not treated (hit_arbiter_time_out_counter): 14
out of which double_hit_shared: 3
and out of which no_hit_shared: 0
Each double_hit produces one not_assigned error + 2 timeout/pileup, as the 2 inputs to the hitArbiter seem not to be treated and the output of the hitArbiter is not assigned.
In the data stream the address indicates the double hit and thus they can be treated as pileup.
---
Arbiter found pileup (pileup_input_counter): 143
Successfully assigned pileupAddress (hit_out_pileup_count): 143
hit_arbiter_not_assigned_pileup_counter: 0
hit_arbiter_not_assigned_address_counter: 3
Proc. report_latency_histogram RISING (bins 1-3 empty for all pixels):
  Pixel 0  Bin 0  entries: 7248  latency: 1.341 ns
  Pixel 1  Bin 0  entries: 5655  latency: 1.326 ns
  Pixel 2  Bin 0  entries: 4093  latency: 1.313 ns
  Pixel 3  Bin 0  entries: 2598  latency: 1.309 ns
  Pixel 4  Bin 0  entries: 1552  latency: 1.306 ns
Proc. report_latency_histogram FALLING (bins 1-3 empty for all pixels):
  Pixel 0  Bin 0  entries: 7248  latency: 0.473 ns
  Pixel 1  Bin 0  entries: 5655  latency: 0.47 ns
  Pixel 2  Bin 0  entries: 4093  latency: 0.46 ns
  Pixel 3  Bin 0  entries: 2598  latency: 0.459 ns
  Pixel 4  Bin 0  entries: 1552  latency: 0.439 ns
***********************************
8 pileups missing
3 double hits in address (similar to pileup)
2 hit/pileup mixups
------
143 + 3 + 2 = 148 pileups
8 hits missing
-------
8 missing / 21339 = 0.037%
148 pileups / 21339 = 0.69%
21144 correct / 21339 = 99.09%
(21144 correct + 148 pileups) / 21339 = 99.78%
hitArbiter2009
hitArbiter2009 Problems
Problem Y (2 different delays):
• if the same pixel has been hit before, the flip-flop is already '1' and the hit is routed directly through, bypassing the delay element -> longer pulse and different timing
• input pulse width normal: i_in 16.016424 -> hit 15.182424 (diff 834 ps)
• when FF already '1': 16.418527 -> hit pulse 16.506527 (diff -88 ps)
• i_in -> hit: normal 1321 ps, if FF already '1' 399 ps
• offline correction possible
• reset needed after each hit
• all timing from the VCAD library
• modification B solved it.
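The pulse-width differences quoted above check out (times from the simulation log, in ns):

```python
def diff_ps(a_ns: float, b_ns: float) -> int:
    # Difference of two pulse widths, converted from ns to ps.
    return round((a_ns - b_ns) * 1000)

normal  = diff_ps(16.016424, 15.182424)   # 834 ps: output pulse shorter
ff_high = diff_ps(16.418527, 16.506527)   # -88 ps: output pulse longer
```

The sign flip between the two cases is what makes the timing differ when the flip-flop is already '1' and the delay element is bypassed.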
Problem Z:
• pileup info comes only from the time after the trail_edge trigger; a 2nd hit during the time when the pulse is '1' is not considered for pileup, as the block signal is only activated with trail_edge_trigger
• change to two block signals, one for the hit output (trail edge), the other for pileup (upon lead_edge_trigger)

Solution Z:
• block_pileup, which would better be called enable_pileup, is now separated from block_hit and starts with the rising edge of hit.
Problem A:
• the pileup reset pulse comes while the pileup input is still active -> pileup is not cleared and is still active for the next hit -> spurious pileup -> cannot be assigned
• reset_pileup_in is still active while a new hit comes

Solution A:
• modification A
• reset_pileup_in is now connected to the async reset and thus adds to the dead time for pileup detection -> with an edge detector the pulse can be made shorter, see next slide A1
Problem B:
• when a 2nd signal arrives while in the block state and the block signal is released while the discriminator pulse is still '1', the remaining pulse is still sent to the hit output
• block_hit is released too early and does not stop the hit from entering the first FF; when the first hit is already low but the hit has not been synchronized and parallel_loaded, the 2nd hit is sent through but with too short a pulse length

Solution B:
• modification B
• possibly block_hit into the last or-gate with (hit/hit_i) is then not needed
Problem E:
• when the same pixel is activated again after the first hit's trailing edge but before block_hit deactivation, pileup is not registered
• probability: < 10E-3

Solution E:
• modification E
• pileup is only set for the non-active pixel and can be set for the active pixel only after block_pileup is inactive, see next slide
• no pileup bit is set for the active pixel anymore
Problem F:
• can the delay cell be avoided?

Solution F:
• yes, but is the reaction time shorter than the delay? The reaction time defines the time susceptible to double hits.

Question: is F beneficial?
• if the turnaround time of the hit signal is longer than the delay cell, then the uncertainty window for double hits is longer
• the following simulations were done with modified verilog; backannotations for the manually modified code are not present
*************** hit counters8 **********
HitArbiter received hit from column (hit_in_count): 425018
Hits sent to front-end (hit_count_before_frontend(index)): 425714
---
HitArbiter successfully assigns address (hit_out_count): 422009
hit_out_count/hit_count_after_frontend: 9.929203E-01
hit_out_count/hit_count_before_frontend: 9.912970E-01
---
Input to hitArbiter was not treated (hit_arbiter_time_out_counter): 184
out of which double_hit_shared: 40
---
Arbiter found pileup (pileup_input_counter): 2829
Successfully assigned pileupAddress (hit_out_pileup_count): 2825
hit_arbiter_not_assigned_pileup_counter: 4
***********************************
Proc. report_latency_histogram RISING (bins 1-3 empty for all pixels):
  Pixel 0  Bin 0  entries: 143578  latency: 0.721 ns
  Pixel 1  Bin 0  entries: 114694  latency: 0.716 ns
  Pixel 2  Bin 0  entries: 81475   latency: 0.716 ns
  Pixel 3  Bin 0  entries: 52218   latency: 0.712 ns
  Pixel 4  Bin 0  entries: 30044   latency: 0.705 ns
Proc. report_latency_histogram FALLING (bins 1-3 empty for all pixels):
  Pixel 0  Bin 0  entries: 143578  latency: 0.489 ns
  Pixel 1  Bin 0  entries: 114694  latency: 0.487 ns
  Pixel 2  Bin 0  entries: 81475   latency: 0.476 ns
  Pixel 3  Bin 0  entries: 52218   latency: 0.478 ns
  Pixel 4  Bin 0  entries: 30044   latency: 0.458 ns
***********************************
*************** Total hit counters after all runs
• If Mod F: when a piled-up signal is followed by a hit, the hit goes through correctly (good) but the pile-up is signalled again (bad) (probability 4/400000) and cannot be assigned (hit_arbiter_not_assigned_pileup_counter)
• hit_arbiter_time_out needs to be bigger than the max difference of the TOT pulses, otherwise the search procedure does not wait for address and pileup to be read out.
• If F: when a hit arrives almost at the end of block_hit and block_pileup, it is seen neither as hit nor as pileup (184 - 2*40 = 104/400000; 184 also counts double hits, each double hit making 2 counter entries); need to verify the real occurrence with back annotation.
• If F: a double hit provokes three errors:
    -1 15 0 ns 0 ns 23673144.454033 ns 23673160.05226 ns 10010 FALSE NOTASSIGNED
    10085 2 23673143.738033 ns 23673157.765934 ns 0 ns 0 ns 9017 OK TIMEOUT
    10086 0 23673143.935083 ns 23673159.56326 ns 0 ns 0 ns 9018 OK TIMEOUT
  these are part of the 184 errors; 40 double errors counted
• runs with ABCDE (no F); the modifications have no backannotation.
ABCDE
Hits to all arbiters: 3299002 / 3293867
*************** hit counters8 **********
HitArbiter received hit from column (hit_in_count): 425018
Hits sent to front-end (hit_count_before_frontend(index)): 425714
---
HitArbiter successfully assigns address (hit_out_count): 422062
hit_out_count/hit_count_after_frontend: 9.930450E-01
hit_out_count/hit_count_before_frontend: 9.914215E-01
---
Input to hitArbiter was not treated (hit_arbiter_time_out_counter): 169
out of which double_hit_shared: 82
---
Arbiter found pileup (pileup_input_counter): 2787
Successfully assigned pileupAddress (hit_out_pileup_count): 2787
hit_arbiter_not_assigned_pileup_counter: 0
***********************************
Proc. report_latency_histogram RISING (bins 1-3 empty for all pixels):
  Pixel 0  Bin 0  entries: 143595  latency: 1.333 ns
  Pixel 1  Bin 0  entries: 114707  latency: 1.321 ns
  Pixel 2  Bin 0  entries: 81492   latency: 1.312 ns
  Pixel 3  Bin 0  entries: 52225   latency: 1.311 ns
  Pixel 4  Bin 0  entries: 30043   latency: 1.298 ns
Proc. report_latency_histogram FALLING (bins 1-3 empty for all pixels):
  Pixel 0  Bin 0  entries: 143595  latency: 0.489 ns
  Pixel 1  Bin 0  entries: 114707  latency: 0.487 ns
  Pixel 2  Bin 0  entries: 81492   latency: 0.476 ns
  Pixel 3  Bin 0  entries: 52225   latency: 0.478 ns
  Pixel 4  Bin 0  entries: 30043   latency: 0.458 ns
***********************************
• in ABCDE (version a), when a pileup hit is longer than the real hit and piles up again after the read-out has started, the 2nd pileup is not seen
• 6/400000
• same pixel hit within a few hundred ps
• 1/400000
• a backannotated simulation must show the real occurrence of the errors and the pileup occurrence
Notes on simulation of hitArbiter2010a
• setup and hold violations of all FFs modified: still indicated but do not produce x states which are propagated
• this is needed as asynchronously arriving hits can arrive close to the clock
• the DFFn are modified and the -binding option overrides the library definition
notes from hitArbiter transfer from gtk2009 (ARM) to IBM
na62_demo_manual hitArbiter

[Figure: schematic comparison of cell mappings, Artisan -> VCAD: Buf -> VCAD clk buffer (VCAD BUF); DFFRHQX8TF -> VCAD DFFR (min pulse width is missing!); NOR3X8TF -> VCAD NOR3; NAND2 -> VCAD NAND2; DLY1X1TF -> setup/hold VCAD DFFR]
delay element ARTISAN
• NOR3: typ delay: 20 ps
• NAND2: typ delay: 20 ps
• setup/hold DFFR typ: 90 ps / -43 ps
• delay typ: 102 ps
• 20 + 20 + 102 = 142 -> 90 => 60%
delay element VCAD
• NOR3: min delay fast/slow: 33 ps / 56 ps
• NAND2: min delay fast/slow: 21 ps / 39 ps
• setup/hold DFFR slow: 208 ps / -112 ps
• fast gates with slow setup: 33 + 21 + delay = x -> x / 1.6 = 208 => x = 332, delay = 278
• slow gates with slow setup: 56 + 39 + delay = x -> x / 1.6 = 208 => x = 332, delay = 237
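The required delay values follow from solving total / 1.6 = 208 ps for the two gate corners above (re-derived here; rounding as on the slide):

```python
SETUP_PS = 208     # worst-case DFFR setup time
MARGIN   = 1.6     # required safety factor on the chain delay

total = SETUP_PS * MARGIN        # ~332.8 ps total chain delay needed
delay_fast = total - (33 + 21)   # fast NOR3+NAND2 corner -> ~278 ps
delay_slow = total - (56 + 39)   # slow NOR3+NAND2 corner -> ~237 ps
```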
VCAD delay6

delay element VCAD (delay6)
• NOR3: min delay fast/typ/slow: 33 ps / 43 ps / 56 ps
• NAND2: min delay fast/typ/slow: 21 ps / 31 ps / 39 ps
• setup/hold DFFR worst case: 208 ps / -112 ps
• delay6 cell C: fast/typ/slow: 438 / 294 / 218 ps
• fast gates with wc setup: 33 + 21 + 218 = 272 -> setup worst case 208 => 31%
• slow gates with wc setup: 56 + 39 + 438 = 492 -> setup worst case 208 => 136%
• typ gates with wc setup: 43 + 31 + 294 = 368 -> setup worst case 208 => 78%
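The margin figures above can be recomputed from the listed gate and delay6 numbers (my own re-calculation for the fast and typical corners; small rounding differences from the slide are possible):

```python
def margin_pct(nor3_ps: float, nand2_ps: float, delay6_ps: float,
               setup_ps: float = 208.0) -> float:
    # Margin by which the NOR3 + NAND2 + delay6 chain exceeds the
    # worst-case DFFR setup time, in percent.
    total = nor3_ps + nand2_ps + delay6_ps
    return 100 * (total - setup_ps) / setup_ps

fast = margin_pct(33, 21, 218)   # ~31 %
typ  = margin_pct(43, 31, 294)   # ~77 %
```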
[Figure: further cell mappings, Artisan -> VCAD: OR3 -> VCAD OR3; INV -> VCAD inverter; AND2 -> VCAD AND2; DFFSRHQX8 -> VCAD DFFSRHQX8; DFFHQX8 -> VCAD DFFHQX8]

DFFSRHQX8: set is active high instead of active low in Artisan -> invert logic -> set with reset, load logic 1 with reset_pile_up and use the qbar output