
NVIDIA 1012 - Microsoft · 2017. 5. 31.

APPENDIX A1


Preliminary Claim Infringement Chart of U.S. Patent No. 6,532,505 (“505 Patent”)

Preliminary Contentions: These contentions are by their nature preliminary. Discovery is ongoing and, to date, NVIDIA has not produced any documents or provided substantive interrogatory responses, and deposition and expert discovery have not begun. Thus, this chart is based on publicly available information, and upon information and reasonable belief in light of such evidence. Polaris will supplement and/or amend these contentions in view of discovery in this case.

Claim Construction: This chart is prepared without the benefit of the parties’ proposed claim constructions or the Court’s claim constructions. Polaris reserves the right to supplement or amend its contentions to address any issues arising from any subsequent claim construction proceedings.

Doctrine of Equivalents: Except where specifically noted, Polaris contends that each element of each asserted claim is literally met. Polaris reserves its right to supplement and/or amend its doctrine of equivalents contentions to the extent that Defendants contend that any limitation is not literally met.

505 Accused Instrumentalities include any NVIDIA Tegra processors or system-on-chip (“SOC”) devices configured to control and support Low-Power Double Data Rate (“LPDDR”) memory, or tablets, products, devices, systems, and/or components of systems that include such processors or SOC devices configured to control and support LPDDR memory (or that include processors or SOC devices with substantially the same functionality with respect to Polaris’s infringement contentions as NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory) that were or are designed, developed, tested, made, used, offered for sale, sold in the United States, imported into the United States, or that have a nexus to the United States (“NVIDIA 505 Accused Instrumentalities”). NVIDIA products that fall within the scope of this definition include, for example and without limitation, the following groups:

• Tegra Group 1: NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory, including, but not limited to, the following models of processors: Tegra K1, Tegra X1, Tegra 2, Tegra 3, Tegra 4, and Tegra 4i. The representative product of this group is the NVIDIA Tegra K1 processor.

• Tegra Group 2: System products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory, including, but not limited to, the following models of system products: NVIDIA Shield Tablets, NVIDIA Shield TV/TV Pro, Google Nexus 9, Acer Chromebook 13 CB5-311-T1UU, HP Chromebook 14 G3, and Nabi Big Tab¹ that were or are designed, developed, tested, made, used, offered for sale, sold in the United States, imported into the United States, or that have a nexus to the United States. The representative product of this group is the NVIDIA Shield Tablet.

Infringement by NVIDIA:

• NVIDIA’S DIRECT INFRINGEMENT: NVIDIA has directly infringed and continues to directly infringe Claims 1-3, 11-12, and 14-18 of the 505 Patent under 35 U.S.C. § 271(a) by making, using, offering to sell, or selling within the United States, or importing into the United States, without authorization the Tegra Group 1 and 2 products. This direct infringement is described in relation to the Tegra Group 1 and 2 products in the chart below.

• NVIDIA’S INDIRECT INFRINGEMENT:

o NVIDIA has induced and continues to induce infringement by others of Claims 1-3, 11-12, and 14-18 of the 505 Patent under 35 U.S.C. § 271(b) by encouraging its customers (including, but not limited to, Dell and other third parties, such as manufacturers of game consoles, smartphones, virtual reality headsets, smart and mobile devices, and automotive and navigation systems using the Tegra Group 1 and 2 products, as well as manufacturers, distributors, importers, and retailers of Group 2 products), distributors, wholesalers, retailers, and end users to make, use, sell, offer to sell, and import in the United States without authorization the Tegra Group 1 and 2 products, and their own products incorporating the Tegra Group 1 and 2 products. The Tegra Accused Instrumentalities comprise the instrumentalities of the underlying direct infringement.

o NVIDIA has contributed to and continues to contribute to infringement by others of Claims 1-3, 11-12, and 14-18 of the 505 Patent under 35 U.S.C. § 271(c) by selling, offering to sell, importing, and/or supplying in the United States without authority components of the products that infringe Claims 1-3, 11-12, and 14-18 of the 505 Patent, including but not limited to Tegra Group 1 and 2 products as well as third-party products including game consoles, smartphones, virtual reality headsets, smart and mobile devices, and automotive and navigation systems. The Tegra Accused Instrumentalities comprise the instrumentalities of the underlying direct infringement.

¹ These products are identified based on information currently available on NVIDIA’s website and other public information. Polaris reserves the right to supplement, revise, or otherwise amend this list when additional model numbers are identified and new information becomes available during discovery.


U.S. Patent No. 6,532,505

Claim Language | NVIDIA 505 ACCUSED INSTRUMENTALITIES

1. A universal resource access controller coupled to a requesting system and a resource, wherein

To the extent the preamble is found to be a limitation, as discussed below, the Tegra Group 1 and 2 products comprise a universal resource access controller coupled to a requesting system and a resource.

Tegra Group 1 Products (NVIDIA Tegra Processors configured to control and support LPDDR memory):

Group 1 comprises Tegra processors or SOC devices configured to control and support LPDDR memory. The NVIDIA Tegra K1 processor is representative of Group 1 products because it comprises and uses LPDDR memory, and complies with the JEDEC LPDDR standard (JESD209B, JESD209-3C, and/or JESD209-4), as do all Group 1 products. An image of the representative product, NVIDIA Tegra K1 processor, which is connected to the NVIDIA Shield Tablet, is reproduced from NVIDIA’s website below:


See Mile High Milestone: Tegra K1 “Denver” Will Be First 64-bit ARM Processor for Android, PND00015126-29, http://blogs.nvidia.com/blog/2014/08/11/tegra-k1-denver-64-bit-for-android/.

See also:


NVIDIA Tegra K1 Preview & Architecture Analysis, PND00015130-141 (http://www.anandtech.com/show/7622/nvidia-tegrak1).

See also Tegra-K1-Mobile-Processor-Technical-Reference-Manual-DP06905001, v02p (“Tegra K1 TRM”), PND00017947-020289 at 17958:


Each Group 1 product has a memory controller that “handles memory requests from internal clients and arbitrates among them to allocate memory bandwidth” for DRAMs (e.g., “DDR3L and LPDDR3 SDRAMs”).

Tegra K1 TRM at PND00018589.

Each Group 1 product also has memory controller clients:

1.3 Memory Controller and Internal Bus Architecture

The Tegra K1 mobile processor has a highly optimized 64-bit memory controller, supporting low latency access for the CPU, optimized high bandwidth access for the graphics and video devices, and controlled latency for real time devices such as display.

There is a three-level hierarchy of memory clients:

1. Memory controller clients: The memory controller directly arbitrates between these using a complex algorithm optimizing DRAM efficiency. The highest bandwidth clients fall into this class, and they communicate directly with the memory controller using a proprietary high-speed bus.

2. AHB devices: These generally have a built-in DMA engine, and share a single memory client using the AHB bus protocol.

3. APB devices: All APB devices are slaves, and are serviced by a shared multi-channel APB DMA controller which is also an APB device.

Special provisions are made for the CPU to bypass parts of the memory controller arbitration to help achieve a lower latency.

Tegra K1 TRM at PND00017959 (emphasis added).

Each Group 1 product also has at least one resource (e.g., the “pending request buffer” and the “DRAM”):

Tegra K1 TRM at PND00018590.


Because DRAM and request buffers are each a resource and because the memory controller of each Group 1 product controls access to the resources, each Group 1 product has a universal resource access controller that is coupled to at least one resource.

Because the memory controller of each Group 1 product “handles” and “arbitrates among” two or more “internal clients” that make “memory requests,” each Group 1 product has at least one requesting system that is coupled to the universal resource access controller.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard. As shown above in connection with Group 1, each Group 2 product comprises a universal resource access controller coupled to a requesting system and a resource.

[a] when the requesting system desires access to the resource, the requesting system generates a resource access request which is passed to the universal resource controller which, in turn,

The requesting system of Tegra Group 1 and 2 products generates a resource access request which is passed to the universal resource controller when the requesting system desires access to the resource.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and support LPDDR memory):

As shown above, each Group 1 product is configured to control and support LPDDR memory. In compliance with the LPDDR standard, the requesting system (e.g., one of the memory controller clients) of Group 1 generates a resource access request (e.g., memory client request) which is passed to the universal resource controller when the requesting system desires access to the resource (e.g., shared memory):


The Tegra K1 device supports a single physical channel of memory interface, which operates in either single x32 or single x64 configurations. It implements two protocol arbiters and has a single Arbitration Domain. There is an alternate path from the Arbitration Domain that provides a path to IRAM via AHB re-direction.

Figure 28 below is a simplified view of how memory requests are arbitrated in the Tegra K1 MSS. Memory client requests are first arbitrated through a sequence of ring arbiters which perform a type of round-robin arbitration. There are three ring arbiters referred to as ring0, ring1, and ring2. Ring1 arbiter clients are the ISO clients (display and camera) and the winner of the ring2 arbiter. Each ring has a rate control mechanism referred to as Priority Tier Snap Arbiter (PTSA). The client’s bandwidth guarantee is specified by the PTSA rate (also referred to as “DDA”).

The block labelled “Row Sorter” is a pending request buffer (it sorts requests by the DRAM row that it refers to). This row sorter is made up of many “bank queues” which hold the requests made to the same DRAM bank/row. The number of requests pending in the row sorter can affect whether ring1 or ring2 arbiters are throttled (slowed down) based on thresholds.

Tegra K1 TRM at PND00018590 (emphasis added).
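For illustration only, the round-robin “ring” arbitration the TRM describes can be sketched minimally as follows. The `ring_arbiter` helper, the client names, and the pending flags below are hypothetical and do not reflect NVIDIA’s actual arbiter logic:

```python
from collections import deque

def ring_arbiter(clients):
    """Round-robin ring arbiter sketch: each call grants the next client
    in the ring that has a pending request, then advances the ring."""
    ring = deque(clients)
    def grant():
        for _ in range(len(ring)):
            client = ring[0]
            ring.rotate(-1)  # advance the ring regardless of outcome
            if client["pending"]:
                return client["name"]
        return None  # no pending requests
    return grant

# Hypothetical clients; per the TRM, ring1 would include the ISO clients
# and the winner of the ring2 arbiter.
clients = [
    {"name": "display", "pending": True},
    {"name": "camera", "pending": False},
    {"name": "ring2_winner", "pending": True},
]
grant = ring_arbiter(clients)
```

Each call to `grant()` skips non-pending clients and cycles fairly among the rest, which is the essential round-robin behavior the quoted passage attributes to the ring arbiters.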

The resource access request (e.g., “[m]emory client request”) is passed to the universal resource controller because it is received by the memory controller of each Group 1 product.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard. As shown above in connection with Group 1, each Group 2 product generates a resource access request which is passed to the universal resource controller when the requesting system desires access to the resource.

[b] uses a specific characteristic operating parameter of the requested resource, a current state of the requested resource, and a requested state of the requested resource

The universal resource controller of Tegra Group 1 and 2 products uses a specific characteristic operating parameter of the requested resource, a current state of the requested resource, and a requested state of the requested resource.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and support LPDDR memory):

As shown above, each Group 1 product is configured to control and support LPDDR memory. In compliance with the LPDDR standard, the universal resource controller (e.g., memory controller) of each Group 1 product uses a specific characteristic operating parameter of the requested resource, a current state of the requested resource, and a requested state of the requested resource:

Device/Rank Geometry

DRAM devices come in many sizes, widths, etc. The arbiter must be programmed to drive the correct combination of address bits, data bits, and protocol.

Tegra K1 TRM at PND00018593 (emphasis added).

Device Timing and Arbiter Timing

A DRAM arbiter has configuration related to timing parameters specific to the attached device. The JEDEC timing specifications for the device have to be converted from nanoseconds to cycle counts. The controller must also be programmed with the cycles per tick number for their particular clock domain (see also the subsection on “Global Memory Subsystem Configuration” below).

Tegra K1 TRM at PND00018594 (emphasis added).

As shown in the evidence cited above, the memory controller of the 505 Representative Product uses device geometry (e.g., different sizes and widths), which is a characteristic of the operation of the shared memory. As further reflected in the evidence cited above, the memory controller of the 505 Representative Product uses timing parameters that are also specific to the operation of the shared memory.

The universal resource controller (e.g., memory controller) of Group 1 products also uses a current state of the requested resource.

Other Arbiter Parameters

Other parameters related to arbitration that need to be configured include:

• overall number of requests outstanding
• which clients are considered isochronous
• which clients participate in power-saving hysteresis operations

Tegra K1 TRM at PND00018594 (emphasis added).

The Tegra K1 device supports a single physical channel of memory interface, which operates in either single x32 or single x64 configurations. It implements two protocol arbiters and has a single Arbitration Domain. There is an alternate path from the Arbitration Domain that provides a path to IRAM via AHB re-direction.

Figure 28 below is a simplified view of how memory requests are arbitrated in the Tegra K1 MSS. Memory client requests are first arbitrated through a sequence of ring arbiters which perform a type of round-robin arbitration. There are three ring arbiters referred to as ring0, ring1, and ring2. Ring1 arbiter clients are the ISO clients (display and camera) and the winner of the ring2 arbiter. Each ring has a rate control mechanism referred to as Priority Tier Snap Arbiter (PTSA). The client’s bandwidth guarantee is specified by the PTSA rate (also referred to as “DDA”).

The block labelled “Row Sorter” is a pending request buffer (it sorts requests by the DRAM row that it refers to). This row sorter is made up of many “bank queues” which hold the requests made to the same DRAM bank/row. The number of requests pending in the row sorter can affect whether ring1 or ring2 arbiters are throttled (slowed down) based on thresholds.

Tegra K1 TRM at PND00018590 (emphasis added).

The memory controller of each Group 1 product uses the number of overall requests outstanding and the number of requests pending in the “bank queues” of the “row sorter,” which reflect the current state of the shared memory.

The universal resource controller (e.g., memory controller) of each Group 1 product also uses a requested state of the requested resource. Examples of such requested states are enumerated under a section of the Tegra K1 TRM entitled “DRAM Protocol Arbiter Features.” Tegra K1 TRM at PND00018590-91. In particular:

• Single channel 64-bit data bus
  o 4 chip selects
  o 4 individually controllable clock-enables
  o 4 individually controllable ODTs (DDR3L)
  o Operates in either single x32 or single x64 configuration
  o Supports per-byte data masks

Tegra K1 TRM at PND00018590-91 (emphasis added).

• Per-byte data masks

Tegra K1 TRM at PND00018590-91 (emphasis added).

• Deadline-based arbitration with latency allowance that can be specified per-client, and under some circumstances dynamically adjusted for a given client.

Tegra K1 TRM at PND00018591 (emphasis added).

As shown in the evidence cited above, the memory controller of each Group 1 product arbitrates on the basis of masked data and client-specific deadlines, which reflect the requested states of the shared memory.
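For illustration only, deadline-based arbitration with per-client latency allowances, as the TRM describes it, can be sketched as follows. The client names, record fields, and the `pick_by_deadline` helper are hypothetical illustrations, not NVIDIA’s implementation:

```python
def pick_by_deadline(requests):
    """Deadline-based arbitration sketch: grant the pending request whose
    deadline (issue time plus per-client latency allowance) is earliest."""
    pending = [r for r in requests if r["pending"]]
    if not pending:
        return None
    winner = min(pending, key=lambda r: r["issue_time"] + r["latency_allowance"])
    return winner["client"]

# Hypothetical per-client latency allowances (illustrative values only):
reqs = [
    {"client": "display", "pending": True, "issue_time": 0, "latency_allowance": 4},
    {"client": "gpu", "pending": True, "issue_time": 1, "latency_allowance": 10},
]
winner = pick_by_deadline(reqs)  # display's deadline (0+4) precedes gpu's (1+10)
```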

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard. As shown above in connection with Group 1, the universal resource controller (e.g., memory controller) of each Group 2 product uses a specific characteristic operating parameter of the requested resource, a current state of the requested resource, and a requested state of the requested resource.

[c] to generate a corresponding sequenced command suitable for accessing the resource as required by the requesting system.

The universal resource controller of Tegra Group 1 and 2 products generates a corresponding sequenced command suitable for accessing the resource as required by the requesting system.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and support LPDDR memory):

As shown above, each Group 1 product is configured to control and support LPDDR memory. In compliance with the LPDDR standard, the universal resource controller of Group 1 products generates a corresponding sequenced command suitable for accessing the resource as required by the requesting system:

The Tegra K1 device supports a single physical channel of memory interface, which operates in either single x32 or single x64 configurations. It implements two protocol arbiters and has a single Arbitration Domain. There is an alternate path from the Arbitration Domain that provides a path to IRAM via AHB re-direction.

Figure 28 below is a simplified view of how memory requests are arbitrated in the Tegra K1 MSS. Memory client requests are first arbitrated through a sequence of ring arbiters which perform a type of round-robin arbitration. There are three ring arbiters referred to as ring0, ring1, and ring2. Ring1 arbiter clients are the ISO clients (display and camera) and the winner of the ring2 arbiter. Each ring has a rate control mechanism referred to as Priority Tier Snap Arbiter (PTSA). The client’s bandwidth guarantee is specified by the PTSA rate (also referred to as “DDA”).

The block labelled “Row Sorter” is a pending request buffer (it sorts requests by the DRAM row that it refers to). This row sorter is made up of many “bank queues” which hold the requests made to the same DRAM bank/row. The number of requests pending in the row sorter can affect whether ring1 or ring2 arbiters are throttled (slowed down) based on thresholds.

Tegra K1 TRM at PND00018590 (emphasis added).

As shown in the evidence cited above, the memory controller of each Group 1 product moderates the sequence and timing of memory access requests, thereby slowing down the memory controller’s issuance of commands to the shared memory (e.g., commands to the DDR3L memory that is the shared memory of the Group 1 product).

Further, in each Group 1 product, the sequenced commands (e.g., commands whose timing and sequence are moderated by the memory controller) are suitable for accessing (e.g., reading from and writing to) the resource (e.g., the shared memory) as required by the requesting system (e.g., the memory client that is reading from or writing to the shared memory):

15.6.1.18 MC_EMEM_ARB_TIMING_RC_0

Boot requirements:

• This register should be parameterized in the BCT and written by the Boot ROM during cold boot.
• This arbitration configuration register should be saved to SDRAM and restored by the OS during warm boot.
• During Boot ROM section of warm boot, this register may be derived from EMC Settings using the given equations.

This register is shadowed: see usage note at the top of Section [Document Error]

External Memory Arbitration Configuration: DRAM Timing: tRC
Offset: 0xa0 | Read/Write: R/W | Reset: 0x0000007f (0bxxxxxxxxxxxxxxxxxxxxxxxxx1111111)

Bit | Reset | Description
6:0 | 0x7f | RC: This is the minimum number of cycles between activate commands to the same bank. Program to ceil(max(EMC.RC,(EMC.RAS+EMC.RP))/DIV) - 1

Tegra K1 TRM at PND00018617-18 (emphasis added).
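The “Program to” formula for the RC field is straightforward arithmetic; for illustration only, the following sketch applies it to hypothetical EMC timing values (the values are illustrative, not actual Tegra K1 boot settings):

```python
import math

def rc_field(emc_rc, emc_ras, emc_rp, div):
    """RC field per the quoted formula:
    ceil(max(EMC.RC, (EMC.RAS + EMC.RP)) / DIV) - 1"""
    return math.ceil(max(emc_rc, emc_ras + emc_rp) / div) - 1

# Hypothetical example: EMC.RC = 39 cycles, EMC.RAS = 28, EMC.RP = 12, DIV = 2
value = rc_field(39, 28, 12, 2)  # max(39, 40) = 40; ceil(40/2) - 1 = 19
```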

15.6.1.24 MC_EMEM_ARB_TIMING_R2R_0

Boot requirements:

• This register should be parameterized in the BCT and written by the Boot ROM during cold boot.
• This arbitration configuration register should be saved to SDRAM and restored by the OS during warm boot.
• During Boot ROM section of warm boot, this register may be derived from EMC Settings using the given equations.

This register is shadowed: see usage note at the top of Section [Document Error]

External Memory Arbitration Configuration: DRAM Timing: tR2R
Offset: 0xb8 | Read/Write: R/W | Reset: 0x0000001f (0bxxxxxxxxxxxxxxxxxxxxxxxxxxx11111)

Bit | Reset | Description
4:0 | 0x1f | R2R: The number of cycles between consecutive read commands to different devices (different chip selects). Program to ceil(EMC.REXT/DIV)-1+OTFA+SFA.

Tegra K1 TRM at PND00018619-620 (emphasis added).

15.6.1.25 MC_EMEM_ARB_TIMING_W2W_0

Boot requirements:

• This register should be parameterized in the BCT and written by the Boot ROM during cold boot.
• This arbitration configuration register should be saved to SDRAM and restored by the OS during warm boot.
• During Boot ROM section of warm boot, this register may be derived from EMC Settings using the given equations.

This register is shadowed: see usage note at the top of Section [Document Error]

External Memory Arbitration Configuration: DRAM Timing: tW2W
Offset: 0xbc | Read/Write: R/W | Reset: 0x0000001f (0bxxxxxxxxxxxxxxxxxxxxxxxxxxx11111)

Bit | Reset | Description
4:0 | 0x1f | W2W: The number of cycles between consecutive write commands to different devices (different chip selects). Program to ceil(EMC.WEXT/DIV)-1+SFA.

Tegra K1 TRM at PND00018620 (emphasis added).
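Each of these arbitration fields occupies a fixed bit range of a 32-bit register (e.g., bits 4:0 for W2W, bits 6:0 for RC). For illustration only, writing such a field can be sketched generically; the `pack_field` helper below is hypothetical, not an NVIDIA API:

```python
def pack_field(reg_value, value, hi, lo):
    """Write `value` into bits hi:lo of a 32-bit register value,
    leaving the other bits unchanged."""
    width = hi - lo + 1
    mask = ((1 << width) - 1) << lo
    if value >> width:
        raise ValueError("value does not fit in the field")
    return (reg_value & ~mask & 0xFFFFFFFF) | (value << lo)

# e.g., reprogram W2W (bits 4:0) from its reset value 0x1f down to 6 cycles:
reg = pack_field(0x0000001f, 6, 4, 0)  # -> 0x00000006
```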

15.6.1.26 MC_EMEM_ARB_TIMING_R2W_0

Boot requirements:

• This register should be parameterized in the BCT and written by the Boot ROM during cold boot.
• This arbitration configuration register should be saved to SDRAM and restored by the OS during warm boot.
• During Boot ROM section of warm boot, this register may be derived from EMC Settings using the given equations.

This register is shadowed: see usage note at the top of Section [Document Error]

External Memory Arbitration Configuration: DRAM Timing: tR2W
Offset: 0xc0 | Read/Write: R/W | Reset: 0x0000003f (0bxxxxxxxxxxxxxxxxxxxxxxxxxx111111)

Bit | Reset | Description
5:0 | 0x3f | R2W: The number of cycles to turn the bus from reads to writes. Program to ceil(EMC.R2W/DIV)-1+OTFA+SFA.


Tegra K1 TRM at PND00018620 (emphasis added).

15.6.1.27 MC_EMEM_ARB_TIMING_W2R_0

Boot requirements:

• This register should be parameterized in the BCT and written by the Boot ROM during cold boot.
• This arbitration configuration register should be saved to SDRAM and restored by the OS during warm boot.
• During Boot ROM section of warm boot, this register may be derived from EMC Settings using the given equations.

This register is shadowed: see usage note at the top of Section [Document Error]

External Memory Arbitration Configuration: DRAM Timing: tW2R
Offset: 0xc4 | Read/Write: R/W | Reset: 0x0000003f (0bxxxxxxxxxxxxxxxxxxxxxxxxxx111111)

Bit | Reset | Description
5:0 | 0x3f | W2R: The number of cycles to turn the bus from reads to writes. [sic2] Program to ceil(EMC.W2R/DIV)-1+SFA.

2 The context of the discussion suggests that the register ought to store the number of cycles to turn the bus from writes to reads (as contrasted with what is stored in R2W).
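For illustration only, the programming formulas quoted from the TRM above share one shape: ceil(EMC.&lt;param&gt;/DIV) - 1 plus per-register adjustments (OTFA and/or SFA). The sketch below computes such field values; the parameter values used are hypothetical, not taken from the TRM:

```python
import math

def arb_timing(emc_cycles: int, div: int, extra: int) -> int:
    """One MC_EMEM_ARB_TIMING_* field, per the quoted formula shape:
    ceil(EMC.<param>/DIV) - 1 + <adjustments>."""
    return math.ceil(emc_cycles / div) - 1 + extra

# Hypothetical example values (not from the TRM):
DIV, OTFA, SFA = 2, 1, 1
r2r = arb_timing(8, DIV, OTFA + SFA)  # tR2R: ceil(8/2)-1+1+1 = 5
w2w = arb_timing(6, DIV, SFA)         # tW2W: ceil(6/2)-1+1 = 3
```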



Tegra K1 TRM at PND00018620-21 (emphasis added).

The memory controller of each Group 1 product moderates the sequence and timing of commands to the shared memory (e.g., reads and writes) and these commands are suitable for accessing the resource as required by the requesting system because, in the course of the proper operation of each Group 1 product, these commands result in the intended behavior from the resource (e.g., the shared memory is read, if a read command is required by the requesting system; the shared memory is written to, if a write command is required by the requesting system).

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard. As shown above in connection with Group 1, the universal resource controller of Group 2 products generates a corresponding sequenced command suitable for accessing the resource as required by the requesting system.

2. A universal resource access controller as recited in claim 1, wherein the universal resource controller comprises:

Tegra Group 1 and 2 products comprise a universal resource access controller. See preamble, Claim 1.


[a] a configurable system interface coupled to the requesting system suitably arranged to both receive the resource access request and to generate a corresponding universal command;

The universal resource access controller of Tegra Group 1 and 2 products comprises a configurable system interface coupled to the requesting system suitably arranged to both receive the resource access request and to generate a corresponding universal command.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and support LPDDR memory):

As shown above, each Group 1 product is configured to control and support LPDDR memory. In compliance with the LPDDR standard, the universal resource access controller (e.g., memory controller) of each Group 1 product comprises a configurable system interface coupled to the requesting system suitably arranged to both receive the resource access request and to generate a corresponding universal command:

15.1 Memory Controller Architecture

The memory controller architecturally consists of the following parts:

• Arbitration Domains (ADs), which can handle a single request or response per clock from a group of clients. Typically, a system has a single Arbitration Domain, but an implementation may divide the client space into multiple Arbitration Domains to increase the effective system bandwidth. Multiple Traffic Classes within a single Arbitration Domain and Protocol Arbiter are allowed.
• Protocol Arbiters (PAs), which manage a related pool of memory devices. A system may have a single Protocol Arbiter or multiple Protocol Arbiters.
• Memory Crossbar, which routes request and responses between Arbitration Domains and Protocol Arbiters. In the simplest version of the system, the Memory Crossbar is just a pass through between a single Arbitration Domain and a single Protocol Arbiter.
• Global Resources, which include entities such as configuration registers which are shared across the Memory Subsystem.
• Write CAMs (WCAMs), which improves performance and throughput for PCIe ordered clients (PCIe, SATA, HDA, and USB3), CPU writes, and CPU copies.
• Memory Controller Client Interface (MCCIF), which provides a standardized interface for access to the Memory Controller.
• Translation Unit, which handles virtual-to-physical address translation, aperture decode, physical address security checks, and protocol arbiter-specific decodes (such as external DRAM address decodes).

Tegra K1 TRM at PND00018589 (emphasis added).
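For illustration only, the simplest topology the quoted passage describes (a Memory Crossbar acting as a pass-through between a single Arbitration Domain and a single Protocol Arbiter) can be sketched schematically; this is not NVIDIA's implementation, and all names below are hypothetical:

```python
class ProtocolArbiter:
    """Stands in for a PA managing a pool of memory devices."""
    def __init__(self) -> None:
        self.received = []

    def handle(self, request: str) -> None:
        self.received.append(request)

class MemoryCrossbar:
    """Pass-through case: one Arbitration Domain routed to one PA."""
    def __init__(self, pa: ProtocolArbiter) -> None:
        self.pa = pa

    def route(self, request: str) -> None:
        # No routing decision needed with a single PA; forward unchanged.
        self.pa.handle(request)

pa = ProtocolArbiter()
xbar = MemoryCrossbar(pa)
xbar.route("read 0x1000")  # forwarded unchanged to the single PA
```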

MCCIF or MC-CIF | Memory Controller Client InterFace, the standard interface block between the memory controller sub-system fabric and the client device. Note that some modules may have multiple client interfaces.

Tegra K1 TRM at PND00017962 (defining “MCCIF” in the Glossary).

20.3 Display Controller Description

20.3.1 MEMFETCH

Figure 56 shows the top-level block diagram of the display memory interface for Tegra K1 devices. The diagram shows the memory fetch engines associated with the three full-featured windows and two simple windows in the display controller. Each window memory fetch engine has a line buffer and MCCIF. In addition to the windows, display also uses a memfetch for cursor.

Requests are issued to the memory and data is returned via MCCIF to the window line buffer. Data from the line buffer is subsequently read out, unpacked and passed to the window scaler and filter units. Windows D/T and the cursor fetch engines do not contain line buffers.

Tegra K1 TRM at PND00019317 (emphasis added).


Tegra K1 TRM at PND00019318 (Figure 56, depicting along the left side: MCCIF-A, MCCIF-B, MCCIF-C, MCCIF-D, MCCIF-T, and MCCIF-HC).


20.3.1.1 Request Engine

The request engine is responsible for fetching the window image from memory. Logic to translate the X, Y coordinates to a linear memory address is used to make requests to memory. A pulse signal is sent to the request engine from the display timing generator at the beginning of the vertical blanking period to alert the request engine to start requesting the image data. Tagged requests are sent to the MCCIF. The thread ID specifying the line location of the return data is passed to the tag FIFO between the request and response engines. The thread ID is read by the data packer block when the return data is received and the data is steered to the appropriate line(s) in the line buffer.

Tegra K1 TRM at PND00019318 (emphasis added).
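For illustration only, the tagged-request flow quoted above (request engine pushes a thread ID into a tag FIFO; the data packer pops it to steer returned data to the right line buffer entry) can be sketched schematically; this is not NVIDIA's implementation, and all names below are hypothetical:

```python
from collections import deque

tag_fifo = deque()   # thread IDs in flight, oldest first
line_buffer = {}     # thread ID -> returned data

def issue_request(thread_id: int, address: int) -> dict:
    """Request engine: tag the memory request and record its line location."""
    tag_fifo.append(thread_id)
    return {"tag": thread_id, "addr": address}

def receive_data(data: bytes) -> None:
    """Data packer: steer return data to the line named by the oldest tag."""
    thread_id = tag_fifo.popleft()
    line_buffer[thread_id] = data

issue_request(0, 0x1000)
issue_request(1, 0x2000)
receive_data(b"lineA")  # steered to line 0
receive_data(b"lineB")  # steered to line 1
```

This mirrors the ordering guarantee the passage implies: return data is matched to requests in FIFO order of the tags.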


Tegra K1 TRM at PND00019810 (Figure 102, depicting an interface between System Memory and Camera Modules A and B).

The still and video capture models for Raw data utilize both ISPA and ISPB simultaneously under most conditions. During the stereo video capture, the image streams being captured by the left and right camera modules can each be assigned to either ISPA or ISPB, allowing a preview to be extracted from either processed stream with the stereo video being available to the compression resources. The ZSL stereo capture model also utilizes both ISPA and ISPB; however, only one ISP is used for managing the preview, and the second is idle until a still capture is initiated. Both high-resolution Raw streams are provided to system memory through the MCCIF to maintain the circular buffer for the left and right channels. Once the ZSL still capture is initiated, the idle ISP can be utilized to process the still images while the other is available to maintain a preview and allow a new ZSL sequence to be initiated with little delay. The basic sequence for the Raw stereo capture is shown in the following figure [Figure 102].

Tegra K1 TRM at PND00019809 (emphasis added).

In conjunction with the proper and intended operation of the MCCIF of each Group 1 product, the memory controller of each Group 1 product must be properly configured:

15.3.1.1 Global Memory Subsystem Configuration

Some parts of the Memory Subsystem must be configured before any transfers are allowed into the system. The address map must be configured to set which portions of the physical address map are allocated to which Protocol Arbiters, and what portions of it are protected by the physical address protection mechanisms.

The Arbitration Domains operate off of a unified arbitration clock. This clock is further divided-down to produce a clock-rate independent clock for counting latency intervals across the Memory System (also known as “ticks”). This divide-down should be programmed to a constant interval at initialization (and any other clock-rate change). A 30 ns interval is suggested since it is an even divide-down of many common DRAM clocks, but any convenient granularity may be chosen.

There are several global memory system tuning options that tend to shape the memory performance. Some of these can be set statically, at initialization time; others depend on clock frequencies. For example, with SMMU configuration, the SMMU requires software to set up and maintain page tables in memory, enable translation for clients, and assign clients to address space identifiers.

15.3.1.2 Arbitration Domain Configuration

Physical implementation decisions (such as the mapping of clients to partition clients) and default client bandwidth allocations require configuration options in the Arbitration Domain.

15.3.1.3 Protocol Arbiter Configuration

Each Protocol Arbiter has its own requirements for initialization. Timing parameters specific to the Arbiter, the DRAM and the current operating clock speed have to be written into the Protocol Arbiter configuration registers. The type and geometry of the attached DRAM also have to be programmed. Afterwards, the DRAM will require a set of initialization cycles to be issued.

Device/Rank Geometry

DRAM devices come in many sizes, widths, etc. The arbiter must be programmed to drive the correct combination of address bits, data bits, and protocol.

Tegra K1 TRM at PND00018593 (emphasis added).
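For illustration only, the divide-down discussed in the quoted Section 15.3.1.1 (programming a constant ~30 ns "tick" interval from the arbitration clock) amounts to a simple computation like the one below; the clock frequencies shown are hypothetical, not from the TRM:

```python
def tick_divider(arb_clk_hz: float, tick_ns: float = 30.0) -> int:
    """Divide-down value so the arbitration clock yields ~tick_ns 'ticks'.

    E.g., a 533 MHz arbitration clock with a 30 ns target gives
    round(533e6 * 30e-9) = 16 clock cycles per tick.
    """
    return max(1, round(arb_clk_hz * tick_ns * 1e-9))

div_533 = tick_divider(533e6)  # 16 cycles per ~30 ns tick
div_200 = tick_divider(200e6)  # 6 cycles per 30 ns tick (even divide-down)
```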

15.1 Memory Controller Architecture

The memory controller architecturally consists of the following parts:

• Arbitration Domains (ADs), which can handle a single request or response per clock from a group of clients. Typically, a system has a single Arbitration Domain, but an implementation may divide the client space into multiple Arbitration Domains to increase the effective system bandwidth. Multiple Traffic Classes within a single Arbitration Domain and Protocol Arbiter are allowed.
• Protocol Arbiters (PAs), which manage a related pool of memory devices. A system may have a single Protocol Arbiter or multiple Protocol Arbiters.
• Memory Crossbar, which routes request and responses between Arbitration Domains and Protocol Arbiters. In the simplest version of the system, the Memory Crossbar is just a pass through between a single Arbitration Domain and a single Protocol Arbiter.
• Global Resources, which include entities such as configuration registers which are shared across the Memory Subsystem.
• Write CAMs (WCAMs), which improves performance and throughput for PCIe ordered clients (PCIe, SATA, HDA, and USB3), CPU writes, and CPU copies.
• Memory Controller Client Interface (MCCIF), which provides a standardized interface for access to the Memory Controller.
• Translation Unit, which handles virtual-to-physical address translation, aperture decode, physical address security checks, and protocol arbiter-specific decodes (such as external DRAM address decodes).

Tegra K1 TRM at PND00018589 (emphasis added).

As shown in the evidence cited above, each Group 1 product comprises the MCCIF and memory controller functionality that affords the configurability necessary for using the memory subsystem. Therefore, each Group 1 product comprises a configurable system interface (e.g., a properly configured memory controller having an MCCIF).

In each Group 1 product, the configurable system interface is suitably arranged to receive a resource access request.

20.3 Display Controller Description

20.3.1 MEMFETCH

Figure 56 shows the top-level block diagram of the display memory interface for Tegra K1 devices. The diagram shows the memory fetch engines associated with the three full-featured windows and two simple windows in the display controller. Each window memory fetch engine has a line buffer and MCCIF. In addition to the windows, display also uses a memfetch for cursor.

Requests are issued to the memory and data is returned via MCCIF to the window line buffer. Data from the line buffer is subsequently read out, unpacked and passed to the window scaler and filter units. Windows D/T and the cursor fetch engines do not contain line buffers.

Tegra K1 TRM at PND00019317 (emphasis added).

20.3.1.1 Request Engine

The request engine is responsible for fetching the window image from memory. Logic to translate the X, Y coordinates to a linear memory address is used to make requests to memory. A pulse signal is sent to the request engine from the display timing generator at the beginning of the vertical blanking period to alert the request engine to start requesting the image data. Tagged requests are sent to the MCCIF. The thread ID specifying the line location of the return data is passed to the tag FIFO between the request and response engines. The thread ID is read by the data packer block when the return data is received and the data is steered to the appropriate line(s) in the line buffer.

Tegra K1 TRM at PND00019318 (emphasis added).

As shown in the evidence cited above, each Group 1 product comprises the MCCIF and configurable functionality arranged to suitably receive resource access requests (e.g., requests for data stored in the memory). Therefore, each Group 1 product comprises a configurable system interface (e.g., a properly configured memory controller having an MCCIF) suitably arranged to receive the resource access request.

In each Group 1 product, the configurable system interface is suitably arranged to generate a corresponding universal command:


Tegra K1 TRM at PND00018589 (explaining that the “memory controller (MC) . . . allocate[s] memory bandwidth for DDR3L and LPDDR3 SDRAMs” and that “[t]he external memory controller (EMC) communicates with external DDR3L and LPDDR3 devices.”).

Each Group 1 product comprises the memory controller arranged suitably to generate corresponding universal commands (e.g., commands to the DDR3L and LPDDR3 memory). Therefore, each Group 1 product comprises a configurable system interface (e.g., a properly configured memory controller having an MCCIF) suitably arranged to generate corresponding universal commands.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard.


As shown above in connection with Group 1, the universal resource access controller of Group 2 products comprises a configurable system interface coupled to the requesting system suitably arranged to both receive the resource access request and to generate a corresponding universal command.

[b] a universal command sequencer coupled to the configurable system interface;

The universal resource access controller of Tegra Group 1 and 2 products comprises a universal command sequencer coupled to the configurable system interface.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and support LPDDR memory):

As shown above, each Group 1 product is configured to control and support LPDDR memory. In compliance with the LPDDR standard, the universal resource access controller of each Group 1 product comprises a universal command sequencer coupled to the configurable system interface:

The Tegra K1 device supports a single physical channel of memory interface, which operates in either single x32 or single x64 configurations. It implements two protocol arbiters and has a single Arbitration Domain. There is an alternate path from the Arbitration Domain that provides a path to IRAM via AHB re-direction.

Figure 28 below is a simplified view of how memory requests are arbitrated in the Tegra K1 MSS. Memory client requests are first arbitrated through a sequence of ring arbiters which perform a type of round-robin arbitration. There are three ring arbiters referred to as ring0, ring1, and ring2. Ring1 arbiter clients are the ISO clients (display and camera) and the winner of the ring2 arbiter. Each ring has a rate control mechanism referred to as Priority Tier Snap Arbiter (PTSA). The client’s bandwidth guarantee is specified by the PTSA rate (also referred to as “DDA”).

The block labelled “Row Sorter” is a pending request buffer (it sorts requests by the DRAM row that it refers to). This row sorter is made up of many “bank queues” which hold the requests made to the same DRAM bank/row. The number of requests pending in the row sorter can affect whether ring1 or ring2 arbiters are throttled (slowed down) based on thresholds.

Tegra K1 TRM at PND00018590 (emphasis added).
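For illustration only, the row-sorter behavior described in the quoted passage (pending requests grouped into per-bank/row queues, with a pending-request threshold that can throttle the upstream ring arbiters) can be sketched schematically; this is not NVIDIA's implementation, and the threshold value is hypothetical:

```python
from collections import defaultdict, deque

THROTTLE_THRESHOLD = 4  # hypothetical value, not from the TRM

# Each (bank, row) pair gets its own queue of pending requests.
bank_queues = defaultdict(deque)

def enqueue(bank: int, row: int, request: str) -> None:
    """Row sorter: file the request under its DRAM bank/row."""
    bank_queues[(bank, row)].append(request)

def should_throttle() -> bool:
    """Throttle ring arbiters when total pending requests hit the threshold."""
    return sum(len(q) for q in bank_queues.values()) >= THROTTLE_THRESHOLD

enqueue(0, 10, "rd A")
enqueue(0, 10, "rd B")  # same bank/row as "rd A": shares its queue
enqueue(1, 3, "wr C")   # different bank: separate queue
```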

The Row Sorter functionality described above, for example, can be a universal command sequencer because it moderates the sequence and timing of memory access requests, as explained above in connection with limitation 1c.

In each Group 1 product, the universal command sequencer (e.g., Row Sorter functionality) is coupled to the configurable system interface (e.g., a properly configured memory controller having an MCCIF):


Tegra K1 TRM at PND00018590 (depicting the Row Sorter as within the Memory Controller).

Because the Row Sorter functionality is a part of the Memory Controller functionality, as shown in the evidence above, the universal command sequencer (e.g., Row Sorter functionality) is coupled to the configurable system interface (e.g., a properly configured memory controller having an MCCIF).

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard. As shown above in connection with Group 1, the universal resource access controller of Group 2 products comprises a universal command sequencer coupled to the configurable system interface.

[c] a resource tag buffer coupled to the command sequencer arranged to store a resource tag arranged to identify the current state of the requested resource and a resource tag arranged to identify the requested state of the requested resource; and

The universal resource access controller of Tegra Group 1 and 2 products comprises a resource tag buffer coupled to the command sequencer arranged to store a resource tag arranged to identify the current state of the requested resource and a resource tag arranged to identify the requested state of the requested resource.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and support LPDDR memory):

As shown above, each Group 1 product is configured to control and support LPDDR memory. In compliance with the LPDDR standard, the universal resource access controller of each Group 1 product comprises a resource tag buffer coupled to the command sequencer arranged to store a resource tag arranged to identify the current state of the requested resource and a resource tag arranged to identify the requested state of the requested resource:

The Tegra K1 device supports a single physical channel of memory interface, which operates in either single x32 or single x64 configurations. It implements two protocol arbiters and has a single Arbitration Domain. There is an alternate path from the Arbitration Domain that provides a path to IRAM via AHB re-direction.

Figure 28 below is a simplified view of how memory requests are arbitrated in the Tegra K1 MSS. Memory client requests are first arbitrated through a sequence of ring arbiters which perform a type of round-robin arbitration. There are three ring arbiters referred to as ring0, ring1, and ring2. Ring1 arbiter clients are the ISO clients (display and camera) and the winner of the ring2 arbiter. Each ring has a rate control mechanism referred to as Priority Tier Snap Arbiter (PTSA). The client’s bandwidth guarantee is specified by the PTSA rate (also referred to as “DDA”).

The block labelled “Row Sorter” is a pending request buffer (it sorts requests by the DRAM row that it refers to). This row sorter is made up of many “bank queues” which hold the requests made to the same DRAM bank/row. The number of requests pending in the row sorter can affect whether ring1 or ring2 arbiters are throttled (slowed down) based on thresholds.

Tegra K1 TRM at PND00018590 (emphasis added).

The “bank queues” described above are a resource tag buffer because, as explained below, the bank queues are arranged to store resource tags.

In each Group 1 product, the resource tag buffer (e.g., “bank queues” of the “Row Sorter”) is arranged to store a resource tag arranged to identify the current state of the requested resource:

The Tegra K1 device supports a single physical channel of memory interface, which operates in either single x32 or single x64 configurations. It implements two protocol arbiters and has a single Arbitration Domain. There is an alternate path from the Arbitration Domain that provides a path to IRAM via AHB re-direction.

Figure 28 below is a simplified view of how memory requests are arbitrated in the Tegra K1 MSS. Memory client requests are first arbitrated through a sequence of ring arbiters which perform a type of round-robin arbitration. There are three ring arbiters referred to as ring0, ring1, and ring2. Ring1 arbiter clients are the ISO clients (display and camera) and the winner of the ring2 arbiter. Each ring has a rate control mechanism referred to as Priority Tier Snap Arbiter (PTSA). The client’s bandwidth guarantee is specified by the PTSA rate (also referred to as “DDA”).

The block labelled “Row Sorter” is a pending request buffer (it sorts requests by the DRAM row that it refers to). This row sorter is made up of many “bank queues” which hold the requests made to the same DRAM bank/row. The number of requests pending in the row sorter can affect whether ring1 or ring2 arbiters are throttled (slowed down) based on thresholds.

Tegra K1 TRM at PND00018590 (emphasis added).

As explained above in connection with limitation 1b, the contents of the bank queues are arranged in a manner such that the current state of the requested resource (e.g., the shared memory) is identifiable. Therefore, the bank queues store a resource tag arranged to identify the current state of the requested resource.

In each Group 1 product, the resource tag buffer (e.g., “bank queues” of the “Row Sorter”) is arranged to store a resource tag arranged to identify the requested state of the requested resource:

The Tegra K1 device supports a single physical channel of memory interface, which operates in either single x32 or single x64 configurations. It implements two protocol arbiters and has a single Arbitration Domain. There is an alternate path from the Arbitration Domain that provides a path to IRAM via AHB re-direction.

Figure 28 below is a simplified view of how memory requests are arbitrated in the Tegra K1 MSS. Memory client requests are first arbitrated through a sequence of ring arbiters which perform a type of round-robin arbitration. There are three ring arbiters referred to as ring0, ring1, and ring2. Ring1 arbiter clients are the ISO clients (display and camera) and the winner of the ring2 arbiter. Each ring has a rate control mechanism referred to as Priority Tier Snap Arbiter (PTSA). The client’s bandwidth guarantee is specified by the PTSA rate (also referred to as “DDA”).

The block labelled “Row Sorter” is a pending request buffer (it sorts requests by the DRAM row that it refers to). This row sorter is made up of many “bank queues” which hold the requests made to the same DRAM bank/row. The number of requests pending in the row sorter can affect whether ring1 or ring2 arbiters are throttled (slowed down) based on thresholds.

Tegra K1 TRM at PND00018590 (emphasis added).

The requested state of the requested resource is identifiable from the requests that are sorted and stored by the bank queues. Examples of such requested states of requested resources are identified above with respect to limitation 1b. Therefore, such requests are resource tags arranged to identify the requested state of the requested resource, and the bank queues store a resource tag arranged to identify the requested state of the requested resource.

In each Group 1 product, the resource tag buffer (e.g., “bank queues” of the “Row Sorter”) is coupled to the command sequencer (e.g., the “Row Sorter”).


Tegra K1 TRM at PND00018590 (depicting the Row Sorter, which has the “bank queues,” as within

the Memory Controller).

Because the resource tag buffer (“bank queues” of the Row Sorter functionality) is a part of the

command sequencer (e.g., the “Row Sorter” functionality), it is also coupled to the command


sequencer.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or

SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products

configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard.

As shown above in connection with Group 1, the universal resource access controller of each Group 2

product comprises a resource tag buffer coupled to the command sequencer arranged to store a

resource tag arranged to identify the current state of the requested resource and a resource tag arranged

to identify the requested state of the requested resource.

[d] a characteristic

operating parameter buffer

coupled to the command

sequencer arranged to

store the characteristic

operating parameter

associated with the

requested resource,

Tegra Group 1 and 2 products comprise a characteristic operating parameter buffer coupled to the command sequencer arranged to store the characteristic operating parameter associated with the requested resource.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and

support LPDDR memory):

As shown above, each Group 1 product is configured to control and support LPDDR memory. In

compliance with the LPDDR standard, the universal resource controller (e.g., memory controller)

comprises a characteristic operating parameter buffer arranged to store the characteristic operating

parameter associated with the requested resource:

Device/Rank Geometry

DRAM devices come in many sizes, widths, etc. The arbiter must be programmed to drive the

correct combination of address bits, data bits, and protocol.


Tegra K1 TRM at PND00018593 (emphasis added).
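The geometry programming quoted above can be made concrete with a short sketch: given a programmed device geometry, the arbiter must know how many bank, row, and column address bits to drive. The function name, field names, and example geometry below are hypothetical, chosen only to show the arithmetic; they are not NVIDIA's register layout.

```python
# Illustrative sketch: derive address-bit widths and capacity from a
# programmed DRAM geometry. Field names and the example geometry are
# hypothetical assumptions, not NVIDIA's register layout.
def geometry_bits(num_banks, num_rows, num_cols, data_width_bits):
    """Address bits the arbiter must drive, plus resulting capacity."""
    return {
        "bank": (num_banks - 1).bit_length(),
        "row": (num_rows - 1).bit_length(),
        "col": (num_cols - 1).bit_length(),
        # each column location stores data_width_bits bits
        "capacity_bytes": num_banks * num_rows * num_cols * data_width_bits // 8,
    }
```

For example, a hypothetical 8-bank, 16384-row, 1024-column x32 device would need 3 + 14 + 10 address bits and hold 512 MB.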

Device Timing and Arbiter Timing

A DRAM arbiter has configuration related to timing parameters specific to the attached

device. The JEDEC timing specifications for the device have to be converted from

nanoseconds to cycle counts. The controller must also be programmed with the cycles per tick

number for their particular clock domain (see also the subsection on “Global Memory

Subsystem Configuration” below).

Tegra K1 TRM at PND00018594 (emphasis added).
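The nanosecond-to-cycle conversion described above can be shown concretely. The timing and clock values below are generic LPDDR-style examples, not NVIDIA's programmed values; the key point is that the conversion must round up, since truncating would violate the device's minimum timing.

```python
# Illustrative sketch of converting a JEDEC timing specification from
# nanoseconds to controller clock cycles. The example values are generic
# assumptions, not taken from any NVIDIA programming guide.
import math

def ns_to_cycles(t_ns, clock_mhz):
    """Round up to whole cycles: rounding down would violate the spec."""
    return math.ceil(t_ns * clock_mhz / 1000.0)

# e.g., an 18 ns tRCD-style parameter at a 533 MHz controller clock:
# 18 ns * 533 MHz = 9.594 cycles, so the controller is programmed with 10.
```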

As shown above, device geometry and timing parameters are “programmed” and “configured” in the

memory controller. As explained above with respect to limitation 1b, device geometry and timing

parameters are examples of specific characteristic operating parameters associated with the requested

resource. Therefore, the locations where these characteristic operating parameters are stored are

characteristic operating parameter buffers arranged to store the characteristic operating parameter

associated with the requested resource.

In each Group 1 product, the characteristic operating parameter buffer (e.g., a location where device

geometry or a timing parameter is stored) is coupled to the command sequencer (e.g., the “Row Sorter”

functionality):


Tegra K1 TRM at PND00018590 (depicting the Row Sorter as within the Memory Controller).


Because both the Row Sorter functionality and the arbiter functionality are a part of the Memory

Controller functionality, as shown in the evidence above, the locations of the device geometry and the

timing parameters that are utilized in connection with the arbiter functionality are coupled to the Row

Sorter functionality. Therefore, a characteristic operating parameter buffer (e.g., locations where

device geometry and timing parameters are stored) is coupled to the command sequencer (e.g., Row

Sorter functionality).

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or

SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products

configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard.

As shown above in connection with Group 1, the universal resource controller (e.g., memory

controller) of each Group 2 product comprises a characteristic operating parameter buffer arranged to

store the characteristic operating parameter associated with the requested resource.

[e] wherein the universal

command sequencer uses

the respective resource

tags that identify the

current state and the

requested state of the

requested resource and the

characteristic operating

parameter associated with

the requested resource to generate the sequenced universal command.

The universal command sequencer of Tegra Group 1 and 2 products uses the respective resource tags

that identify the current state and the requested state of the requested resource and the characteristic

operating parameter associated with the requested resource to generate the sequenced universal

command.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and

support LPDDR memory):

As shown above, each Group 1 product is configured to control and support LPDDR memory. In

compliance with the LPDDR standard, the universal command sequencer of each Group 1 product uses

the respective resource tags that identify the current state and the requested state of the requested

resource and the characteristic operating parameter associated with the requested resource to generate

the sequenced universal command.

For substantially the same reasons explained based on the evidence cited above with respect to the “a

current state of the requested resource, and a requested state of the requested resource” recited in

limitation 1b, the current state and the requested state of the requested resource are used by the

universal command sequencer (e.g., “Row Sorter” functionality).

For substantially the same reasons explained based on the evidence cited above with respect to the

“specific characteristic operating parameter of the requested resource” recited in limitation 1b, a

characteristic operating parameter associated with the requested resource is used by the universal

command sequencer (e.g., “Row Sorter” functionality).

For substantially the same reasons explained based on the evidence cited above with respect to the

“uses . . . to generate” recited in limitation 1b and with respect to the “corresponding sequenced

command suitable for accessing the resource as required by the requesting system” recited in limitation

1c, the universal command sequencer uses the respective resource tags that identify the current state

and the requested state of the requested resource and the characteristic operating parameter associated

with the requested resource to generate a sequenced universal command.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or

SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products

configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard.

As shown above in connection with Group 1, the universal command sequencer of each Group 2


product uses the respective resource tags that identify the current state and the requested state of the

requested resource and the characteristic operating parameter associated with the requested resource to

generate the sequenced universal command.

3. A universal resource

access controller as recited

in claim 2,

Tegra Group 1 and 2 products comprise a universal resource access controller. See preamble, Claim 1;

Claim 2.

[a] wherein the requesting

system is one of a plurality

of processors included in a

multiprocessor computing

system,

The requesting system of Tegra Group 1 and 2 products is one of a plurality of processors included in a

multiprocessor computing system.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and

support LPDDR memory):

As shown above, each Group 1 product is configured to control and support LPDDR memory. In

compliance with the LPDDR standard, the requesting system of each Group 1 product is one of a

plurality of processors included in a multiprocessor computing system:

1.3 Memory Controller and Internal Bus Architecture

The Tegra K1 mobile processor has a highly optimized 64-bit memory controller, supporting

low latency access for the CPU, optimized high bandwidth access for the graphics and video

devices, and controlled latency for real time devices such as display.

There is a three-level hierarchy of memory clients:


1. Memory controller clients: The memory controller directly arbitrates between these

using a complex algorithm optimizing DRAM efficiency. The highest bandwidth

clients fall into this class, and they communicate directly with the memory controller

using a proprietary high-speed bus.

2. AHB devices: These generally have a built-in DMA engine, and share a single memory

client using the AHB bus protocol.

3. APB devices: All APB devices are slaves, and are serviced by a shared multi-channel

APB DMA controller which is also an APB device.

Special provisions are made for the CPU to bypass parts of the memory controller arbitration

to help achieve a lower latency.

Tegra K1 TRM at PND00017959 (emphasis added).

As shown above, among these “memory controller clients” is the Central Processing Unit, which is a

multiprocessor computing system:


Tegra K1 TRM at PND00017958 (depicting the “CPU Complex” as a “4-PLUS-1™ Quad-Core

Processor”).


1.0 INTRODUCTION

The NVIDIA® Tegra® K1 mobile processor is a complete applications and digital media system

built around several powerful hardware elements:

• Graphics: NVIDIA® GeForce® Kepler Graphics Processing Unit (GPU). The GPU fully

supports DX11, Shader Model 4, and OpenGL 4.3 as well as OpenGL ES 3.0. It

supports Unified shaders and is GPU compute capable with 192 CUDA Cores. The

GPU supports all the same features as discrete NVIDIA GPUs, including PhysX,

CUDA, OpenCL, and DX compute. It is highly power optimized for best performance

in mobile use cases.

• CPU Complex: Quad Cortex®-A15 Symmetric Multi-Processing ARM® Cores in a 4-PLUS-1™

configuration with a quad-core fast CPU complex and a fifth Battery Saver

Core. The Cortex-A15 core features triple instruction issue and both out-of-order and

speculative execution. It has full cache coherency support for the quad symmetric

processors. All processors have 32 KB Instruction and 32 KB Data Level 1 caches; and

there is a 2 MB shared Level 2 cache for the quad-core complex and a 512 KB Level 2

cache for the fifth core. The NVIDIA 4-PLUS-1 architecture uses the fifth Battery

Saver Core, which operates exclusively with the main CPU complex, for very low-

power, low-leakage operation at the light CPU loads common to multimedia and lightly

loaded use situations.

Tegra K1 TRM at PND00017957 (emphasis added).

Because each of the cores of the CPU Complex is one of a plurality of processors included in a

multiprocessor computing system, and because the CPU cores are requesting systems, the requesting


system (e.g., one of the CPU cores) is one of a plurality of processors (e.g., the five cores) included in a

multiprocessor computing system (e.g., the CPU Complex).

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or

SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products

configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard.

As shown above in connection with Group 1, the requesting system of each Group 2 product is one of a

plurality of processors included in a multiprocessor computing system.

[b] wherein the

configurable system

interface is configurable to

be able to accept and

process resource requests

from any of the plurality of

processors.

The configurable system interface of Tegra Group 1 and 2 products is configurable to be able to accept

and process resource requests from any of the plurality of processors.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and

support LPDDR memory):

As shown above, each Group 1 product is configured to control and support LPDDR memory. In

compliance with the LPDDR standard, the configurable system interface (e.g., a properly configured

memory controller having an MCCIF) of each Group 1 product is configurable to be able to accept and

process resource requests from any of the plurality of processors (e.g., the five cores of the CPU

Complex):


Tegra K1 TRM at PND00018589 (explaining that the “Tegra K1 memory controller (MC) handles

memory requests from internal clients”).

Key features in the Tegra K1 memory controller include:

• Enhanced arbiter design for higher memory efficiency

• System Memory Management Unit (SMMU)/Translation Unit (TU) for virtual to

physical address mapping for any device

• Support for low-voltage DDR3 and LPDDR3 SDRAMs

• 8 burst transfers per transaction (BL8)

• Support for two DRAM ranks of unequal device densities

• Operates in either single x32 or single x64 configuration

• Variable transaction sizes based on the requests from the clients (one 64-byte

transaction with variable dimensions, two 32-byte transactions with variable

dimensions, etc.)

• QUSE functionality for qualification of tristatable DQS clock in SDRAMs

The memory interface speed varies with memory type, and the specific Tegra K1 SKU, so is

not stated in this document.

Tegra K1 TRM at PND00018589 (emphasis added)

15.4.10 Microcode Carveout Protection Region (Tegra K1 64-Only)

Microcode carveout is a protected region of physical memory required by the CPU architecture.

Only native CPU accesses from the CCPlex are allowed to read or write microcode carveout.

The protected region is defined to start at MTS_CARVEOUT_BOM and ends at

MTS_CARVEOUT_SIZE. It can start and be extended in multiples of 1 MB. The only clients

allowed to access the protected region are the CPU read and write clients:

• MPCORER

• MPCOREW

• MPCORELPR

• MPCORELPW

All other clients attempting to access this region will cause the MC to issue an interrupt and

log the details of the request (address, client ID). After the details are logged, write requests

are dropped and read requests are forced to return all 1s, thus protecting the region from

corruption by the insecure source.

Tegra K1 TRM at PND00018604-05 (emphasis added).
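For illustration only, the carveout access check described above can be modeled in a few lines of Python. The base address, region size, and return values below are assumptions for exposition; only the four-client allow-list and the log/drop-write/return-all-1s behavior come from the TRM excerpt.

```python
# Illustrative model of the microcode carveout protection check quoted
# above. Constants are assumed values; the allow-list and the behavior
# on a violation (log, drop writes, reads return all 1s) follow the TRM.
ALLOWED_CLIENTS = {"MPCORER", "MPCOREW", "MPCORELPR", "MPCORELPW"}
CARVEOUT_BOM = 0x80000000        # assumed MTS_CARVEOUT_BOM value
CARVEOUT_SIZE = 2 * 1024 * 1024  # assumed 2 MB region (multiple of 1 MB)

def mc_access(client_id, addr, is_write, log):
    """Model one memory-controller access; returns read data or None."""
    in_region = CARVEOUT_BOM <= addr < CARVEOUT_BOM + CARVEOUT_SIZE
    if in_region and client_id not in ALLOWED_CLIENTS:
        log.append((addr, client_id))  # MC interrupt: log address + client ID
        if is_write:
            return None                # write request dropped
        return 0xFFFFFFFF              # read forced to return all 1s
    # normal memory path (modeled trivially for this sketch)
    return None if is_write else 0x0
```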


As shown above, the configurable system interface (the properly configured memory controller having

a MCCIF) is configurable to be able to accept and process resource requests from any of the plurality

of processors.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or

SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products

configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard.

As shown above in connection with Group 1, the configurable system interface (e.g., a properly

configured memory controller having an MCCIF) of each Group 2 product is configurable to be able to

accept and process resource requests from any of the plurality of processors (e.g., the five cores of the

CPU Complex).

11. A multi-processor

computer system,

comprising:

When a Tegra Group 1 product is used as configured to control memory in a system, the system is a

multi-processor computer system. Tegra Group 2 products comprise a multi-processor computer

system.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and

support LPDDR memory):

As explained below with respect to Tegra Group 2 products, when a Tegra Group 1 product is used as

configured to control memory in a system, the system is a multi-processor computer system.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or


SOC devices configured to control and support LPDDR memory):

Group 2 comprises system products that include NVIDIA Tegra processors or SOC devices configured

to control and support LPDDR memory. The NVIDIA Shield Tablet is representative of Group 2

products because it comprises and uses LPDDR memory, and complies with the JEDEC LPDDR

standard (JESD209B, JESD209-3C, and/or JESD209-4), as do all Group 1 products. An image of the

representative product of Group 2, the NVIDIA Shield Tablet, is reproduced from NVIDIA's website

below:


See https://www.nvidia.com/en-us/shield/tablet/.

[a] a system bus; When a Tegra Group 1 product is used as configured to control memory in a system, the system


comprises a system bus. Tegra Group 2 products comprise a system bus.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and

support LPDDR memory):

As explained below with respect to Tegra Group 2 products, when a Tegra Group 1 product is used as

configured to control memory in a system, the system comprises a system bus.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or

SOC devices configured to control and support LPDDR memory):

As shown above, Group 2 comprises NVIDIA system products that include Group 1 products

configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard.

In compliance with the LPDDR standard, Group 2 comprises a system bus:

1.3 Memory Controller and Internal Bus Architecture

The Tegra K1 mobile processor has a highly optimized 64-bit memory controller, supporting

low latency access for the CPU, optimized high bandwidth access for the graphics and video

devices, and controlled latency for real time devices such as display.

There is a three-level hierarchy of memory clients:

1. Memory controller clients: The memory controller directly arbitrates between these

using a complex algorithm optimizing DRAM efficiency. The highest bandwidth

clients fall into this class, and they communicate directly with the memory controller

using a proprietary high-speed bus.

2. AHB devices: These generally have a built-in DMA engine, and share a single memory

client using the AHB bus protocol.

3. APB devices: All APB devices are slaves, and are serviced by a shared multi-channel

APB DMA controller which is also an APB device.

Special provisions are made for the CPU to bypass parts of the memory controller arbitration to

help achieve a lower latency.

Tegra K1 TRM at PND00017959 (emphasis added).

AHB AMBA High-Speed Bus, a multi-master high-speed (relative to APB) bus

supporting arbitration and split transactions, defined as part of AMBA 2.

Tegra K1 TRM at PND00017961 (emphasis added).

As shown above, each Group 2 product has a system bus (e.g., the Internal bus, or alternatively, the

AHB, or alternatively, the proprietary high-speed bus).

[b] a plurality of

processors each capable of

issuing a processor

command and an

associated data burst to the

system bus;

When a Tegra Group 1 product is used as configured to control memory in a system, the system

comprises a plurality of processors each capable of issuing a processor command and an associated

data burst to the system bus. Tegra Group 2 products comprise a plurality of processors each capable

of issuing a processor command and an associated data burst to the system bus.

See contentions on limitations 3[a] and 3[b].

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and

support LPDDR memory):


As explained below with respect to Tegra Group 2 products, when a Tegra Group 1 product is used as

configured to control memory in a system, the system comprises a plurality of processors each capable

of issuing a processor command and an associated data burst to the system bus.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or

SOC devices configured to control and support LPDDR memory):

As shown above, Group 2 comprises NVIDIA system products that include Group 1 products

configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard.

In compliance with the LPDDR standard, Group 2 comprises a plurality of processors each capable of

issuing a data burst, associated with the processor command, to the system bus:

Key features in the Tegra K1 memory controller include:

• Enhanced arbiter design for higher memory efficiency

• System Memory Management Unit (SMMU)/Translation Unit (TU) for virtual to

physical address mapping for any device

• Support for low-voltage DDR3 and LPDDR3 SDRAMs

• 8 burst transfers per transaction (BL8)

• Support for two DRAM ranks of unequal device densities

• Operates in either single x32 or single x64 configuration

• Variable transaction sizes based on the requests from the clients (one 64-byte

transaction with variable dimensions, two 32-byte transactions with variable

dimensions, etc.)

• QUSE functionality for qualification of tristatable DQS clock in SDRAMs

The memory interface speed varies with memory type, and the specific Tegra K1 SKU, so is

not stated in this document.

Tegra K1 TRM at PND00018589 (emphasis added)
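The "8 burst transfers per transaction (BL8)" feature quoted above implies simple burst arithmetic, sketched below for illustration. The helper function and its parameters are illustrative assumptions, not NVIDIA's implementation.

```python
# Illustrative arithmetic for the BL8 feature quoted above: with 8 beats
# per burst, a 64-byte transaction fills exactly one burst on a x64
# (8-byte-wide) interface and two bursts on a x32 interface.
def bursts_for_transaction(txn_bytes, bus_width_bits, burst_length=8):
    beat_bytes = bus_width_bits // 8           # bytes moved per beat
    burst_bytes = beat_bytes * burst_length    # bytes per BL8 burst
    # round up: a partial burst still occupies a whole burst slot
    return (txn_bytes + burst_bytes - 1) // burst_bytes
```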

As shown above, the processor commands that result in data reads or data writes to the memory are

transactional (“burst transfers”) in nature, and are “bursts.” These “bursts” travel over the system bus:


Tegra K1 TRM at PND00018590 (depicting two buses—one for each DRAM in the block diagram).


[c] a memory controller as

recited in claim

1 connected to the plurality

of processors by way of

the system bus, the

memory controller being

arranged to receive the

processor commands and

associated data bursts

issued by the processors

and to issue corresponding

shared memory

commands;

When a Tegra Group 1 product is used as configured to control memory in a system, the system

comprises a memory controller as recited in claim 1 connected to the plurality of processors by way of

the system bus, the memory controller being arranged to receive the processor commands and

associated data bursts issued by the processors and to issue corresponding shared memory commands.

The memory controller of Tegra Group 2 products is connected to the plurality of processors by way of

the system bus, the memory controller being arranged to receive the processor commands and

associated data bursts issued by the processors and to issue corresponding shared memory commands.

See contentions on limitations 1[b], 1[c], and 3[b].

[d] a shared memory

device;

When a Tegra Group 1 product is used as configured to control memory in a system, the system

comprises a shared memory device. Tegra Group 2 products comprise a shared memory device.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and

support LPDDR memory):

As explained below with respect to Tegra Group 2 products, when a Tegra Group 1 product is used as

configured to control memory in a system, the system comprises a shared memory device.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or

SOC devices configured to control and support LPDDR memory):

As shown above, Group 2 comprises NVIDIA system products that include Group 1 products


configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard.

In compliance with the LPDDR standard, Group 2 comprises a shared memory device (e.g., when

Group 1 products are coupled to DRAM, e.g., the 2 GB of RAM in Group 2 products). As another

example, NVIDIA customers (e.g., Nintendo) couple the Group 1 products to SDRAM (e.g., the 16 GB

of Samsung K4F6E304HB-MGCH LPDDR4 SDRAM) in third-party products such as the Nintendo

Switch.

See PND00024819-PND00024838.

[e] a command bus

connecting the shared

memory device to the

memory controller

arranged to carry the

issued shared memory

commands according to a

minimum issue time

generated by the memory

controller; and

When a Tegra Group 1 product is used as configured to control memory in a system, the system comprises a command bus connecting the shared memory device to the memory controller arranged to carry the issued shared memory commands according to a minimum issue time generated by the memory controller. Tegra Group 2 products comprise a command bus connecting the shared memory device to the memory controller arranged to carry the issued shared memory commands according to a minimum issue time generated by the memory controller.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and support LPDDR memory):

As explained below with respect to Tegra Group 2 products, when a Tegra Group 1 product is used as configured to control memory in a system, the system comprises a command bus connecting the shared memory device to the memory controller arranged to carry the issued shared memory commands according to a minimum issue time generated by the memory controller.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory):


As shown above, Group 2 comprises NVIDIA system products that include Group 1 products configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard. In compliance with the LPDDR standard, Group 2 comprises a command bus connecting the shared memory device (e.g., DDR3L or LPDDR3) to the memory controller arranged to carry the issued shared memory commands according to a minimum issue time generated by the memory controller:


JEDEC Standard Low Power Double Data Rate 3 (LPDDR3) JESD209-3 (“JEDEC No. JESD209-3”), PND00013574-3713 at 13574.

JESD209-3 at PND00013579 (depicting the 216-ball pinout); see also JESD209-3 at 7-10 (depicting different pinouts).


JESD209-3 at PND00013584 (depicting separate pad sequences for Command/Address pads and for DQ pads).

JESD209-3 at PND00013585 (defining ten pads—CA0 thru CA9—as DDR Command/Address Inputs).

Each Group 2 product has a command bus (e.g., the “bus inputs” corresponding to CA0-CA9 necessary to comply with the LPDDR3 standard) and the command bus connects the memory device to the memory controller. For substantially similar reasons, a command bus is also necessary when the shared memory device complies with the pertinent standard for LPDDR2. JEDEC Standard Low Power Double Data Rate 2 (LPDDR2) JESD209-2B (“JEDEC No. JESD209-2B”), PND00013336-3573 at 13352, 13353. For substantially similar reasons, a command bus is also necessary when the shared memory device complies with the pertinent standard for DDR3L. JEDEC Standard Addendum No. 1 to JESD79-3 (“JEDEC No. JESD79-3-1A”), PND00013086-3109; JEDEC Standard JESD79-3F (“JEDEC No. JESD79-3F”), PND00013110-3335 at 13126-134 (depicting pinouts for DDR3 devices, including RAS, CAS, WE, and CS) and 27 (defining RAS, CAS, WE, and CS as “Command Inputs” or “Chip Select” that “define the command being entered”).

The command bus of each Group 2 product is arranged to carry the issued shared memory commands according to a minimum issue time generated by the memory controller:

• Deadline-based arbitration with latency allowance that can be specified per-client, and under some circumstances dynamically adjusted for a given client.

Tegra K1 TRM at PND00018591 (emphasis added).

The issued memory commands, carried by the command bus, are generated (e.g., by the memory controller) according to a minimum issue time (e.g., deadlines with latency allowance).
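The deadline-based arbitration with per-client latency allowance described in the TRM excerpt above can be sketched in a few lines. This is a hypothetical illustration only: the `Request` class, the client names, the allowance values, and the earliest-deadline-first policy are assumptions for exposition, not NVIDIA's implementation.

```python
from dataclasses import dataclass, field
import heapq

# Per-client latency allowance in cycles (illustrative values only).
ALLOWANCE = {"display": 4, "cpu": 6, "gpu": 12}

@dataclass(order=True)
class Request:
    deadline: int                      # arrival cycle + the client's allowance
    client: str = field(compare=False)

def make_request(client, arrival):
    """A request must issue by arrival + allowance: its minimum issue time."""
    return Request(deadline=arrival + ALLOWANCE[client], client=client)

def arbitrate(pending):
    """Issue pending commands earliest-deadline-first, one per command slot."""
    heapq.heapify(pending)             # heap ordered by deadline only
    return [heapq.heappop(pending).client for _ in range(len(pending))]

# The display client has the tightest allowance, so it issues first.
order = arbitrate([make_request("gpu", 0),
                   make_request("display", 0),
                   make_request("cpu", 0)])
print(order)
```

Under these assumptions, a real-time client such as display wins arbitration because its deadline (minimum issue time) arrives soonest, which is the behavior the quoted TRM passage attributes to the arbiter.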

[f] a data bus connecting the shared memory to the memory controller arranged to carry read data from the shared memory to the memory controller based upon the data offset generated by the memory controller.

When a Tegra Group 1 product is used as configured to control memory in a system, the system comprises a data bus connecting the shared memory to the memory controller arranged to carry read data from the shared memory to the memory controller based upon the data offset generated by the memory controller. Tegra Group 2 products comprise a data bus connecting the shared memory to the memory controller arranged to carry read data from the shared memory to the memory controller based upon the data offset generated by the memory controller.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and support LPDDR memory):

As explained below with respect to Tegra Group 2 products, when a Tegra Group 1 product is used as configured to control memory in a system, the system comprises a data bus connecting the shared memory to the memory controller arranged to carry read data from the shared memory to the memory controller based upon the data offset generated by the memory controller.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory):

As shown above, Group 2 comprises NVIDIA system products that include Group 1 products configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard. In compliance with the LPDDR standard, Group 2 comprises a data bus connecting the shared memory (e.g., the set of addressable DDR3L, LPDDR3, and similar such memory) to the memory controller arranged to carry read data from the shared memory to the memory controller based upon the data offset generated by the memory controller:


JESD209-3 at PND00013579 (depicting the 216-ball pinout); see also JESD209-3 at PND00013580-83 (depicting different pinouts).


JESD209-3 at PND00013584 (depicting separate pad sequences for Command/Address pads and for DQ pads).


JESD209-3 at PND00013585 (defining sixteen or thirty-two pads—DQ0 thru DQ15 or DQ0 thru DQ31—as a bi-directional bus defined as Data Inputs/Output).

Each Group 2 product has a data bus (e.g., the “data inputs/output” corresponding to DQ0-DQ15 or DQ0-DQ31 necessary to comply with the LPDDR3 standard) and the data bus connects the shared memory to the memory controller. For substantially similar reasons, a data bus is also necessary when the shared memory device complies with the pertinent standard for LPDDR2. JEDEC No. JESD209-2B at PND00013352, PND00013353. For substantially similar reasons, a data bus is also necessary when the shared memory device complies with the pertinent standard for DDR3L. JEDEC No. JESD79-3-1A; JEDEC No. JESD79-3F at PND00013126-134 (depicting pinouts for DDR3 devices, including DQ, DQU, DQL, DQS, DQS#, DQSU, DQSU#, DQSL, and DQSL#) and PND00013136 (defining DQ as a “Bi-directional data bus”, and DQU, DQL, DQS, DQS#, DQSU, DQSU#, DQSL, and DQSL# as “data strobe” lines).

The data bus of each Group 2 product is arranged to carry read data from the shared memory to the memory controller based upon the data offset generated by the memory controller:

Device/Rank Geometry

DRAM devices come in many sizes, widths, etc. The arbiter must be programmed to drive the correct combination of address bits, data bits, and protocol.

Tegra K1 TRM at PND00018593 (emphasis added).

The data bus of each Group 2 Product carries read data (e.g., data read from a correct combination of address bits) from the shared memory to the memory controller, and it is based upon the data offset generated by the memory controller (e.g., if the combination of bits, which include offset bits, is not correct then the correct data will not be read from memory).
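The “correct combination of address bits” language from the TRM can be illustrated with a toy address decomposition. The field widths and the `decompose`/`recompose` helpers below are hypothetical, chosen only to show how a controller's programmed device/rank geometry turns a linear address into the row, bank, and column (offset) bits it drives; they are not taken from the Tegra K1 TRM or any JEDEC standard.

```python
# Assumed geometry: 15 row bits, 3 bank bits, 10 column bits (illustrative).
ROW_BITS, BANK_BITS, COL_BITS = 15, 3, 10

def decompose(addr):
    """Split a linear address into the (row, bank, column) values a
    controller would drive; the column is the offset selecting the data."""
    col = addr & ((1 << COL_BITS) - 1)
    bank = (addr >> COL_BITS) & ((1 << BANK_BITS) - 1)
    row = addr >> (COL_BITS + BANK_BITS)
    return row, bank, col

def recompose(row, bank, col):
    """Inverse mapping: a wrong bit in any field addresses different data."""
    return (row << (COL_BITS + BANK_BITS)) | (bank << COL_BITS) | col

addr = 0x00ABCDEF
row, bank, col = decompose(addr)
# Only the exact (row, bank, column) combination maps back to this address.
assert recompose(row, bank, col) == addr
print(row, bank, col)
```

The round-trip assertion mirrors the point made above: if any offset bit driven on the bus is wrong, a different location is selected and the correct data is not read.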

12. The multi-processor computer system as recited in claim 11, wherein the shared memory is an SDRAM.

The shared memory, when a Tegra Group 1 product is used as configured to control memory in a system, is an SDRAM. The shared memory of Tegra Group 2 products is an SDRAM.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and support LPDDR memory):

As explained below with respect to Tegra Group 2 products, when a Tegra Group 1 product is used as configured to control memory in a system, the shared memory is an SDRAM.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory):

As shown above, Group 2 comprises NVIDIA system products that include Group 1 products configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard. In compliance with the LPDDR standard, the shared memory of Tegra Group 2 products is an SDRAM (e.g., 2 GB of SDRAM):

Above is the side cover of the box containing the NVIDIA Shield Tablet, confirming that 2 GB of RAM is included. PND00013069.

As another example, NVIDIA customers (e.g., Nintendo) couple the Group 1 products to SDRAM (e.g., the 16 GB of Samsung K4F6E304HB-MGCH LPDDR4 SDRAM) in third-party products such as the Nintendo Switch.


PND00024819-PND00024838 at PND00024820.


Id.

14. In a universal resource access controller, a method of controlling access to a resource by a requesting system, comprising:

To the extent the preamble is found to be a limitation, as discussed below, the universal resource access controller of Tegra Group 1 and 2 products controls access to a resource by a requesting system.

Tegra Group 1 Products (NVIDIA Tegra Processors configured to control and support LPDDR memory):

Group 1 comprises Tegra processors or SOC devices configured to control and support LPDDR memory. The NVIDIA Tegra K1 processor is representative of Group 1 products because it comprises and uses LPDDR memory, and complies with the JEDEC LPDDR standard (JESD209B, JESD209-3C, and/or JESD209-4), as do all Group 1 products. An image of the representative product, the NVIDIA Tegra K1 processor, is reproduced from NVIDIA’s website below:

See Mile High Milestone: Tegra K1 “Denver” Will Be First 64-bit ARM Processor for Android, PND00015126-29, http://blogs.nvidia.com/blog/2014/08/11/tegra-k1-denver-64-bit-for-android/.


See also:

NVIDIA Tegra K1 Preview & Architecture Analysis, PND00015130-141 (http://www.anandtech.com/show/7622/nvidia-tegrak1).

See also Tegra-K1-Mobile-Processor-Technical-Reference-Manual-DP06905001, v02p (“Tegra K1 TRM”), PND00017947-020289 at 17958:


Each Group 1 product has a memory controller that “handles memory requests from internal clients and arbitrates among them to allocate memory bandwidth” for DRAMs (e.g., “DDR3L and LPDDR3 SDRAMs”).

Tegra K1 TRM at PND00018589.

Each Group 1 product also has memory controller clients:

1.3 Memory Controller and Internal Bus Architecture

The Tegra K1 mobile processor has a highly optimized 64-bit memory controller, supporting low latency access for the CPU, optimized high bandwidth access for the graphics and video devices, and controlled latency for real time devices such as display.

There is a three-level hierarchy of memory clients:

1. Memory controller clients: The memory controller directly arbitrates between these using a complex algorithm optimizing DRAM efficiency. The highest bandwidth clients fall into this class, and they communicate directly with the memory controller using a proprietary high-speed bus.

2. AHB devices: These generally have a built-in DMA engine, and share a single memory client using the AHB bus protocol.

3. APB devices: All APB devices are slaves, and are serviced by a shared multi-channel APB DMA controller which is also an APB device.

Special provisions are made for the CPU to bypass parts of the memory controller arbitration to help achieve a lower latency.

Tegra K1 TRM at PND00017959 (emphasis added).
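The three-level client hierarchy quoted from the TRM can be modeled as a simple lookup. Everything in this sketch is an illustrative assumption: the device names, the grouping, and the routing (in particular, treating the APB DMA controller's traffic as flowing through the AHB level) are hypothetical and are not taken from NVIDIA documentation.

```python
# Hypothetical model of the three-level memory-client hierarchy described
# in the Tegra K1 TRM excerpt above. All names and routes are illustrative.
clients = {
    "memory_controller_clients": ["cpu", "gpu", "display"],  # direct, high bandwidth
    "ahb_devices": ["usb", "sdmmc"],   # share one memory client over the AHB bus
    "apb_devices": ["uart", "i2c"],    # slaves behind the shared APB DMA controller
}

def path_to_dram(device):
    """Trace which level of the hierarchy carries a device's memory request."""
    if device in clients["memory_controller_clients"]:
        return [device, "memory_controller", "dram"]
    if device in clients["ahb_devices"]:
        return [device, "ahb_bus", "memory_controller", "dram"]
    if device in clients["apb_devices"]:
        # Assumed route: APB slave -> APB DMA -> AHB -> memory controller.
        return [device, "apb_dma", "ahb_bus", "memory_controller", "dram"]
    raise KeyError(device)

print(path_to_dram("uart"))
```

The point of the sketch is structural: only the first class of clients is arbitrated directly by the memory controller, while AHB and APB devices reach DRAM through shared intermediaries, which is why the memory controller can treat each shared bus as a single requesting client.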

Each Group 1 product also has at least one resource (e.g., the “pending request buffer” and the “DRAM”):


Tegra K1 TRM at PND00018590.


Because DRAM and request buffers are each a resource and because the memory controller of each Group 1 product controls access to the resources, each Group 1 product has a universal resource access controller that is coupled to at least one resource.

Because the memory controller of each Group 1 product “handles” and “arbitrates among” two or more “internal clients” that make “memory requests,” each Group 1 product has at least one requesting system that is coupled to the universal resource access controller.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory):

As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard. As shown above in connection with Group 1, each Group 2 product comprises a universal resource access controller coupled to a requesting system and a resource.

[14a] generating a resource access request by the requesting system;

See contentions on limitation 1[a].

[14b] passing the resource access request to the universal resource access controller; and

See contentions on limitation 1[a].


[14c] generating a sequenced resource access command suitable for accessing the resource as required by the requesting system and based upon a specific operating parameter of the requested resource, a current state of the requested resource, and a requested state of the requested resource.

See contentions on limitations 1[b] and 1[c].

15. A method as recited in claim 14, wherein the universal resource access controller includes,

Tegra Group 1 and 2 products comprise a universal resource access controller. See the preamble of Claim 1.

[a] a configurable system interface coupled to the requesting system suitably arranged to both receive the resource access request and to generate a corresponding universal command,

See contentions on limitation 2[a].


[b] a universal command sequencer coupled to the configurable system interface,

See contentions on limitation 2[b].

[c] a resource tag buffer coupled to the command sequencer arranged to store a resource tag arranged to identify the current state of the requested resource and a resource tag arranged to identify the requested state of the requested resource, and

See contentions on limitation 2[c].

[d] a characteristic operating parameter buffer coupled to the command sequencer arranged to store the characteristic operating parameter associated with the requested resource.

See contentions on limitation 2[d].

16. A method as recited in claim 14, wherein the requesting system is one of a plurality of processors included in a multiprocessor computing system, wherein the configurable system interface is configurable to be able to accept and process resource requests from any of the plurality of processors.

See contentions on limitations 3[a] and 3[b].

17. A method as recited in claim 14, wherein the requested resource is a memory device.

See contentions on limitation 11[d].

18. A method as recited in claim 17, wherein the memory device is shared amongst a number of memory devices each of which are coupled to the universal resource access controller.

The memory device of Tegra Group 1 and 2 products is shared amongst a number of memory devices each of which are coupled to the universal resource access controller.

Tegra Group 1 Products (NVIDIA Tegra Processors or SOC devices configured to control and support LPDDR memory):

As shown above, each Group 1 product is configured to control and support LPDDR memory. In compliance with the LPDDR standard, the memory device (e.g., the set of addressable DDR3L, LPDDR3, and similar such memory) coupled to each Group 1 product is shared amongst a number of memory devices (e.g., four SDRAM chips comprising the addressable memory) each of which are coupled to the universal resource access controller:


See, e.g., four SKHynix components bearing the Product Number H5TC4G63CFR, which is a memory device. PND00013070, PND00013071. Each of the four H5TC4G63CFR memory devices is coupled to the universal resource access controller.

Tegra Group 2 Products (NVIDIA system products that include NVIDIA Tegra processors or SOC devices configured to control and support LPDDR memory):


As discussed above, Group 2 comprises NVIDIA system products that include Group 1 products configured to control and support LPDDR memory, which complies with the JEDEC LPDDR standard. As shown above in connection with Group 1, the memory device of each Group 2 product is shared amongst a number of memory devices each of which are coupled to the universal resource access controller.