1:25-cv-01623
ThroughPuter Inc v. Amazon Web Services Inc
Complaint Intelligence
I. Executive Summary and Procedural Information
- Parties & Counsel:
- Plaintiff: ThroughPuter, Inc. (Delaware)
- Defendant: Amazon Web Services, Inc. (Delaware)
- Plaintiff's Counsel: Gardella Alciati PA.
- Case Identification: 1:25-cv-01623, W.D. Tex., 10/07/2025
- Venue Allegations: Plaintiff alleges venue is proper in the Western District of Texas because Defendant has a regular and established place of business in the District and has committed acts of infringement there. The complaint specifically identifies an Austin-based facility, formerly Annapurna Labs, as a site where Defendant designs, tests, and builds the accused AWS Nitro Cards.
- Core Dispute: Plaintiff alleges that Defendant's AWS Nitro System, and specifically its Nitro Cards used for accelerating cloud computing functions, infringes five patents related to hardware-based resource management, task switching, and load-adaptive processing in multi-core computing environments.
- Technical Context: The technology concerns specialized hardware accelerators designed to offload networking, storage, and other virtualization tasks from main server processors in large-scale data centers, aiming to improve both performance and resource utilization.
- Key Procedural History: The complaint does not reference prior litigation, licensing history, or administrative patent challenges. It notes that the inventor, Mark Sandstrom, presented the underlying technology at various high-performance and cloud computing conferences between 2012 and 2015, prior to and during the issuance of the asserted patents.
Case Timeline
| Date | Event |
|---|---|
| 2010-01-01 | Approximate start of development of Plaintiff's patented technology |
| 2011-04-16 | Earliest Priority Date ('078 Patent) |
| 2011-11-04 | Earliest Priority Date ('065, '599, '902 Patents) |
| 2012-01-01 | Plaintiff begins presenting technology at industry conferences |
| 2013-08-23 | Earliest Priority Date ('353 Patent) |
| 2013-10-15 | '078 Patent Issued |
| 2014-07-22 | '065 Patent Issued |
| 2015-01-01 | AWS acquires Annapurna Labs |
| 2018-11-20 | '599 Patent Issued |
| 2019-06-04 | '902 Patent Issued |
| 2019-06-11 | '353 Patent Issued |
| 2025-10-07 | Complaint Filing Date |
II. Technology and Patent(s)-in-Suit Analysis
U.S. Patent No. 8,561,078 - "Task Switching and Inter-Task Communications for Multi-Core Processors"
The Invention Explained
- Problem Addressed: The complaint describes a fundamental tension in computing architecture between high-performance computing, which maximizes processing speed for a single application on dedicated hardware, and cloud computing, which seeks to share a pool of hardware resources efficiently among many applications (Compl. ¶¶20-22). This tension was exacerbated as single-core CPU performance plateaued, creating a need to parallelize even mainstream applications on shared cloud infrastructure (Compl. ¶24).
- The Patented Solution: The invention proposes a data processing system in which hardware logic modules, rather than software, manage the allocation and connection of resources (Compl. ¶27). A hardware "controller" repeatedly assigns processing cores to individual software tasks, and a hardware "cross-connect" physically connects each assigned core to a memory segment specific to that task ('078 Patent, abstract). This hardware-centric approach offloads resource management from the main processors, aiming to accelerate processing while optimizing utilization ('078 Patent, col. 1:16-2:6; Compl. ¶27).
- Technical Importance: The technology claims to enable both accelerated processing speeds for multiple applications and optimized utilization of the underlying processor array (Compl. ¶27).
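The controller/cross-connect arrangement described above can be modeled in a short, purely illustrative Python sketch. All class and variable names here are hypothetical, and the patent claims hardware logic modules, not software; this is only a software analogy of the described behavior:

```python
# Illustrative software model of the '078 Patent's described architecture:
# a controller repeatedly assigns cores to tasks, and a cross-connect
# (modeled as a core -> memory-segment mapping) connects each assigned
# core to that task's private memory segment. All names are hypothetical.

class CrossConnect:
    """Models the multiplexer fabric: maps each core to one task-specific segment."""
    def __init__(self):
        self.core_to_segment = {}

    def connect(self, core_id, segment_id):
        self.core_to_segment[core_id] = segment_id


class Controller:
    """Models the hardware controller: repeatedly (re)assigns cores to tasks."""
    def __init__(self, num_cores, cross_connect):
        self.num_cores = num_cores
        self.xc = cross_connect

    def assign(self, ready_tasks):
        """One allocation cycle: give each ready task a core, in order."""
        assignments = {}
        for core_id, task in zip(range(self.num_cores), ready_tasks):
            assignments[core_id] = task
            # Each task owns a dedicated memory segment; connect the core to it.
            self.xc.connect(core_id, f"segment-{task}")
        return assignments


xc = CrossConnect()
ctrl = Controller(num_cores=4, cross_connect=xc)
cycle1 = ctrl.assign(["taskA", "taskB"])           # 2 tasks -> cores 0, 1
cycle2 = ctrl.assign(["taskC", "taskD", "taskE"])  # reassignment next cycle
```

The point the sketch captures is the repeated, cycle-by-cycle nature of the assignment: the same cores are remapped to new tasks, and the cross-connect state is rewritten each cycle.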
Key Claims at a Glance
- The complaint asserts independent Claim 1 (Compl. ¶49).
- Essential elements of Claim 1 include (Compl. ¶50):
- An array of processing cores for processing a set of software programs.
- A hardware logic module, referred to as a controller, for repeatedly assigning individual processing cores to process individual tasks.
- A memory providing task-specific memory segments.
- A hardware logic module, referred to as a cross-connect, for connecting the cores and memory segments, wherein the controller configures either task-specific or core-specific multiplexers within the cross-connect.
- The complaint does not explicitly reserve the right to assert dependent claims of the '078 Patent.
U.S. Patent No. 8,789,065 - "System and Method for Input Data Load Adaptive Parallel Processing"
The Invention Explained
- Problem Addressed: The patent addresses the need to efficiently manage and distribute workloads across a shared pool of processing cores, particularly in environments where the volume of input data for different applications is dynamic and unpredictable (Compl. ¶26).
- The Patented Solution: The invention describes a system that adapts to data load in hardware. A logic subsystem demultiplexes incoming data packets into program-specific buffers based on header information ('065 Patent, abstract). A second logic subsystem monitors the volume of data in these buffers and periodically assigns processing cores to program instances based on those volumes. A third logic subsystem then multiplexes the data from the buffers to the specifically assigned cores ('065 Patent, col. 4:45-54; abstract). This creates a closed-loop, hardware-based system for allocating processing resources where they are needed most in real time.
- Technical Importance: This approach allows a multi-core system to allocate processing power dynamically based on real-time data flow, rather than static pre-allocation, improving efficiency in shared cloud environments (Compl. ¶27).
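The three-subsystem closed loop described above can be sketched as follows. This is an illustrative software model only, with hypothetical names; the patent claims hardware logic subsystems:

```python
# Illustrative model of the '065 Patent's described closed loop:
# 1) demultiplex packets into per-program buffers by header field,
# 2) monitor buffer volumes and assign cores in proportion to them,
# 3) multiplex buffered packets to the cores assigned to each program.
from collections import defaultdict

def demultiplex(packets, buffers):
    """Subsystem 1: route each packet to its destination program's buffer."""
    for pkt in packets:
        buffers[pkt["program"]].append(pkt)

def assign_cores(buffers, num_cores):
    """Subsystem 2: give busier programs more cores (largest volume first)."""
    by_volume = sorted(buffers, key=lambda p: len(buffers[p]), reverse=True)
    assignments = defaultdict(list)
    for core_id in range(num_cores):
        # Round-robin over programs ordered by monitored volume.
        assignments[by_volume[core_id % len(by_volume)]].append(core_id)
    return assignments

def multiplex(buffers, assignments):
    """Subsystem 3: deliver each buffer's packets to that program's cores."""
    return {program: {"cores": cores, "packets": buffers[program]}
            for program, cores in assignments.items()}

buffers = defaultdict(list)
demultiplex([{"program": "A"}] * 3 + [{"program": "B"}], buffers)
assignments = assign_cores(buffers, num_cores=4)
delivered = multiplex(buffers, assignments)
```

In each period the monitoring step would be re-run, so a surge of packets for one program pulls more cores toward it on the next allocation cycle, which is the load-adaptive behavior the patent describes.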
Key Claims at a Glance
- The complaint asserts independent Claim 1 (Compl. ¶90).
- Essential elements of Claim 1 include (Compl. ¶91):
- A collection of hardware input data ports shared dynamically among data packets.
- An array of hardware buffers, each specific to an individual destination program instance.
- A logic subsystem for demultiplexing input data packets to the destination buffers based on overhead information in the packet.
- A logic subsystem for monitoring data volumes at the buffers and periodically assigning processing cores based on the monitored volumes.
- A logic subsystem for multiplexing data packets from the buffers to the assigned processor cores.
- The complaint does not explicitly reserve the right to assert dependent claims of the '065 Patent.
U.S. Patent No. 10,318,353 - "Concurrent Program Execution Optimization"
- Technology Synopsis: The patent describes a system for processing computer programs using a pipelined architecture composed of a plurality of "processing stages." It focuses on managing inter-task communications (ITC) between these stages using dedicated hardware multiplexers, allowing data to be passed between tasks hosted at different stages (Compl. ¶¶136, 140, 147).
- Asserted Claims: Claim 3, which depends on Claim 1 (Compl. ¶135).
- Accused Features: The complaint alleges that the sequential, pipelined data packet processing operations on the Nitro Cards constitute the claimed "plurality of processing stages" (Compl. ¶140). The on-chip SoC network is alleged to be the "group of multiplexers" connecting these stages (Compl. ¶147).
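The staged pipeline the complaint maps onto the Nitro Cards can be illustrated with a minimal sketch, assuming hypothetical stage functions (the patent's multiplexer group is modeled here simply as the routing that hands each stage's output to the next stage):

```python
# Illustrative sketch of a pipelined architecture like the one the '353
# Patent describes: tasks hosted at successive processing stages, with
# inter-task communication modeled as stage-to-stage routing. The stage
# functions below are hypothetical, loosely echoing the packet-processing
# steps the complaint attributes to the Nitro Cards.

def route_through(stages, packet):
    """Model of the multiplexer group: pass the packet stage to stage."""
    for stage in stages:
        packet = stage(packet)
    return packet

pipeline = [
    lambda p: {**p, "firewall": "pass"},   # stage 1: evaluate firewall rules
    lambda p: {**p, "conn": "tracked"},    # stage 2: connection tracking
    lambda p: {**p, "rate": "within"},     # stage 3: rate limiting
]
result = route_through(pipeline, {"payload": b"data"})
```

The construction dispute previewed later in this report is effectively about whether this kind of sequential hand-off over an on-chip network is the claimed multiplexer-connected stage architecture.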
U.S. Patent No. 10,133,599 - "Application Load Adaptive Multi-stage Parallel Data Processing Architecture"
- Technology Synopsis: The patent discloses a system for dynamic resource management of a pool of processing units that may be of different types (e.g., general-purpose cores, specialized hardware such as encryption engines) ('599 Patent, col. 27:40-45; Compl. ¶189). The system comprises three subsystems: a first to allocate processing units based on demand and quotas, a second to select the highest-priority program instances, and a third to assign those instances to the allocated units ('599 Patent, claim 1).
- Asserted Claims: Claim 1 (Compl. ¶184).
- Accused Features: The complaint alleges the Nitro Card, with its mix of "Nitro cores" and "encryption engines," constitutes a pool of at least two types of processing units (Compl. ¶189). The Nitro controller firmware is alleged to be the subsystem that allocates these resources based on packet-flow demands and opportunistic allowances (Compl. ¶¶190, 192).
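The three recited subsystems can be sketched in software for orientation. This is an assumption-laden illustration, not the claimed hardware: the function names, quota values, and instance records below are all hypothetical.

```python
# Illustrative model of the three subsystems recited in '599 Patent claim 1:
# (1) allocate processing units per demand, capped by per-program quota;
# (2) select the highest-priority program instances;
# (3) assign the selected instances to concrete allocated units.

def allocate_units(demands, quotas, total_units):
    """Subsystem 1: grant min(demand, quota) units per program, up to the pool."""
    grants, remaining = {}, total_units
    for program, demand in demands.items():
        grant = min(demand, quotas[program], remaining)
        grants[program] = grant
        remaining -= grant
    return grants

def select_instances(instances, count):
    """Subsystem 2: pick the `count` highest-priority instances (lower = higher)."""
    return sorted(instances, key=lambda inst: inst["priority"])[:count]

def assign(grants, instances_by_program, unit_ids):
    """Subsystem 3: map each selected instance onto a processing unit."""
    assignments, pool = {}, list(unit_ids)
    for program, grant in grants.items():
        for inst in select_instances(instances_by_program[program], grant):
            assignments[inst["id"]] = pool.pop(0)
    return assignments

grants = allocate_units({"net": 2, "crypto": 2}, {"net": 2, "crypto": 1}, total_units=3)
assignments = assign(
    grants,
    {"net": [{"id": "n1", "priority": 0}, {"id": "n2", "priority": 1}],
     "crypto": [{"id": "c1", "priority": 0}, {"id": "c2", "priority": 1}]},
    unit_ids=["u0", "u1", "u2"],
)
```

Note how the quota caps the "crypto" program at one unit despite a demand of two; the complaint's "opportunistic allowances" theory concerns how unused capacity beyond such quotas is redistributed.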
U.S. Patent No. 10,310,902 - "System and Method for Input Data Load Adaptive Parallel Processing"
- Technology Synopsis: This patent, similar to the '065 Patent, describes a system for hosting application programs that allocates an array of cores based on data volume in input buffers and processing quotas ('902 Patent, claim 1). It details a three-subsystem approach in which a first subsystem allocates cores, a second assigns cores to specific program instances, and a third establishes direct data access from the corresponding input buffer to the assigned core ('902 Patent, claim 1).
- Asserted Claims: Claim 1 (Compl. ¶225).
- Accused Features: The complaint alleges the Nitro system's live-updating firmware acts as the first subsystem that reallocates cores based on changing network traffic and processing demands (Compl. ¶232). The hardware that routes packets to queues is alleged to be the second subsystem that assigns cores to instances, and the SoC network hardware is alleged to be the third subsystem that establishes data access (Compl. ¶¶234, 239).
III. The Accused Instrumentality
Product Identification
- The accused products are the AWS Nitro System and its components, with a specific focus on the "accused Nitro Cards" (Compl. ¶37).
Functionality and Market Context
- The complaint describes Nitro Cards as "dedicated hardware components" and "specialized Application Specific Integrated Circuits ('ASICs')" that operate independently from a server's main CPU to offload and accelerate tasks related to networking, storage, and virtualization (Compl. ¶¶37, 40).
- Technically, the cards are alleged to be systems-on-a-chip (SoCs) containing an array of "Nitro cores," identified as Arm-based processors, alongside other specialized hardware (Compl. ¶¶39, 42). The system processes inbound and outbound network packets in a pipeline, performing functions such as evaluating firewall rules, tracking connections, applying rate limits, and handling virtual private cloud (VPC) functions (Compl. ¶41).
- The complaint alleges these cards use multiple receive (Rx) and transmit (Tx) queues to manage network traffic and perform load balancing across available data paths (Compl. ¶40). A diagram described in an exhibit is cited as showing a "Hash on 5-tuple" for "Queue Assignment," which results in a "Processor Assignment" (Compl. ¶60).
- The complaint asserts that the Nitro technology is "essential to every AWS server" and is the "foundation of EC2 instances," enabling AWS to innovate faster and deliver increased security and performance (Compl. ¶17).
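The "Hash on 5-tuple" queue-assignment step the complaint cites is a standard flow-steering technique, sketched below. The hash function and field names are hypothetical illustrations, not AWS's actual implementation:

```python
# Illustrative sketch of hash-based queue assignment ("Hash on 5-tuple" ->
# "Queue Assignment" -> "Processor Assignment"). Deterministically hashing a
# flow's 5-tuple ensures every packet of that flow lands in the same queue,
# and hence is handled by the same processor.
import hashlib

def five_tuple_hash(src_ip, dst_ip, src_port, dst_port, proto):
    """Hash the 5-tuple that identifies a network flow."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")

def queue_for(flow, num_queues):
    """Map a flow onto one of num_queues receive queues."""
    return five_tuple_hash(*flow) % num_queues

NUM_QUEUES = 8
flow = ("10.0.0.1", "10.0.0.2", 443, 51000, "tcp")
q1 = queue_for(flow, NUM_QUEUES)
q2 = queue_for(flow, NUM_QUEUES)  # same flow -> same queue, every time
```

This determinism is at the heart of the infringement dispute: the plaintiff reads hash-to-queue steering as "assigning" a processing core to a flow, while the defendant may characterize it as assigning data to a queue that a core later polls.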
IV. Analysis of Infringement Allegations
No probative visual evidence provided in complaint.
'078 Patent Infringement Allegations
| Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation |
|---|---|---|---|
| an array of processing cores for processing a set of software programs... | The accused Nitro Cards each contain an array of "Nitro cores" that handle data packet processing according to specific instructions, which constitute a set of software programs. | ¶53; ¶54 | col. 2:27-38 |
| a hardware logic module, referred to as a controller, for repeatedly assigning individual processing cores... to process individual tasks... | Specialized hardware on the Nitro Card processes data packet headers and directs packets to flow-specific buffers, which are then processed by Nitro cores. This assignment of data is alleged to constitute an assignment of the processing core itself. | ¶55; ¶56 | col. 4:11-19 |
| a memory providing task-specific memory segments | The System-on-a-Chip (SoC) system-level cache on the Nitro Card is segmented into input buffers (queues), with each buffer being specific to a particular processing task or packet flow. | ¶57; ¶58 | col. 4:43-52 |
| a hardware logic module, referred to as a cross-connect, for connecting the array of processing cores and the task-specific memory segments... | The SoC network on the Nitro Card acts as a hardware cross-connect, containing multiplexer hardware that connects the input buffers (task-specific memory segments) to the Nitro cores (processing cores). | ¶59; ¶60 | col. 4:53-60 |
'065 Patent Infringement Allegations
| Claim Element (from Independent Claim 1) | Alleged Infringing Functionality | Complaint Citation | Patent Citation |
|---|---|---|---|
| a collection of hardware input data ports of the processor, where each port of the collection is shared dynamically among data packets... | The Nitro Card has multiple inputs through its PCIe and Ethernet interfaces, which receive continuous streams of individual data packets for various program instances. | ¶97; ¶98 | col. 4:45-49 |
| an array of hardware buffers, where each buffer of the array is specific to an individual destination program instance... | The SoC system-level cache is segmented into an array of input buffers (queues), and each queue is alleged to be specific to a distinct packet flow and its corresponding processing program instance. | ¶99; ¶100 | col. 4:49-54 |
| a logic subsystem for dynamically... demultiplexing input data packets from said input ports to said destination program instance specific buffers... | Specialized packet processing hardware executes a "five tuple hash" on packet header information to distribute and "spread those... loads across multiple queues," which are the program instance specific buffers. | ¶101; ¶102 | col. 4:45-54 |
| a logic subsystem for monitoring volumes of data packets... and for periodically assigning processing cores... based on the respective monitored volumes... | Controller hardware associated with each Nitro core repeatedly polls the input buffers. The controller applies congestion control and fair queuing algorithms based on the queue fullness, which the complaint maps to assigning cores based on monitored volumes. | ¶103; ¶104 | col. 5:55-6:3 |
| a logic subsystem for multiplexing data packets dynamically from the destination program instance specific buffers to the processor cores... | The SoC network includes multiplexers and routes data packets from the various input buffers to the Nitro processor cores that have been assigned to handle the corresponding packet flow. | ¶105; ¶106 | col. 4:50-60 |
Identified Points of Contention
- Definitional Scope of "Assigning": For the '078 Patent, a primary point of contention may be the meaning of the controller "assigning individual processing cores." The complaint alleges that directing data packets to a core-specific queue constitutes assignment of the core. A defendant could argue that this is an assignment of data to a queue, and that the core subsequently "assigns itself" by polling that queue, a distinct technical operation from what the patent may describe.
- Functional Equivalence of "Logic Subsystems": For the '065 Patent, the infringement theory maps the broad term "logic subsystem" to specific functions of the Nitro Card like hashing hardware and congestion control algorithms. A key technical question will be whether these functions perform the specific actions required by the claims. For instance, does applying a fair queuing algorithm based on queue depth meet the limitation of "periodically assigning processing cores... based on the respective monitored volumes of packets," or is this a fundamentally different technical process for managing data flow?
V. Key Claim Terms for Construction
"assigning individual processing cores" ('078 Patent, Claim 1)
- Context and Importance: This term is central to the infringement allegation for the '078 Patent. The complaint's theory depends on construing this term to cover the indirect action of a hardware controller directing data to a buffer that is subsequently polled by a core. The defendant's position may be that this is not "assigning" the core itself. Practitioners may focus on this term because its construction will determine whether the complaint's infringement theory is viable.
- Intrinsic Evidence for Interpretation:
- Evidence for a Broader Interpretation: The '078 Patent specification may describe the controller's role in orchestrating the overall processing flow in a way that suggests its actions are causally responsible for a core processing a task, even if the final connection is made by the core's polling action. Language describing the controller as "directing" or "managing" tasks could be cited.
- Evidence for a Narrower Interpretation: The patent's detailed description or figures may depict an embodiment where the controller directly manipulates a switching fabric to create a hard-wired connection between a specific core and a specific memory segment for a period of time. Such a direct, explicit assignment mechanism could be used to argue for a narrower construction that excludes the indirect "data-steering" model alleged by the plaintiff.
"logic subsystem for monitoring volumes of data packets... and for periodically assigning processing cores" ('065 Patent, Claim 1)
- Context and Importance: The viability of the infringement allegation for the '065 Patent hinges on whether the Nitro Card's congestion control and fair queuing algorithms can be characterized as this claimed "logic subsystem." The definition will determine if a system that manages data flow into queues is equivalent to a system that assigns processors based on the contents of those queues.
- Intrinsic Evidence for Interpretation:
- Evidence for a Broader Interpretation: The patent specification may use broad, functional language to describe this subsystem, stating its purpose is to "adaptively" or "dynamically" allocate resources based on "load." This could support an argument that any hardware mechanism achieving this outcome, including a congestion control algorithm, falls within the claim's scope ('065 Patent, abstract).
- Evidence for a Narrower Interpretation: The detailed description in the '065 Patent might illustrate this subsystem as a distinct hardware block that literally counts packets or bytes in each buffer, stores these counts in registers, and periodically runs a discrete allocation algorithm to remap a set of cores. This more specific embodiment could support a narrower construction that excludes the continuous, real-time adjustments of a traffic management algorithm.
VI. Other Allegations
- Indirect Infringement: The complaint alleges induced infringement for all five patents. The basis for inducement is the allegation that Defendant encourages and instructs its customers, developers, and users to use the AWS Nitro System in an infringing manner through materials such as "manuals, white papers, and trainings" (Compl. ¶¶78, 122, 172, 213, 255).
- Willful Infringement: The complaint alleges willful infringement for all patents, primarily on the basis of post-suit knowledge. It asserts that Defendant has had "actual notice and knowledge" of the patents "by no later than the filing of this Complaint" (Compl. ¶¶64, 109, 158, 199, 241). It also includes a general allegation that Defendant was "willfully blind" to the existence of the patents pre-suit (Compl. ¶¶66, 111, 160, 201, 243).
VII. Analyst's Conclusion: Key Questions for the Case
- Architectural Mapping: A core issue will be one of technical and functional mapping: can the complex, real-world operations of the AWS Nitro System (a sophisticated data-plane offload architecture using hashing, queuing, and traffic shaping) be accurately mapped onto the specific, and arguably more schematic, hardware architectures described in the patents? The dispute may center on whether the accused product's method of steering data flows is functionally equivalent to the patents' methods of explicitly assigning processing cores.
- Definitional Scope: The case will likely turn on a question of claim construction and scope: will functional terms like "assigning processing cores" and "monitoring volumes" be interpreted broadly to cover the indirect effects of the Nitro system's data management algorithms, or will they be narrowed to the more direct, discrete control mechanisms illustrated in the patent specifications?
- Hardware vs. Firmware Implementation: The patents claim hardware-implemented logic for managing resources. A key evidentiary question will be one of implementation detail: to what extent are the accused functions in the Nitro System performed by fixed-function "specialized hardware," as the patents require, versus by instructions (firmware) executing on the general-purpose "Nitro cores"? Distinguishing between what the hardware is versus what the software does will be central to the infringement analysis.