Arbor Systems LLC v. Adlink Technology Inc, No. 2:26-cv-00064 (E.D. Tex.)

I. Executive Summary and Procedural Information

  • Parties & Counsel: Arbor Systems LLC (Plaintiff) v. Adlink Technology Inc (Defendant); counsel are not identified in the available filing.
  • Case Identification: 2:26-cv-00064, E.D. Tex., 01/23/2026
  • Venue Allegations: Plaintiff alleges venue is proper in the Eastern District of Texas because the Defendant is a foreign corporation.
  • Core Dispute: Plaintiff alleges that Defendant’s unspecified computing products infringe a patent related to 5G network architecture that integrates local "edge" computing modules with steerable antennas.
  • Technical Context: The technology relates to 5G network infrastructure, specifically the decentralization of computation to the network edge to manage high-speed, low-latency communications required for applications like autonomous vehicle control.
  • Key Procedural History: The complaint does not mention any prior litigation, inter partes review proceedings, or licensing history related to the patent-in-suit.

Case Timeline

Date         Event
2019-05-07   U.S. Patent No. 11,128,045 Priority Date
2020-03-26   U.S. Patent No. 11,128,045 Application Date
2021-09-21   U.S. Patent No. 11,128,045 Issue Date
2026-01-23   Complaint Filing Date

II. Technology and Patent(s)-in-Suit Analysis

U.S. Patent No. 11,128,045 - "Computing system"

The patent-in-suit is U.S. Patent No. 11,128,045, issued on September 21, 2021 (the “’045 Patent”).

The Invention Explained

  • Problem Addressed: The patent’s background section notes that the widespread deployment of 5G technology requires a "huge number of 5G towers" to be installed frequently, particularly in dense urban areas, because the technology’s high-frequency signals often require a direct line of sight. This can result in logistical challenges and visually unappealing infrastructure, or "eyesores nearly everywhere" (’045 Patent, col. 1:34-47).
  • The Patented Solution: The invention proposes a system architecture that integrates an "edge processing module" directly with one or more electrically or mechanically steerable antennas (’045 Patent, Abstract). This module, which can contain components like a GPU or a neural network, is designed to provide "low-latency computation" locally at the antenna site (e.g., embedded in a light pole or building) (’045 Patent, col. 1:57-60; col. 2:5-10). By performing computation at the network's edge, the system aims to more efficiently manage tasks like beam sweeping for specific targets, such as autonomous vehicles or virtual reality devices, reducing the processing load on remote data centers (’045 Patent, col. 2:11-18).
  • Technical Importance: This architectural approach seeks to optimize 5G network performance by decentralizing computation, which may reduce latency for time-sensitive applications and more efficiently allocate network resources (’045 Patent, col. 1:57-60).

Key Claims at a Glance

  • The complaint asserts infringement of one or more claims of the ’045 Patent but does not specify which claims are asserted, instead referring to "Exemplary ’045 Patent Claims" identified in an unprovided exhibit (Compl. ¶11, ¶16). Independent claim 1 is representative of the core technology.
  • The essential elements of independent claim 1 include:
    • A transceiver to communicate with a predetermined target;
    • One or more antennas coupled to the transceiver, each electrically or mechanically steerable to the predetermined target;
    • An edge processing module coupled to the transceiver and antennas to provide computation for the predetermined target; and
    • A cloud processing module located at a cloud data center coupled to the edge processing module to share workload with it.
  • The complaint states that Plaintiff may assert infringement of other claims, including dependent claims, in the future (Compl. ¶11, ¶16).
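As a purely illustrative aid (and not a representation of any accused product), the structural combination recited in claim 1 can be sketched in code. All class and method names below are hypothetical; the sketch only mirrors the claim's elements: a steerable antenna, an edge processing module providing local low-latency computation, and a cloud processing module with which the edge module shares workload.

```python
from dataclasses import dataclass


@dataclass
class Antenna:
    """Steerable antenna; claim 1 permits electrical or mechanical steering."""
    bearing_deg: float = 0.0

    def steer_to(self, target_bearing_deg: float) -> None:
        self.bearing_deg = target_bearing_deg


@dataclass
class CloudProcessingModule:
    """Remote module at a cloud data center; takes latency-tolerant work."""
    def process(self, task: str) -> str:
        return f"cloud handled {task}"


@dataclass
class EdgeProcessingModule:
    """Local module co-located with the antenna; performs low-latency
    computation and shares workload with the cloud module."""
    cloud: CloudProcessingModule
    latency_budget_ms: float = 10.0

    def process(self, task: str, deadline_ms: float) -> str:
        if deadline_ms <= self.latency_budget_ms:
            return f"edge handled {task}"   # time-sensitive: compute locally
        return self.cloud.process(task)     # latency-tolerant: offload


@dataclass
class System:
    """Claim 1's combination: steerable antennas plus an edge module
    coupled to a cloud module."""
    antennas: list
    edge: EdgeProcessingModule

    def serve_target(self, bearing_deg: float, task: str,
                     deadline_ms: float) -> str:
        for antenna in self.antennas:
            antenna.steer_to(bearing_deg)   # steer toward the target
        return self.edge.process(task, deadline_ms)
```

In this toy model, a time-critical task such as a beam sweep for an autonomous vehicle stays at the edge, while a task with a relaxed deadline is offloaded to the cloud module, which is one plausible reading of "share workload."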

III. The Accused Instrumentality

Product Identification

The complaint does not name any specific accused products or services (Compl. ¶11). It refers generally to "Exemplary Defendant Products" that are purportedly identified in claim charts attached as Exhibit 2, but this exhibit was not included with the public filing of the complaint (Compl. ¶16).

Functionality and Market Context

The complaint does not provide sufficient detail for analysis of the functionality or market context of the accused instrumentalities.

IV. Analysis of Infringement Allegations

The complaint alleges infringement based on claim charts provided in Exhibit 2, which was not attached to the publicly filed complaint (Compl. ¶17). Therefore, a detailed claim chart analysis based on the pleading is not possible. The complaint’s narrative theory is that the "Exemplary Defendant Products practice the technology claimed by the ’045 Patent" and "satisfy all elements of the Exemplary ’045 Patent Claims" (Compl. ¶16).

No probative visual evidence provided in complaint.

Identified Points of Contention

Because the complaint lacks specific infringement allegations, any analysis of potential disputes is necessarily grounded in the claim language itself. The central questions in the case may revolve around the architecture of the accused products.

  • Scope Questions: A central dispute may concern whether the accused systems meet the structural requirements of the claims. For example, regarding claim 1, does an accused system possess a distinct "edge processing module" that is "coupled" to a "cloud processing module" in a manner that constitutes "shar[ing] workload"? How the parties define the nature of this coupled relationship will likely be a key point of contention.
  • Technical Questions: A key factual question may be what kind of "computation" the accused "edge processing module" performs. The infringement analysis may turn on whether the functionality of the accused device aligns with the patent's focus on "low-latency computation for the predetermined target," as distinguished from more general-purpose processing that might occur locally in any modern network device.

V. Key Claim Terms for Construction

Term: "edge processing module"

  • Context and Importance: This term is central to the claimed invention, distinguishing it from conventional network architectures where processing occurs primarily in a centralized cloud. The construction of this term will likely determine whether Defendant's products fall within the scope of the claims. Practitioners may focus on this term because its definition will dictate how physically co-located and functionally specialized the local processing hardware must be relative to the antenna.
  • Intrinsic Evidence for Interpretation:
    • Evidence for a Broader Interpretation: The specification provides a non-exhaustive list of components for the module, including "at least a processor, a graphical processing unit (GPU), a neural network, a statistical engine, or a programmable logic device (PLD)," which could support a broad definition covering a range of local computing hardware (’045 Patent, col. 2:5-8).
    • Evidence for a Narrower Interpretation: The specification repeatedly links the module’s purpose to providing "low-latency computation" and describes it as potentially "embedded in the antenna housing," a "pole, a building, or a light" (’045 Patent, col. 1:57-60; col. 2:8-10). This context may support a narrower construction requiring a module specifically dedicated to latency-sensitive tasks and physically integrated with the antenna infrastructure.

Term: "to share workload"

  • Context and Importance: This phrase from claim 1 defines the required functional relationship between the local "edge" module and the remote "cloud" module. The case may depend on whether the interaction between components in the accused systems meets this limitation.
  • Intrinsic Evidence for Interpretation:
    • Evidence for a Broader Interpretation: The plain meaning of the term could be argued to cover any architecture where some processing tasks are performed at the edge and others are performed in the cloud.
    • Evidence for a Narrower Interpretation: The specification describes a hierarchy where the "edge processing module shares workload with a core processing module located at a head-end and a cloud module located at a cloud data center, each processing module having increased latency" as one moves away from the edge (’045 Patent, col. 2:25-31). This may support a narrower construction requiring a coordinated, hierarchical system where tasks are divided based on latency requirements, not merely any system with distributed processing.
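The hierarchical, latency-graded reading of "to share workload" described above (edge, then core at a head-end, then cloud, each with increased latency) can be illustrated with a minimal routing sketch. The tier names and latency figures are hypothetical assumptions for illustration only, not values from the patent:

```python
# Hypothetical latency hierarchy: edge -> core (head-end) -> cloud,
# each tier farther from the antenna and slower to respond.
TIERS = [
    ("edge", 5.0),     # ms round trip, co-located with the antenna
    ("core", 20.0),    # head-end
    ("cloud", 100.0),  # cloud data center
]


def route_task(deadline_ms: float) -> str:
    """Assign a task to the farthest tier whose latency still meets the
    deadline, reserving scarce edge capacity for time-sensitive work."""
    chosen = None
    for name, latency_ms in TIERS:
        if latency_ms <= deadline_ms:
            chosen = name  # a farther tier still fast enough wins
    if chosen is None:
        chosen = "edge"    # nothing meets the deadline; best effort locally
    return chosen
```

Under the narrower construction, workload sharing would require this kind of coordinated, latency-aware division of tasks; under the broader one, any split of processing between local and remote components could suffice.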

VI. Other Allegations

Indirect Infringement

The complaint alleges that Defendant has induced infringement "[a]t least since being served by this Complaint" by selling the accused products and distributing "product literature and website materials" that instruct users on how to use them in an infringing manner (Compl. ¶14, ¶15). No specific examples of such materials are provided in the complaint.

Willful Infringement

The basis for the willfulness allegation appears to be post-suit knowledge. The complaint alleges that service of the complaint itself provides Defendant with "actual knowledge" and that Defendant's continued infringement thereafter is willful (Compl. ¶13, ¶14).

VII. Analyst’s Conclusion: Key Questions for the Case

  • A primary question will be one of architectural equivalence: Does the accused system's architecture map onto the claimed combination of a steerable antenna, an "edge processing module," and a "cloud processing module" that "share[s] workload"? The resolution will depend on factual evidence regarding the specific components in the accused products and how they functionally interoperate.
  • A core legal issue will be one of definitional scope: Can the term "edge processing module," described in the patent as providing "low-latency computation," be construed to cover the local processing components in Defendant's systems? The outcome will likely depend on whether the court adopts a broad definition encompassing any local co-processor or a narrower one requiring a more specialized and physically integrated unit as described in the patent's embodiments.