LinSeer MegaCube
AI Workstation on your desk
Equipped with the NVIDIA GB10 Grace Blackwell Superchip, LinSeer MegaCube delivers 1 petaFLOP of computing power in a palm-sized form factor, breaking the spatial constraints of the AI developer desktop. With 128GB of LPDDR5x unified memory, it condenses single-machine inference of 200B-parameter models and fine-tuning of 70B-parameter models into a desktop environment. Connecting two units with NVIDIA ConnectX networking extends support to cutting-edge large models such as Llama 3.1 405B. Pre-installed with the NVIDIA AI software stack, it gives research institutions and enterprise teams an "out-of-the-box" localized large-model productivity tool, truly making AI technology accessible to developer communities.
  • 1 PetaFLOP FP4 Compute Power

    Natively supports FP4 precision, delivering 1 petaFLOP of FP4 compute power. Inference is significantly faster than with software-emulated FP4.

  • 128GB LPDDR5x Unified Memory

    A single LinSeer MegaCube can support inference for 200B parameter models.

  • Scalable Performance

    High-performance NVIDIA ConnectX networking allows two LinSeer MegaCube systems to be interconnected, supporting AI models of up to 405B parameters.

  • Ultra-Compact

    About the size of a palm. Effortless deployment in labs and offices without space constraints.

  • Out-of-the-Box Full-Stack AI Ecosystem

    Pre-installed with NVIDIA DGX OS, CUDA, CUDA-X, RTX toolkit and libraries. Compatible with PyTorch, TensorFlow, MATLAB, and more.

Compact Size, Immense Capability
In an era where computing power equals productivity, LinSeer MegaCube deeply integrates top-tier hardware with intelligent scheduling, providing a solid foundation for complex AI scenarios:
Great Things Come in Small Packages
Compact Size
150 x 150 x 50.5mm
Equipped with NVIDIA GB10 Grace Blackwell Superchip, Delivering 1 PetaFLOP FP4 Compute Power
The Blackwell architecture GPU within the NVIDIA GB10 Grace Blackwell Superchip contains 6144 CUDA cores and fifth-generation Tensor Cores, delivering 1 petaFLOP of FP4 compute power. The GB10 also includes a high-performance 20-core Arm CPU, providing robust data preprocessing and scheduling performance to accelerate model inference and fine-tuning.
NVIDIA NVLink™-C2C Technology
NVIDIA NVLink™-C2C breaks down communication barriers between the CPU and GPU, offering bandwidth up to 5 times that of PCIe 5.0. This ultra-high-bandwidth, ultra-low-latency interconnect lets the 128GB of unified system memory truly function as a "large model container," significantly enhancing AI computing efficiency.
128GB LPDDR5x Unified Memory
Provides ample memory for efficient AI model inference, fine-tuning, and other tasks. A single LinSeer MegaCube can support inference for 200B parameter models.
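As a rough illustration of why 128GB of unified memory is enough for 200B-parameter inference, the back-of-the-envelope estimate below is a sketch: the 200B figure and the FP4 weight format come from this page, while the allowance for KV cache and runtime overhead is an assumption.

    # Back-of-the-envelope memory estimate for single-unit inference.
    # Assumptions: weights stored in FP4 (0.5 bytes per parameter) and a
    # flat allowance for KV cache, activations, and runtime overhead.

    def weight_footprint_gb(params_billions: float, bits_per_param: float) -> float:
        """Approximate weight memory in GB for a given size and precision."""
        return params_billions * 1e9 * bits_per_param / 8 / 1e9

    UNIFIED_MEMORY_GB = 128      # LinSeer MegaCube unified memory
    OVERHEAD_GB = 16             # assumed KV cache / activation allowance

    for params in (70, 200):
        fp16 = weight_footprint_gb(params, 16)
        fp4 = weight_footprint_gb(params, 4)
        fits = fp4 + OVERHEAD_GB <= UNIFIED_MEMORY_GB
        print(f"{params}B params: ~{fp16:.0f} GB at FP16, ~{fp4:.0f} GB at FP4 "
              f"-> {'fits' if fits else 'does not fit'} within {UNIFIED_MEMORY_GB} GB")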
NVIDIA ConnectX-7 2×200G Network Card
The built-in NVIDIA ConnectX-7 networking allows two LinSeer MegaCube units to be cascaded, supporting inference for models with up to 405B parameters, such as Llama 3.1 405B.
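The snippet below is an illustrative sketch of the usual first step when running a 405B-class model across two cascaded units: forming a two-rank PyTorch process group over the ConnectX-7 link. The NCCL backend, the environment-variable launch method, and the single-GPU-per-unit assumption are ours, not vendor-specified.

    # Minimal sketch: one process per MegaCube, ranks 0 and 1.
    # Launch on each unit with the standard torch.distributed environment
    # variables set (MASTER_ADDR reachable over the ConnectX-7 link,
    # MASTER_PORT, RANK=0/1, WORLD_SIZE=2).
    import torch
    import torch.distributed as dist

    def init_two_unit_group():
        dist.init_process_group(backend="nccl", init_method="env://")
        torch.cuda.set_device(0)          # single integrated GPU per unit
        return dist.get_rank(), dist.get_world_size()

    if __name__ == "__main__":
        rank, world = init_two_unit_group()
        # A real deployment would shard the model here (tensor or pipeline
        # parallelism); this sketch only verifies the two units can talk.
        t = torch.ones(1, device="cuda")
        dist.all_reduce(t)                # expect 2.0 once both ranks join
        print(f"rank {rank}/{world}: all_reduce -> {t.item()}")
        dist.destroy_process_group()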
Out-of-the-Box Full-Stack AI Ecosystem
  • Pre-installed OS and Development Environment
    The LinSeer MegaCube comes pre-installed with NVIDIA DGX OS (Ubuntu-based), optimized for AI development and free of complex configuration. With the NVIDIA AI software stack pre-installed, developers can start working on projects on their LinSeer MegaCube right away; a quick environment sanity check is sketched after this list.
  • Supports Full-Stack AI Software
    LinSeer MegaCube supports AI frameworks such as PyTorch, TensorFlow, and MATLAB; AI development platforms including NVIDIA Riva, NVIDIA Holoscan, NVIDIA Metropolis, and NVIDIA Isaac; and access to AI models tuned for LinSeer MegaCube systems via NVIDIA NIM.
  • Mainstream Model Support
    Supports fine-tuning and inference for the DeepSeek R1 70B model.
    Supports inference for large models like Llama 3.1 405B, Qwen3 235B, etc. (requires cascading two LinSeer MegaCube systems).
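As referenced above, a quick way to confirm the pre-installed environment is usable is the short sanity check below. It is a sketch using standard PyTorch calls only; nothing LinSeer-specific is assumed.

    # Quick sanity check of the pre-installed AI software stack.
    import torch

    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        # On a unified-memory system the reported figure reflects the
        # driver's view of GPU-accessible memory.
        print("GPU:", props.name)
        print(f"GPU-accessible memory: {props.total_memory / 1e9:.0f} GB")
        # Tiny matmul to confirm the CUDA runtime actually executes work.
        a = torch.randn(1024, 1024, device="cuda")
        b = torch.randn(1024, 1024, device="cuda")
        print("Matmul OK, norm:", torch.linalg.norm(a @ b).item())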
Rich Application Scenarios
  • Prototyping
    Develop, test, and validate AI models and applications.
    Leveraging the full NVIDIA AI software stack, it provides developers with a one-stop platform for creating, testing, and validating AI models, making it easy to build AI-enhanced applications and industry solutions. After local debugging, work can be migrated to NVIDIA DGX Cloud or other accelerated data center and cloud infrastructure to scale further development or deployment.
  • Fine-Tuning
    Customize and optimize the performance of pre-trained models.
    With 128GB of unified system memory, it supports fine-tuning for models with up to 70 billion parameters, precisely adapting them to vertical industry needs and specific use cases. Whether infusing knowledge for specialized scenarios like healthcare and finance, or adapting models to low-resource languages or niche industry data, it efficiently improves the task accuracy and practicality of pre-trained models; a rough memory estimate is sketched after this list.
  • Inference
    Supports efficient inference validation for 200 billion parameter models.
    Easily conduct testing, validation, and deployment of 200 billion parameter models. The hardware-native NVFP4 format reduces memory footprint and maximizes performance while minimizing accuracy loss.
  • Data Science
    High-performance data science at your desk.
    LinSeer MegaCube's combination of 128GB of unified memory and 1 petaFLOP of compute throughput accelerates large, computationally complex data analytics and machine learning workflows at your desk; a minimal GPU data-prep sketch follows this list.
  • Edge Applications
    Develop edge intelligence applications based on NVIDIA professional frameworks.
    Provides an excellent development platform for edge scenarios like robotics, smart cities, and computer vision, deeply compatible with frameworks such as NVIDIA Isaac (robotics), Metropolis (intelligent video analytics), and Holoscan (medical imaging).
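As mentioned in the Fine-Tuning item above, the estimate below sketches why adapter-style (LoRA) fine-tuning of a 70B model on a quantized base plausibly fits in 128GB while full-precision fine-tuning does not. Only the 70B size and the 128GB capacity come from this page; the per-parameter byte counts and adapter fraction are standard rules of thumb, used here as assumptions.

    # Rough fine-tuning memory estimates for a 70B-parameter model.
    # Rules of thumb (assumptions): full BF16 fine-tuning with Adam keeps
    # weights + gradients + optimizer state (~16 bytes/param); LoRA-style
    # fine-tuning freezes a 4-bit base model and trains small adapters.

    PARAMS = 70e9
    UNIFIED_MEMORY_GB = 128

    full_ft_gb = PARAMS * 16 / 1e9           # weights + grads + Adam state
    base_4bit_gb = PARAMS * 0.5 / 1e9        # frozen 4-bit base weights
    adapter_params = PARAMS * 0.005          # assumed ~0.5% trainable adapters
    adapter_gb = adapter_params * 16 / 1e9   # adapters + their grads/optimizer

    print(f"Full BF16 fine-tuning:  ~{full_ft_gb:.0f} GB (far above {UNIFIED_MEMORY_GB} GB)")
    print(f"4-bit base + adapters:  ~{base_4bit_gb + adapter_gb:.0f} GB "
          f"plus activations, within {UNIFIED_MEMORY_GB} GB")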
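For the Data Science item, here is a minimal GPU data-prep sketch. It assumes the RAPIDS cuDF library is installed (part of NVIDIA's data-science stack, though not listed on this page), and the file and column names are illustrative.

    # Minimal GPU data-prep sketch using RAPIDS cuDF (assumed installed).
    # "events.csv" and its columns are illustrative, not product-supplied.
    import cudf

    df = cudf.read_csv("events.csv")                  # load straight into GPU memory
    df = df.dropna(subset=["user_id", "latency_ms"])  # basic cleaning on the GPU
    summary = (
        df.groupby("user_id")["latency_ms"]
          .agg(["mean", "max", "count"])
          .sort_values("mean", ascending=False)
    )
    print(summary.head(10))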
Product Specifications
Item: LinSeer MegaCube
Architecture: NVIDIA Grace Blackwell
Processor:
  CPU: 20-Core Arm (10×Cortex-X925 + 10×Cortex-A725)
  GPU: NVIDIA Blackwell architecture (integrated)
Compute Cores:
  CUDA Cores: 6144
  Tensor Cores: 5th Generation (supports FP4/FP8 precision)
  RT Cores: 4th Generation
Memory:
  System Memory: 128GB LPDDR5x (unified addressing)
  Memory Bandwidth: 273 GB/s
  Flash: 128M NAND Flash
Storage: 1TB/2TB/4TB NVMe M.2 (with self-encryption function)
Connectivity:
  Ethernet: 1×RJ-45 (10GbE)
  NIC: ConnectX-7 Smart NIC (2×200G QSFP, RoCE support)
  USB: 4×USB 4 Type-C (up to 40Gb/s)
  Wireless: Wi-Fi 7, Bluetooth 5.3
  Audio/Video: 1×HDMI 2.1a
Media Processing:
  NVENC: 1× (hardware video encoding)
  NVDEC: 1× (hardware video decoding)
Software: Preloaded NVIDIA DGX OS; compatible with the NVIDIA AI software ecosystem (PyTorch, TensorFlow, CUDA-X, etc.).
  Note: NVIDIA AI Enterprise (NVAIE) requires an additional license for production deployment.
Physical & Power:
  Dimensions: 150mm (L) × 150mm (W) × 50.5mm (H)
  Weight: 1.2 kg
  Power Consumption: 170W