CBRS (Private)


S-1

Registration Statement (IPO)

Filed September 30, 2024

AI Summary

Cerebras builds the world's largest AI chip: a single wafer-scale processor designed to train and run large language models far faster than traditional GPU clusters from NVIDIA. The company targets enterprises, cloud providers, and government "Sovereign AI" initiatives, positioning itself as the primary alternative to NVIDIA's dominance in AI accelerator hardware. Because Cerebras is an emerging growth company making its first public filing, the central risks are extreme customer concentration and dependence on a single third-party manufacturer (TSMC), all while competing head-on against NVIDIA's entrenched CUDA software ecosystem. The dual-class share structure (voting Class A and non-voting Class N) signals founder-friendly governance that limits outside shareholder influence post-IPO.

What is a Registration Statement (IPO)?

The prospectus filed before an IPO. It contains everything an investor needs to evaluate a company that is going public: business model, products, customers, competition, financials, risks, and use of proceeds. It is the most detailed filing a company ever makes.

Extracted Milestones (7)

In Progress

IPO Filing on Nasdaq Under Symbol CBRS

Cerebras filed its S-1 registration statement with the SEC on September 30, 2024 for an initial public offering of Class A common stock on the Nasdaq Global Market under the symbol 'CBRS'.

Shipped

CS-3 AI Compute System Shipped

The third-generation CS-3 system, which houses the WSE-3, delivers 3x more compute per unit of power than leading 8-way GPU systems, connects over standard 100G Ethernet, and is designed to scale to 2,048 units, forming an AI supercomputer that supports models of up to 24 trillion parameters.

CS-3 System
Shipped

CSoft Software Platform with PyTorch Integration Shipped

Cerebras's proprietary CSoft software platform integrates with industry-standard ML frameworks like PyTorch, automatically maps model operations to the WSE via its graph compiler, and eliminates the need for CUDA or distributed programming for both training and inference workloads (see the sketch after this milestone).

Training · Inference
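To make the workflow concrete: the code a user brings is ordinary PyTorch. The toy model and training step below are a minimal sketch containing no Cerebras-specific API calls (the S-1 does not detail CSoft's interface); the notable part is what is absent, namely CUDA device management and distributed setup.

```python
# Ordinary PyTorch: per the S-1, CSoft's graph compiler ingests code like
# this and maps its operations onto the WSE automatically. Any Cerebras-
# specific glue (backend/device selection) is not described in the filing,
# so it is omitted here.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab=32_000, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.block = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):
        return self.head(self.block(self.embed(tokens)))

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 32_000, (4, 128))   # toy batch of token ids
logits = model(tokens[:, :-1])                # next-token prediction
loss = loss_fn(logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
loss.backward()
opt.step()

# Note what is missing: no .cuda()/.to(device), no DistributedDataParallel,
# no NCCL process groups -- the single-chip model is what removes that layer.
```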
Shipped

Cerebras Inference Cloud Launched

Cerebras launched its inference cloud service, delivering over 10x faster output generation than GPU-based offerings from top cloud service providers on leading open-source models, together with a dedicated cloud API for hosting both open-source and proprietary customer models (a hedged call sketch follows this milestone).

Inference
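The S-1 mentions a dedicated cloud API but does not specify its wire format. A hedged sketch, assuming an OpenAI-style chat-completions endpoint; the URL, model identifier, and JSON schema below are illustrative assumptions, not details from the filing.

```python
# Hypothetical inference-cloud call. ASSUMPTIONS: the endpoint URL, model
# id, and request/response schema follow OpenAI-style conventions for
# illustration; none of these details appear in the S-1.
import os
import requests

resp = requests.post(
    "https://api.cerebras.ai/v1/chat/completions",   # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['CEREBRAS_API_KEY']}"},
    json={
        "model": "llama3.1-8b",                       # assumed model id
        "messages": [{"role": "user", "content": "Summarize the S-1 risk factors."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```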
Shipped

WSE-3 Third-Generation Wafer-Scale Engine Shipped

Cerebras's third-generation Wafer-Scale Engine (WSE-3) is the largest chip ever sold: 57x larger than the leading GPU, with 900,000 cores, 44 GB of on-chip SRAM, and 21 PB/s of memory bandwidth. The S-1 confirms the WSE-3 is being sold to customers (some implied per-core figures are worked out after this milestone).

WSE Chip
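Some back-of-the-envelope arithmetic on the filed specs: the per-core figures below are derived here from the S-1 totals above, not quoted from the filing.

```python
# Derived from the S-1 figures: 900,000 cores, 44 GB SRAM, 21 PB/s.
cores = 900_000
sram_bytes = 44e9              # 44 GB on-chip SRAM
bandwidth_bps = 21e15          # 21 PB/s aggregate memory bandwidth

print(f"SRAM per core:      {sram_bytes / cores / 1e3:.1f} KB")       # ~48.9 KB
print(f"Bandwidth per core: {bandwidth_bps / cores / 1e9:.1f} GB/s")  # ~23.3 GB/s
```

The per-core breakdown shows why the aggregate bandwidth is feasible: the SRAM is distributed across cores on the same wafer rather than sitting behind an off-chip memory interface.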
Shipped

Cerebras AI Supercomputer Scales to 2,048 CS-3 Systems (256 exaFLOPS)

Cerebras's AI Supercomputer architecture supports scaling from 1 to 2,048 CS-3 systems, delivering up to 256 exaFLOPS with near-linear performance scaling. Scaling up requires only a single configuration change, and GPT-3-scale training takes 97% fewer lines of code than on GPU clusters (the implied per-system throughput is derived after this milestone).

CS-3 System · Training
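The filed totals imply a per-system throughput, which is easy to back out (a derived figure, not one quoted in the S-1):

```python
# Implied compute per CS-3 system, derived from the S-1 totals above.
total_exaflops = 256
systems = 2_048

per_system_petaflops = total_exaflops / systems * 1_000   # exa -> peta
print(f"Implied compute per CS-3: {per_system_petaflops:.0f} petaFLOPS")  # 125
```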
Shipped

Condor Galaxy Cloud via G42 Partnership Operational

Cerebras sells cloud-based AI compute via the Condor Galaxy Cloud, owned by partner G42, giving customers fast access to AI acceleration hardware for LLM training and ultra-low-latency inference.

Condor Galaxy