Media
Press Releases
Neuchips Introduces Plug-and-Play Enterprise GenAI Solution, Viper GenAI PCIe Card, at COMPUTEX 2024
May 27, 2024 | Press Releases
Seamlessly upgrade traditional PCs into AI-powered systems with enhanced security and efficiency.
Neuchips to Showcase Industry-Leading Gen AI Inferencing Accelerators at CES 2024
January 04, 2024 | Press Releases
Neuchips, a leading AI application-specific integrated circuit (ASIC) solutions provider, will demo its revolutionary Raptor Gen AI accelerator chip (previously named N3000) and Evo PCIe accelerator card LLM solutions at CES 2024. Raptor, the new chip solution, enables enterprises to deploy large language model (LLM) inference at a fraction of the cost of existing solutions.
Neuchips Unleashes Breakthrough: LLM Accelerator Achieves 800 Tokens/Second with Just 8 Inferencing Chips Across 8 Cards, All in a Single Server at Max. 800 Watts
November 13, 2023 | Press Releases
Gen AI N3000 Accelerator: A Purpose-Built LLM Accelerator for Inferencing
NEUCHIPS Secures $20 Million in Series B2 Funding to Deliver AI Inference Platform for Deep Learning Recommendation
October 19, 2022 | Press Releases
New funding will accelerate delivery of its 7nm RecAccel™ inference platform to cloud service providers.
Neuchips Tapes Out Groundbreaking AI Accelerator
September 01, 2022 | Press Releases
AI-powered recommendation applications are opening up new avenues to enhance the customer experience. With this technology, online stores can highlight other items to add to digital shopping carts, digital music services can suggest songs based on tunes already in the rotation, and social media channels can offer up content that might fit the user’s interests. When these systems work seamlessly and deliver accurate suggestions, they can also bring more dollars to the bottom line. However, a significant amount of challenging engineering work goes on behind the scenes to produce accurate recommendations.
Neuchips' Purpose-Built Accelerator Designed to Be an Efficient Recommendation Inference Engine
May 31, 2022 | Press Releases
NEUCHIPS is excited to announce its first ASIC, RecAccel™ N3000, built on the TSMC 7nm process and designed specifically to accelerate deep learning recommendation models (DLRM). NEUCHIPS has partnered with industry leaders across Taiwan’s semiconductor and cloud server ecosystem and plans to deliver its RecAccel™ N3000 AI inference platform on Dual M.2 modules for Open Compute Platform-compliant servers, as well as PCIe Gen 5 cards for standard data center servers, in 2H 2022.
Latest Round Includes Over 1,800 Performance and 350 Power Results for Leading ML Inference Systems
October 02, 2021 | Press Releases
NEUCHIPS successfully submitted results for its RecAccel™ system in MLPerf v0.7. Announced earlier this year, RecAccel™ is the world's first hardwired DLRM accelerator and the only non-GPU/CPU entry in this round.
NEUCHIPS Announces World's First Deep Learning Recommendation Model (DLRM) Accelerator: RecAccel
May 12, 2021 | Press Releases
SAN JOSE, Calif., May 12, 2020 /PRNewswire/ -- Today, NEUCHIPS Corp., an AI compute company specializing in domain-specific accelerator solutions, announced the world's first recommendation engine, RecAccel™, that can perform 500,000 inferences per second. Running open-source PyTorch DLRM, RecAccel™ outperforms server-class CPU and inference GPU by 28X and 65X, respectively. It is equipped with an ultra-high-capacity, high-bandwidth memory subsystem for embedding table lookup and a massively parallel compute FPGA for neural network inference. Via a PCIe Gen3 host interface, RecAccel™ is ready for data center adoption.