MemVerge Launches with World’s First Memory-Converged Infrastructure to Power the Most Demanding AI and Data Science Enterprise Workloads Today and in the Future

Announces Beta Availability of System That Eliminates Boundaries Between Memory and Storage So That Data-Intensive AI, Machine Learning, IoT, Big Data Analytics and Data Warehousing Jobs Run Flawlessly at Memory Speed

SAN JOSE, CA, April 2, 2019 — MemVerge, the inventor of Memory-Converged Infrastructure (MCI), today launched from stealth and introduced the first system that eliminates the boundaries between memory and storage to power the world’s most demanding data-centric enterprise workloads. Developed by the creators of leading all-flash and hyperconverged infrastructure (HCI) solutions, the MemVerge solution delivers memory and storage services from a single distributed platform while integrating seamlessly with existing applications, so enterprises can process the constant flood of machine-generated data produced by the on-demand economy. Built on Intel® Optane™ DC persistent memory, a new class of memory that collapses the memory-storage barrier at the hardware layer, MemVerge’s MCI system offers 10X the memory size and 10X the data I/O speed of current state-of-the-art compute and storage solutions on the market. Companies are using MemVerge to train artificial intelligence (AI) models faster and to run complex workloads more predictably with fewer resources. In a separate release today, MemVerge announced a $24.5 million Series A funding round from Gaorong Capital, Jerusalem Venture Partners, LDV Partners, Lightspeed Venture Partners and Northern Light Venture Capital.

“Data-intensive applications are pushing traditional enterprise infrastructures beyond their limits, especially as organizations scramble to incorporate next-generation technology like AI into their businesses. Edge environments are struggling to keep up, between demanding capacity and processing requirements, the speed at which data must be moved, and the management of all of it,” said Mike Leone, Senior Analyst, Enterprise Strategy Group. “Combining the power of new persistent memory with a modern hyperconverged infrastructure, MemVerge’s Memory-Converged Infrastructure offers a compelling solution for the field of AI and machine learning at the edge, making it possible to meet the real-time requirements of next-generation workloads while satisfying the fast-paced, dynamic needs of the business.”

Memory-Converged Infrastructure Puts Persistent Memory into the Hands of the Enterprise
With MemVerge’s first-of-its-kind MCI architecture, enterprises can finally have both higher-capacity memory and faster storage at the same time, removing two of the biggest bottlenecks in the processing of machine-generated data. This new architecture is critical to the current and future success of the growing number of data-intensive applications in the enterprise, which existing infrastructures are already failing to support.

The proprietary Distributed Memory Objects (DMO) technology built into the MemVerge system provides a logical memory-storage convergence layer that harnesses Intel’s new persistent memory to allow data-intensive workloads such as AI, machine learning (ML), big data analytics, IoT and data warehousing to run flawlessly at memory speed. MemVerge’s system expands memory seamlessly and stores data consistently across multiple systems, so enterprises can analyze enormous amounts of data in real time, processing both large and small files with equal ease.
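For context on the underlying hardware model, the minimal sketch below shows what direct, byte-addressable access to persistent memory looks like on Linux using an ordinary memory-mapped file on a DAX-capable filesystem. It illustrates only the persistent-memory programming model that MemVerge builds on, not MemVerge’s DMO API, and the mount path /mnt/pmem0 is an assumed example.

    /*
     * Minimal sketch (assumptions noted): byte-addressable persistent memory
     * on Linux via a memory-mapped file on a DAX-capable filesystem mounted
     * at the hypothetical path /mnt/pmem0. This illustrates the hardware-level
     * programming model only; it is not MemVerge's DMO API.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t len = 4096;
        int fd = open("/mnt/pmem0/example.dat", O_CREAT | O_RDWR, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); return 1; }

        /* Map the file into the address space; on a DAX mount, loads and
         * stores go straight to the persistent media with no page cache. */
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(buf, "state that survives a restart");  /* update at memory speed */

        /* Flush the update so it is durable on the persistent media. */
        if (msync(buf, len, MS_SYNC) != 0) { perror("msync"); return 1; }

        munmap(buf, len);
        close(fd);
        return 0;
    }

MemVerge’s DMO layer manages this kind of byte-addressable media across multiple servers so that, per the company, applications gain the added capacity and speed without changing their programming models.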

Enterprise infrastructure, big data, AI and analytics teams, as well as data scientists, benefit from MemVerge’s innovation in merging memory and storage into one architecture in the following ways:

  • Massive and complex applications operate smoothly, at memory speed. The MemVerge system presents memory at scale for mission-critical AI, big data and IoT applications – such as Presto, Apache Spark and TensorFlow – without the out-of-memory crashes that are common with current solutions. In today’s on-demand economy, where competitive advantage comes from immediate insights from data, enterprises can no longer afford to park data on HDDs or SSDs; memory and storage need to converge on one system. With MemVerge’s DMO technology, coupled with Intel Optane DC persistent memory, enterprises can complete analytics faster and with more predictability.
  • Random access is as fast as sequential access. Data-centric workloads require storage with fast sequential and random access, yet random access to machine-generated data is too slow in today’s systems built on HDDs or SSDs. This is particularly true in AI training, where enormous numbers of smaller files are processed. MemVerge DMO technology lets a large number of small files be accessed as quickly as a small number of large files, allowing workloads such as AI training to be completed in less time and with fewer resources.
  • Long-lasting system design offers greater data center reliability. Taking advantage of the high endurance of persistent memory, the MemVerge solution is more reliable than traditional storage systems. Because persistent memory can withstand 100X more full writes than NAND flash SSDs, enterprises can worry less about drive replacement or data loss from drive failures. Data is also replicated across multiple nodes to guarantee consistency in the event of a failure.

“Intel Optane DC persistent memory is a breakthrough in memory capacity, data persistence and affordability,” said Alper Ilkbahar, vice president and general manager of Datacenter Memory and Storage Solutions at Intel. “Collaborating with innovative companies like MemVerge, this revolutionary technology is poised to trigger a whole new level of data analytics and machine learning in the data-centric era.”

MemVerge was founded by Charles Fan, former head of VMware’s storage business unit, whose team developed VSAN, the market-leading HCI product; Shuki Bruck, co-founder of XtremIO and Rainfinity and Moore Professor at Caltech, whose team developed the market-leading all-flash array; and Caltech senior postdoctoral scholar Yue Li.

“The data center is ready for a revolution, and at MemVerge we believe that the future of storage is memory. The commercial availability of Intel’s Optane DC persistent memory allows the storage and memory functions to come together for the first time in the history of computing,” said Charles Fan, CEO and co-founder of MemVerge. “With MemVerge, companies can take full advantage of persistent memory’s larger capacity and unprecedented I/O speed without changing their application programming models. This new architecture will revolutionize the infrastructure for all data-centric workloads in the next decade and beyond.”

Customer Quotes

“AI powers customer-centric innovation at LinkedIn, from the personalized experiences we provide to our members to behind-the-scenes service optimization. Over 610 million people around the world depend on us for social networking, contextually-relevant news and job recommendations. The demanding AI systems built by our team push the envelope on AI infrastructure,” said Deepak Agarwal, vice president of AI, LinkedIn. “The collaboration with MemVerge opens up new possibilities to realize enhanced AI performance. We look forward to continuing this exciting technical exploration.”

“As an industry leader and global pioneer of innovative, digital-based technology solutions, we are always striving to improve quality of life through internet value-added services. It’s clear from our POC of MemVerge technology that it can enable us to deliver our cloud service more efficiently and consistently to a growing number of enterprise customers,” said Long Wang, vice president of Tencent Cloud and general manager of Big Data and AI Services. “The memory-speed processing power of MemVerge’s Memory-Converged Infrastructure has proven unique value in accelerating our OLAP services, and combined with the fact that we can store data on the same system, it has huge potential to be a compelling part of the foundation for our next-generation cloud-based data warehouse, allowing us to provide our services to customers more effectively well into the future.”

“As one of the world’s largest online retailers, our customers rely on us for a flawless end-to-end shopping experience. Entering the next phase of growth, we’ve been searching for a solution that could enhance the performance of data analytics by shortening the time to insights,” said Dennis Weng, vice president of Big Data Platform at JD.com. “We have a highly sophisticated infrastructure, and MemVerge has exceeded our expectations. We have enjoyed the collaboration during this POC and expect the MemVerge technology to bring an even better experience for shoppers.”

Availability
The MemVerge Beta Program will be available in June 2019. Please visit www.memverge.com to sign up.

About MemVerge
MemVerge, the inventor of Memory-Converged Infrastructure (MCI), is the first to eliminate the boundaries between memory and storage to power the world’s most demanding data-centric enterprise workloads. Leveraging Intel® Optane™ DC persistent memory and architected to integrate seamlessly with existing applications, the MemVerge MCI system offers 10X the memory size and 10X the data I/O speed of current state-of-the-art computing and storage solutions. Its unique Distributed Memory Objects (DMO) technology provides a logical convergence layer that harnesses Intel’s new memory-storage medium to let data-intensive workloads such as AI, machine learning (ML), big data analytics, IoT and data warehousing run flawlessly at memory speed with guaranteed data consistency across multiple systems. Offering large-scale memory and sub-microsecond response times, MemVerge solves a massive problem of the machine-generated data era: how to process and derive insights from an enormous amount and variety of data in real time, handling small and large files with equal ease. Enterprises using MemVerge no longer contend with failed or painfully slow jobs caused by performance bottlenecks, system crashes or worn-out flash drives; they can train AI models faster, analyze larger data sets, complete more queries in less time and run complex workloads more predictably with fewer resources. Based in San Jose and backed by Gaorong Capital, Jerusalem Venture Partners, LDV Partners, Lightspeed Venture Partners and Northern Light Venture Capital, MemVerge is used for AI and data science workloads by leading innovators globally, including LinkedIn, Tencent Cloud and JD.com.

Media Contact:
Steve Sturgeon
MemVerge
Steve.sturgeon@memverge.com
858.472.5669