Top 10 Things the Industry Needs to Know About Memory Management

This year, Flash Memory Summit is covering the emerging category of Big Memory Computing. Session D-3, Bringing Enterprise-Class Management to Big Memory, features presentations from Big Memory hardware vendor Intel, Big Memory software vendor MemVerge, Big Memory solutions provider Penguin Computing, and a Big Memory user in the 3D animation and visual effects industry.

The presenters, plus analyst Tim Stammers, then got together for panel session D-4, titled “Top Ten Things You Need to Know about Big Memory Management Today.” Below are the panelists and the Top Ten list they discussed, with two insights from each panelist.

 

Memory Management Panelists

Dr. Hank Driskill
Industry Expert
Visual Effects

Dr. Charles Fan
CEO
MemVerge

Steve Scargall
Software Architect
Intel Corporation

Tim Stammers
Senior Analyst
451 Research

Dr. Kevin Tubbs
Sr. VP
Penguin Computing

Top Ten Things You Need To Know About Memory Management Today

 

Hank Driskill, Head of CG for Cinesite Animation

1 In the visual effects and animation industries, maximizing the time an artist spends creating their art is key. With budget challenges and compressed schedules, artists need to function as efficiently as possible at their workstations, with a minimum of downtime.

2 Ideally, the instant an artist steps away from their desk, the compute resources they were using can be put to work generating simulations, rendering frames, and running other compute-heavy tasks. Swapping the artist’s environment onto and off of the hardware quickly and seamlessly is becoming more and more important.

Charles Fan, CEO and Co-Founder, MemVerge

3 Memory technologies will become more heterogeneous. DRAM will continue to be the market leader, but new memory technologies will emerge and grow fast. These technologies will co-exist for the foreseeable future.

4 Consumption of memory resources, just like other infrastructure resources, will be increasingly software-defined and composable. This delivers the flexibility, agility, isolation, and performance that modern applications need in a multi-cloud world.

Steve Scargall, Persistent Memory Software Architect at Intel Corporation

5 Moving data closer to the CPU is critical for high-bandwidth, low-latency access. Applications that understand tiered memory environments can place data in the optimal tier to achieve maximum performance.
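To make point 5 concrete, here is a minimal C sketch of explicit tier-aware placement using the open-source memkind library. This is an editorial illustration, not code shown in the session; the buffer sizes and the use of MEMKIND_DAX_KMEM as the slower, larger persistent-memory tier are assumptions about one common tiered configuration.

    #include <memkind.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Hot, latency-sensitive data: allocate from regular DRAM. */
        char *hot = memkind_malloc(MEMKIND_DEFAULT, 64 * 1024);

        /* Colder, capacity-hungry data: allocate from persistent memory
         * exposed to the OS as a separate NUMA node (KMEM DAX mode).
         * This assumes such a node has been configured on the system. */
        char *cold = memkind_malloc(MEMKIND_DAX_KMEM, 256UL * 1024 * 1024);

        if (hot == NULL || cold == NULL) {
            fprintf(stderr, "tiered allocation failed\n");
            memkind_free(MEMKIND_DEFAULT, hot);   /* freeing NULL is a no-op */
            memkind_free(MEMKIND_DAX_KMEM, cold);
            return 1;
        }

        strcpy(hot, "frequently accessed working set");
        memset(cold, 0, 256UL * 1024 * 1024);     /* bulk data in the big tier */

        memkind_free(MEMKIND_DEFAULT, hot);
        memkind_free(MEMKIND_DAX_KMEM, cold);
        return 0;
    }

Built against memkind (link with -lmemkind), the sketch places latency-sensitive data in DRAM and capacity-hungry data on a DAX-exposed persistent-memory node; a tier-aware allocator or memory manager can make the same placement decision automatically.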

6 Persistent memory delivers storage capabilities at memory speeds. By keeping data in persistent memory, we eliminate the need to page data in from disk, so applications can restart in seconds rather than minutes or hours after a planned or unplanned outage, which increases the availability of the environment. Memory allocators and virtual-memory subsystems can intelligently place or move data to the optimal memory tier. Features such as snapshots, clones, compression, deduplication, and memory replication are all possible. These building blocks allow us to build cloud-scale pools of memory for applications that want unlimited memory.
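As a rough sketch of what “storage at memory speed” looks like in code (again an illustration, not material from the session), the C program below uses PMDK’s libpmem to map a file on a DAX-mounted filesystem directly into the address space and flush a write to persistent media. The mount point /mnt/pmem0 and the file name are assumed paths.

    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    #define POOL_SIZE (8UL * 1024 * 1024)

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Map (creating if needed) a file on an assumed DAX mount. */
        char *addr = pmem_map_file("/mnt/pmem0/example", POOL_SIZE,
                                   PMEM_FILE_CREATE, 0666,
                                   &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* Writes land directly in persistent memory; after a restart the
         * application simply maps the same file again, with no paging in
         * from disk. */
        strcpy(addr, "state that outlives the process");

        if (is_pmem)
            pmem_persist(addr, mapped_len);  /* flush CPU caches to media */
        else
            pmem_msync(addr, mapped_len);    /* fallback on non-pmem storage */

        pmem_unmap(addr, mapped_len);
        return 0;
    }

Link with -lpmem. The point of the sketch is the absence of read()/write() calls: load/store access plus an explicit flush is the entire persistence path, which is what enables the restart-in-seconds behavior described above.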

Tim Stammers, Senior Analyst, 451 Research

7 The potential benefits of big memory and big memory management include cost savings and more flexible use of infrastructure; they are not just about faster processing.

8 Major ISVs are restructuring their applications to take advantage of big memory. In some cases the reworks will be major, which shows how big the expected benefits are. But the software reworks are likely to take years to complete, and may never happen for many applications.

Kevin Tubbs, Senior Vice President, Strategic Solutions Group at Penguin Computing

9 End users are focused on accelerating workloads and shortening time to insight and value, which requires the rapid adoption of emerging technologies for memory-centric workloads. Software-defined architectures are key to enabling memory-centric workloads that leverage cutting-edge memory technologies.

10 The demand for data-driven workloads is driving the need for big memory computing throughout the edge-to-core compute continuum. As more workloads begin to leverage the benefits of big memory computing, end users will rely on memory management, software-defined architectures, and workload portability to enable big memory computing in the cloud, in the datacenter, and at the edge.