Introducing the CXL Forum
@ HPC + AI on Wall Street

Wednesday, September 27
8:30am to 5:00pm

The First CXL Forum with a Demo Room to Showcase the Transformation of Products Into Solutions

In addition to a full day of presentations, during breaks you will be able to check out live demonstrations of:

  • a Memory Viewer that shows all the memory in a fabric
  • CXL Memory Pooling systems that replace swap-to-storage with shared memory
  • Gismo (Global IO-free Shared Memory Object), software that replaces internode cluster traffic with shared memory
  • a CXL Flight Simulator that allows developers to get started without CXL hardware

Why Attend?

CXL is Critical to the Success of AI in FSI

FSI infrastructure is struggling to keep pace with the size of AI models and their memory requirements, which have grown 1,000x in just the last two years. Enter Compute Express Link (CXL) technology: its open specifications will deliver memory that is more cost-effective, abundant, and virtualized for fine-grained provisioning.

What You Will See

A 360° Overview of CXL Technology and Market Development

A huge ecosystem of vendors is working to make this vision a reality, and the place to learn about the status of their progress is a CXL Forum event. A cross-section of compute, storage, networking, system, and software vendors, as well as users, are represented at each event for a 360° overview of CXL technology and market development.

Organizations Presenting and Demonstrating at Past CXL Forum Events

  • Users: Microsoft Azure, Facebook, Uber, Barcelona Supercomputing Center
  • CXL-Compatible Processors: AMD, Arm, Intel, NVIDIA, SiPearl
  • CXL-Compatible Memory: Samsung, SK hynix, Micron
  • CXL Memory Controllers: Astera Labs, Montage Technology
  • CXL Switches: Xconn
  • CXL Software: MemVerge, VMware
  • CXL Pooling Systems: GigaIO, Liqid, H3, Marvell, Enfabrica, Elastics.cloud

CXL Forum Agenda

9:30-9:50 am

Generative AI has taken the world by storm, creating a boom for GPUs, High Bandwidth Memory (HBM), and Compute Express Link (CXL) memory. The advent of CXL is a giant step toward removing the fundamental “memory wall” bottleneck challenging AI applications today. CXL technology gives AI apps desperately needed memory bandwidth and capacity while driving down costs with composable memory that can be shared. In this presentation, Charles Fan will introduce the CXL open standards and a timeline for emerging CXL products and solutions. He will also explain how CXL technology is used in a Big Memory Computing model with data services that continuously right-size computing and memory resources for AI workloads and ensure uninterrupted application availability through checkpointing and recovery services.

Dr. Charles Fan, CEO and co-founder of MemVerge

Charles Fan is co-founder and CEO of MemVerge. Prior to MemVerge, Charles was the CTO of Cheetah Mobile, leading its global technology teams, and an SVP/GM at VMware, where he founded the storage business unit that developed the Virtual SAN product. Charles also worked at EMC and was the founder of the EMC China R&D Center. Charles joined EMC via the acquisition of Rainfinity, where he was a co-founder and CTO. Charles received his Ph.D. and M.S. in Electrical Engineering from the California Institute of Technology, and his B.E. in Electrical Engineering from the Cooper Union.

9:50-10:10 am

CXL promises to unlock academic research computing disciplines that currently have no solution path. These include deeply recursive codes and applications that require large memory blocks.

Kurt Keville, Systems Engineer, Somerville Dynamics

Kurt works in university research computing and systems design. His MIT thesis work was on energy-efficient supercomputing, and to that end he has investigated research-enabling and accelerating technologies that can unlock new programming paradigms for grand-challenge problems.

10:10-10:30 am

An overview of the critical role and limitations of memory in the AI/ML era, and the market opportunity Samsung sees in CXL DRAM's vast expansion and pooling potential. Ms. Choi will also explain how to create a competitive edge for your organization by leveraging early-access programs from CXL ecosystem partners to get a head start on deploying CXL.

Julie Choi, Head of New Biz PM, Samsung

Julie Choi is a Principal Professional and Head of New Biz PM at Samsung Memory. She is responsible for new business product management and business development, with a focus on emerging computing and storage products such as CXL and computational storage. Ms. Choi completed the AI Chief Administrator course at KAIST in December 2021 and earned an MBA in Product Management from Sogang Graduate School of Business in 2008. Since joining Samsung in 2000, her roles have spanned the full breadth of memory sales, marketing, and planning for the US and China regions. From 2020 to 2021, she led the product management team, which achieved an optimized product mix through deeper analysis and understanding of global cloud service providers. Prior to that, she set up and developed the server business, leading different functional teams in China from 2012 to 2016.

10:30-11:00 am

Break

11:00-11:20 am

CXL switches enable scalable memory expansion, pooling, and sharing. These functions make large-scale AI learning models, which require big data and large memory, more accessible while keeping memory utilization high. As the CXL ecosystem continues to evolve and grow, CXL switches will also enable new AI/ML computing architectures and platforms with higher performance and more efficient resource utilization, with CPUs, GPUs/accelerators, and memories interconnected through a CXL fabric network.

Jianping (JP) Jiang, VP of Product Marketing and Business Operations, Xconn Technologies

Jianping (JP) Jiang is the VP of Business, Operations and Product at Xconn Technologies, a Silicon Valley startup pioneering CXL switch ICs. At Xconn, he is in charge of CXL ecosystem partner relationships, CXL product marketing, business development, corporate strategy, and operations. Before joining Xconn, JP held various leadership positions at several large-scale semiconductor companies, focusing on product planning and roadmaps, product marketing, and business development. In these roles, he developed competitive and differentiated product strategies, leading to successful product lines that generated billions of dollars in revenue. JP has a Ph.D. in computer science from the Ohio State University.

11:20-11:40 am

Composable data center architectures provide the ability to disaggregate compute, memory, and network fabric resources. With CXL-capable processors, accelerators, switches, and memories, massive systems can be built to coherently connect compute arrays to memory. The need to train large AI models in machine learning applications will propel the demand for disaggregated architectures. Processor-to-memory bandwidth and latency are critical factors that impact the time to train large AI models. Traditional Ethernet and PCIe copper PHY technologies have latency and distance limitations and cannot scale beyond a single rack. CXL over optics can solve the distance, latency, and bandwidth challenges. This presentation will illustrate the latency and performance improvements that can be achieved by implementing an optical CXL fabric in a computing system. We will discuss the benefits of memory pooling with disaggregated memories for AI training. Evidence will show the distance advantages and interconnect area savings obtained by transmitting multiple CXL data lanes over optical fibers. Benefits of optics will be examined with results from a memory expansion application.
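As a rough sense of scale (the numbers below are illustrative assumptions, not figures from the talk), the sketch estimates how round-trip propagation delay alone grows with cable length, assuming roughly 5 ns per meter of signal travel in fiber and a nominal 100 ns local DRAM access for comparison; real CXL controller and link latencies would add on top.

    # Back-of-the-envelope propagation delay for an optical memory fabric.
    # Assumptions (not from the talk): ~5 ns/m one-way delay in fiber and
    # ~100 ns for a local DRAM access; controller and link latency not modeled.
    NS_PER_METER = 5.0
    LOCAL_DRAM_NS = 100.0

    for meters in (2, 10, 50, 100):  # in-rack, adjacent rack, end of row, across the hall
        round_trip_ns = 2 * meters * NS_PER_METER
        print(f"{meters:>4} m: ~{round_trip_ns:>5.0f} ns round trip "
              f"(~{round_trip_ns / LOCAL_DRAM_NS:.1f}x a local DRAM access)")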

Ron Swartzentruber, Director of Engineering, Lightelligence

Ron Swartzentruber is the Director of Engineering at Lightelligence, Inc. and is responsible for the development of CXL-over-optics products used for interconnecting CPUs and memory over an optical fabric. Ron has extensive experience in silicon design and architecture for the cloud networking and network communication industries and holds 21 patents for inventions conceived throughout his career.

11:40 am-12:00 pm

Ray is a powerful distributed computing framework used by AI applications like ChatGPT. However, as data sets grow and computation requirements become more complex, managing memory usage across multiple computing nodes becomes increasingly challenging. Issues that slow performance include data copying between computing nodes, data spilling out of memory into storage, and data skew among computing nodes. Gismo, a multi-node shared memory object store based on Compute Express Link (CXL) technology, addresses these challenges. With the help of Gismo, Ray does not need to serialize and transfer extra copies of objects across a network. Data spill and skew issues are minimized because each host has direct memory access to the whole object store. In this presentation, Yong Tian will demonstrate how Gismo is integrated with Ray and showcase how it improves overall performance and reduces memory overhead for Ray users.
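For context, the short sketch below shows the standard Ray object-store pattern that Gismo targets: a large object is placed in the store with ray.put and consumed by tasks that may run on other nodes, which today means serializing and copying the bytes across the network. Only stock Ray APIs are used here; Gismo's own interfaces are not shown because they are not described in this agenda.

    # Baseline Ray pattern: remote tasks pull a large object over the network.
    # With a CXL-backed shared object store, those pulls would instead resolve
    # to loads from the same pooled memory.
    import numpy as np
    import ray

    ray.init()  # start or connect to a Ray cluster

    @ray.remote
    def column_sums(block: np.ndarray) -> np.ndarray:
        # If the node running this task does not already hold the object, Ray
        # copies it into that node's local object store before executing.
        return block.sum(axis=0)

    big_block = np.random.rand(50_000, 1_000)  # roughly 400 MB of float64 data
    ref = ray.put(big_block)                   # place it in the object store

    results = ray.get([column_sums.remote(ref) for _ in range(8)])
    print(len(results), results[0].shape)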

Yong Tian, Field CTO, MemVerge

Yong Tian is Field CTO for MemVerge. He leads technical engagements with MemVerge customers. Previously Yong was co-founder and COO of UltraSee Corp, a pioneer in software-defined ultrasound imaging. He holds a Master of Management from Stanford Graduate School of Business, a Master of Electrical Engineering from the University of Illinois, and B.E. in Electrical Engineering from the Cooper Union.

12:00-1:00 pm

Lunch & Keynote

CXL Demo Room Open

1:00-1:20 pm

Thomas Jorgensen, Senior Director, Technology Enablement, Supermicro

Thomas Jorgensen heads the AI/ML efforts on Supermicro's Technology Enablement team. Before joining Supermicro, Thomas was VP of Operations at Reniac, an Intel Capital-funded database acceleration company that used FPGAs and GPUs for hardware acceleration of NoSQL databases. Prior to Reniac, Thomas was a co-founder of Napatech and grew it from a garage startup to an IPO. Napatech is the world's leading provider of programmable Smart Network Interface Cards (SmartNICs) used for Data Processing Unit (DPU) and Infrastructure Processing Unit (IPU) services in telecom, cloud, enterprise, cybersecurity, and financial applications. Thomas holds several patents in network packet processing and packet capture.

1:20-1:40 pm

The vision of enabling data centers to re-architect from static, defined-once infrastructure into disaggregated pools of resources that can be dynamically composed (and recomposed) into application-specific systems is well underway. The last element in this transition is full memory composition, which has only limited capability today. CXL 3.1 will make memory as easy to compose as accelerators, networking, processors, and storage are today. Full composability will be all the more useful if the fabric used for composition can also handle the other compute traffic that urgently needs low latency, chiefly MPI and GPU RDMA, so that a third network does not have to be added to the rack.

Alan Benjamin, President & CEO, GigaIO

Alan Benjamin is one of the visionaries behind GigaIO's innovative solution. He was most recently COO of Pulse Electronics, an $800M communication components and subsystem supplier, and previously CEO of Excelsus Technologies. Earlier, Alan helped lead PDP, a true start-up, to a successful acquisition by a strategic buyer for $80M in year three. He started his career at Hewlett-Packard in sales support and quickly moved into management positions in product marketing and R&D. Alan graduated from Duke University with a BSEE and attended the Harvard Business School AMP program as well as the UCSD LAMP program.

1:40-2:00 pm

While the promise of CXL is awe-inspiring, the ability to compose memory to servers from pools of CXL devices via software will transform HPC. Join us for an exploration of “Elevating Data Infrastructure with Liqid: A Look into Composable CXL.” Immerse yourself in Liqid's role in shaping the future of data centers, powered by the capabilities of composable CXL. Central to this evolution is Liqid Matrix, the company's CXL Fabric Manager software, which has already been used in several successful proofs of concept. Learn how the software is poised to dynamically allocate compute, memory, and storage resources, ushering in a new era of data center efficiency. Through real-world insights and collaborative endeavors, discover the potential of composable CXL as Liqid empowers organizations to adapt and excel in the dynamic landscape of data infrastructure.

Bryan Davies, Solutions Architect, Liqid

Bryan Davies is an accomplished Solutions Architect at Liqid, where he shapes the future of data infrastructure and data center technology. With a diverse background, Bryan brings a unique blend of expertise to his role. Previously, he served as a professional services engineer, deploying innovative composable solutions worldwide. Before joining Liqid, Bryan held pivotal roles, including Pilot Line Lead at Seagate and a remarkable 14-year tenure in the U.S. Coast Guard.

2:00-6:00 pm

Concurrent Activities:

2:00-3:00 pm  Break & Lightning Talks

2:30-3:30 pm  CXL Birds-of-a-Feather Session

2:00-6:00 pm  CXL Demo Room Open