Webinar: Neocortex Overview and Upcoming Call for Proposals

Presented on Monday, October 4, 2021, 2:00 – 3:00 pm (ET), by Paola Buitrago, Director of Artificial Intelligence and Big Data at the Pittsburgh Supercomputing Center (PSC), and Natalia Vassilieva, Ph.D. (Cerebras Systems Inc.).

This webinar gives an overview of Neocortex, an NSF-funded AI supercomputer deployed at PSC. Neocortex incorporates groundbreaking new hardware technologies and is designed to accelerate AI research in pursuit of science, discovery, and societal good. Join us to learn more about this exciting new system and how to be part of the next group of users. Neocortex was deployed at PSC in early 2021 and currently supports research in drug discovery, genomics, molecular dynamics, climate research, computational fluid dynamics, signal processing, and medical imaging analysis.

Topics covered:

  • Overview of Neocortex, including hardware configuration and specs
  • Criteria for ideal users and applications
  • Upcoming CFP: How to apply for access to Neocortex 

For more information about Neocortex, please visit the Neocortex project page. For questions about this webinar, please email neocortex@psc.edu.

Important program dates and deadlines

Program applications open: Oct. 14, 2021
Application deadline: Nov. 12, 2021 (Anywhere on Earth)
Response begins: Nov. 15, 2021
Response ends: Nov. 29, 2021


Watch the webinar

Table of Contents
00:00    Welcome
01:45    Code of conduct
02:24    Introduction
03:16    The Neocortex system: Context
08:31    The Neocortex system: Motivation
12:46    The Neocortex system: Hardware description
18:28    Early User Program and exemplar use cases
22:23    Call for Proposals (CFP)
25:51    To learn more and participate
26:46    Cerebras CS-1: Introduction
35:24    The Wafer Scale Engine (WSE)
40:10    Software and programming
48:30    Focus areas for the upcoming CFP
50:20    Q&A session

Q&A

Could you post the link to the PEARC talk?

If you attended PEARC21, you can watch the talk on the conference platform; otherwise, you can read a great summary of the panel on HPCwire.
What does it mean that the cores are AI-optimized, in terms of the architecture of the cores?

This question was answered live: [50:55]

What's the proposal acceptance rate? Does the system support pipeline managers like Nextflow, Airflow, etc.?

The proposal acceptance rate in our first round was close to 40%. At PSC, we support Airflow on other systems and projects, and we can explore enabling that support on Neocortex specifically.
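
For readers unfamiliar with how a pipeline manager would fit in, here is a minimal sketch of an Airflow DAG that stages data and then submits a Slurm batch job. All task names, commands, and paths are hypothetical illustrations, not the Neocortex configuration.

```python
# Minimal sketch: an Airflow DAG that stages data, then submits a Slurm job.
# Task commands and paths are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="neocortex_training_pipeline",  # hypothetical DAG name
    start_date=datetime(2021, 10, 14),
    schedule_interval=None,  # trigger manually
    catchup=False,
) as dag:
    stage_data = BashOperator(
        task_id="stage_data",
        bash_command="rsync -a /source/dataset/ /scratch/dataset/",  # hypothetical paths
    )
    submit_training = BashOperator(
        task_id="submit_training",
        bash_command="sbatch train.sb",  # submit a prepared Slurm batch script
    )
    stage_data >> submit_training  # run staging before job submission
```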

Can LAIR be used to write programs that are not TensorFlow- or PyTorch-based?

You will need to leverage the kernel interface or the SDK for this.

Is there any way to try the system before submitting a proposal, and is any preliminary data needed for the application?

The proposal has been designed to be very lightweight in order to minimize the burden on potential users. The one way to access Neocortex and try it out is via this CFP. We encourage you to apply and allow our team to work with you so you can try the CS-1 and Superdome Flex servers while making sure you are following best practices and getting the most out of the system.

When I adapt my TF-based application to run on PSC with Cerebras, what changes do I need to make?

It depends on what your TensorFlow code looks like. If you leverage the Estimator API, the changes will be minor: replace the stock Estimator with the Cerebras wrapper and estimator, for example. But if you previously wrote your own TF wrapper, you may have to adapt it further. A sketch of the Estimator path follows.
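
To make the Estimator-API path concrete, here is a minimal sketch. The tf.estimator portion is standard TensorFlow; the Cerebras swap at the end is shown only in comments, because the wrapper name and import path used there are assumptions for illustration — consult the Neocortex/Cerebras documentation for the actual API.

```python
import tensorflow as tf

def input_fn():
    # Toy input pipeline: 256 random samples, batched and repeated.
    features = {"x": tf.random.normal([256, 32])}
    labels = tf.random.uniform([256], maxval=10, dtype=tf.int64)
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(32).repeat()

def model_fn(features, labels, mode, params):
    # The model function itself is plain TensorFlow and stays unchanged.
    logits = tf.keras.layers.Dense(10)(features["x"])
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
    optimizer = tf.compat.v1.train.AdamOptimizer()
    train_op = optimizer.minimize(
        loss, global_step=tf.compat.v1.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# Standard TensorFlow Estimator:
estimator = tf.estimator.Estimator(model_fn=model_fn)
estimator.train(input_fn=input_fn, max_steps=100)

# On Neocortex, the change described above would amount to swapping the
# Estimator class for the Cerebras wrapper, e.g. (names are assumptions):
# from cerebras.tf.cs_estimator import CerebrasEstimator
# estimator = CerebrasEstimator(model_fn=model_fn, use_cs=True)
```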

While programming deep learning models, should we treat this as one big machine (akin to a GPU) with 18 GB of memory, or should we treat it as thousands of machines with a very fast interconnect, each with a small memory? I.e., can we go to very large batch sizes, or do we need to stick to small batch sizes but expect the communication overhead to be small?

For deep learning, you treat this as one big machine. As for batch sizes, we support sizes from small to extreme; the compiler takes care of this. There are also tools to help with programming for larger batches.

Can we submit proposals about studying/optimizing parallel deep learning performance itself? (So not about using Neocortex to run actual scientific workloads, but more about benchmarking and developing performance models.)

Yes, those proposals are also welcome. We already have a couple of projects that fall under this category. It is worth noting that a project of this nature requires very close and involved collaboration between PSC, the vendors, and the project members; for this reason, the number of such projects we can support is limited.

Could you elaborate on what other HPC workloads are suitable to run on this platform, and which features make them suitable for it?

This question was answered live: [54:30]

What does it mean that the cores are AI-optimized, in terms of the architecture of the cores, compared to a traditional core currently used for traditional simulations in HPC?

We have much higher memory bandwidth: a traditional GPU/CPU core sits behind a memory hierarchy, while we do not have one, so we can run things at a higher utilization. We also have a dataflow architecture, and the internal representation of the data is sparse.

Do you have an estimate of how much energy your platform could save for large model training compared to current systems?

This question was answered live: [55:34]
Does the data have to be explicitly represented as sparse tensors in order to utilize the CS-1?

No.

What is the CPU-core count to WSE ratio required for the Neocortex system? Do you need the full HPE Superdome to feed data and launch work to the WSE?

This depends on the workload. You do not need the Superdome Flex for most jobs, but there are some examples where the model is not compute-intensive and yet feeding it data requires the full SDF.

By sparse tensor representation, I mean something like CSR or CSC.

This question was answered live: [58:26]
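
For readers unfamiliar with the formats the question refers to: CSR (compressed sparse row) stores only a matrix's nonzero values plus two index arrays, rather than a full dense grid; CSC is the column-oriented analogue. A small SciPy illustration follows — it only explains the terminology and says nothing about the WSE's internal representation.

```python
# CSR stores a matrix as three dense arrays instead of a full 2-D grid.
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([
    [0, 0, 3],
    [4, 0, 0],
    [0, 5, 6],
])
sparse = csr_matrix(dense)

print(sparse.data)     # [3 4 5 6]   nonzero values, row by row
print(sparse.indices)  # [2 0 1 2]   column index of each value
print(sparse.indptr)   # [0 1 2 4]   offsets where each row starts in data/indices
```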

Do you also accept proposals on non-AI HPC applications, such as computationally intensive data analysis?

While at this time we are mostly focused on AI-based applications, we do expect to start making the system available to other types of HPC applications. We encourage non-AI HPC proposals.

How will the compute resources be requested and allocated? Will users use something similar to Slurm to request multiple CS cores?

Each project is granted an allocation with a specific amount of resources on Neocortex and Bridges-2, which can be expanded as needed. We leverage Slurm to support both batch and interactive compute. Each Slurm job is allocated an entire CS-1 (all the cores on a WSE).
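
As a rough sketch of what that Slurm workflow could look like, the snippet below writes a batch script and submits it. The partition name, script contents, and file names are hypothetical; consult the Neocortex user documentation for the actual job options.

```python
# Hypothetical sketch of submitting a Neocortex job through Slurm.
import subprocess
import textwrap

# Each Slurm job receives an entire CS-1; flags below are illustrative only.
job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=cs1-train
    #SBATCH --partition=neocortex   # hypothetical partition name
    #SBATCH --time=02:00:00
    srun python train.py --params params.yaml
""")

with open("train.sb", "w") as f:
    f.write(job_script)

# Submit the batch job; interactive sessions would go through srun/salloc instead.
subprocess.run(["sbatch", "train.sb"], check=True)
```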


Contact us

Email us at neocortex@psc.edu