Dask allows users to scale their Python code; however, it is not usually easy to provision the machines needed to run that code. Google Cloud offers a wide range of machine types and sizes (200+ CPUs, 600+ GB of memory) that can be provisioned in minutes. Add a wide range of GPUs, including the single-node 16 A100 GPU shape, and you can run Dask on the cluster of your dreams.
The talk will show examples of using Google Cloud's AI Platform to run a variety of jobs with Dask and RAPIDS on a variety of machine types. It will loosely follow this blog post: https://cloud.google.com/blog/products/ai-machine-learning/scale-model-training-in-minutes-with-rapids-dask-nvidia-gpus
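As a flavor of the examples, here is a minimal sketch of the Dask pattern the talk builds on: the same code that runs on a local stand-in cluster runs unchanged against a cloud-hosted scheduler (and RAPIDS swaps in GPU-backed arrays/dataframes). The cluster settings below are illustrative assumptions, not the talk's actual configuration.

```python
# Minimal Dask sketch: a local, in-process cluster stands in for a
# provisioned cloud cluster. On Google Cloud you would instead point
# Client() at a remote scheduler address; the computation code is identical.
from dask.distributed import Client
import dask.array as da

# Local stand-in cluster (assumed settings; processes=False keeps it in-process)
client = Client(processes=False, n_workers=2, threads_per_worker=1)

# A chunked random array: each 1000x1000 chunk is an independent task
# that the scheduler spreads across workers.
x = da.random.random((4_000, 4_000), chunks=(1_000, 1_000))

# .mean() builds a task graph; .compute() executes it on the cluster.
result = x.mean().compute()
print(result)  # mean of uniform [0, 1) samples, approximately 0.5

client.close()
```

With RAPIDS, `dask.array` would be replaced by GPU-backed equivalents such as `dask_cudf`, and the `Client` would connect to a `dask_cuda` cluster running on the GPU machine shapes described above.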