Efficiently training large deep learning models requires scaling training across many GPUs. When training at scale, model synchronization across GPUs introduces significant overhead. To tackle this problem, researchers from Nvidia, Facebook, Uber, and Google have borrowed the idea of collective communication from the HPC domain to develop fast model synchronization schemes (e.g., NCCL from Nvidia, Horovod from Uber, Gloo from Facebook). However, these schemes are still far from optimal. To achieve near-optimal model synchronization performance, we propose Blink, a fast and generic collective communication library for distributed machine learning. Blink works regardless of topology heterogeneity, link heterogeneity (e.g., PCIe, NVLink, InfiniBand), and hardware heterogeneity (e.g., CPU, GPU). Compared with the state-of-the-art scheme (NCCL 2.1.15, released in Mar. 2018), Blink achieves a 2-8x speedup for model synchronization in distributed ML.
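To make the collective-communication primitive concrete: model synchronization is typically an all-reduce, where every worker ends up with the element-wise sum of all workers' gradient buffers. The sketch below simulates the classic ring all-reduce (a common baseline in libraries like NCCL and Horovod, not Blink's own algorithm) on plain Python lists; the worker count, buffer contents, and function name are illustrative assumptions, not part of the paper.

```python
def ring_allreduce(buffers):
    """Simulate ring all-reduce: sum-reduce equal-length buffers across
    n workers so every worker ends with the full element-wise sum.

    Each buffer is split into n chunks. Reduce-scatter: chunk k travels
    around the ring accumulating partial sums, so after n-1 steps each
    worker owns one fully reduced chunk. All-gather: the reduced chunks
    circulate for another n-1 steps until everyone has every chunk.
    (This is a single-process simulation, not real GPU communication.)
    """
    n = len(buffers)
    length = len(buffers[0])
    chunk = (length + n - 1) // n
    bounds = [(i * chunk, min((i + 1) * chunk, length)) for i in range(n)]

    # Reduce-scatter phase: worker i sends chunk (i - step) mod n to its
    # ring neighbor, which adds it into its own copy of that chunk.
    for step in range(n - 1):
        for i in range(n):
            dst = (i + 1) % n
            lo, hi = bounds[(i - step) % n]
            for j in range(lo, hi):
                buffers[dst][j] += buffers[i][j]

    # All-gather phase: worker i forwards the fully reduced chunk
    # (i + 1 - step) mod n; receivers overwrite rather than add.
    for step in range(n - 1):
        for i in range(n):
            dst = (i + 1) % n
            lo, hi = bounds[(i + 1 - step) % n]
            buffers[dst][lo:hi] = buffers[i][lo:hi]

    return buffers
```

Each of the 2(n-1) steps moves only 1/n of the buffer per worker, which is why ring all-reduce's bandwidth cost stays nearly constant as the number of GPUs grows; schemes like Blink improve on it by adapting the communication pattern to the actual link topology.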