We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k^{\text{th}}$-moment bound on the Lipschitz constants of sample functions rather than a uniform bound. We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G_2 \cdot \frac{1}{\sqrt{n}} + G_k \cdot \left(\frac{\sqrt{d}}{n\varepsilon}\right)^{1 - \frac{1}{k}}$ under $(\varepsilon, \delta)$-approximate differential privacy…
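For concreteness, under one standard reading of the notation (an assumption on our part, since the abstract does not define the symbols here), $G_k$ bounds the $k^{\text{th}}$ moment of the per-sample Lipschitz constant, in contrast to the uniform bound of the classical Lipschitz setting:
\[
  \mathbb{E}_{x \sim \mathcal{D}}\big[\, L(x)^k \,\big]^{1/k} \le G_k
  \qquad \text{versus} \qquad
  \sup_{x} L(x) \le G ,
\]
where $L(x)$ denotes the Lipschitz constant of the sample function $f(\cdot\,; x)$, and $n$, $d$, $\varepsilon$ in the rate above are the sample size, dimension, and privacy parameter, respectively.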