Poster in Workshop: Will Synthetic Data Finally Solve the Data Access Problem?
[Tiny] Understanding the Impact of Data Domain Extraction on Synthetic Data Privacy
Georgi Ganev · Meenatchi Sundaram Muthu Selva Annamalai · Sofiane Mahiou · Emiliano De Cristofaro
Privacy attacks, particularly membership inference attacks (MIAs), are widely used to assess the privacy of generative models for tabular synthetic data, including those with Differential Privacy (DP) guarantees. These attacks often exploit outliers, which are especially vulnerable due to their position at the boundaries of the data domain (e.g., at the minimum and maximum values). However, the role of data domain extraction in generative models and its impact on privacy attacks have often been overlooked in practice. In this paper, we examine three strategies for defining the data domain: assuming it is externally provided (ideally from public data), extracting it directly from the input data, and extracting it with DP mechanisms. While common in popular implementations and libraries, we show that the second approach breaks end-to-end DP guarantees and leaves models vulnerable. While using a provided domain (if representative) is preferable, extracting it with DP can also defend against popular MIAs, even with high privacy budgets.
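The three strategies above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's exact mechanisms: the DP variant here clips to a coarse public range and adds Laplace noise calibrated to the range width (a simple but valid mechanism, since under add/remove of one record the min or max of clipped data can shift by at most the range width); the function names and the even epsilon split are assumptions for the example.

```python
import numpy as np

def naive_domain(data):
    # Strategy 2: extract bounds directly from the private input data.
    # This leaks the exact outlier values and breaks end-to-end DP.
    return float(np.min(data)), float(np.max(data))

def provided_domain(public_lo, public_hi):
    # Strategy 1: use an externally supplied (ideally public) domain.
    # No privacy budget is spent on the bounds.
    return float(public_lo), float(public_hi)

def dp_domain(data, public_lo, public_hi, eps, rng=None):
    # Strategy 3 (illustrative): clip to a coarse public range, then
    # release noisy min/max via the Laplace mechanism. Sensitivity of
    # each clipped bound is (public_hi - public_lo); eps is split
    # evenly between the two releases.
    rng = np.random.default_rng() if rng is None else rng
    clipped = np.clip(data, public_lo, public_hi)
    scale = (public_hi - public_lo) / (eps / 2)
    lo = float(np.min(clipped)) + rng.laplace(0.0, scale)
    hi = float(np.max(clipped)) + rng.laplace(0.0, scale)
    return lo, hi

# Example: one outlier at 99.0 sits exactly at the naive maximum,
# so releasing naive bounds reveals its value to an attacker.
data = np.array([1.2, 3.4, 2.2, 0.9, 99.0])
print(naive_domain(data))            # exposes the outlier: (0.9, 99.0)
print(provided_domain(0.0, 150.0))   # public bounds, no leakage
print(dp_domain(data, 0.0, 150.0, eps=1.0))  # noisy bounds
```

Note that with a high privacy budget the noisy bounds stay close to the true ones, yet (as the abstract argues) even this loose randomization can be enough to blunt MIAs that key on exact boundary values.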