
Database anonymization : privacy models, data utility, and microaggregation-based inter-model connections / Josep Domingo-Ferrer, David Sánchez, and Jordi Soria-Comas, Universitat Rovira i Virgili, Tarragona, Catalonia.

By: Domingo-Ferrer, Josep
Contributor(s): Sánchez, David | Soria-Comas, Jordi
Material type: Text
Series: Synthesis lectures on information security, privacy and trust ; # 15.
Publisher: San Rafael, California : Morgan & Claypool Publishers, 2016
Copyright date: ©2016
Description: xv, 120 pages : illustrations, diagrams, graphs ; 24 cm
Content type:
  • text
Media type:
  • unmediated
Carrier type:
  • volume
ISBN:
  • 9781627058445
  • 1627058443
Subject(s):
LoC classification:
  • HF 5548.37 D65.2016
Contents:
1. Introduction --
2. Privacy in data releases -- 2.1 Types of data releases -- 2.2 Microdata sets -- 2.3 Formalizing privacy -- 2.4 Disclosure risk in microdata sets -- 2.5 Microdata anonymization -- 2.6 Measuring information loss -- 2.7 Trading off information loss and disclosure risk -- 2.8 Summary --
3. Anonymization methods for microdata -- 3.1 Non-perturbative masking methods -- 3.2 Perturbative masking methods -- 3.3 Synthetic data generation -- 3.4 Summary --
4. Quantifying disclosure risk: record linkage -- 4.1 Threshold-based record linkage -- 4.2 Rule-based record linkage -- 4.3 Probabilistic record linkage -- 4.4 Summary --
5. The k-anonymity privacy model -- 5.1 Insufficiency of data de-identification -- 5.2 The k-anonymity model -- 5.3 Generalization and suppression based k-anonymity -- 5.4 Microaggregation-based k-anonymity -- 5.5 Probabilistic k-anonymity -- 5.6 Summary --
6. Beyond k-anonymity: l-diversity and t-closeness -- 6.1 l-diversity -- 6.2 t-closeness -- 6.3 Summary --
7. t-closeness through microaggregation -- 7.1 Standard microaggregation and merging -- 7.2 t-closeness aware microaggregation: k-anonymity-first -- 7.3 t-closeness aware microaggregation: t-closeness-first -- 7.4 Summary --
8. Differential privacy -- 8.1 Definition -- 8.2 Calibration to the global sensitivity -- 8.3 Calibration to the smooth sensitivity -- 8.4 The exponential mechanism -- 8.5 Relation to k-anonymity-based models -- 8.6 Differentially private data publishing -- 8.7 Summary --
9. Differential privacy by multivariate microaggregation -- 9.1 Reducing sensitivity via prior multivariate microaggregation -- 9.2 Differentially private data sets by insensitive microaggregation -- 9.3 General insensitive microaggregation -- 9.4 Differential privacy with categorical attributes -- 9.5 A semantic distance for differential privacy -- 9.6 Integrating heterogeneous attribute types -- 9.7 Summary --
10. Differential privacy by individual ranking microaggregation -- 10.1 Limitations of multivariate microaggregation -- 10.2 Sensitivity reduction via individual ranking -- 10.3 Choosing the microaggregation parameter k -- 10.4 Summary --
11. Conclusions and research directions -- 11.1 Summary and conclusions -- 11.2 Research directions -- Bibliography -- Authors' biographies.
Summary: The current social and economic context increasingly demands open data to improve scientific research and decision making. However, when published data refer to individual respondents, disclosure risk limitation techniques must be implemented to anonymize the data and guarantee by design the fundamental right to privacy of the subjects the data refer to. Disclosure risk limitation has a long record in the statistical and computer science research communities, which have developed a variety of privacy-preserving solutions for data releases. This Synthesis Lecture provides a comprehensive overview of the fundamentals of privacy in data releases, focusing on the computer science perspective. Specifically, we detail the privacy models, anonymization methods, and utility and risk metrics that have been proposed so far in the literature. In addition, as a more advanced topic, we identify and discuss in detail connections between several privacy models (i.e., how to accumulate the privacy guarantees they offer to achieve more robust protection and when such guarantees are equivalent or complementary); we also explore the links between anonymization methods and privacy models (how anonymization methods can be used to enforce privacy models and thereby offer ex ante privacy guarantees). These latter topics are relevant to researchers and advanced practitioners, who will gain a deeper understanding of the available data anonymization solutions and the privacy guarantees they can offer.
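
As a concrete illustration of the link the summary describes (an anonymization method used to enforce a privacy model), the following minimal Python sketch applies fixed-size microaggregation to a single numeric quasi-identifier so that the masked attribute satisfies k-anonymity. This is not code from the book; the function name and sample data are hypothetical, and the fixed-size grouping is only one simple heuristic among the microaggregation algorithms the book surveys.

    def microaggregate(values, k):
        # Sort record indices by attribute value.
        order = sorted(range(len(values)), key=lambda i: values[i])
        out = [0.0] * len(values)
        start = 0
        while start < len(order):
            # Take k records per cluster; fold a short tail (< k records)
            # into the last cluster so every cluster has at least k members.
            end = len(order) if len(order) - start < 2 * k else start + k
            cluster = order[start:end]
            # Replace each value in the cluster by the cluster mean, so the
            # masked value is shared by at least k records (k-anonymity on
            # this attribute).
            centroid = sum(values[i] for i in cluster) / len(cluster)
            for i in cluster:
                out[i] = centroid
            start = end
        return out

    # Example: ages microaggregated with k = 3; every output value is
    # shared by at least 3 records.
    ages = [23, 25, 31, 34, 36, 41, 44, 52]
    print(microaggregate(ages, 3))

Replacing values with cluster centroids, rather than generalizing or suppressing them, is what lets microaggregation trade disclosure risk against information loss: the within-cluster spread is the utility cost of the k-anonymity guarantee.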
Holdings
Item type: Books
Current library: Biblioteca Francisco Xavier Clavigero
Collection: Acervo General
Call number: HF 5548.37 D65.2016
Copy number: copy 1
Status: Available
Barcode: UIA167405

Includes bibliography (pages 109-118).
