In the age of big data and computational methods, it is increasingly common for researchers to rely on data sets that were not originally intended for research purposes, such as administrative records and media content (e.g. tweets). While these data sets constitute sources of novel insights and knowledge, they often lack appropriate documentation and information about how they were created. This makes the work of researchers more burdensome and, in some cases, less reliable. The project ‘Building a FAIR Expertise Hub for the social sciences’, recently awarded funding by the Platform Digitale Infrastructuur Social Science and Humanities (PDI-SSH), aims to address this issue by supporting data providers in the social sciences in improving the Findability, Accessibility, Interoperability and Reusability (FAIRness) of their data.
The FAIR guiding principles for data were proposed in a 2016 article by Wilkinson et al. in Scientific Data. Their goal is to enhance transparency and reproducibility by fostering the ability of both machines and humans to find and reuse existing data. “This may sound very abstract, but it has some practical implications,” explains Angelica Maineri, the project manager of the FAIR Expertise Hub. “To give but one example, the FAIR principles include labelling data sets with a ‘globally unique and persistent identifier’, i.e. a unique sequence of alphanumeric characters assigned to a specific digital object. In a way this is similar to having a unique numeric sequence (e.g. the social security number – in the Netherlands, the BSN) assigned to a citizen. A persistent identifier makes it possible to verify the identity of a citizen (or the provenance of a data set), and to link personal records across different governmental service providers (or to track in which publications a certain data set has been used).”
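As a purely illustrative aside (not part of the project itself): the DOI is one widely used type of globally unique, persistent identifier. The minimal Python sketch below checks whether a string has the common shape of a modern DOI; the regular expression follows Crossref's recommended heuristic and is an assumption for illustration, not an official validator.

```python
import re

# Heuristic pattern for modern DOIs: "10." followed by a 4-9 digit
# registrant code, a slash, and a non-empty suffix. Matching the
# shape does not guarantee the identifier actually resolves.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(identifier: str) -> bool:
    """Return True if the string matches the common DOI shape."""
    return bool(DOI_PATTERN.match(identifier))

# The DOI of the 2016 article introducing the FAIR principles:
print(looks_like_doi("10.1038/sdata.2016.18"))  # True
print(looks_like_doi("not-a-persistent-id"))    # False
```

A shape check like this is only the first step; in practice a persistent identifier also needs a resolver (for DOIs, doi.org) that maps the identifier to the current location of the object, which is what keeps links to data sets from breaking over time.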
The FAIR guiding principles are widely endorsed by research organisations and funders across the globe; however, actual implementation is not always straightforward, Maineri indicates: ‘On the one hand, some of the existing resources for achieving the FAIR principles are not mature enough to be widely adopted. On the other hand, it is also difficult for data providers to select the most appropriate resource when many are available. In this fragmented landscape, the FAIR Expertise Hub aims at supporting data providers who wish to implement the FAIR principles but lack the knowledge, skills and incentives to do so’.
Thanks to specific tools developed within the FAIR community and promoted by the FAIR Expertise Hub, data providers will be able to make decisions about the FAIR standards they wish to adopt and devise a strategy to implement them. A data steward at Erasmus University Rotterdam (EUR) and a computer scientist at VU University Amsterdam will support data providers in this process, while also ensuring alignment with international standards, both across research domains and across data providers within the same domain.
The project is a collaboration between the Erasmus School of Social and Behavioural Sciences at EUR, the Computer Science Department at VU and DANS-KNAW. As the national data infrastructure for the Dutch social sciences, ODISSEI will facilitate the exchange between the project executors and the data providers.
The FAIR Expertise Hub aims to provide practical and tailored guidance. To this end, project members will soon start conversations with key stakeholders in the Dutch social data landscape to identify gaps and wishes related to the implementation of the FAIR principles. ‘Even though the FAIR Expertise Hub will primarily target data providers in the social science domain, the benefits will extend beyond that. Making data sources FAIR unlocks new research opportunities, which ultimately contribute to a better understanding of society’, Maineri concludes.
If you want to exchange ideas or discuss how the FAIR Expertise Hub could help you develop a FAIR implementation strategy, please contact Angelica Maineri (email@example.com).