One of the most useful features of the Dataverse repository software is the large number of metadata fields it provides for describing research data. This guide is intended to support both novice and experienced users in creating metadata for datasets in a Dataverse repository. It provides official definitions of metadata fields with clarifications and tips, distinguishes between required, recommended, and optional fields, and illustrates the use of fields with examples. This version of the guide has been updated to cover all available metadata fields: citation, geospatial, social science and humanities, astronomy and astrophysics, life sciences, and journal metadata. The guide was created with permission from Harvard for the use of definitions and from the Texas Digital Library for the basic design. This guide is also available in French.
The Institut Pasteur supports Open Science through its adoption, in May 2021, of two founding texts: a charter for open access to publications and a policy for the management and sharing of research data and software code.
In the arts and humanities, digital data production is still expensive, challenging, and time-consuming. We all know this, and yet the results of these processes often cannot be reused by other researchers, meaning that we reinvent (or redigitise) the wheel far too often. This resource offers practical advice for arts and humanities scholars who want to take their first steps in research data management but don't know where to begin. Our approach views data management as a reflective process that exposes and tweaks existing behaviours, rather than one that introduces specific tools. It is intended to encourage awareness of one's own processes, mindfulness about how they could be more open, and an understanding of how small changes across three points in your research workflow can make big differences.
This framework supports both the development and review of Institutional Strategies for Research Data Management (RDM). It can be used by administrators, service providers, strategic analysts, and researchers themselves to explore the spectrum of RDM engagement, support, and resources offered by their institution.
This document provides an overview of the Qualitative Data Repository's (QDR) internal curation process. The process includes standardized steps from initial depositor contact, through file-processing procedures and Dataverse repository operations, to publication of the data project and beyond.
The goal of this workshop is to provide participants with the opportunity to develop their understanding of the Canadian Research Data Management landscape. This Workshop in a Box plan was developed by the NDRIO Portage Network (‘Portage’) in collaboration with Fanshawe College.
There are videos and case studies associated with the book.
Focused on both primary and secondary data and packed with checklists and templates, it contains everything readers need to know for managing all types of data before, during and after the research process.
-Minimum requirements for data management plans
-Criteria for selecting trustworthy repositories
-Guidance for reviewers evaluating DMPs
An online tool that helps researchers and data managers assess how much they know about the requirements for making datasets findable, accessible, interoperable, and reusable (FAIR) before uploading them into a data repository.
What would open data for a typical synthetic organic chemistry paper look like? What would open data for a typical molecular dynamics based paper look like? How raw should the deposited data be? Do funders have a view on, for example, whether I should deposit an NMR spectrum or the actual FID (free induction decay), which can then be processed to give the spectrum? What about iterative experiments? If I quote a yield of 80% for a synthesis, should I deposit data only for that synthesis, or also for all the iterated syntheses that led to the final one?
Open Babel is a chemical toolbox designed to speak the many languages of chemical data. It's an open, collaborative project allowing anyone to search, convert, analyze, or store data from molecular modeling, chemistry, solid-state materials, biochemistry, or related areas.
Gives good examples of what to write in a DMP, examples of poor or incomplete answers, and a rubric for how a DMP is evaluated.
To assist researchers in developing transparency-related materials for a project; to assist researchers in determining which materials are appropriate for internal documentation and which would be useful or necessary to outsiders seeking to understand the project; and to serve as a project "table of contents".
FAQ for a research data center service that packages FID files from NMR spectra into zip archives, adding some extra metadata to each archive. The resulting files can be submitted to journals or ChemRxiv for storage, or kept by the researcher.
Introduction to Clinical Data Clinical data is either collected during the course of ongoing patient care or as part of a formal clinical trial program. Funding agencies, publishers, and research communities are increasingly encouraging researchers to share data, while respecting Institutional Review Board (IRB) and federal restrictions against disclosing identifiers of human subjects.
This article offers practical recommendations for organizing spreadsheet data to reduce errors and ease later analyses. The basic principles are: be consistent; write dates as YYYY-MM-DD; do not leave any cells empty; put just one thing in a cell; organize the data as a single rectangle (with subjects as rows and variables as columns, and with a single header row); create a data dictionary; do not include calculations in the raw data files; do not use font color or highlighting as data; choose good names for things; make backups; use data validation to avoid data entry errors; and save the data in plain text files.
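Several of these principles (no empty cells, a single rectangle with one header row, YYYY-MM-DD dates) can be checked mechanically once the data is saved as plain text. The sketch below is illustrative only, not part of the article; the function name and its interface are hypothetical.

```python
import csv
import io
import re

# YYYY-MM-DD, the recommended unambiguous date format
ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def check_rectangle(csv_text, date_columns=()):
    """Flag common spreadsheet problems in CSV text: ragged rows
    (not a single rectangle), empty cells, and dates that are not
    written as YYYY-MM-DD. Returns a list of issue descriptions."""
    issues = []
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    for row_num, row in enumerate(reader, start=2):
        if len(row) != len(header):
            issues.append(f"row {row_num}: expected {len(header)} cells, got {len(row)}")
            continue
        for name, value in zip(header, row):
            if value.strip() == "":
                issues.append(f"row {row_num}: empty cell in column '{name}'")
            elif name in date_columns and not ISO_DATE.match(value):
                issues.append(f"row {row_num}: date '{value}' in '{name}' is not YYYY-MM-DD")
    return issues

data = "id,visit_date,weight\n1,2021-05-03,72\n2,05/03/2021,\n"
for issue in check_rectangle(data, date_columns=["visit_date"]):
    print(issue)
```

Running this on the two-row example reports both the non-ISO date and the empty `weight` cell, the kind of entry errors the article's data-validation advice is meant to catch at entry time rather than at analysis time.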
Self-guided online course on managing qualitative research data. Created by the Qualitative Data Repository (QDR) with support from the Social Science Research Council (SSRC).
We've designed a distributed system for sharing enormous datasets - for researchers, by researchers. The result is a scalable, secure, and fault-tolerant repository for data, with blazing fast download speeds.
Use this checklist to rate your graphs. Designed and tested by PhDs, this checklist walks you through the formatting steps you’ll need to take to make sure the story in your data is shining through. We’ll give you tips to strengthen your skills in any checkpoint where you don’t score well.
Representatives from journals, journal publishers and scholarly communication organisations have come together in the FAIRsharing Community to propose a set of criteria for the identification and selection of those data repositories that accept research data submissions. These repositories can be recommended to researchers when they are preparing to release and publish the data underlying their findings. This work intends to (i) reduce complexity and inconsistencies for researchers in journal data policies, (ii) increase efficiency for data repositories that currently have to work with all individual publishers, and (iii) simplify the process of recommending data repositories by publishers. This work will make the implementation of research data policies more efficient and consistent, which may help to improve approaches to data sharing through the promotion and the use of reliable and sustainable data repositories.
Within the project FDMentor, a German Train-the-Trainer Programme on Research Data Management (RDM) was developed and piloted in a series of workshops. The topics cover many aspects of research data management, such as data management plans and the publication of research data, as well as didactic units on learning concepts, workshop design and a range of didactic methods. After the end of the project, the concept was supplemented and updated by members of the Sub-Working Group Training/Further Education (UAG Schulungen/Fortbildungen) of the DINI/nestor Working Group Research Data (DINI/nestor-AG Forschungsdaten). The newly published English version of the Train-the-Trainer Concept contains the translated concept, the materials and all methods of the Train-the-Trainer Programme. Furthermore, additional English references and materials complement this version.
A really good explanation of a workshop series for librarians. It starts with general RDM workshops, progresses to subject-specific workshops, and ends with very detailed workshops on specific data topics. See also: https://deepblue.lib.umich.edu/handle/2027.42/117636
Understand what data you need to include when publishing an article in Wellcome Open Research, where your data can be deposited, and how your data should be presented.
University business conducted using University tools is in compliance with University regulations and policy, and is protected by contractual and other security measures not available to consumer tools.
See all of their IT standards, including their security classifications: https://cio.ubc.ca/information-security/information-security-policy-standards-and-resources
Transferring data to and from your analysis environments is now a more challenging endeavor due to a number of factors, including increases in dataset sizes, growing volumes of confidential information and data-sensitivity considerations, and additional technology protections intended to enhance data security. We’ve assembled the information below to give a comprehensive overview and to help in the decision-making process.