We’ve always had data, and lots of it: data of different forms and natures, structured and unstructured, more or less heterogeneous, often sensitive and messy, and always complex to analyze, store, and archive.
The digitalization of our research environments, the demand for greater fairness and ethics in data collection, analysis, and storage, and the possibility of extending our research to very large or otherwise inaccessible corpora all raise questions, both new and old, forcing us to reconsider our relationship with our dear data.
This transdisciplinary initiative gathers research data specialists, research analysts, engineers and researchers, creating bridges between the existing digital infrastructures across campuses, and imagining modular open-source digital research environments for years to come.
For more advanced researchers, we aim to share good practices and innovations in what computational tools can help us do across domains, whether for text or image analysis, digital edition and collation, web scraping and analytics, or the analysis of large data collections.
To conduct this work, we’ve partnered with units across EPFL and the University of Lausanne that have long been invested in these questions.
If you wish to know more about or contribute to this initiative, please contact us.