19 Research products, page 1 of 2
- Publication . Report . 2019 . English
  Authors: Szprot, Jakub; Arpagaus, Brigitte; Ciula, Arianna; Clivaz, Claire; Gabay, Simon; Honegger, Matthieu; Hughes, Lorna; Immenhauser, Beat; Jakeman, Neil; Lhotak, Martin; Romanova, Natasha; Ros, Salvador; Schulthess, Sara; Tahko, Tuuli; Tolonen, Mikko; Erdinast Vulcan, Daphna; Willa, Pierre; Zehavi, Ora
  Publisher: HAL CCSD. Country: France. Project: EC | DESIR (731081)
This report provides information about activities and progress towards establishing DARIAH membership in six countries: the Czech Republic, Finland, Israel, Spain, Switzerland, and the UK, which took place between July and December 2019. Previous activities were described in detail in deliverable D3.2, Regularly Monitor Country-Specific Progress in Enabling New DARIAH Membership. During the project lifetime, the Czech Republic joined DARIAH ERIC; in the other countries, collaboration with DARIAH has been greatly strengthened and significant progress towards DARIAH membership has been achieved. The report also outlines the next steps in the accession processes, building on the results of the DESIR project.
- Publication . Report . 2019 . English
  Authors: Tahko, Tuuli; Zehavi, Ora; Lhotak, Martin; Romanova, Natasha; Clivaz, Claire; Ros, Salvador; Raciti, Marco
  Publisher: HAL CCSD. Country: France. Projects: EC | Locus Ludi (741520), EC | DESIR (731081)
The DESIR project sets out to strengthen the sustainability of DARIAH and to firmly establish it as a long-term leader and partner within the arts and humanities communities. The project was designed to address six core infrastructural sustainability dimensions, one of which was dedicated to training and education, also one of the four pillars identified in the DARIAH Strategic Plan 2019-2026. In the framework of Work Package 7: Teaching, DESIR organised dedicated workshops in the six DARIAH accession countries (Czech Republic, Finland, Israel, Spain, Switzerland and the United Kingdom) to introduce them to the DARIAH infrastructure and related services, and to develop methodological research skills. The topic of each workshop was decided by the accession countries' representatives according to the training needs of the national communities of researchers in the (Digital) Humanities. Training topics varied greatly: some workshops aimed to introduce participants to specific methodological research skills, while other events took a different approach and focused on the infrastructural role of training and education. The workshops organised in the context of Work Package 7: Teaching are listed below:
  • CZECH REPUBLIC: “A series of fall tutorials 2019 organized by LINDAT/CLARIAH-CZ, tutorial #3 on TEI Training”, 28 November 2019, Prague;
  • FINLAND: “Reuse & sustainability: Open Science and social sciences and humanities research infrastructures”, 23 October 2019, Helsinki;
  • ISRAEL: “Introduction to Text Encoding and Digital Editions”, 24 October 2019, Haifa;
  • SPAIN: “DESIR Workshop: Digital Tools, Shared Data, and Research Dissemination”, 3 July 2019, Madrid;
  • SWITZERLAND: “Sharing the Experience: Workflows for the Digital Humanities”, 5-6 December 2019, Neuchâtel;
  • UNITED KINGDOM: “Research Software Engineering for Digital Humanities: Role of Training in Sustaining Expertise”, 9 December 2019, London.
- Publication . 2018 . French
  Authors: Ginouvès, Véronique; Gras, Isabelle
  Publisher: HAL CCSD. Country: France
By way of an afterword, it seemed necessary to us to revisit the collaborative process behind the making of this book and to share with you the genesis of the project. Everything started from a pragmatic observation, rooted in our everyday work situations: researchers who produce or use data need concrete answers to the questions they face in the field and throughout their research. Producing, exploiting, disseminating, sharing, or editing digital sources is now part of our ordinary work. The disruption brought about by the development of the web and the arrival of digital formats has greatly facilitated the dissemination and sharing of resources (documentary, textual, photographic, sound, or audiovisual) within the research world and, beyond it, among citizens who are increasingly curious about and interested in the documents produced by scientists.
- Publication . Article . Conference object . Preprint . 2019 . Open Access . English
  Authors: Lilia Simeonova; Kiril Simov; Petya Osenova; Preslav Nakov
We propose a morphologically informed model for named entity recognition, based on the LSTM-CRF architecture, which combines word embeddings, Bi-LSTM character embeddings, part-of-speech (POS) tags, and morphological information. While previous work has focused on learning from raw word input, using word and character embeddings only, we show that for morphologically rich languages such as Bulgarian, access to POS information contributes more to the performance gains than detailed morphological information does. Thus, we show that named entity recognition needs only coarse-grained POS tags, but at the same time it can benefit from simultaneously using POS information of different granularity. Our evaluation results over a standard dataset show sizable improvements over the state of the art for Bulgarian NER.
Keywords: named entity recognition; Bulgarian NER; morphology; morpho-syntax
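The input representation the abstract describes can be pictured as a simple concatenation of per-token features. The following is a minimal sketch, not the authors' implementation: the character encoding is a stand-in vector (a Bi-LSTM produces it in the paper), and the coarse tag set and dimensions are illustrative assumptions.

```python
# Sketch of the token representation: concatenate a word embedding,
# a character-level encoding, and a one-hot coarse-grained POS tag.
# The resulting vector would feed the Bi-LSTM-CRF sequence tagger.

POS_TAGS = ["NOUN", "VERB", "ADJ", "PROPN", "OTHER"]  # illustrative coarse tags

def pos_one_hot(tag):
    """One-hot encode a coarse POS tag."""
    return [1.0 if t == tag else 0.0 for t in POS_TAGS]

def token_features(word_emb, char_emb, pos_tag):
    """Build the [word; char; POS] feature vector for one token."""
    return list(word_emb) + list(char_emb) + pos_one_hot(pos_tag)

# Example token: 50-dim word vector, 25-dim character encoding, PROPN tag.
word_emb = [0.1] * 50
char_emb = [0.2] * 25
features = token_features(word_emb, char_emb, "PROPN")
print(len(features))  # 80
```

The paper's finding is then about which slice of this vector matters: the coarse POS one-hot contributes more than a fine-grained morphological feature block would.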
- Publication . Article . Preprint . 2020 . Embargo End Date: 01 Jan 2020 . Open Access
  Authors: Zamani, Maryam; Tejedor, Alejandro; Vogl, Malte; Krautli, Florian; Valleriani, Matteo; Kantz, Holger
  Publisher: arXiv
We investigated the evolution and transformation of scientific knowledge in the early modern period, analyzing more than 350 different editions of textbooks used for teaching astronomy in European universities from the late fifteenth century to the mid-seventeenth century. These historical sources constitute the Sphaera Corpus. By examining different semantic relations among individual parts of each edition on record, we built a multiplex network consisting of six layers, as well as the aggregated network built from the superposition of all the layers. The network analysis reveals the emergence of five different communities. The contribution of each layer in shaping the communities and the properties of each community are studied. The most influential books in the corpus are found by calculating the average age of all the outgoing and incoming links for each book. A small group of editions is identified as transmitters of knowledge, as they bridge past knowledge to the future across a long temporal interval. Our analysis, moreover, identifies the most disruptive books: these introduce new knowledge that is then adopted by almost all the books published afterwards until the end of the period of study. Historical research on the content of the identified books, as an empirical test, corroborates the results of our analyses.
Comment: 19 pages, 9 figures
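The "average link age" measure can be illustrated on a toy example. This is not the authors' code or data, only a sketch of the idea: editions are dated nodes, each directed link runs from an earlier edition to a later one that reuses its parts, and a book's score is the mean time span of all links touching it, so high values flag editions that bridge knowledge across long intervals.

```python
from statistics import mean

# Toy corpus: publication years and earlier -> later reuse links.
editions = {"A": 1490, "B": 1520, "C": 1600, "D": 1650}
links = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

def average_link_age(book):
    """Mean year-span of all incoming and outgoing links of a book."""
    spans = [editions[v] - editions[u]
             for (u, v) in links if book in (u, v)]
    return mean(spans)

for book in editions:
    print(book, average_link_age(book))
# e.g. A's average link age is (30 + 110) / 2 = 70
```

In the corpus study, editions whose links span unusually long intervals play exactly this bridging role between past and future knowledge.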
- Publication . Article . 2020 . Open Access
  Authors: Riccardo Pozzo; Andrea Filippetti; Mario Paolucci; Vania Virgili
  Publisher: Oxford University Press (OUP). Country: Italy
Abstract: This article introduces the notion of cultural innovation, which requires adapting our approach to co-creation. The argument opens with a first conceptualization of cultural innovation as an additional and autonomous category of the complex processes of co-creation. The dimensions of cultural innovation are contrasted with other forms of innovation. In a second step, the article makes an unprecedented attempt at describing the processes and outcomes of cultural innovation, while showing their operationalization in some empirical case studies. In the conclusion, the article considers the policy implications of the novel definition of cultural innovation as the outcome of complex processes that involve the reflection of knowledge flows across the social environment within communities of practice, while fostering the inclusion of diversity in society. First and foremost, cultural innovation takes a critical stance against inequalities in the distribution of knowledge and builds innovation for improving the welfare of individuals and communities.
- Publication . Article . 2018 . Open Access . English
  Authors: Terras, Melissa; Baker, James; Hetherington, James; Beavan, David; Welsh, Anne; O'Neill, Helen; Finley, Will; Duke-Williams, Oliver; Farquhar, Adam
  Publisher: Oxford University Press. Country: United Kingdom
Although there has been a drive in the cultural heritage sector to provide large-scale, open data sets for researchers, we have not seen a commensurate rise in humanities researchers undertaking complex analysis of these data sets for their own research purposes. This article reports on a pilot project at University College London, working in collaboration with the British Library, to scope out how high-performance computing facilities can best be used to meet the needs of researchers in the humanities. Using institutional data-processing frameworks routinely used to support scientific research, we assisted four humanities researchers in analysing 60,000 digitized books, and we present two resulting case studies here. This research allowed us to identify infrastructural and procedural barriers and to make recommendations on resource allocation to best support non-computational researchers in undertaking ‘big data’ research. We recommend that research software engineer capacity can be most efficiently deployed in maintaining and supporting data sets, while librarians can provide an essential service in running initial, routine queries for humanities scholars. At present there are too many technical hurdles for most individuals in the humanities to consider analysing these increasingly available open data sets at scale, and by building on existing frameworks of support from research computing and library services, we can best support humanities scholars in developing methods and approaches to take advantage of these research opportunities.
- Publication . 2020 . English
  Authors: Kristanti, Tanti; Romary, Laurent
  Publisher: HAL CCSD. Country: France
This article presents an overview of the approaches and results from our participation in the CLEF HIPE 2020 NERC-COARSE-LIT and EL-ONLY tasks for English and French. For these two tasks, we used two systems: 1) DeLFT, a deep learning framework for text processing; and 2) entity-fishing, a generic named-entity recognition and disambiguation service deployed in the technical framework of INRIA.
- Publication . Article . 2020 . Open Access . English
  Authors: Luca Foppiano; Laurent Romary
  Publisher: HAL CCSD. Country: France. Project: EC | HIRMEOS (731102)
This paper presents a generic named-entity recognition and disambiguation (NERD) module called entity-fishing, delivered as a stable online service that demonstrates how sustainable technical services can be provided within DARIAH, the European digital research infrastructure for the arts and humanities. Deployed as part of the national infrastructure Huma-Num in France, the service provides an efficient state-of-the-art implementation coupled with standardised interfaces that allow easy deployment in a variety of digital humanities contexts. Accessibility and sustainability have long been discussed in the attempt to establish best practices in the widely fragmented ecosystem of the DARIAH research infrastructure, and the history of entity-fishing has been cited as an example of good practice: initially developed in the context of the FP7 project CENDARI, it was well received by the user community and was further developed within the H2020 HIRMEOS project, where several open access publishers have integrated the service into their collections of published monographs as a means to enhance retrieval and access. entity-fishing implements entity extraction as well as disambiguation against Wikipedia and Wikidata entries. The service is accessible through a REST API, which allows easy and seamless integration, a language-independent and stable convention, and a widely used service-oriented architecture (SOA) design. Input and output data are exchanged through a query data model with a defined structure, providing the flexibility to support the processing of partially annotated text or the repartition of text over several queries. The interface implements a variety of functionalities, such as language recognition, sentence segmentation, and modules for accessing and looking up concepts in the knowledge base.
The API also integrates more advanced contextual parametrisation and ranked outputs, allowing resilient integration in a range of use cases. The entity-fishing API has been used as a concrete use case to draft the experimental stand-off proposal submitted for integration into the TEI guidelines; the representation is also compliant with the Web Annotation Data Model (WADM). In this paper we aim to describe the functionalities of the service as a reference contribution on the subject of web-based NERD services. To cover all aspects, the description is structured around two complementary viewpoints. First, we discuss the system from the data angle, detailing the workflow from input to output and unpacking each building block in the processing flow. Second, with a more academic approach, we provide a transversal schema of the different components, taking into account non-functional requirements in order to facilitate the discovery of bottlenecks, hotspots, and weaknesses. The aim is to give a description of the tool and, at the same time, a technical software engineering analysis that helps the reader understand our choices for the resources allocated in the infrastructure. Thanks to the work of millions of volunteers, Wikipedia has today reached a stability and completeness that leave no usable alternatives on the market (considering also the licence aspect). The launch of Wikidata in 2012 completed the picture with a complementary, language-independent meta-model that is becoming the scientific reference for many disciplines. After providing an introduction to Wikipedia and Wikidata, we describe the knowledge base: the data organisation, the way entity-fishing exploits it, and the way it is built from nightly dumps using an offline process. We conclude the paper by presenting our solution for the service deployment: how and where the resources were allocated.
The service has been in production since Q3 2017 and has been used extensively by the H2020 HIRMEOS partners during the integration with their publishing platforms. We have strived to provide the best performance with the minimum amount of resources. Thanks to the Huma-Num infrastructure, we retain the possibility to scale up as needed, for example to support an increase in demand or a temporary need to process a huge backlog of documents. In the long term, thanks to this sustainable environment, we plan to keep delivering the service well beyond the end of the H2020 HIRMEOS project.
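The query data model described above can be sketched from the client side. The field names below ("text", "language", "entities") and the idea of submitting pre-identified spans alongside raw text are assumptions modelled on this description, not the service's documented API; the actual schema and endpoint should be taken from the entity-fishing documentation.

```python
import json

def build_query(text, lang=None, entities=None):
    """Assemble a NERD query in the spirit of the query data model:
    raw text, an optional language hint, and optional pre-annotated
    entity spans for partially annotated input."""
    query = {"text": text}
    if lang:
        query["language"] = {"lang": lang}  # omit to let the service detect it
    if entities:
        query["entities"] = entities  # spans the caller has already identified
    return query

query = build_query("Claude Shannon worked at Bell Labs.", lang="en")
payload = json.dumps(query)
print(payload)
```

In use, the serialized payload would be POSTed to the service's disambiguation endpoint, and the response would attach Wikipedia/Wikidata identifiers to each recognized entity.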
- Publication . Article . Preprint . 2018 . Open Access . English
  Authors: Nadia Boukhelifa; Michael Bryant; Natasa Bulatovic; Ivan Čukić; Jean-Daniel Fekete; Milica Knežević; Jörg Lehmann; David I. Stuart; Carsten Thiel
  Publisher: HAL CCSD. Countries: France, United Kingdom. Project: EC | CENDARI (284432)
The CENDARI infrastructure is a research-supporting platform designed to provide tools for transnational historical research, focusing on two topics: medieval culture and World War I. It exposes to end users modern web-based tools relying on a sophisticated infrastructure to collect, enrich, annotate, and search through large document corpora. Supporting researchers in their daily work is a novel concern for infrastructures. We describe how we gathered requirements through multiple methods to understand historians' needs and derived an abstract workflow to support them. We then outline the tools that we have built, tying their technical descriptions to the user requirements. The main tools are the note-taking environment and its faceted search capabilities; the data integration platform, including the Data API, supporting semantic enrichment through entity recognition; and the environment supporting the software development processes throughout the project to keep both technical partners and researchers in the loop. The outcomes are technical, together with the new resources developed and gathered and the research workflow that has been described and documented.
19 Research products, page 1 of 2
Loading
- Publication . Report . 2019EnglishAuthors:Szprot, Jakub; Arpagaus, Brigitte; Ciula, Arianna; Clivaz, Claire; Gabay, Simon; Honegger, Matthieu; Hughes, Lorna; Immenhauser, Beat; Jakeman, Neil; Lhotak, Martin; +8 moreSzprot, Jakub; Arpagaus, Brigitte; Ciula, Arianna; Clivaz, Claire; Gabay, Simon; Honegger, Matthieu; Hughes, Lorna; Immenhauser, Beat; Jakeman, Neil; Lhotak, Martin; Romanova, Natasha; Ros, Salvador; Schulthess, Sara; Tahko, Tuuli; Tolonen, Mikko; Erdinast Vulcan, Daphna; Willa, Pierre; Zehavi, Ora;Publisher: HAL CCSDCountry: FranceProject: EC | DESIR (731081)
This report provides information about activities and progress towards establishing DARIAH membership in six countries: the Czech Republic, Finland, Israel, Spain, Switzerland, and the UK, which took place between July and December 2019. Previous activities were described in detail in the D3.2 - Regularly Monitor Country-Specific Progress in Enabling New DARIAH Membership. During the project lifetime, the Czech Republic joined DARIAH ERIC; in other countries, collaboration with DARIAH has been greatly strengthened and significant progress regarding DARIAH membership has been achieved. The report also outlines the next steps in the accession processes, building on the results of the DESIR project.
- Publication . Report . 2019EnglishAuthors:Tahko, Tuuli; Zehavi, Ora; Lhotak, Martin; Romanova, Natasha; Clivaz, Claire; Ros, Salvador; Raciti, Marco;Tahko, Tuuli; Zehavi, Ora; Lhotak, Martin; Romanova, Natasha; Clivaz, Claire; Ros, Salvador; Raciti, Marco;Publisher: HAL CCSDCountry: FranceProject: EC | Locus Ludi (741520), EC | DESIR (731081)
The DESIR project sets out to strengthen the sustainability of DARIAH and firmly establish it as a long-term leader and partner within arts and humanities communities. The project was designed to address six core infrastructural sustainability dimensions and one of these was dedicated to training and education, which is also one of the four pillars identified in the DARIAH Strategic Plan 2019-2026. In the framework of Work Package 7: Teaching, DESIR organised dedicated workshops in the six DARIAH accession countries (Czech Republic, Finland, Israel, Spain, Switzerland and the United Kingdom) to introduce them to the DARIAH infrastructure and related services, and to develop methodological research skills. The topic of each workshop was decided by accession countries representatives according to the training needs of the national communities of researchers in the (Digital) Humanities. Training topics varied greatly: on the one hand, some workshops had the objective to introduce participants to specific methodological research skills; on the other hand, a different approach was used, and some events focused on the infrastructural role of training and education. The workshops organised in the context of Work Package 7: Teaching are listed below:• CZECH REPUBLIC: “A series of fall tutorials 2019 organized by LINDAT/CLARIAHCZ, tutorial #3 on TEI Training”, November 28, 2019, Prague;• FINLAND: “Reuse & sustainability: Open Science and social sciences and humanities research infrastructures”, 23 October 2019, Helsinki;• ISRAEL: “Introduction to Text Encoding and Digital Editions”, 24 October 2019, Haifa;• SPAIN: “DESIR Workshop: Digital Tools, Shared Data, and Research Dissemination”, 3 July 2019, Madrid;• SWITZERLAND: “Sharing the Experience: Workflows for the Digital Humanities”, 5-6 December 2019, Neuchâtel;• UNITED KINGDOM: “Research Software Engineering for Digital Humanities: Role of Training in Sustaining Expertise”, 9 December, London.
- Publication . 2018FrenchAuthors:Ginouvès, Véronique; Gras, Isabelle;Ginouvès, Véronique; Gras, Isabelle;Publisher: HAL CCSDCountry: France
International audience; En guise de postface, il nous a semblé nécessaire de revenir sur le processus collaboratif de la fabrication de cet ouvrage et de vous confier la genèse de ce projet. Tout est parti d'un constat pragmatique, de nos situations quotidiennes de travail : le/la chercheur·e qui produit ou utilise des données a besoin de réponses concrètes aux questions auxquelles il/elle est confronté·e sur son terrain comme lors de tous ses travaux de recherche. Produire, exploiter, diffuser, partager ou éditer des sources numériques fait aujourd'hui partie de notre travail ordinaire. La rupture apportée par le développement du web et l'arrivée du format numérique ont largement facilité la diffusion et le partage des ressources (documentaires, textuelles, photographiques, sonores ou audiovisuelles...) dans le monde de la recherche et, au-delà, auprès des citoyens de plus en plus curieux et intéressés par les documents produits par les scientifiques.
- Publication . Article . Conference object . Preprint . 2019Open Access EnglishAuthors:Lilia Simeonova; Kiril Simov; Petya Osenova; Preslav Nakov;Lilia Simeonova; Kiril Simov; Petya Osenova; Preslav Nakov;
We propose a morphologically informed model for named entity recognition, which is based on LSTM-CRF architecture and combines word embeddings, Bi-LSTM character embeddings, part-of-speech (POS) tags, and morphological information. While previous work has focused on learning from raw word input, using word and character embeddings only, we show that for morphologically rich languages, such as Bulgarian, access to POS information contributes more to the performance gains than the detailed morphological information. Thus, we show that named entity recognition needs only coarse-grained POS tags, but at the same time it can benefit from simultaneously using some POS information of different granularity. Our evaluation results over a standard dataset show sizable improvements over the state-of-the-art for Bulgarian NER. named entity recognition; Bulgarian NER; morphology; morpho-syntax
Average popularityAverage popularity In bottom 99%Average influencePopularity: Citation-based measure reflecting the current impact.Average influence In bottom 99%Influence: Citation-based measure reflecting the total impact.add Add to ORCIDPlease grant OpenAIRE to access and update your ORCID works.This Research product is the result of merged Research products in OpenAIRE.
You have already added works in your ORCID record related to the merged Research product. - Publication . Article . Preprint . 2020 . Embargo End Date: 01 Jan 2020Open AccessAuthors:Zamani, Maryam; Tejedor, Alejandro; Vogl, Malte; Krautli, Florian; Valleriani, Matteo; Kantz, Holger;Zamani, Maryam; Tejedor, Alejandro; Vogl, Malte; Krautli, Florian; Valleriani, Matteo; Kantz, Holger;Publisher: arXiv
We investigated the evolution and transformation of scientific knowledge in the early modern period, analyzing more than 350 different editions of textbooks used for teaching astronomy in European universities from the late fifteenth century to mid-seventeenth century. These historical sources constitute the Sphaera Corpus. By examining different semantic relations among individual parts of each edition on record, we built a multiplex network consisting of six layers, as well as the aggregated network built from the superposition of all the layers. The network analysis reveals the emergence of five different communities. The contribution of each layer in shaping the communities and the properties of each community are studied. The most influential books in the corpus are found by calculating the average age of all the out-going and in-coming links for each book. A small group of editions is identified as a transmitter of knowledge as they bridge past knowledge to the future through a long temporal interval. Our analysis, moreover, identifies the most disruptive books. These books introduce new knowledge that is then adopted by almost all the books published afterwards until the end of the whole period of study. The historical research on the content of the identified books, as an empirical test, finally corroborates the results of all our analyses. Comment: 19 pages, 9 figures
Average popularityAverage popularity In bottom 99%Average influencePopularity: Citation-based measure reflecting the current impact.Average influence In bottom 99%Influence: Citation-based measure reflecting the total impact.add Add to ORCIDPlease grant OpenAIRE to access and update your ORCID works.This Research product is the result of merged Research products in OpenAIRE.
You have already added works in your ORCID record related to the merged Research product. - Publication . Article . 2020Open AccessAuthors:Riccardo Pozzo; Andrea Filippetti; Mario Paolucci; Vania Virgili;Riccardo Pozzo; Andrea Filippetti; Mario Paolucci; Vania Virgili;Publisher: Oxford University Press (OUP)Country: Italy
AbstractThis article introduces the notion of cultural innovation, which requires adapting our approach to co-creation. The argument opens with a first conceptualization of cultural innovation as an additional and autonomous category of the complex processes of co-creation. The dimensions of cultural innovation are contrasted against other forms of innovation. In a second step, the article makes an unprecedented attempt in describing processes and outcomes of cultural innovation, while showing their operationalization in some empirical case studies. In the conclusion, the article considers policy implications resulting from the novel definition of cultural innovation as the outcome of complex processes that involve the reflection of knowledge flows across the social environment within communities of practices while fostering the inclusion of diversity in society. First and foremost, cultural innovation takes a critical stance against inequalities in the distribution of knowledge and builds innovation for improving the welfare of individuals and communities.
Average popularityAverage popularity In bottom 99%Average influencePopularity: Citation-based measure reflecting the current impact.Average influence In bottom 99%Influence: Citation-based measure reflecting the total impact.add Add to ORCIDPlease grant OpenAIRE to access and update your ORCID works.This Research product is the result of merged Research products in OpenAIRE.
You have already added works in your ORCID record related to the merged Research product. - Publication . Article . 2018Open Access EnglishAuthors:Terras, Melissa; Baker, James; Hetherington, James; Beavan, David; Welsh, Anne; O'Neill, Helen; Finley, Will; Duke-Williams, Oliver; Farquhar, Adam;Terras, Melissa; Baker, James; Hetherington, James; Beavan, David; Welsh, Anne; O'Neill, Helen; Finley, Will; Duke-Williams, Oliver; Farquhar, Adam;Publisher: Oxford University PressCountry: United Kingdom
Although there has been a drive in the cultural heritage sector to provide large-scale, open data sets for researchers, we have not seen a commensurate rise in humanities researchers undertaking complex analysis of these data sets for their own research purposes. This article reports on a pilot project at University College London, working in collaboration with the British Library, to scope out how best high-performance computing facilities can be used to facilitate the needs of researchers in the humanities. Using institutional data-processing frameworks routinely used to support scientific research, we assisted four humanities researchers in analysing 60,000 digitized books, and we present two resulting case studies here. This research allowed us to identify infrastructural and procedural barriers and make recommendations on resource allocation to best support non-computational researchers in undertaking ‘big data’ research. We recommend that research software engineer capacity can be most efficiently deployed in maintaining and supporting data sets, while librarians can provide an essential service in running initial, routine queries for humanities scholars. At present there are too many technical hurdles for most individuals in the humanities to consider analysing at scale these increasingly available open data sets, and by building on existing frameworks of support from research computing and library services, we can best support humanities scholars in developing methods and approaches to take advantage of these research opportunities.
- Publication . 2020 . English. Authors: Kristanti, Tanti; Romary, Laurent. Publisher: HAL CCSD. Country: France
This article presents an overview of the approaches and results from our participation in the CLEF HIPE 2020 NERC-COARSE-LIT and EL-ONLY tasks for English and French. For these two tasks, we used two systems: 1) DeLFT, a deep learning framework for text processing; and 2) entity-fishing, a generic named entity recognition and disambiguation service deployed in the technical framework of INRIA.
- Publication . Article . 2020 . Open Access . English. Authors: Luca Foppiano; Laurent Romary. Publisher: HAL CCSD. Country: France. Project: EC | HIRMEOS (731102)
This paper presents an attempt to provide a generic named-entity recognition and disambiguation (NERD) module called entity-fishing as a stable online service, demonstrating the possible delivery of sustainable technical services within DARIAH, the European digital research infrastructure for the arts and humanities. Deployed as part of the national infrastructure Huma-Num in France, this service provides an efficient state-of-the-art implementation coupled with standardised interfaces, allowing easy deployment in a variety of digital humanities contexts. The topics of accessibility and sustainability have long been discussed in the attempt to establish best practices in the widely fragmented ecosystem of the DARIAH research infrastructure. The history of entity-fishing has been mentioned as an example of good practice: initially developed in the context of the FP7 CENDARI project, it was well received by the user community and continued to be developed within the H2020 HIRMEOS project, where several open access publishers integrated the service into their collections of published monographs as a means to enhance retrieval and access.
entity-fishing implements entity extraction as well as disambiguation against Wikipedia and Wikidata entries. The service is accessible through a REST API, which allows easy and seamless integration, a language-independent and stable convention, and a widely used service-oriented architecture (SOA) design. Input and output data are carried over a query data model with a defined structure, providing the flexibility to support the processing of partially annotated text or the repartition of text over several queries. The interface implements a variety of functionalities, such as language recognition, sentence segmentation, and modules for accessing and looking up concepts in the knowledge base.
The API itself integrates more advanced contextual parametrisation and ranked outputs, allowing for resilient integration in various possible use cases. The entity-fishing API has been used as a concrete use case to draft the experimental stand-off proposal, which has been submitted for integration into the TEI guidelines. The representation is also compliant with the Web Annotation Data Model (WADM).
In this paper we aim to describe the functionalities of the service as a reference contribution on the subject of web-based NERD services. In order to cover all aspects, the description is structured to provide two complementary viewpoints. First, we discuss the system from the data angle, detailing the workflow from input to output and unpacking each building block in the processing flow. Secondly, with a more academic approach, we provide a transversal schema of the different components, taking into account non-functional requirements in order to facilitate the discovery of bottlenecks, hotspots, and weaknesses. The intent is to give a description of the tool and, at the same time, a technical software-engineering analysis that will help the reader understand our choices for the resources allocated in the infrastructure.
Thanks to the work of millions of volunteers, Wikipedia has today reached a stability and completeness that leave no usable alternatives on the market (considering also the licence aspect). The launch of Wikidata in 2012 has completed the picture with a complementary language-independent meta-model, which is becoming the scientific reference for many disciplines. After providing an introduction to Wikipedia and Wikidata, we describe the knowledge base: the data organisation, the entity-fishing process that exploits it, and the way it is built from nightly dumps using an offline process. We conclude the paper by presenting our solution for the service deployment: how and which resources were allocated.
The service has been in production since Q3 2017 and has been used extensively by the H2020 HIRMEOS partners during integration with their publishing platforms. We have strived to provide the best performance with the minimum amount of resources. Thanks to the Huma-Num infrastructure, we retain the possibility to scale up as needed, for example to support an increase in demand or a temporary need to process a huge backlog of documents. In the long term, thanks to this sustainable environment, we plan to keep delivering the service well beyond the end of the H2020 HIRMEOS project.
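The query data model described above can be sketched as a small client. This is a minimal illustration only: the endpoint URL and the exact field names ("text", "language", "entities") are assumptions based on the abstract's description of the service, and may differ between deployments and versions.

```python
import json

try:
    import requests  # third-party HTTP client; only needed for the actual call
except ImportError:
    requests = None

# Assumed endpoint for the Huma-Num deployment mentioned in the paper.
NERD_URL = "https://nerd.huma-num.fr/nerd/service/disambiguate"


def build_query(text, lang="en", entities=None):
    """Build a request following the query data model described in the paper.

    Pre-annotated entities may be supplied to support the processing of
    partially annotated text, as the abstract notes. Field names here are
    assumptions, not an authoritative schema.
    """
    return {
        "text": text,
        "language": {"lang": lang},
        "entities": entities or [],
    }


def disambiguate(text, lang="en"):
    """POST a query to the service and return the JSON response, which
    links recognised mentions to Wikipedia/Wikidata entries."""
    if requests is None:
        raise RuntimeError("the 'requests' package is required for HTTP calls")
    payload = {"query": (None, json.dumps(build_query(text, lang)))}
    resp = requests.post(NERD_URL, files=payload)
    resp.raise_for_status()
    return resp.json()
```

Because input and output travel over a single structured query object, a long document can also be split across several such queries, matching the "repartition of text" the abstract mentions.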
- Publication . Article . Preprint . 2018 . Open Access . English. Authors: Nadia Boukhelifa; Michael Bryant; Natasa Bulatovic; Ivan Čukić; Jean-Daniel Fekete; Milica Knežević; Jörg Lehmann; David I. Stuart; Carsten Thiel. Publisher: HAL CCSD. Countries: France, United Kingdom. Project: EC | CENDARI (284432)
The CENDARI infrastructure is a research-supporting platform designed to provide tools for transnational historical research, focusing on two topics: medieval culture and World War I. It exposes modern Web-based tools to end users, relying on a sophisticated infrastructure to collect, enrich, annotate, and search through large document corpora. Supporting researchers in their daily work is a novel concern for infrastructures. We describe how we gathered requirements through multiple methods to understand historians' needs and derive an abstract workflow to support them. We then outline the tools that we have built, tying their technical descriptions to the user requirements. The main tools are the note-taking environment and its faceted search capabilities; the data integration platform, including the Data API, which supports semantic enrichment through entity recognition; and the environment supporting the software development processes throughout the project, keeping both technical partners and researchers in the loop. The outcomes are technical (the tools, together with the new resources developed and gathered) as well as methodological (the research workflow that has been described and documented).