Research Projects and Funding Overview
The webLyzard platform has attracted more than EUR 14 million in research funding for advanced knowledge extraction and Web intelligence technologies through competitive calls from international (EU Horizon Europe, Horizon 2020 and 7th Framework Programme, Google DNI, CHIST-ERA) and national funding programs (Austrian Research Promotion Agency, BMK, Austrian Science Fund, Austrian Climate & Energy Fund, aws, Swiss Commission for Technology and Innovation, Swiss National Science Foundation).
Current Research Projects
The following figure summarizes the ongoing R&D initiatives in European and national flagship programs, which provide an important resource base for webLyzard and ensure a consistently high rate of innovation.
TransMIXR – Ignite the Immersive Media Sector by Enabling New Narrative Visions. TransMIXR will create a range of human-centric tools for remote content production and consumption. The project’s TransMIXR platform will provide (i) a distributed XR Creation Environment that supports remote collaboration practices, and (ii) an XR Media Experience Environment for the delivery and consumption of evocative and immersive media experiences. Ground-breaking AI techniques for understanding complex media content will enable the reuse of heterogeneous assets across immersive content delivery platforms. TransMIXR will develop and evaluate pilots that bring the vision of future media experiences to life in multiple domains: news media & broadcasting, performing arts and cultural heritage.
ENEXA – Efficient Explainable Learning on Knowledge Graphs. ENEXA builds upon novel and promising results in knowledge representation and machine learning to develop scalable, transparent and explainable machine learning algorithms for knowledge graphs. With new methods, ENEXA will advance the efficiency and scalability of machine learning, especially on knowledge graphs, and will focus on devising human-centered explainability techniques based on the concept of co-construction, where human and machine enter a conversation to jointly produce human-understandable explanations.
Climateurope2 – Supporting and Standardizing Climate services in Europe and Beyond. Climateurope2 will develop new approaches to classification and standardization to enhance the uptake of quality-assured climate services by all sectors of society. The increased adoption of these services will in turn support adaptation and mitigation efforts across Europe. The project will use webLyzard’s knowledge extraction capabilities to identify key actors and their services, elicit support and standardization needs, and classify strategies to trigger and support climate action. The resulting taxonomy of climate services will help promote community-focused best practices and guidelines.
SDG-HUB – AI-Driven Semantic Search and Visualization to Support the Sustainable Development Goals and Agenda 2030. SDG-HUB will build a knowledge hub to address the socio-ecological challenges of reaching the Sustainable Development Goals of the Agenda 2030 and the climate mitigation goals of the Paris Agreement. The project focuses on providing (i) concrete insights into the problems and value systems that determine the actions of societal target groups, as vital input for transdisciplinary dialogues, and (ii) input for monitoring and evaluating progress on the Agenda 2030 and the Paris Agreement.
CRISP – Crisis Response and Intervention Supported by Semantic Data Pooling addresses the challenges of natural disasters in a data-driven manner, enabling more effective crisis response and intervention. CRISP considers both the short-term management of disasters as well as long-term economic impact assessments, at fine-grained regional and temporal granularity. CRISP will extract and classify disaster signals and perceptions from news and user-generated social media content, combine this data with weather and climate observations, and provide warnings and forecasts for disaster control authorities and rescue organizations.
inDICEs – Measuring the Impact of Digital Culture aims to empower policy makers and professional stakeholders in the cultural and creative industries, helping them to fully understand the social and economic impact of digitization on their sector. The project addresses the need for innovative (re)use of cultural assets, and for new tools to monitor engagement and public perception of these assets.
GENTIO – Generative Learning Networks for Text & Impact Optimization aims to change the way we produce, enrich and analyze digital content. The project will develop a deep learning architecture to unify the understanding of text at three fundamental levels: structure, content and context. This will yield quantitative methods to maximize the impact of data-driven publishing.
CIMPLE – Countering Creative Information Manipulation with Explainable AI draws on models of human creativity, both in manipulating and understanding information, to design understandable and personalizable explanations. CIMPLE will use computational creativity techniques to generate powerful, engaging, and easily and quickly understandable explanations of complex AI decisions and behavior. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.
Past Research Projects
A number of past research projects were funded by the EU Horizon 2020 and 7th Framework Programmes, the Swiss Commission for Technology and Innovation, as well as by the Austrian Research Promotion Agency. The projects aimed at radical innovations in the acquisition and management of unstructured data, the automation of information processes, human-computer interaction, and the integration of distributed information resources.
ReTV – Enhancing and Re-Purposing TV Content for Trans-Vector Engagement. The project developed new methods for media organizations to dynamically re-purpose content for a wide range of digital channels. The aim was to “publish to all media vectors with the effort of one”. ReTV empowered broadcasters and brands to continuously measure and predict their success in terms of reach and audience engagement. webLyzard led the ReTV system integration, leveraging its existing portfolio of visual analytics components. As part of the project, we also developed the Storypact Editor to expand the predictive capabilities of our platform.
SONAR – Semantic Observatory for News Analytics and Repurposing. The project was a joint initiative of ProSiebenSat.1 PULS 4 and webLyzard to support balanced news reporting and increase the impact of digital content assets. It provided real-time visualization services for journalists, including specific recommendations on where and how to publish digital assets containing these visualizations. This guided not only the production of new content, but also the repurposing of existing digital assets.
InVID (In Video Veritas) – Verification of Social Media Video Content for the News Industry. This Innovation Action started in January 2016 and built a Web-based application to automatically identify newsworthy video content spread via social media, and to confirm or reject its credibility using state-of-the-art analytical techniques. webLyzard led the development of the application and the overall system integration, leveraging the existing video retrieval capabilities of its Web Intelligence platform.
Pheme – Computing Veracity across Media, Languages, and Social Networks. Analyzing big data repositories aggregated from context-dependent social media streams poses three major computational challenges: volume, velocity, and variety. This project focused on a fourth, largely unstudied computational challenge: veracity. It modeled and verified phemes (Internet memes with added information on truthfulness or deception) as they spread across media, languages, and social networks.
DISCOVER – Knowledge Discovery, Extraction and Fusion for Improved Decision Making. The research project developed methods for the automatic acquisition, extraction and integration of decision-relevant information from heterogeneous online sources. The system used background knowledge from domain ontologies, databases, and an information value model to optimize knowledge acquisition processes from websites and deep web repositories. The extracted knowledge was then integrated with business information systems to optimize decision-making and business processes.
ASAP – Adaptable Scalable Analytics Platform. The project developed an open-source execution framework for scalable data analytics. It assumed that no single execution model is suitable for all types of tasks, and that no single data model is suitable for all types of data. ASAP provided resource elasticity, fault-tolerance and the ability to handle large sets of irregular distributed data. Within ASAP, webLyzard was responsible for the visualization engine and led the dissemination and exploitation work package.
DecarboNet – A Decarbonisation Platform for Citizen Empowerment and Translating Collective Awareness into Behavioural Change. This project identified determinants of collective awareness, triggered behavioral change, and provided novel methods to analyze the underlying processes. Innovations were built around a context-specific repository of carbon reduction strategies. To refine this repository, citizen-generated content was utilized in a societal feedback loop to enable an adaptive process of social innovation.
IMAGINE – The retrieval and marketability of visual content depend heavily on the availability of high-quality metadata, enabling customers to locate the most relevant content in large image collections. The project exploited the convergence of textual image descriptions, image content and linked open data to automatically obtain relevant metadata, including keywords, named entities, and reference topics.
uComp – Embedded Human Computation for Knowledge Extraction and Evaluation. The project merged collective human intelligence and automated knowledge extraction methods in a symbiotic fashion, drawing upon both games with a purpose and crowdsourcing marketplaces. It developed a scalable human computation framework for knowledge extraction and evaluation, delegating the most challenging tasks to large user communities and learning from user feedback to optimize automated methods as part of an iterative process.
RMCS – Radar Media Criticism Switzerland. The project developed a research infrastructure and established knowledge transfer mechanisms to conduct automated content analyses and assess the structure and content of media criticism in Switzerland. The radar identified the most important institutional players, tracked influential media blogs and opinion leaders across social media platforms, and helped to assess the diversity of topics, actors, and opinions from a communications and media science perspective.
WISDOM – Web Intelligence for Improved Decision Making. Obtaining accurate business intelligence that supports data-driven decision making helps to optimize corporate strategies and gain a competitive advantage. WISDOM drew upon Web intelligence methods to integrate data from news media and social sources, developed context-aware information extraction techniques to identify stakeholders and their sentiment towards emerging trends, and provided novel performance and reliability metrics that enhance decision-making processes.
COMET – Cross-Media Extraction of Unified High Quality Marketing Data. Knowledge resources with clear economic value are spread across multiple channels such as print media, Web documents, blogs, and social media. The COMET project developed key technologies for combining and analyzing such heterogeneous and multimodal channels. Automated consolidation, classification and sentiment analysis supported the extraction of marketing information, and aided decision makers in optimizing their branding and marketing strategies.
DIVINE – Dynamic Integration and Visualization of Information from Multiple Evidence Sources. DIVINE integrated data from structured, unstructured and social sources to build information spaces. Lightweight seed ontologies acted as focal points for integrating new evidence from third-party sources. Since such evidence is inherently uncertain, source-specific transformation rules assigned confidence values to newly acquired pieces of knowledge.
RAVEN – Relation Analysis and Visualization for Evolving Networks. RAVEN kept analysts and decision-makers up-to-date about the unfolding of events in endogenous and exogenous information spaces, which reflect interconnected events and processes of the real world. RAVEN aimed to understand the evolution of such spaces by analyzing temporal-semantic relations between their elements.
IDIOM – Information Diffusion across Interactive Online Media: Linguists define “idiom” as an expression whose meaning differs from the literal meanings of its component words. Similarly, the study of information diffusion promises insights that cannot be inferred from individual network elements. Media monitoring projects often focus on a particular medium, or neglect important aspects of human language. IDIOM addressed these gaps to reveal fundamental mechanisms of information diffusion across media with distinct interactive characteristics.
AVALON – Acquisition and Validation of Ontologies: Valuable knowledge that surrounds the workflows of business entities can be extracted automatically and represented as an ontological structure. AVALON services built upon a cybernetic control system to automatically align extracted knowledge with business processes, external indicators and individual expertise. Such services are particularly useful in volatile business environments, which require dynamic reconfiguration of business processes and a flexible allocation of resources.
European eContent Tourism Survey. A longitudinal survey on behalf of the Austrian Federal Economic Chamber that contrasted a sample of 500 Austrian tourism sites with the international competition. Selected results were presented at the European Forum Alpbach in August 2001.