
Project Number | Proposal Name | Lead Partner | Partners | Keywords

3.108 | MW4ALL 2.0 | Least Authority | — | identity-free file transfer

3.110 | DAppNode | DAppNode Association | — | personal data management

DAppNode is the self-owned infrastructure layer for a human-centric, data-sovereign, private-by-design internet.

3.10 | GeoWallet | Blocs et Compagnie | — | geolocation, mobility data, blockchain

A user-centric approach to personal data management, giving users full control over their personal data and the ability to grant selective access to third parties, comes with many advantages. Still, it does not address the specific case of mobility data and usage-based mobility contracts between a user and a third party: usage-based insurance, usage-based fares for public transportation, and so on.

How can a user prove to a third party that mobility data under the user's control is authentic and was not forged or partially deleted?

How can trusted results of queries and analyses authorized by a user on their mobility data be provided to a third party without exposing the detail of the underlying activities and geolocation information?

GeoWallet is a user-centric platform for mobility data management providing both trust and privacy.

GeoWallet allows users to collect trusted mobility information, manage contracts with third parties, and prove mobility activities to them without exposing personal mobility data. GeoWallet is a trusted service based on an innovative personal data management architecture:

• Graph-based blockchain for fully anonymized, user-manageable, unforgeable and non-repudiable mobility data storage between asynchronous IoT nodes (mobile app) and online nodes (cloud)

• Proofs for geolocation information provided - but not kept - by telecom operators

• Transparent and trusted contract management between parties and automatic execution

• Queries and analysis performed in enclaves (TEE), as per contract co-signed by user and third party, providing trusted results transparently shared with user and third party

• Self-contained and self-protected data format (user ID, mobility information, contract) preserving trust, privacy and functionality in any environment
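The unforgeable, non-repudiable storage named in the first bullet rests on hash chaining: each record is hashed together with its predecessor's hash, so tampering or deletion anywhere in the history is detectable. A minimal sketch of the idea (all function names here are illustrative; GeoWallet's actual graph-based blockchain between IoT and cloud nodes is far more elaborate):

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a mobility record together with the previous record's hash,
    so any later tampering or deletion breaks the chain."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    """Append a record, linking it to the tail of the chain."""
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify(chain: list) -> bool:
    """Recompute every hash; returns False if any record was altered."""
    prev = "genesis"
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

chain = []
append(chain, {"trip": 1, "km": 12.4})
append(chain, {"trip": 2, "km": 3.1})
assert verify(chain)
chain[0]["record"]["km"] = 0.0   # tamper with an old record
assert not verify(chain)
```

The same linking principle also yields non-repudiation once each record is additionally signed by the user's key.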

The GeoWallet infrastructure has been fully specified, implemented and successfully tested with two insurance companies and a telecom operator on a limited set of users.

3.12 | Keyn 2.0 | Keyn B.V. | Content Power | WebAuthn, authentication
3.21 | TOTEM | Feron Technologies P.C. (FERON) | ntop | IoT, trusted connected home
3.27 | PY - 2.0 | Panga | — | Home Network Operating Server, IoT, home working
3.38 | PRIMA | Cognitive Innovations | — | AI, FOG, machine learning
3.40 | PaE Consent Gateway | Trinity College Dublin | Open Consent Network, Birmingham City University | personal data management, consent management
3.53 | TruVeLedger | RISE Research Institutes of Sweden AB | — | vehicle safety data
3.56 | MidScale | Evolveum | — | automated identity management, MidPoint

With the introduction of Regulation (EU) 2016/679 and the need for companies to comply with ISO/IEC 27001 requirements, privacy-enhancing technologies are becoming crucial for several types of enterprises. There is therefore an increasing demand for new and effective anonymization techniques and for their application in different domains with specific requirements.

Our main objective is to provide a service that allows the automatic anonymization and protection of user personal data contained in texts and voice transcriptions in compliance with the applicable legal framework.

With this aim, we intend to work on a Type 2 project for the technological development of an automated anonymizer prototype for Italian and English, to be applied first to two relevant use cases and then extended to other scenarios (domain adaptation).

The use cases will involve the anonymization of 1) free-text sections from customer surveys and internal reports analyzed for the evaluation of customer and employee experience; and 2) linguistic resources (both written texts and audio recordings) created for companies that develop voice technologies such as speech-to-text (STT) and automatic speech recognition (ASR).

The anonymization process will be carried out by means of both Deep Learning and rule-based Natural Language Processing technologies and will cover common data (e.g. proper names, locations, ID numbers, phone numbers and e-mail addresses) as well as the so-called "special categories of personal data". This combination of technologies will allow for a more precise configuration, the immediate application of user requirements, system scalability to new relevant PII, and service improvement with the gradual collection of new documents.
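The rule-based side of such a pipeline can be sketched as simple pattern substitution. This is a minimal illustration with hypothetical patterns, not CELI's implementation; the deep-learning component needed for names and locations is omitted:

```python
import re

# Illustrative rule-based patterns for "common data"; a production system
# would combine these with a trained NER model for names and locations.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def anonymize(text: str) -> str:
    """Replace each detected PII span with a category placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymize("Contact Mario at mario.rossi@example.com or +39 333 1234567."))
# → Contact Mario at [EMAIL] or [PHONE].
```

Rules of this kind are precise and immediately configurable, which is why combining them with statistical models helps both accuracy and the rapid uptake of new user requirements.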

The project will be implemented by CELI, an Italian company with experience in Language Technologies and AI, and ICT Legal Consulting, an international law firm specialized in the fields of ICT, Privacy, Data Protection/Security and Intellectual Property Law.

3.57 | AnonymAI | CELI | ICT Legal Consulting | free text anonymisation, natural language processing
3.58 | CASPER 2.0 | University of Belgrade – School of Electrical Engineering | O Mundo da Carolina – Associação de Apoio a Crianças e Jovens | online child protection

Our consortium received a Type 1 grant from NGI_Trust for the CASPER project in the first open call. The main goal of the project was to identify and apply the potential of artificial intelligence to protect children on the Internet. Current events related to the COVID-19 pandemic show that this kind of protection is more relevant than ever, since children are spending much more time online.
Different types of content have been analysed, including text, images, video, and audio, as well as different types of online threats. We have also analysed several software architectures that could be applied to develop a high-quality solution while taking care of privacy protection.
As a result of numerous development, analysis, and testing activities, we have defined the CASPER software agent architecture and identified optimal algorithms with respect to the criteria mentioned previously. We proved the initial concept that AI can be applied at the human-computer interaction level to protect children on the Internet. The proposed approach is innovative in that no other solutions work at that level while being capable of analysing all major types of content (visual, audio, and textual), able to respond to different types of threats (pornography and nudity, cyberbullying, indoctrination, etc.), and capable of overcoming problems related to content encryption.
Based on the results achieved in this grant period, we created a CASPER pilot demo that demonstrates the selected algorithms' effectiveness and the intended way the solution will work:
https://drive.google.com/file/d/1kc3GmRfFTuLKvORYqr1m2pyl16s5wrfO
However, despite the results achieved, we identified a few major areas in which the solution needs to be improved:
1. Achieving real-time performance;
2. Exploring different deployment models;
3. Improving the algorithms' effectiveness;
4. Expanding the project scope to the elderly population;
5. Supporting languages other than English.
Therefore, we are proposing the extension of the project and support from the NGI.
3.65 | IoTrust | Odin Solutions SL | Digital Worx GmbH | IoT bootstrapping
3.73 | Solid4DS | STARTIN'BLOX | — | web decentralisation, personal data management, Solid
3.75 | DeepFake | Sidekik OU | — | fake news / information analysis

Sentinel is tackling one of the most serious problems affecting the world today: disinformation. Disinformation, and especially disinformation propagated via synthetic media like deepfakes and cheap fakes, is a growing risk to the wellbeing of democracy and economic stability, with losses extending upwards of $78 billion annually according to a recent University of Baltimore report. Beyond the economic impact, we are seeing firsthand the chaos and xenophobia disinformation is causing in relation to COVID-19. For example, the Kremlin has deployed a large-scale coronavirus disinformation campaign to sow confusion and panic and to destroy confidence in the emergency response in the EU. Per an EU report, they are playing with people's lives, have provoked public riots in Ukraine through coronavirus disinformation, and are subverting European societies from within.

The strength of democratic nations lies in shared values and trust in institutions, but this has been continually put to the test. Because of the democratization of the technology underlying disinformation and deepfakes, we are approaching a time when the average person could, with minimal effort and open-source tools found online, create hyper-realistic news articles or videos of a political figure spreading lies about response measures against a viral outbreak, causing economic damage and even death. Sentinel has built a best-in-class deepfake and cheap-fake detector utilized by governments and media companies, and is now looking to build a public-facing platform that would enable individuals to check whether a video is a deepfake or cheap fake. This is the core product for which we seek support with this grant: it will be more a public good than a commercializable product, as we want to give as many people as possible access to the tool so that they can independently verify information.

3.82 | FAIR-AI 2.0 | The University of Cambridge | — | improved AI algorithms

All of the greatest projects on the topic of human-centric AI have one shortcoming: they use humans to consider the problems that can arise from the use of the internet and human data. Humans look to see where fundamental rights lie, how they can be infringed upon, and where responsibility towards these rights must be met. Herein lies the problem: they do not use an AI to detect where these assignments of, or infringements on, rights and responsibilities lie. This bottlenecks AI and limits the true capacity of AI to be human-centred, since no AI algorithm can carry out such a humane task: that is, grasp the essential core of human values, fairness - to do unto others as one would wish done unto oneself.

In our first (Type 1) project at the University of Cambridge we began to untangle this problem into its constituent factors. The first step was the detection of power between agents in a text, for an auto-assignment of rights and responsibilities. For this (Type 2) proposal, we develop this further by abstractly mapping the principal components of social relations: harm, and causal outcome. Both require the vectorisation of principal abstractions tied to text/visual input. Once completed, we can integrate further human values to allow for a comprehensive appraisal of any text that presents a potential or actual human-centred challenge, and then assign the legally recognised fundamental rights and responsibilities therein. This will allow an API to be developed for apps that wish to integrate this heuristic, essentially providing the digital architecture required to assign fairness assessments to problems, documents and data converted to a textual format. As the AI would have an integrated 'cognition' of fairness, it would protect humans and provide enormous AI power.
3.85 | Cassiopeia | IT-Av – Instituto de Telecomunicações – Aveiro (affiliated with the University of Aveiro) | GR – Gilad Rosner, Birmingham City University | personal data management

The CASSIOPEIA project investigates how open-standard/open-source technologies can be used to create usable and transparent architectures enabling device owners to selectively collect, share and retain data from users, while delegating control of device features to the users from whom data is being obtained. Selective sharing is a critical dimension of privacy: enhancing user choice, autonomy, participation, and trust. It is the technical embodiment of respect for social contexts in information sharing. Moreover, “privacy-by-default and -design” is the law of the land, but there are few examples of what that actually means aside from basic ideas of confidentiality and limited conceptions of transparency. The CASSIOPEIA project will provide a proof-of-concept for policymakers, technologists and the public showing how privacy-by-design can mean enhanced informational control - focusing on sharing rather than hiding data.

A human-centric conception of data sovereignty and sharing allows flexible sharing and delegation arrangements that reflect the dynamics of social relations. More importantly, considering the trend of Amazon and Google becoming gatekeepers to the smart home, there is a real danger that these giants will gain tremendous power over the nature of data sharing and device control.

Through the use case of a person wanting to rent their home on Airbnb, we will build a technical demonstration that illustrates selective sharing and feature delegation, granular consents, transparency, and non-repudiation. These technical architectures will be built on open standard and open-source technology, enabling a wider range of sharing styles and a more holistic conception of privacy. CASSIOPEIA demonstrates ways of bootstrapping trust at the protocol level by implementing existing and emerging protocols and markup languages. It focuses on trust and reliability by working with technologies that create controls to share data in ways that users actually want, doing so in a secure, transparent manner.
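The granular consent and feature delegation in the home-rental use case can be sketched as a tiny policy check: the owner records time-bounded grants, and each feature request is evaluated against them. All names and structures below are hypothetical illustrations, not CASSIOPEIA's actual open-standard protocols:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative selective-sharing policy: a device owner grants a guest
# access to specific device features for a bounded time window.
@dataclass
class Grant:
    grantee: str
    device: str
    features: set = field(default_factory=set)
    not_after: datetime = datetime.max

def allowed(grants, grantee, device, feature, now):
    """True if any unexpired grant covers this grantee, device and feature."""
    return any(
        g.grantee == grantee and g.device == device
        and feature in g.features and now <= g.not_after
        for g in grants
    )

# Owner lets an Airbnb guest adjust the thermostat, but not read its history.
grants = [Grant("guest", "thermostat", {"set_temperature"},
                datetime(2030, 1, 1))]
now = datetime(2025, 6, 1)
assert allowed(grants, "guest", "thermostat", "set_temperature", now)
assert not allowed(grants, "guest", "thermostat", "view_history", now)
```

In a real deployment the grants would be signed and logged, giving the transparency and non-repudiation properties the demonstration aims for.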

3.90 | MedIAM | Fabien Imbault | — | secure medical IoT devices
3.94 | IRIS | Resonate Co-operative | — | SSI, ethical music