...

Project Number | Proposal Acronym | Lead Partner (name) | Partners | Keywords | Abstract
2.09

AMNESIA

https://twitter.com/Cefriel

#amnesia #zenabyte

ZenaByte s.r.l. | Cefriel s.cons.r.l. | AI, fairness algorithms, sensitive data

Artificial Intelligence (AI) is experiencing a fast process of commodification, reaching society at large. The scope of AMNESIA is to explore and assess the technical feasibility and commercial potential of measuring the algorithmic unfairness of AI-based tools and models made available by FAAMG that are at the basis of Future Interactive Technologies (FIT) solutions (chatbots, ranking tools, etc.): ensuring that sensitive information (e.g., gender, race, sexual or political orientation) does not unfairly influence the learning process and the predictions of data-driven models.
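
By way of illustration only (the metric and the data below are not taken from the AMNESIA proposal), a common way to quantify this kind of unfairness is demographic parity: comparing a model's positive-prediction rate across the groups defined by a sensitive attribute.

```python
# Illustrative demographic-parity check on toy data; not AMNESIA's actual metric suite.
import numpy as np

# Hypothetical model outputs (1 = favourable decision) and a sensitive attribute per person.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = predictions[group == "A"].mean()  # positive-prediction rate for group A
rate_b = predictions[group == "B"].mean()  # positive-prediction rate for group B

# A large gap suggests the sensitive attribute is unfairly influencing predictions.
print(f"rate A={rate_a:.2f}, rate B={rate_b:.2f}, "
      f"demographic parity gap={abs(rate_a - rate_b):.2f}")
```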

2.78

BitOfTrust

BitOfTrust | Open Knowledge BE | blockchain, distributed trust

In the book "Blockchain and the New Architecture of Trust" (Kevin Werbach, 2018) the author describes different 'trust architectures': peer-to-peer trust (built up from personal relationships, people trusting each other pairwise), Leviathan trust (institutional trust in the legal system and a social contract with the state), and intermediary trust (with a central entity managing transactions between (untrusting) people, e.g. the credit card system).

The fourth trust architecture is distributed trust. It is a new kind of trust that technologies such as Blockchain offer us.

Blockchain shifts some of the trust in people and institutions to trust in technology. But Blockchain doesn't eliminate the need to trust human institutions: it is very good at offering cryptographic trust, yet it doesn't address the issue of human trust. On the contrary, it is built on foundations of mutual mistrust.

In the BitOfTrust project we will present a different kind of distributed trust architecture in which the starting point is not the privacy of the self and a mistrust of others, but the reality that human trust is in essence about intimacy with others: it starts from a private relationship of intimate trust as the first building block and extends to wider relationships of intimate trust.

2.38

COSCA

Università degli Studi di Catania (UNICT)

Consiglio Nazionale delle Ricerche (CNR)

Università degli Studi di Modena e Reggio Emilia (UNIMORE)

privacy and car data, biometrics

COSCA outputs a conceptual Framework for car security, drivers’ privacy and trust enhancement, thus orienting the Next Generation Internet at its core.

Innovatively taking a socio-technical approach, the COSCA Framework rests on crowdsourced drivers' perceptions and hence is rooted in the human beings who actually use car technologies. COSCA also adopts a GDPR-inspired classification of the data collected by cars and processed by manufacturers, paying particular attention to cases involving special categories of data such as biometric data.

Upon such bases, COSCA conducts a risk assessment exercise inspired by an ISO/IEC methodology and conveniently tailored to car security risks and drivers' privacy risks. The outcome of this exercise offers a compact yet expressive view of the security measures that would be necessary to mitigate the identified risks and improve car technologies, ultimately producing a more trustworthy system that combines, at least, car and driver.

2.43

CryptPad SMC

https://opencollective.com/cryptpad/updates/cryptpad-funding-status-july-2020

Xwiki SAS | n/a | collaboration tools, cryptography

CryptPad is a web-based suite of end-to-end-encrypted collaborative tools released under the AGPL. It is freely available, with nearly 300 instances online; however, the software is primarily developed by a single company that sustains itself via subscriptions to hosted services, support contracts, and donations. We are seeking financial support to prototype a small collection of technologies which would both address some security concerns and make us more competitive against proprietary services.

Like other zero-knowledge offerings available on the web, CryptPad delivers on the cloud's promise of availability without violating user privacy. It does so by applying strong encryption and managing keys on users' devices before anything is shared over the network. Unfortunately, since such platforms target the web via JavaScript, users remain vulnerable to malicious code deployed to their server by any means.
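
A minimal sketch of this client-side ("zero knowledge") encryption idea, assuming the PyNaCl library and invented data (CryptPad itself runs TweetNaCl in the browser, so this is only a conceptual analogue): the key is generated and kept on the user's device, and only ciphertext ever reaches the server.

```python
# Conceptual sketch of client-side encryption before upload; not CryptPad's actual code.
import nacl.secret
import nacl.utils

# The key is generated and stays on the user's device; the server never sees it.
key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
box = nacl.secret.SecretBox(key)

document = b"Collaborative pad contents"
ciphertext = box.encrypt(document)  # only this ciphertext would be sent to the server

# Another device that knows the key can recover the document.
assert nacl.secret.SecretBox(key).decrypt(ciphertext) == document
```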

2.50

DECTS

https://www.ownyourdata.eu/en/ngi-funding-for-dects/

ownYourData Verein zur Förderung der selbstständigen Nutzung von Daten | n/a | consent management for deaf users, emergency response

Article 9 of the UN Convention on the Rights of Persons with Disabilities requires countries to take measures for the full and equal participation of persons with disabilities (including access to communication and information services), and the European Disability Strategy 2010-2020 also calls for the principle of accessibility at all levels. Despite this, there are still about 1 million deaf and hard of hearing persons in Europe who currently rely on outdated technology (e.g. fax) and help from others to make an emergency call.

The good news is that existing standards and technologies can provide an adequate and barrier-free solution. DEC112 (Deaf Emergency Call System) has already implemented an emergency infrastructure (compliant with NENA NG9-1-1 and ETSI TS 103479), including a mobile app, to enable deaf and hard of hearing persons to access emergency services in Austria. This solution has been in operation since February 2019, and a lot has been learned about the actual needs of emergency callers as well as call takers in control rooms. In the proposed project we want to implement and disseminate new features for DEC112.

2.11

DISSENS

https://www.aisec.fraunhofer.de/de/fields-of-expertise/projekte/reclaim.htm

Fraunhofer AISEC


Taler Systems S.A.

Berner Fachhochschule

payment processing, SSI, payment discounts

Registration of accounts prior to receiving services online is the standard process for commercial offerings on the Internet, which depend on two cornerstones of the Web: payment processing and digital identities. The use of third-party identity provider services (IdPs) is practical as it delegates the task of verifying and storing personal information. The use of payment processors is convenient for the customer as it provides one-click payments. However, the quasi-oligopoly of service providers in those areas includes Google and Facebook for identities and PayPal or Stripe for payment processing. Those corporations are not only based in privacy-unfriendly jurisdictions, but also exploit private data for profit.

We make the case that what is urgently needed are fundamentally different, user-centric and privacy-friendly alternatives to the above. We argue that self-sovereign identity (SSI) management is the way to replace IdPs with a user-centric, decentralized mechanism where data and access control are fully under the control of the data subject. In combination with a privacy-friendly payment system, we aim to achieve the same one-click user experience that is currently achieved by privacy-invasive account-based Web shops, but without the users having to set up accounts.

2.68

FAIR-AI

Ahmed Izzidien

https://www.psychometrics.cam.ac.uk/staff/dr-ahmed-izzidien

n/a | AI, ethics, human-centric

The problem with Artificial Intelligence (AI) is that it is programmed to maximise reward, being focused on utility. In a factory, this is expected. However, AI is now permeating society and interacting more with humans and human-centred data. An AI has no problem with the auto-clearing of rainforests to provide cheaper paper, or the auto-sale of sensitive data for its owner's profit. It is unable to grasp human social values, such as fairness. This is because the programmers typically come from engineering backgrounds and hence focus on coding for utility, without needing to consider ethics, and specifically what is fair. As such, the world has witnessed a number of high-profile cases of accidental illegal acts involving fairness, bias and AI. A number of proposals to address this have been made. However, they mainly focus on statistical analysis rather than on teaching an AI about human values so that it can recognise unfair, illegal activities (2018). With the rapid development of artificial intelligence have come concerns about how machines will make ethical decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour (Awad et al. 2018). As such, I have now proposed an architecture to allow an AI to recognise these factors: an architecture based on the cognition of ethics in humans, that is, using the cognitive methods our minds employ to make fairness evaluations.

2.17

IZI

https://converis.jyu.fi/converis/portal/detail/Project/35190513?auxfun=&lang=en_GB

University of Jyväskylä | n/a | IoT, smart devices, SDN workflow

With the recent progress in the development of low-budget sensors and machine-to-machine communication, the Internet of Things has attracted considerable attention. Unfortunately, many of today's smart devices are rushed to market with little consideration for basic security and privacy protection, making them easy targets for various attacks. Once a device has been compromised, it can become the starting point for accessing other elements of the network at the next stage of the attack, since the traditional castle-and-moat concept of IT security implies that nodes inside the private network trust each other. For these reasons, IoT will benefit from adopting a zero-trust networking model, which requires strict identity verification for every person and device trying to access resources on a private network, regardless of whether they are located within or outside of the network perimeter.

Implementing such a model can, however, become challenging, as access policies have to be updated dynamically in a constantly changing network environment. Thus, there is a need for an intelligent enhancement of the zero-trust network that would not only detect an intrusion in time, but also make an optimal real-time crisis-action decision on how the security policy should be modified in order to minimize the attack surface and the risk of subsequent attacks in the future.

In this research project, we aim to implement a prototype of such a defense framework relying on advanced technologies that have recently emerged in the areas of software-defined networking (SDN) and network function virtualization (NFV). The intelligent core of the proposed system is planned to employ several reinforcement learning agents which process the current network state and mitigate both external attacker intrusions and stealthy advanced persistent threats acting from inside the network environment by modifying SDN flows and reconfiguring NFV appliances.
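
As an illustration only (the project targets SDN controllers and NFV appliances, which are not modelled here), the hypothetical sketch below shows the kind of reinforcement-learning loop described above: a tabular agent observes an abstract network state and learns which mitigation action to apply.

```python
# Hypothetical, self-contained sketch of an RL-driven policy update loop.
# All states, actions and rewards are toy placeholders, not real SDN/NFV APIs.
import random
from collections import defaultdict

STATES = ["normal", "port_scan", "lateral_movement"]
ACTIONS = ["allow", "rate_limit", "isolate_node"]

def reward(state, action):
    # Toy reward: isolating during lateral movement is good, blocking benign traffic is costly.
    if state == "normal":
        return 1.0 if action == "allow" else -1.0
    if state == "port_scan":
        return 1.0 if action == "rate_limit" else -0.5
    return 1.0 if action == "isolate_node" else -1.0  # lateral_movement

q = defaultdict(float)      # Q-values for (state, action) pairs
alpha, epsilon = 0.1, 0.2   # learning rate and exploration rate

for episode in range(5000):
    state = random.choice(STATES)
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    # One-step (bandit-style) update; a full agent would also bootstrap on the next state.
    q[(state, action)] += alpha * (reward(state, action) - q[(state, action)])

# The learned policy maps each observed state to a mitigation action.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```
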
2.31

LegiCrowd

http://www.legicrowd.org/

APIL (Association des Professionnels des Industries de la Langue)

NTUA (National Technical University of Athens)

Future Now Business Consultants, Training & Research Ltd

privacy policies, readability, GDPR

The issue of the readability of terms of use and privacy policies has been raised for decades. Few privacy policies and terms of use are actually read, truly digested and understood. Choice Australia published a video showing that reading Amazon Kindle's terms and conditions takes 8 hours and 55 minutes. And this stands as the norm.

LegiCrowd Onto aims at agreeing upon an ontology of descriptors for ToS to be used both for ToSDR annotation and for MyData.org. Undoubtedly, other usages will emerge during the course of the project. It will rely upon the existing ToSDR platform.
 
2.30

MidPrivacy

https://evolveum.com/introducing-midprivacy-initiative/

https://docs.evolveum.com/midpoint/midprivacy/


Evolveum

https://evolveum.com/

n/a | midPoint, identity management, data provenance

The goal of the midPrivacy project is to develop an open source privacy-enhancing identity management solution on top of midPoint. MidPoint is a comprehensive open source identity management and governance solution used by many organizations worldwide. Identity management systems are fundamentally data management engines and are therefore in an excellent position to implement data protection and privacy mechanisms. This insight leads to the midPrivacy project, an initiative to enhance the existing midPoint platform with support for data protection and privacy-enhancing capabilities.

Data provenance is one of the fundamental problems of data protection. Data protection regulations and practices ask for transparency and accountability. However, current systems are seldom capable of tracing the origin (provenance) of the data they process. This situation is hardly surprising given the complexity that data provenance brings, especially for data modelling and maintenance.

Data provenance was chosen as the primary goal of the first phase of the MidPrivacy initiative because it provides a solid foundation on which to build a full suite of privacy-enhancing features in the future.
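
As a purely hypothetical illustration of provenance-aware identity data (the field names below are assumptions, not midPoint's actual schema), each attribute value could carry metadata about where it came from and on what legal basis it is processed:

```python
# Hypothetical sketch of provenance metadata attached to an identity attribute.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Provenance:
    origin: str            # e.g. "self-asserted", "HR system", "government registry"
    acquired_at: datetime  # when the value entered the system
    legal_basis: str       # e.g. "consent", "contract", "legal obligation"

@dataclass
class AttributeValue:
    name: str
    value: str
    provenance: Provenance

email = AttributeValue(
    name="emailAddress",
    value="alice@example.org",
    provenance=Provenance(
        origin="HR system",
        acquired_at=datetime(2020, 7, 1, tzinfo=timezone.utc),
        legal_basis="contract",
    ),
)
print(f"{email.name} originates from {email.provenance.origin} ({email.provenance.legal_basis})")
```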

2.32

MQ2M

TECHNISCHE UNIVERSITEIT DELFT | n/a | quantum key, measurement

This project aims to explore and assess the commercial potential of a breakthrough innovation in the field of quantum-resistant cryptography: Measurement Device Independent Quantum Key Distribution (MDI-QKD). Current encryption methods used in communications rely on asymmetric public key distribution protocols that are not quantum resistant. Various quantum key distribution protocols that are quantum resistant exist, and some commercial QKD equipment is already on the market. The technology presented here, MDI-QKD, which we pioneered, offers considerable advantages over commercial point-to-point QKD (see section 2).

Our MDI-QKD system has been validated in deployed telecommunication fiber (TRL5), and current engineering efforts (outside of this project) are being pursued at QuTech/TUDelft to develop industrial-ready MDI-QKD equipment. In order to ensure that our technology can move further towards real-world applications, it is essential to understand how it integrates into current communication infrastructure and what the needs of potential early adopters are.

2.41

MW4ALL

https://leastauthority.com/blog/tag/mw4all/

Least Authority | n/a | file transfer, anonymised

We propose to assess the technical and commercial feasibility of a large-scale deployment of Magic Wormhole as an option for identity-free, secure and easy file transfer between two computers.

This approach allows for data sharing between two parties without either party needing to know the other's identity, and does not require persistent relationships or the use of an email address or phone number. Magic Wormhole uses SPAKE2, a PAKE (password-authenticated key exchange), which is a means for two parties that share a password to derive a strong shared key with no risk of disclosing the password.
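
A minimal sketch of that PAKE step, assuming the python-spake2 package (the library Magic Wormhole builds on); the password below stands in for the short wormhole code, and all error handling is omitted:

```python
# Sketch of a SPAKE2 exchange; in Magic Wormhole the messages travel via a rendezvous server.
from spake2 import SPAKE2_A, SPAKE2_B

# Both sides share only a weak secret (illustrative stand-in for a wormhole code).
password = b"7-crossover-clockwork"

alice = SPAKE2_A(password)
bob = SPAKE2_B(password)

msg_a = alice.start()   # one message in each direction
msg_b = bob.start()

key_a = alice.finish(msg_b)
key_b = bob.finish(msg_a)

assert key_a == key_b   # both sides derive the same strong session key
```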

2.06

PRIMAL

https://www.treetk.com/en/R&D_PRIMAL.html

Tree Technology S.A. | n/a | data sharing privacy, pharma, encryption

The massive increase in data collection worldwide calls for new ways to preserve privacy while still allowing analytics and machine learning among multiple data owners. Today, the lack of trusted and secure environments for data sharing inhibits the data economy, while legality, privacy, trustworthiness, data value and confidentiality hamper the free flow of data [Timan et al., 2019]. This proposal aims to demonstrate the implementation of a privacy-preserving machine learning approach based on the concept of federated machine learning (FML); a minimal sketch of the federated averaging idea follows the list below. The implementation will address four challenges:

  • Architecture: a privacy-by-design implementation to handle secure communications among federated nodes.
  • Algorithms: privacy-preserving federated machine learning models based on deep learning.
  • Security: against external and internal cybersecurity threats with the integration of state-of-art end-to-end encryption methods.
  • Validation in one specific pharma-healthcare use case: early MACE prediction for patients with diabetes.
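
The sketch below, using only NumPy, illustrates the federated averaging idea referred to above: each node trains on its private local data and only model updates, never raw data, are sent to the aggregator. It is a toy linear model, not the project's deep-learning implementation.

```python
# Toy federated averaging (FedAvg-style) sketch with a linear model; illustration only.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

# Three federated nodes, each with private local data that never leaves the node.
local_data = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    local_data.append((X, y))

w = np.zeros(3)                      # global model held by the aggregator
for round_ in range(20):
    updates = []
    for X, y in local_data:          # each node trains locally...
        w_local = w.copy()
        for _ in range(10):          # a few steps of local gradient descent
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        updates.append(w_local)      # ...and shares only its model update
    w = np.mean(updates, axis=0)     # aggregator averages the updates

print("recovered weights:", np.round(w, 2))
```
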
2.48

PURPETS

CEA - Commissariat à l'Energie Atomique et aux Energies alternatives | n/a | consent management, personal data management, machine learning

Interest in online privacy goes hand in hand with the rise of online services whose business model is based on proposing “free” services in exchange for monetizing users’ personal data. In this context, privacy enhancement tools (PETs) provide valuable feedback about the effects of personal data sharing. However, they are faced with the privacy paradox, i.e. the mismatch between users’ self-declared preoccupation with preserving their privacy and their actual information-sharing practices. As a result, their adoption by the general public and their ability to enhance privacy are reduced. PURPETS hypothesizes that the negative effects of the privacy paradox can be reduced by: (1) focusing PET development on the real-life effects of data sharing; (2) co-creating PETs with the final users; and (3) running all data processing in a transparent manner on users’ devices to increase trust. The project builds on existing PET principles and proposes the following innovations:

  1. build a comprehensive list of real-life situations (such as a loan application, job search or accommodation search) which can be affected by personal data sharing, and use crowdsourcing to quantify the effect of personal data-related concepts in each situation;
  2. create specific datasets for personal data-related concepts and apply deep learning to recognize them automatically (a simplified classification sketch follows this list);
  3. package project results in a mobile app and
  4. run fast iterations of user tests to integrate users’ feedback in the app.
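
As a simplified stand-in for the deep-learning classifier mentioned in point 2, the sketch below trains a bag-of-words classifier with scikit-learn to recognize a personal data-related concept in short texts; the tiny dataset is invented purely for illustration.

```python
# Simplified illustration of recognizing a personal data-related concept in text.
# The project proposes deep learning; this uses a classical scikit-learn baseline instead.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples: does a sentence relate to the concept "financial situation"?
texts = [
    "I am applying for a mortgage on a new flat",
    "My bank refused to extend my credit line",
    "We went hiking in the mountains last weekend",
    "The recipe needs two eggs and some flour",
]
labels = [1, 1, 0, 0]   # 1 = financial situation, 0 = unrelated

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["I asked the bank about a loan for my studies"]))  # likely [1]
```
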
2.47

SePriCe

University of Jyväskylä | n/a | IoT, cybersecurity, policy

The Internet of Things (IoT) is an ICT phenomenon that connects billions of devices and sensors over the Internet and local networks. One way or another it affects billions of people and virtually every vertical of society and industry. Indeed, IoT has the power and the opportunity to bring great value and innovation to humanity. However, a series of recent attacks (e.g., the Mirai botnet) has also demonstrated that even a tiny fraction of vulnerable and compromised IoT devices (e.g., cameras, routers, printers) can take down large portions of the Internet through DDoS attacks, in turn affecting consumers and costing businesses millions. Therefore, cybersecurity and privacy are two pillars of paramount importance for the success and full adoption of IoT as a success story for humanity.

One way to achieve and ensure cybersecurity (including for IoT) is by developing standards and guidelines, and then enforcing and verifying their implementation. When such standards are followed on mandatory items, this should (in theory) provide guarantees or indicators of the cybersecurity levels of IoT devices.

This project aims to initiate research on automation of compliance checks for IoT cybersecurity and privacy certifications/standards/regulations.

2.69

SID:SO&C

SENSIO j.d.o.o. | n/a | digital content, photography, ownership

The majority of online communication today is visual. Practically every internet user is a creator of digital content, with thousands of photos and videos published by an average user every year. However, images and videos shared online are unprotected and authors have very little control over them (<17% of published content has metadata, <1% has copyright). The majority of content distribution and management lies in the hands of very few tech corporations who impose their terms. Once published, the content is often detached from the creator, copied and, at worst, misused for identity theft or blackmail.

To establish a fair and transparent market we need to develop a solution that puts the user (not a platform) in charge and ensures the privacy and security of the content shared. There is no working solution yet.

We are building an integrated platform to simplify the publishing, licensing and copyright management of digital assets. By adding connectivity with the platforms that photographers are already using and streamlining users' workflows, we will create incentives for mass adoption of the platform. Our research has shown that 84% of photographers are frustrated with current post-development solutions, and 92% and 85% respectively are concerned about the copyright and privacy of their assets.

2.55

TrustedUX

Tallinn University | n/a | user experience, trust

This project advocates a user-centric approach to quantifying the dimensions of the trust experience when one engages with a complex system instead of another human. Besides helping designers and stakeholders assess which aspects of a system users consider risky, the service also indicates which parts of the system users consider untrustworthy. It also combines context-aware methods to assess to what extent the tool can support reflection on ethical concerns associated with trustworthiness and on the perceived risk of interacting. Its importance lies in using trust as a key to help minimize the risk associated with data breaches and misuse, as well as to foster transparency, user intervenability, and accountability. The project builds upon the assumption that, despite the default mainstream attention given to privacy and security, we continue to see individuals as a product that we can manipulate to our preferences, forgetting to design transparent systems that can explicitly communicate trusting behaviors to their users.

The project's innovation lies in using user experience (UX) evaluation techniques to overcome the tendency to oversimplify how users experience trust, reminding system designers that trust is highly dependent on social context and that individuals' reactions to it are social. Besides supporting designers in fully grasping the complexity between existing distrusted technologies and individuals' interpretations of privacy concerns, strategies, and ethical needs, the service also helps avoid the tendency to design systems that force individuals to trust even when they are fully aware that privacy and data breaches can occur.

2.42

TRUSTRULES

S.POULIMENOS KAI SYNERGATES IKE (ASN) | n/a | GDPR compliance, AI, machine learning

European SMEs have adapted their processes and documentation to become GDPR compliant. At the same time, Artificial Intelligence, and Machine Learning in particular, is becoming widely adopted and is changing the business landscape. Many vertical industries are putting AI into use, but typically they still do not fully comply with the GDPR in this respect. As machine learning systems require data for their development and for their effective use, the principle of “the more data, the better the performance” is opposed to the spirit of the GDPR. To address these challenges, TRUSTRULES aims to research and evaluate the potential of novel AI technologies that in most cases have not been tested in practice. Our vision is to increase trust in the application of recommender technology by providing a clear description of how a recommendation was reached. This will be the main goal of the TRUSTRULES prototypical framework, whose viability will be assessed in close collaboration with “real-life” data controllers. This “baseline project” targets trade fair organizers and is provided by our customer ROTA S.A., the largest trade fair organizer in South East Europe.

The TRUSTRULES prototypical framework will evaluate the technologies Local Interpretable Model-agnostic Explanations (LIME), Layer-wise Relevance Propagation (LRP), Deep Learning Important FeaTures (DeepLIFT) and Generative Adversarial Networks (GANs) to enhance the ROTA recommender system by increasing AI system transparency and data minimisation.
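
As an illustration of the first of these techniques, the sketch below applies the open-source lime package to a toy scikit-learn classifier trained on synthetic data; it is not the TRUSTRULES framework or the ROTA recommender, only the general pattern of explaining a single prediction.

```python
# Illustrative use of LIME on a toy classifier (synthetic data, not the ROTA recommender).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["no", "yes"], mode="classification"
)
# Explain one prediction: which features pushed the model towards "yes" or "no"?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())
```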

Thus, TRUSTRULES will result in a technological impact in the area of AI and GDPR compliance and constitutes a direct contribution to the strategy of the applicant ASN and its customer and pilot partner (ROTA).

...