Blindata PLC/LTD. Biometrics balancing privacy, security, and trust:
the innovation of BioSIC
by Paolo Brambilla (source trendiest.it | assodigitale.it)
Refusing to use biometrics simply because we do not yet fully understand them makes no sense. The real question is how to deploy them in a transparent, secure, and proportionate way to protect digital identity and reduce the risk of abuse, without slipping into mass surveillance.
In the new digital ecosystem, identity is infrastructure: if it fails, everything else collapses. Biometrics (fingerprints, face, voice, iris) are emerging as a security layer capable of reducing fraud and identity theft. But they are also the terrain on which our rights are measured, from control over the data that describes us to freedom of movement in public spaces. Stopping at the binary "biometrics yes/no" dilemma is misleading: the issue is how to adopt them with technical and legal safeguards appropriate for 2025.
Europe: Risk-based rules and clear limits (with some exceptions)
The European AI Act has introduced a framework categorizing uses as “prohibited,” “high-risk,” and “low-risk.” Among the prohibitions: facial recognition databases created through indiscriminate scraping from the web or CCTV, and biometric categorization used to infer sensitive characteristics (e.g., political opinions, religious beliefs). Remote biometric identification in real time in public spaces by law enforcement is permitted only with prior authorization and in exceptional cases (e.g., terrorist threats, searching for victims), within strict limits and fundamental rights impact assessments. This is a compromise aimed at proportionality—not a generalized ban.
Alongside the AI Act, European data protection authorities have clarified privacy boundaries. In 2024, the EDPB (European Data Protection Board) issued a specific opinion on facial recognition at airports: streamlining processes is acceptable, but only with maximum control for individuals (clear information, non-biometric alternatives, data minimization, and strong data security). Guidelines for law enforcement reinforce principles of necessity and proportionality. Technology is not excluded—it is governed.
On the identity front, Europe is implementing eIDAS 2.0 and the EU Digital Identity Wallet (EUDI Wallet): certified digital wallets with technical standards and conformity assessment procedures. Here, biometrics can be used for strong user authentication on personal devices, reducing the circulation of raw data and supporting privacy-preserving models (templates, on-device matching). Implementing acts defining wallet registration and certification for users and service providers have already been published.
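The on-device matching mentioned above can be sketched in a few lines. The sketch below is illustrative only, not part of any EUDI Wallet specification: it assumes a local model has already turned face captures into embedding vectors, and the function names and 0.6 threshold are invented for the example. The key property is that the raw image and the stored template never leave the device; only the yes/no decision does.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify_on_device(stored_template, live_embedding, threshold=0.6):
    """Matching happens locally: only the boolean decision ever
    leaves the device, not the image and not the template."""
    return cosine_similarity(stored_template, live_embedding) >= threshold

# Enrolment stores a template (an embedding), not the photo itself.
stored = [0.1, 0.9, 0.3, 0.2]
# A later capture of the same user yields a nearby embedding...
same_user = [0.12, 0.88, 0.31, 0.18]
# ...while a different user's embedding points elsewhere.
other_user = [0.9, 0.1, 0.05, 0.4]

print(verify_on_device(stored, same_user))   # True
print(verify_on_device(stored, other_user))  # False
```

Because only a derived template is stored and compared locally, a breach of the service provider's servers cannot expose raw biometric images, which is exactly the data-minimization property the wallet model aims for.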
Finally, border-related developments: starting October 12, 2025, the Entry/Exit System (EES) will digitally record entries and exits of non-EU travelers, capturing photos and fingerprints upon first entry. Again, the keyword is balance: greater efficiency and better tracking of overstayers (individuals staying beyond their visa’s legal duration), but also transparency regarding data retention periods, individual rights, and infrastructure security.
Switzerland: State digital identity and biometric data as “sensitive”
Switzerland has updated its framework with the new Federal Data Protection Act (nFADP): biometric and genetic data are considered sensitive and require high-level protections (privacy by design/default, solid legal bases, and a mandatory DPIA, or Data Protection Impact Assessment, when needed). Between 2024 and 2025, the Federal Data Protection and Information Commissioner (FDPIC) reminded data controllers that the same law applies to AI-supported processing; in recent audits it required, for example, explicit consent for voiceprints used as an authentication factor.
Regarding identity, Bern has launched the state e-ID: a public system in which the Swiss federal government manages the trust infrastructure and issues electronic credentials. Legislative work is advanced; the official website explains that the e-ID will allow users to prove their identity online and link other documents (driver's license, certificates), with strong attention to security and usability. Biometric elements may be used on the device to verify that the person using the wallet is indeed its owner; this approach still needs to be weighed, in terms of efficiency, against centralized databases, which present a larger attack surface.
BioSIC, Blindata PLC/LTD’s biometric recognition, Swiss patent coming to all of Europe
The digital world has radically transformed our daily lives, from how we communicate to how we manage our finances. In this era of digital transformation, security and authentication are essential to safeguard our identities and transactions. This is where Blindata PLC/LTD comes in: an innovative company specializing in biometric recognition of the actual user, not just the device being used.
Blindata PLC/LTD’s biometric recognition technology, known as the Swiss patent BioSIC, is revolutionizing user identification and authentication in compliance with Switzerland’s stringent regulations set by FINMA. This innovative technology, easily deployable across Europe, enables reliable and secure identification for full KYC (Know Your Customer) verification of the Beneficial Owner (BO), enhancing the way businesses and professionals verify client identities to prevent illicit activities. It also delivers a seamless, convenient, and highly reliable experience in Strong Customer Authentication (SCA).
The Swiss patent BioSIC and the Swiss Mobile Banking Desk represent the most robust method for identifying and authenticating the actual user, a procedure required for all electronic transactions under the latest regulations of FINMA (the Swiss Financial Market Supervisory Authority). Reference: Swiss Patent CH 707 488 B.
United Kingdom: ICO guidance and debate on “live facial recognition”
In the UK, the ICO has published a detailed guide on biometric recognition and UK GDPR compliance: lawful bases, accuracy, mandatory DPIAs, data protection by design, and data minimization principles. At the same time, debate is growing over the use of Live Facial Recognition (LFR) by police and in retail. Leading research centers are calling for stricter rules, a dedicated regulator, and more rigorous necessity/proportionality tests to avoid de facto pervasive surveillance.
United States: Technical standards, public auditing, and congressional oversight
In the US, the debate is pragmatic and highly technical. For years, NIST has evaluated facial recognition system performance (now under FRTE/FATE programs); reports show significant progress in accuracy but stress that lab results do not automatically translate into real-world performance, where lighting, demographics, and image quality matter. That’s why agencies and companies must test in context, measure residual bias, and document mitigation strategies.
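"Measuring residual bias" in deployment usually means computing error rates separately per demographic group rather than one aggregate number. The toy sketch below illustrates the idea with a fabricated log of verification attempts (the group labels, data, and function name are invented for the example); it computes the false non-match rate (FNMR), i.e. how often genuine users are wrongly rejected, per group.

```python
from collections import defaultdict

def false_non_match_rate(trials):
    """FNMR per group: among genuine attempts (same person),
    the fraction the system wrongly rejected."""
    genuine = defaultdict(int)
    rejected = defaultdict(int)
    for group, is_genuine, accepted in trials:
        if is_genuine:
            genuine[group] += 1
            if not accepted:
                rejected[group] += 1
    return {g: rejected[g] / genuine[g] for g in genuine}

# Fabricated operational log:
# (demographic group, genuine attempt?, system accepted?)
trials = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", True, True),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, True),
    ("A", False, False), ("B", False, False),  # impostor attempts, excluded from FNMR
]

print(false_non_match_rate(trials))  # {'A': 0.25, 'B': 0.5}
```

An aggregate FNMR of 0.375 would hide the fact that group B is rejected twice as often as group A; disaggregated metrics like these, published and independently audited, are what "measurable accountability" looks like in practice.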
In aviation, a report by the PCLOB (Privacy and Civil Liberties Oversight Board, the independent body overseeing privacy and civil liberties) reviewed TSA's use of facial recognition technology (FRT) at checkpoints. In 2025, a bipartisan bill (the "Traveler Privacy Protection Act") was proposed to clarify opt-in mechanisms, human alternatives, and immediate data deletion. This reflects an approach that does not reject the technology but constrains its use through transparency and real choices for citizens.
The DHS has published analyses on the accuracy of biometric systems used in US airports, highlighting very high performance in identity verification but also emphasizing the need for robust safeguards, continuous audits, and risk assessments regarding errors and discrimination. Meanwhile, the GAO (federal audit office) has reiterated that more operational testing is needed to truly uncover limitations and biases. Once again, the guiding principle is measurable accountability.
“Banning everything” helps those operating illegally
In judicial policing and international cooperation, biometrics already have concrete impact: INTERPOL documents thousands of identifications of terrorists, fugitives, and missing persons through IFRS, its face recognition system; Europol highlights how biometrics, when properly protected against spoofing attacks and fraud, are crucial in combating trafficking, abuse, and identity theft. This is the key point: limiting or banning biometrics outright, without building solid alternatives (procedures, standards, audits, privacy-preserving formats), risks giving an advantage to those breaking the law. The responsible choice is to govern biometrics, not ignore them.
How to do it “right”: transparency, proportionality, protective techniques
Best practices consistently emerge across the EU, Switzerland, the UK, and the US:
– Transparency and choice: Inform clearly, offer non-biometric alternatives when possible (e.g., travel), and obtain explicit consent when required by law (especially for sensitive data).
– Necessity and proportionality: Use biometrics only when truly necessary and when no equally effective, less intrusive methods exist; document this in DPIAs and, if needed, fundamental rights impact assessments (AI Act).
– Minimization and security: Prefer on-device matching and protected templates over raw images; ensure liveness/PAD checks and auditing; define strict retention periods and rapid deletion.
– Testing in practice and accountability: Benchmarking isn’t enough—we must measure errors and bias in real-world contexts, publish metrics, and open up to independent audits.
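The "protected templates over raw images" point above can be illustrated with a toy cancelable-biometrics transform (real schemes such as biohashing are considerably more involved, and every name here is invented for the sketch): the embedding is passed through a random projection derived from a secret salt, so the stored value can be revoked and re-issued with a new salt if it ever leaks, which a raw image or plain template cannot.

```python
import hashlib
import random

def protect_template(embedding, salt, dim=4):
    """Toy cancelable transform: project the embedding through a
    salt-derived random matrix. Store only the protected vector;
    if it leaks, re-enrol the user with a new salt."""
    rng = random.Random(hashlib.sha256(salt).digest())
    projected = []
    for _ in range(dim):
        row = [rng.gauss(0.0, 1.0) for _ in embedding]
        projected.append(sum(r * x for r, x in zip(row, embedding)))
    return projected

emb = [0.1, 0.9, 0.3, 0.2]  # illustrative biometric embedding
p1 = protect_template(emb, b"salt-v1")
p2 = protect_template(emb, b"salt-v2")

# Same biometric, different salts -> unlinkable protected templates.
print(p1 != p2)  # True
# Same salt is deterministic, so matching still works post-transform.
print(p1 == protect_template(emb, b"salt-v1"))  # True
```

The design point is revocability: a password can be changed after a breach, a face cannot, so the stored artifact must be something derived and replaceable rather than the biometric itself.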
Identity as a common good, biometrics as a governed infrastructure
Biometrics are both a resource and a challenge. They can raise the security standard of identification—from accessing public services to crossing borders—while simultaneously undermining trust and rights if poorly designed. The EU with its AI Act and digital wallets, Switzerland with its e-ID and nFADP, the UK with ICO guidance, and the US with NIST/PCLOB/GAO are writing, albeit with different accents, the same story: not “biometrics yes/no,” but governed biometrics. Because in 2025, digital identity is a public good and must be protected with the same care we give to the data that constitutes it—without surrendering to fear or technological naivety.