

Huderia methodology to be used to assess risks of AI systems


Tool was developed by Council of Europe

ROME, 04 December 2024, 15:03

ANSA English Desk


Huderia is the name of the tool developed by the Committee on Artificial Intelligence (CAI) of the Council of Europe with the aim of providing guidance and a structured approach to conduct risk and impact assessments of AI systems.
    It is a methodology aimed at supporting the implementation of the Framework Convention on AI and Human Rights, Democracy and the Rule of Law, the first legally binding international treaty on the subject adopted last May and opened for signature on 5 September 2024 in Vilnius.
    The methodology, as stated in a note from the Council of Europe, provides, among other things, for the creation of a risk mitigation plan to minimize or eliminate the identified risks, protecting the public from potential harm.
    For example, if an AI system used for hiring is found to be biased against certain demographic groups, the mitigation plan could involve adjusting the algorithm, implementing human oversight, and/or applying other appropriate and sufficient governance measures.
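A hypothetical bias check of the kind described above can be sketched in a few lines. The metric and threshold here are not part of the Huderia methodology itself; this uses the "four-fifths rule", a common fairness heuristic, purely for illustration.

```python
# Hypothetical illustration of the kind of hiring-bias audit the article
# describes. Neither this metric nor the 0.8 threshold comes from Huderia;
# the "four-fifths rule" is a common fairness heuristic used here as an example.

def selection_rates(outcomes):
    """Selection rate per demographic group.

    `outcomes` maps a group label to a list of booleans,
    True meaning the candidate was selected by the AI system.
    """
    return {group: sum(picks) / len(picks) for group, picks in outcomes.items()}

def disparate_impact(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 would, in this sketch, trigger the kind of
    mitigation plan the article mentions: adjusting the algorithm,
    adding human oversight, or other governance measures.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Example audit: group B is selected far less often than group A.
audit = {
    "group_a": [True, True, True, False],    # 75% selected
    "group_b": [True, False, False, False],  # 25% selected
}
ratio = disparate_impact(audit)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 -> below 0.8, flag for mitigation
```

Periodic reassessment, as the methodology requires, would simply mean re-running such an audit on fresh outcome data as the system and its context evolve.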
    The methodology, which can be used by both public and private actors, requires periodic reassessments to ensure that the AI system continues to operate safely and in a manner compatible with human rights obligations as the context and technology evolve.
    This approach, the officials in Strasbourg note, ensures that the public is protected from emerging risks throughout the lifecycle of the AI system.
    The CAI adopted the Huderia methodology during its twelfth plenary meeting, held in Strasbourg from 26 to 28 November.
    In 2025, it will be complemented by the Huderia model, which will provide a knowledge library containing supporting materials and resources.
   

ALL RIGHTS RESERVED © Copyright ANSA

