- NordSTAR presented a keynote on AI trustworthiness at the BMW Research Group event.
- The presentation focused on a quantitative assessment of AI trustworthiness.
- NordSTAR introduced a new questionnaire to evaluate trust in AI systems.
- Their research unveiled insights on user trust in Generative AI.
- NordSTAR and BMW Research Group plan future collaborations in AI trustworthiness and eye-tracking data analysis.
Main AI News:
In a recent highlight at the BMW Research Group Machine Learning and Big Data event, NordSTAR delivered a keynote presentation on the quantitative assessment of AI trustworthiness. Held on March 4th, 2024, the event brought together industry experts and researchers to explore the latest advancements in Artificial Intelligence, Machine Learning, and Data Analytics.
The keynote, titled “Quantitative Assessment of AI Trustworthiness: The validation of a new questionnaire and its application to health, business, and industry,” featured speakers from NordSTAR and NIBR, including Senior Researcher Yuri Kasahara, Postdoctoral Research Fellow Tumaini Kabudi, and Professor Pedro Lind. At its core was the unveiling of a quantitative scale for AI Trustworthiness, developed jointly by NordSTAR and NIBR.
Assessing AI Trustworthiness: A Paradigm Shift
At the heart of the presentation was NordSTAR’s research on assessing AI trustworthiness. The team, which includes researchers Weiqin Chen and Anis Yazidi, introduced a novel questionnaire for evaluating trust in artificial intelligence systems.
This approach directly addresses the pressing need for dependable metrics in assessing trust in AI. During the keynote, NordSTAR shared insights from their study on user trust in Generative AI (GAI). Through a national survey in Norway with over 1300 participants, the team identified a new dimension of AI trustworthiness, expanding existing scales into a more comprehensive assessment framework.
The forthcoming publication of this study promises to significantly enrich the ongoing discourse on AI trustworthiness, offering fresh perspectives and actionable insights.
Collaborative Endeavors in AI Trustworthiness and Beyond
Beyond the keynote, NordSTAR and BMW Research Group are beginning a collaboration to explore new applications for the questionnaire. Plans are already in motion for BMW Research Group to visit NordSTAR in April, initiating discussions on potential joint work in AI trustworthiness and eye-tracking data analysis.
NordSTAR will reciprocate with a visit to BMW headquarters in Munich, setting the stage for future joint initiatives. The keynote presentation underscored NordSTAR’s commitment to advancing research in AI trustworthiness.
By spearheading the development of innovative assessment tools and fostering cross-industry collaborations, NordSTAR is poised to influence the trajectory of AI ethics and governance, ensuring a future where AI is both innovative and trustworthy.
Conclusion:
NordSTAR’s keynote presentation at the BMW Research Group event marks a significant step forward in the discourse surrounding AI trustworthiness. By introducing new assessment tools and fostering collaborations, NordSTAR is positioned to promote transparency and reliability in AI systems, building consumer trust and driving further advancements in AI ethics and governance.