This week Intel and Article One, in association with the School of Law at Trinity College Dublin, hosted a symposium exploring responsible business conduct, innovation and Artificial Intelligence (AI).
While rapid advancements in AI and other emerging technologies have the potential for significant positive human rights impacts, they also bring heightened risks of adverse effects. Best practices, principles, and tools to ensure responsible decision-making are vital elements in the evolution of AI technologies.
The one-day symposium brought together thought leaders, policymakers, and academics to explore topics such as the responsible development of AI and applying responsible AI principles to manufacturing.
The symposium was opened by Eamon Gilmore, EU Special Representative on Human Rights.
Also speaking at the event was Lama Nachman, Intel Fellow and Director of Intelligent Systems Research Lab in Intel Labs. Lama’s research is focused on creating contextually aware experiences that understand users through sensing and sense-making, anticipating their needs, and acting on their behalf.
To coincide with the symposium, Lama shared her thoughts in an editorial on ‘Responsibly Harnessing the Power of AI’:
“Artificial intelligence (AI) has become a key part of everyday life, transforming how we live, work, and solve new and complex challenges. From making voice banking possible for people with neurological conditions to helping autonomous vehicles make roads safer and helping researchers better understand rainfall patterns and human population trends, AI has allowed us to overcome barriers, make societies safer and develop solutions to build a better future.
Despite AI’s many real-life benefits, Hollywood loves to tell alarming stories of AI taking on a mind of its own and menacing people. These science fiction scenarios can distract us from focusing on the very real but more banal ways in which poorly designed AI systems can harm people. It is critical that we continuously strive to develop AI technologies responsibly, so that our efforts do not marginalise people, use data in unethical ways or discriminate against different populations — especially individuals in traditionally underrepresented groups. These are problems that we as developers of AI systems are aware of and working to prevent.
At Intel, we believe in the potential of AI technology to create positive global change, empower people with the right tools, and improve the life of every person on the planet. We’ve long been recognised as one of the most ethical companies in the world, and we take that responsibility seriously. We’ve had Global Human Rights Principles in place since 2009 and are committed to high standards in product responsibility, including AI. We recognise the ethical risks associated with the development of AI technology and aspire to be a role model, especially as thousands of companies across all industries are making AI breakthroughs using systems enhanced with Intel® AI technology.
We are committed to responsibly advancing AI technology throughout the product lifecycle. I am excited to share our updated Responsible AI web page, featuring the work we do in this space and highlighting the actions we are taking to operate responsibly, guard against the misuse of AI and keep ourselves accountable through internal oversight and governance processes”.
Visit the Intel newsroom to read the full editorial.